Commit 616efe09 authored by Ping Xiao

Merge branch 'develop' into xiaoping/add_test_case

......@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
#INSTALL(TARGETS taos RUNTIME DESTINATION driver)
#INSTALL(TARGETS shell RUNTIME DESTINATION .)
IF (TD_MVN_INSTALLED)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.16-dist.jar DESTINATION connector/jdbc)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.17-dist.jar DESTINATION connector/jdbc)
ENDIF ()
ELSEIF (TD_DARWIN)
SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
......
......@@ -16,13 +16,25 @@ ENDIF ()
IF (DEFINED GITINFO)
SET(TD_VER_GIT ${GITINFO})
ELSE ()
SET(TD_VER_GIT "community")
execute_process(
COMMAND git log -1 --format=%H
WORKING_DIRECTORY ${TD_COMMUNITY_DIR}
OUTPUT_VARIABLE GIT_COMMITID
)
string (REGEX REPLACE "[\n\t\r]" "" GIT_COMMITID ${GIT_COMMITID})
SET(TD_VER_GIT ${GIT_COMMITID})
ENDIF ()
IF (DEFINED GITINFOI)
SET(TD_VER_GIT_INTERNAL ${GITINFOI})
ELSE ()
SET(TD_VER_GIT_INTERNAL "internal")
execute_process(
COMMAND git log -1 --format=%H
WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
OUTPUT_VARIABLE GIT_COMMITID
)
string (REGEX REPLACE "[\n\t\r]" "" GIT_COMMITID ${GIT_COMMITID})
SET(TD_VER_GIT_INTERNAL ${GIT_COMMITID})
ENDIF ()
IF (DEFINED VERDATE)
......
......@@ -9,7 +9,7 @@
TDengine uses SQL as its query language. Applications can send SQL statements through the C/C++, Java, Go, and Python connectors, and users can run ad-hoc SQL queries manually through TAOS Shell, the command line interface (CLI) tool provided by TDengine. TDengine supports the following query features:
- Queries on single or multiple columns
- Various filter conditions on tags and values: \>, \<, =, \<>, like, etc.
- Various filter conditions on tags and values: >, <, =, <>, like, etc.
- Grouping (GROUP BY), sorting (ORDER BY), and constrained output (LIMIT/OFFSET) of aggregation results
- Arithmetic operations on numeric columns and aggregation results
- Timestamp-aligned join queries (implicit joins)
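As a brief illustration of several of these features combined, a query might look like the following sketch (the `meters` table and `current` column appear in later examples of this documentation; `location`, `groupId`, and `voltage` are hypothetical names used only for illustration):
```mysql
SELECT AVG(current), MAX(voltage) FROM meters
  WHERE location = 'Beijing' AND current > 10
  GROUP BY groupId
  LIMIT 10;
```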
......
......@@ -58,26 +58,26 @@ TDengine's default timestamp precision is milliseconds, but by changing the configuration parameter enableMic
- **Create a database**
```mysql
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [UPDATE 1];
```
Notes:
1) KEEP specifies how many days the database retains data; the default is 3650 days (10 years), and data older than this limit is deleted automatically;
2) UPDATE marks the database as supporting updates of data with an identical timestamp;
3) The maximum length of a database name is 33;
4) The maximum length of a single SQL statement is 65480 characters;
5) A database has more storage-related configuration parameters; see System Management.
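For example, a sketch of creating a database that keeps data for 30 days and allows updating rows that share a timestamp (the database name `power` is purely illustrative):
```mysql
CREATE DATABASE IF NOT EXISTS power KEEP 30 UPDATE 1;
```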
- **Show current system parameters**
```mysql
SHOW VARIABLES;
```
- **Use a database**
......@@ -1125,11 +1125,8 @@ SELECT function_list FROM stb_name
- The WHERE clause can specify the start and end time of the query as well as other filter conditions
- The FILL clause specifies how missing data within a time interval is filled. The fill modes are the following (see the example after this list):
1. No fill: NONE (the default fill mode).
2. VALUE fill: fill with a fixed value, which must be specified, e.g. fill(value, 1.23).
3. NULL fill: fill the data with NULL, e.g. fill(null).
4. PREV fill: fill the data with the previous non-NULL value, e.g. fill(prev).
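For instance, a downsampling query using the PREV fill mode might look like the following sketch (the `meters` table and `current` column are reused from other examples in this documentation; the time range is illustrative):
```mysql
SELECT AVG(current) FROM meters
  WHERE ts >= '2020-08-01 00:00:00' AND ts <= '2020-08-01 12:00:00'
  INTERVAL(1h) FILL(PREV);
```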
Notes:
......
......@@ -197,7 +197,7 @@ select * from meters where ts > now - 1d and current > 10;
If `restart` is **false** (**0**), the application will not read data that has already been read before.
The last parameter of `taos_subscribe` is the polling period in milliseconds.
In synchronous mode, if the interval between two consecutive calls to `taos_consume` is shorter than this period,
`taos_consume` blocks until the interval exceeds this period.
In asynchronous mode, this period is the minimum interval between two invocations of the callback function.
......@@ -414,7 +414,7 @@ TDengine provides users with millisecond-level data access through its query functions. Directly
TDengine allocates a fixed amount of memory as cache, and the cache size can be configured according to application needs and hardware resources. With a properly sized cache, TDengine can deliver extremely high write and query performance. Each virtual node (vnode) in TDengine is allocated an independent cache pool when it is created. Each virtual node manages its own cache pool, and cache pools are not shared between virtual nodes. All tables belonging to a virtual node share that virtual node's cache pool.
TDengine manages the memory pool in blocks, and data inside a memory block is stored in columnar format. A vnode's memory pool is allocated in blocks when the vnode is created, and each memory block is managed on a first-in-first-out basis. The memory blocks a table needs are allocated from the vnode's memory pool, and the block size is determined by the system configuration parameter cache. The maximum number of memory blocks per table is determined by the configuration parameter tblocks, and the average number of memory blocks per table by the configuration parameter ablocks. Therefore, the total memory for one vnode is: `cache * ablocks * tables`. The block parameter cache should not be too small; a cache block needs to hold at least several dozen records to be efficient. The parameter ablocks has a minimum of 2, guaranteeing that each table can be allocated at least two memory blocks on average.
TDengine manages the memory pool in blocks, and data inside a memory block is stored in row format. A vnode's memory pool is allocated in blocks when the vnode is created, and each memory block is managed on a first-in-first-out basis. When the memory pool is created, the block size is determined by the system configuration parameter cache, and the number of memory blocks per vnode is determined by the configuration parameter blocks. Therefore, the total memory for one vnode is: `cache * blocks`. A cache block must be large enough for each table to store at least several dozen records in order to be efficient.
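As an arithmetic illustration of this formula, a taos.cfg sketch with purely illustrative values (actual defaults may differ) would give each vnode 16 MB × 6 = 96 MB of write cache:
```
cache   16    # size of one memory block, in MB (illustrative value)
blocks  6     # number of memory blocks per vnode; total = cache * blocks = 96 MB
```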
You can use the function last_row to quickly retrieve the last record of a table or a super table, which is convenient for showing the real-time status or latest reading of each device on a dashboard. For example:
......
......@@ -225,7 +225,7 @@ SHOW MNODES;
## Using an Arbitrator
If the number of replicas is even, no master can be elected when half of the vnodes in a vnode group are not working. Likewise, when half of the mnodes are not working, no mnode master can be elected, because of the "split brain" problem. To solve this, TDengine introduces the concept of an arbitrator. The Arbitrator simulates a working vnode or mnode but is only responsible for the network connection and does not handle any data insertion or access. As long as more than half of the vnodes or mnodes, the arbitrator included, are working, the vnode group or mnode group can provide data insertion and query services normally. For example, with 2 replicas, if node A is offline but node B is working and can connect to the arbitrator, node B can keep working.
If the number of replicas is even, no master can be elected when half of the vnodes in a vnode group are not working. Likewise, when half of the mnodes are not working, no mnode master can be elected, because of the "split brain" problem. To solve this, TDengine introduces the concept of an Arbitrator. The Arbitrator simulates a working vnode or mnode but is only responsible for the network connection and does not handle any data insertion or access. As long as more than half of the vnodes or mnodes, the Arbitrator included, are working, the vnode group or mnode group can provide data insertion and query services normally. For example, with 2 replicas, if node A is offline but node B is working and can connect to the Arbitrator, node B can keep working.
TDengine provides an executable program, tarbitrator; simply run it on any Linux server. Please go to [Download packages](https://www.taosdata.com/cn/all-downloads/) and, in the TDengine Arbitrator Linux section, choose the appropriate version to download and install. The program requires almost no system resources; it only needs a network connection. Its command line parameter `-p` specifies the port it serves on, 6042 by default. When configuring each taosd instance, you can set the parameter arbitrator in the configuration file taos.cfg to the arbitrator's End Point. If this parameter is configured and the number of replicas is even, the system will automatically connect to the configured arbitrator. If the number of replicas is odd, the system will not establish the connection even if an arbitrator is configured.
TDengine provides an executable program named tarbitrator; simply run it on any Linux server. Please go to [Download packages](https://www.taosdata.com/cn/all-downloads/) and, in the TDengine Arbitrator Linux section, choose the appropriate version to download and install. The program requires almost no system resources; it only needs a network connection. Its command line parameter `-p` specifies the port it serves on, 6042 by default. When configuring each taosd instance, you can set the parameter arbitrator in the configuration file taos.cfg to the Arbitrator's End Point. If this parameter is configured and the number of replicas is even, the system will automatically connect to the configured Arbitrator. If the number of replicas is odd, the system will not establish the connection even if an Arbitrator is configured.
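A minimal taos.cfg sketch for this setup, assuming the tarbitrator instance runs on a host named arb.taos.com (the hostname is hypothetical) and listens on the default port:
```
# point every taosd instance at the Arbitrator's End Point
arbitrator  arb.taos.com:6042
```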
# Cluster Installation and Management
Multiple running instances of taosd can form a cluster, ensuring highly reliable operation of TDengine and providing horizontal scalability. To understand cluster management in TDengine 2.0, you need to be familiar with the basic concepts of a cluster; please read the chapter on the overall architecture of TDengine 2.0.
Each node of the cluster is uniquely identified by its End Point, which consists of an FQDN (Fully Qualified Domain Name) plus a port, for example h1.taosdata.com:6030. The FQDN is usually the server's hostname and can be obtained with the Linux command "hostname". The port is the port this node serves on; the default is 6030, but it can be changed with the configuration parameter serverPort in taos.cfg.
Cluster management in TDengine is extremely simple: apart from adding and removing nodes, which require manual intervention, everything else is done automatically, minimizing the operations workload. This chapter describes cluster management operations in detail.
## Install and Create the First Node
A cluster is made up of dnodes and starts from the creation of a single dnode. Creating the first node is simple: just install and start it following the [Getting Started](https://www.taosdata.com/cn/getting-started/) chapter.
After it starts, run taos to launch the taos shell and execute the command "show dnodes;" from the shell, as shown below:
```
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time |
=====================================================================================
1 | h1.taos.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
Query OK, 1 row(s) in set (0.006385s)
taos>
```
From the output above, you can see that the End Point of this newly started node is h1.taos.com:6030.
## Install and Create Subsequent Nodes
Adding a new node to an existing cluster takes the following steps:
1. Install following the [Getting Started](https://www.taosdata.com/cn/getting-started/) chapter, but do not start taosd.
2. If you install from the official TAOS Data package, you will be asked for the cluster's End Point at the end of installation; just enter the End Point of the first node. If you install from source, edit the configuration file taos.cfg (by default in the /etc/taos/ directory) and add a line:
```
firstEp h1.taos.com:6030
```
Be sure to replace "h1.taos.com:6030" in the example with the End Point of your own first node.
3. Start taosd following the [Getting Started](https://www.taosdata.com/cn/getting-started/) chapter.
4. Run the command "hostname" in a Linux shell to find the FQDN of this machine, say h2.taos.com. If it cannot be determined this way, check the first few lines of the taosd log file taosdlog.0 (usually in the /var/log/taos directory), where the fqdn and port are printed.
5. On the first node, use the CLI program taos to log in to TDengine and run the command:
```
CREATE DNODE "h2.taos.com:6030";
```
to add the new node's End Point to the cluster's EP list. **The "fqdn:port" must be enclosed in double quotes**, otherwise an error occurs. Be sure to replace "h2.taos.com:6030" in the example with the End Point of the node you are adding.
6. Use the command
```
SHOW DNODES;
```
to check whether the new node has been added successfully.
By repeating the steps above, you can keep adding new nodes to the cluster.
**Tips:**
- The parameters firstEp and secondEp only take effect the first time a node joins the cluster. After joining, the node keeps the latest list of mnode End Points and no longer relies on these two parameters.
- Two dnodes started without the firstEp and secondEp parameters configured will each run independently. In that case, one of them cannot be joined to the other to form a cluster. **Two independent clusters cannot be merged into a new cluster.**
## Node Management
### Adding a Node
Run the CLI program taos, log in to the system as root, and execute:
```
CREATE DNODE "fqdn:port";
```
to add the new node's End Point to the cluster's EP list. **The "fqdn:port" must be enclosed in double quotes**, otherwise an error occurs. The fqdn and port a node serves on can be configured in the configuration file taos.cfg; by default they are obtained automatically.
### Removing a Node
Run the CLI program taos, log in to TDengine as root, and execute:
```
DROP DNODE "fqdn:port";
```
where fqdn is the FQDN of the node being removed and port is its external service port.
### Viewing Nodes
Run the CLI program taos, log in to TDengine as root, and execute:
```
SHOW DNODES;
```
It lists all dnodes in the cluster, along with each dnode's fqdn:port, status (ready, offline, etc.), number of vnodes, number of unused vnodes, and other information. You can run this command after adding or removing a node to check the result.
If the cluster is configured with an Arbitrator, it also appears in this node list, and the value of its role column is "arb".
### Viewing Virtual Node Groups
To take full advantage of multi-core hardware and to provide scalability, data needs to be sharded. TDengine therefore splits the data of a DB into multiple shards stored in multiple vnodes. These vnodes may be distributed across multiple dnodes, which achieves horizontal scaling. A vnode belongs to exactly one DB, but a DB can have multiple vnodes. vnodes are allocated automatically by the mnode according to the current system resources, without any manual intervention.
Run the CLI program taos, log in to TDengine as root, and execute:
```
SHOW VGROUPS;
```
## High Availability
TDengine provides high availability through a multi-replica mechanism. The number of replicas is associated with a DB; a cluster can host multiple DBs, and each DB can be configured with a different number of replicas according to operational needs. When creating a database, the parameter replica specifies the number of replicas (the default is 1). With a single replica the reliability of the system cannot be guaranteed: as soon as the node holding the data goes down, the service becomes unavailable. The number of nodes in the cluster must be greater than or equal to the number of replicas, otherwise creating a table returns the error "more dnodes are needed". For example, the following command creates the database demo with 3 replicas:
```
CREATE DATABASE demo replica 3;
```
The data of a DB is sharded across multiple vnode groups. The number of vnodes in a vnode group equals the DB's number of replicas, and the data of the vnodes within the same vnode group is fully identical. To guarantee high availability, the vnodes of a vnode group must be placed on different dnodes (in an actual deployment, on different physical machines). As long as more than half of the vnodes in a vgroup are working, the vgroup can serve requests normally.
A dnode may hold data of multiple DBs, so when a dnode goes offline, multiple DBs may be affected. If half or more of the vnodes in a vnode group stop working, that vnode group can no longer serve requests and data can neither be inserted nor read, which affects the read and write operations on some tables of the DB it belongs to.
Because of vnodes, one cannot simply conclude that "the cluster works as long as more than half of the dnodes are working". For simple cases, however, the conclusion is easy to draw. For example, with 3 replicas and only three dnodes, the whole cluster still works normally if only one node is down, but it can no longer work normally if two nodes are down.
## High Availability of Mnodes
A TDengine cluster is managed by mnodes (a taosd module, a logical node). To make mnodes highly available, multiple mnode replicas can be configured; the number of replicas is determined by the system configuration parameter numOfMnodes, with a valid range of 1-3. To guarantee strong consistency of the metadata, data is replicated between mnode replicas synchronously.
A cluster has multiple dnodes, but a dnode runs at most one mnode instance. With multiple dnodes, which dnode serves as an mnode? This is decided entirely automatically by the system based on the overall system resources. Users can run the following command with the CLI program taos in the TDengine console:
```
SHOW MNODES;
```
to view the list of mnodes, which shows the End Point of the dnode hosting each mnode and its role (master, slave, unsynced, or offline).
When the first node of the cluster starts, it always runs an mnode instance; otherwise that dnode cannot work properly, because a system must have at least one mnode. If numOfMnodes is configured as 2, the second dnode to start also runs an mnode instance.
To make the mnode service highly available, numOfMnodes must be set to 2 or larger. Because the metadata kept by mnodes must be strongly consistent, if numOfMnodes is greater than 2 the replication parameter quorum is automatically set to 2, meaning that a write is acknowledged to the client application only after at least two replicas have written the data successfully.
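As a sketch, enabling two mnode replicas only requires one line in taos.cfg on the dnodes, using the value discussed above:
```
numOfMnodes  2
```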
## Load Balancing
Load balancing is triggered in three situations, none of which require manual intervention.
- When a new node is added to the cluster, load balancing is triggered automatically and some data on existing nodes is moved to the new node, without any manual intervention.
- When a node is removed from the cluster, the system automatically moves the data on that node to other nodes, without any manual intervention.
- If a node becomes overloaded (its data volume is too large), the system automatically rebalances the load and moves some of that node's vnodes to other nodes.
When any of these three situations occurs, the system computes the load of each node in order to decide how to move the data.
## Handling Offline Nodes
If a node goes offline, the TDengine cluster detects it automatically. There are two cases:
- If the node stays offline longer than a certain period (the duration is controlled by the configuration parameter offlineThreshold in taos.cfg), the system automatically removes the node, generates a system alarm, and triggers the load balancing process. If the removed node comes back online later, it cannot rejoin the cluster; the system administrator has to add it to the cluster again before it starts working.
- If the node comes back online within the offlineThreshold period, the system automatically starts the data recovery process, and the node resumes normal work once the data is fully recovered.
## Using an Arbitrator
If the number of replicas is even, no master can be elected when half of the vnodes in a vnode group are not working. Likewise, when half of the mnodes are not working, no mnode master can be elected, because of the "split brain" problem. To solve this, TDengine introduces the concept of an arbitrator. The Arbitrator simulates a working vnode or mnode but is only responsible for the network connection and does not handle any data insertion or access. As long as more than half of the vnodes or mnodes, the arbitrator included, are working, the vnode group or mnode group can provide data insertion and query services normally. For example, with 2 replicas, if node A is offline but node B is working and can connect to the arbitrator, node B can keep working.
The TDengine installation package includes an executable program, tarbitrator; simply run it on any Linux server. The program requires almost no system resources; it only needs a network connection. Its command line parameter `-p` specifies the port it serves on, 6030 by default. When configuring each taosd instance, you can set the parameter arbitrator in the configuration file taos.cfg to the Arbitrator's End Point. If this parameter is configured and the number of replicas is even, the system will automatically connect to the configured Arbitrator.
When an Arbitrator is configured, it also appears in the node list returned by the "show dnodes;" command.
......@@ -2,7 +2,7 @@
TDengine provides a `taos-jdbcdriver` implementation that follows the JDBC standard (3.0) API specification; it can be found and downloaded from Maven Central via the [Sonatype Repository][1].
The `taos-jdbcdriver` implementation comes in 2 forms: JDBC-JNI and JDBC-RESTful (JDBC-RESTful is supported since taos-jdbcdriver-2.0.16). JDBC-JNI is implemented by calling native methods of the client library libtaos.so (or taos.dll), while JDBC-RESTful wraps the RESTful interface internally.
The `taos-jdbcdriver` implementation comes in 2 forms: JDBC-JNI and JDBC-RESTful (JDBC-RESTful is supported since taos-jdbcdriver-2.0.17). JDBC-JNI is implemented by calling native methods of the client library libtaos.so (or taos.dll), while JDBC-RESTful wraps the RESTful interface internally.
![tdengine-connector](../assets/tdengine-jdbc-connector.png)
......@@ -67,7 +67,7 @@ In a maven project, just use the following pom.xml configuration:
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.16</version>
<version>2.0.17</version>
</dependency>
```
......
......@@ -222,7 +222,7 @@ MQTT is a popular IoT data transfer protocol, and TDengine can easily
## Direct Writing via EMQ Broker
<a href="https://github.com/emqx/emqx">EMQ</a> is an open-source MQTT Broker. Without writing any code, you only need a simple configuration of "rules" in the EMQ Dashboard to write MQTT data directly into TDengine. EMQ X supports saving data to TDengine by sending it to a web service, and its enterprise edition also provides a native TDEngine driver for direct saving. For detailed usage, see the <a href="https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine">official EMQ documentation</a>.
<a href="https://github.com/emqx/emqx">EMQ</a> is an open-source MQTT Broker. Without writing any code, you only need a simple configuration of "rules" in the EMQ Dashboard to write MQTT data directly into TDengine. EMQ X supports saving data to TDengine by sending it to a web service, and its enterprise edition also provides a native TDengine driver for direct saving. For detailed usage, see the <a href="https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine">official EMQ documentation</a>.
## Direct Writing via HiveMQ Broker
......
FROM ubuntu:20.04
FROM ubuntu:18.04
WORKDIR /root
ARG version
RUN echo $version
COPY tdengine.tar.gz /root/
RUN tar -zxf tdengine.tar.gz
WORKDIR /root/TDengine-server-$version/
RUN /bin/bash install.sh -e no
ARG pkgFile
ARG dirName
RUN echo ${pkgFile}
RUN echo ${dirName}
COPY ${pkgFile} /root/
RUN tar -zxf ${pkgFile}
WORKDIR /root/${dirName}/
RUN /bin/bash install.sh -e no
ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib"
ENV LANG=en_US.UTF-8
......
#!/bin/bash
set -e
#set -x
# dockerbuild.sh
# -n [version number]
# -p [xxxx]
# set parameters by default value
verNumber=""
passWord=""
while getopts "hn:p:" arg
do
case $arg in
n)
#echo "verNumber=$OPTARG"
verNumber=$(echo $OPTARG)
;;
p)
#echo "passWord=$OPTARG"
passWord=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -n [version number] "
echo " -p [password for docker hub] "
exit 0
;;
?) #unknown option
echo "unknown argument"
exit 1
;;
esac
done
echo "verNumber=${verNumber}"
docker manifest create -a tdengine/tdengine:${verNumber} tdengine/tdengine-amd64:${verNumber} tdengine/tdengine-aarch64:${verNumber} tdengine/tdengine-aarch32:${verNumber}
docker login -u tdengine -p ${passWord} #replace the docker registry username and password
docker manifest push tdengine/tdengine:${verNumber}
# how to set the latest version ???
#!/bin/bash
set -x
docker build --rm -f "Dockerfile" -t tdengine/tdengine-aarch64:$1 "." --build-arg version=$1
docker login -u tdengine -p $2 #replace the docker registry username and password
docker push tdengine/tdengine-aarch64:$1
#!/bin/bash
set -x
docker build --rm -f "Dockerfile" -t tdengine/tdengine:$1 "." --build-arg version=$1
docker login -u tdengine -p $2 #replace the docker registry username and password
docker push tdengine/tdengine:$1
set -e
#set -x
# dockerbuild.sh
# -c [aarch32 | aarch64 | amd64 | x86 | mips64 ...]
# -f [pkg file]
# -n [version number]
# -p [password for docker hub]
# set parameters by default value
cpuType=amd64
verNumber=""
passWord=""
pkgFile=""
while getopts "hc:n:p:f:" arg
do
case $arg in
c)
#echo "cpuType=$OPTARG"
cpuType=$(echo $OPTARG)
;;
n)
#echo "verNumber=$OPTARG"
verNumber=$(echo $OPTARG)
;;
p)
#echo "passWord=$OPTARG"
passWord=$(echo $OPTARG)
;;
f)
#echo "pkgFile=$OPTARG"
pkgFile=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -c [aarch32 | aarch64 | amd64 | x86 | mips64 ...] "
echo " -f [pkg file] "
echo " -n [version number] "
echo " -p [password for docker hub] "
exit 0
;;
?) #unknown option
echo "unknown argument"
exit 1
;;
esac
done
echo "cpuType=${cpuType} verNumber=${verNumber} pkgFile=${pkgFile} "
echo "$(pwd)"
echo "====NOTES: ${pkgFile} must be in the same directory as dockerbuild.sh===="
dirName=${pkgFile%-Linux*}
#echo "dirName=${dirName}"
docker build --rm -f "Dockerfile" -t tdengine/tdengine-${cpuType}:${verNumber} "." --build-arg pkgFile=${pkgFile} --build-arg dirName=${dirName}
docker login -u tdengine -p ${passWord} #replace the docker registry username and password
docker push tdengine/tdengine-${cpuType}:${verNumber}
# set this version to latest version
docker tag tdengine/tdengine-${cpuType}:${verNumber} tdengine/tdengine-${cpuType}:latest
docker push tdengine/tdengine-${cpuType}:latest
#!/bin/bash
#
set -e
#set -x
# dockerbuild.sh
# -c [aarch32 | aarch64 | amd64 | x86 | mips64 ...]
# -n [version number]
# -p [password for docker hub]
# set parameters by default value
cpuType=aarch64
verNumber=""
passWord=""
while getopts "hc:n:p:f:" arg
do
case $arg in
c)
#echo "cpuType=$OPTARG"
cpuType=$(echo $OPTARG)
;;
n)
#echo "verNumber=$OPTARG"
verNumber=$(echo $OPTARG)
;;
p)
#echo "passWord=$OPTARG"
passWord=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -c [aarch32 | aarch64 | amd64 | x86 | mips64 ...] "
echo " -n [version number] "
echo " -p [password for docker hub] "
exit 0
;;
?) #unknown option
echo "unknown argument"
exit 1
;;
esac
done
pkgFile=TDengine-server-${verNumber}-Linux-${cpuType}.tar.gz
echo "cpuType=${cpuType} verNumber=${verNumber} pkgFile=${pkgFile} "
scriptDir=`pwd`
pkgDir=$scriptDir/../../release/
cp -f ${pkgDir}/${pkgFile} .
./dockerbuild.sh -c ${cpuType} -f ${pkgFile} -n ${verNumber} -p ${passWord}
rm -f ${pkgFile}
......@@ -56,7 +56,7 @@ int32_t bnInitThread() {
pthread_attr_destroy(&thattr);
if (ret != 0) {
mError("failed to create balance thread since %s", strerror(errno));
mError("failed to create balance thread since %s", strerror(ret));
return -1;
}
......
......@@ -38,7 +38,7 @@ typedef struct SLocalDataSource {
tFilePage filePage;
} SLocalDataSource;
typedef struct SLocalReducer {
typedef struct SLocalMerger {
SLocalDataSource ** pLocalDataSrc;
int32_t numOfBuffer;
int32_t numOfCompleted;
......@@ -62,7 +62,7 @@ typedef struct SLocalReducer {
bool discard;
int32_t offset; // limit offset value
bool orderPrjOnSTable; // projection query on stable
} SLocalReducer;
} SLocalMerger;
typedef struct SRetrieveSupport {
tExtMemBuffer ** pExtMemBuffer; // for build loser tree
......@@ -89,10 +89,10 @@ int32_t tscFlushTmpBuffer(tExtMemBuffer *pMemoryBuf, tOrderDescriptor *pDesc, tF
/*
* create local reducer to launch the second-stage reduce process at client site
*/
void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
SColumnModel *finalModel, SColumnModel *pFFModel, SSqlObj* pSql);
void tscDestroyLocalReducer(SSqlObj *pSql);
void tscDestroyLocalMerger(SSqlObj *pSql);
int32_t tscDoLocalMerge(SSqlObj *pSql);
......
......@@ -98,8 +98,7 @@ static FORCE_INLINE SQueryInfo* tscGetQueryInfoDetail(SSqlCmd* pCmd, int32_t sub
return pCmd->pQueryInfo[subClauseIndex];
}
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, const char* name,
STableMeta* pTableMeta, STableDataBlocks** dataBlocks);
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, SName* name, STableMeta* pTableMeta, STableDataBlocks** dataBlocks);
void tscDestroyDataBlock(STableDataBlocks* pDataBlock);
void tscSortRemoveDataBlockDupRows(STableDataBlocks* dataBuf);
......@@ -111,7 +110,7 @@ void* tscDestroyBlockHashTable(SHashObj* pBlockHashTable);
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock);
int32_t tscMergeTableDataBlocks(SSqlObj* pSql, bool freeBlockMap);
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize, const char* tableId, STableMeta* pTableMeta,
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize, SName* pName, STableMeta* pTableMeta,
STableDataBlocks** dataBlocks, SArray* pBlockList);
/**
......@@ -142,10 +141,6 @@ void tscClearInterpInfo(SQueryInfo* pQueryInfo);
bool tscIsInsertData(char* sqlstr);
/* use for keep current db info temporarily, for handle table with db prefix */
// todo remove it
void tscGetDBInfoFromTableFullName(char* tableId, char* db);
int tscAllocPayload(SSqlCmd* pCmd, int size);
TAOS_FIELD tscCreateField(int8_t type, const char* name, int16_t bytes);
......@@ -215,8 +210,8 @@ SQueryInfo *tscGetQueryInfoDetailSafely(SSqlCmd *pCmd, int32_t subClauseIndex);
void tscClearTableMetaInfo(STableMetaInfo* pTableMetaInfo);
STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, const char* name, STableMeta* pTableMeta,
SVgroupsInfo* vgroupList, SArray* pTagCols, SArray* pVgroupTables);
STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, SName* name, STableMeta* pTableMeta,
SVgroupsInfo* vgroupList, SArray* pTagCols, SArray* pVgroupTables);
STableMetaInfo* tscAddEmptyMetaInfo(SQueryInfo *pQueryInfo);
int32_t tscAddSubqueryInfo(SSqlCmd *pCmd);
......@@ -225,7 +220,7 @@ void tscInitQueryInfo(SQueryInfo* pQueryInfo);
void tscClearSubqueryInfo(SSqlCmd* pCmd);
void tscFreeVgroupTableInfo(SArray* pVgroupTables);
SArray* tscVgroupTableInfoClone(SArray* pVgroupTables);
SArray* tscVgroupTableInfoDup(SArray* pVgroupTables);
void tscRemoveVgroupTableGroup(SArray* pVgroupTable, int32_t index);
void tscVgroupTableCopy(SVgroupTableInfo* info, SVgroupTableInfo* pInfo);
......@@ -292,7 +287,7 @@ uint32_t tscGetTableMetaSize(STableMeta* pTableMeta);
CChildTableMeta* tscCreateChildMeta(STableMeta* pTableMeta);
uint32_t tscGetTableMetaMaxSize();
int32_t tscCreateTableMetaFromCChildMeta(STableMeta* pChild, const char* name);
STableMeta* tscTableMetaClone(STableMeta* pTableMeta);
STableMeta* tscTableMetaDup(STableMeta* pTableMeta);
void* malloc_throw(size_t size);
......
......@@ -39,7 +39,7 @@ extern "C" {
// forward declaration
struct SSqlInfo;
struct SLocalReducer;
struct SLocalMerger;
// data source from sql string or from file
enum {
......@@ -67,7 +67,7 @@ typedef struct CChildTableMeta {
int32_t vgId;
STableId id;
uint8_t tableType;
char sTableName[TSDB_TABLE_FNAME_LEN];
char sTableName[TSDB_TABLE_FNAME_LEN]; //super table name, not full name
} CChildTableMeta;
typedef struct STableMeta {
......@@ -91,7 +91,7 @@ typedef struct STableMetaInfo {
* 2. keep the vgroup index for multi-vnode insertion
*/
int32_t vgroupIndex;
char name[TSDB_TABLE_FNAME_LEN]; // (super) table name
SName name;
char aliasName[TSDB_TABLE_NAME_LEN]; // alias name of table specified in query sql
SArray *tagColList; // SArray<SColumn*>, involved tag columns
} STableMetaInfo;
......@@ -142,7 +142,7 @@ typedef struct SCond {
} SCond;
typedef struct SJoinNode {
char tableId[TSDB_TABLE_FNAME_LEN];
char tableName[TSDB_TABLE_FNAME_LEN];
uint64_t uid;
int16_t tagColId;
} SJoinNode;
......@@ -176,7 +176,7 @@ typedef struct SParamInfo {
} SParamInfo;
typedef struct STableDataBlocks {
char tableName[TSDB_TABLE_FNAME_LEN];
SName tableName;
int8_t tsSource; // where does the UNIX timestamp come from, server or client
bool ordered; // if current rows are ordered or not
int64_t vgId; // virtual group id
......@@ -254,7 +254,7 @@ typedef struct {
int8_t submitSchema; // submit block is built with table schema
STagData tagData; // NOTE: pTagData->data is used as a variant length array
char **pTableNameList; // all involved tableMeta list of current insert sql statement.
SName **pTableNameList; // all involved tableMeta list of current insert sql statement.
int32_t numOfTables;
SHashObj *pTableBlockHashList; // data block for each table
......@@ -292,7 +292,7 @@ typedef struct {
SColumnIndex* pColumnIndex;
SArithmeticSupport *pArithSup; // support the arithmetic expression calculation on agg functions
struct SLocalReducer *pLocalReducer;
struct SLocalMerger *pLocalMerger;
} SSqlRes;
typedef struct STscObj {
......@@ -436,7 +436,7 @@ void waitForQueryRsp(void *param, TAOS_RES *tres, int code);
void doAsyncQuery(STscObj *pObj, SSqlObj *pSql, __async_cb_func_t fp, void *param, const char *sqlstr, size_t sqlLen);
void tscProcessMultiVnodesImportFromFile(SSqlObj *pSql);
void tscImportDataFromFile(SSqlObj *pSql);
void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen);
bool tscIsUpdateQuery(SSqlObj* pSql);
bool tscHasReachLimitation(SQueryInfo *pQueryInfo, SSqlRes *pRes);
......
......@@ -569,10 +569,12 @@ static int32_t tscRebuildDDLForSubTable(SSqlObj *pSql, const char *tableName, ch
}
char fullName[TSDB_TABLE_FNAME_LEN * 2] = {0};
extractDBName(pTableMetaInfo->name, fullName);
tNameGetDbName(&pTableMetaInfo->name, fullName);
extractTableName(pMeta->sTableName, param->sTableName);
snprintf(fullName + strlen(fullName), TSDB_TABLE_FNAME_LEN - strlen(fullName), ".%s", param->sTableName);
extractTableName(pTableMetaInfo->name, param->buf);
strncpy(param->buf, tNameGetTableName(&pTableMetaInfo->name), TSDB_TABLE_NAME_LEN);
param->pParentSql = pSql;
param->pInterSql = pInterSql;
......@@ -602,6 +604,7 @@ static int32_t tscRebuildDDLForSubTable(SSqlObj *pSql, const char *tableName, ch
return TSDB_CODE_TSC_ACTION_IN_PROGRESS;
}
static int32_t tscRebuildDDLForNormalTable(SSqlObj *pSql, const char *tableName, char *ddl) {
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, 0);
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
......@@ -675,8 +678,7 @@ static int32_t tscProcessShowCreateTable(SSqlObj *pSql) {
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
assert(pTableMetaInfo->pTableMeta != NULL);
char tableName[TSDB_TABLE_NAME_LEN] = {0};
extractTableName(pTableMetaInfo->name, tableName);
const char* tableName = tNameGetTableName(&pTableMetaInfo->name);
char *result = (char *)calloc(1, TSDB_MAX_BINARY_LEN);
int32_t code = TSDB_CODE_SUCCESS;
......@@ -712,7 +714,9 @@ static int32_t tscProcessShowCreateDatabase(SSqlObj *pSql) {
free(pInterSql);
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
extractTableName(pTableMetaInfo->name, param->buf);
strncpy(param->buf, tNameGetTableName(&pTableMetaInfo->name), TSDB_TABLE_NAME_LEN);
param->pParentSql = pSql;
param->pInterSql = pInterSql;
param->fp = tscRebuildCreateDBStatement;
......
This diff has been collapsed.
......@@ -703,7 +703,7 @@ static int32_t doParseInsertStatement(SSqlCmd* pCmd, char **str, SParsedDataColI
STableDataBlocks *dataBuf = NULL;
int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
sizeof(SSubmitBlk), tinfo.rowSize, pTableMetaInfo->name, pTableMeta, &dataBuf, NULL);
sizeof(SSubmitBlk), tinfo.rowSize, &pTableMetaInfo->name, pTableMeta, &dataBuf, NULL);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
......@@ -813,26 +813,26 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql) {
tscAddEmptyMetaInfo(pQueryInfo);
}
STableMetaInfo *pSTableMeterMetaInfo = tscGetMetaInfo(pQueryInfo, STABLE_INDEX);
code = tscSetTableFullName(pSTableMeterMetaInfo, &sToken, pSql);
STableMetaInfo *pSTableMetaInfo = tscGetMetaInfo(pQueryInfo, STABLE_INDEX);
code = tscSetTableFullName(pSTableMetaInfo, &sToken, pSql);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
tstrncpy(pCmd->tagData.name, pSTableMeterMetaInfo->name, sizeof(pCmd->tagData.name));
tNameExtractFullName(&pSTableMetaInfo->name, pCmd->tagData.name);
pCmd->tagData.dataLen = 0;
code = tscGetTableMeta(pSql, pSTableMeterMetaInfo);
code = tscGetTableMeta(pSql, pSTableMetaInfo);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
if (!UTIL_TABLE_IS_SUPER_TABLE(pSTableMeterMetaInfo)) {
if (!UTIL_TABLE_IS_SUPER_TABLE(pSTableMetaInfo)) {
return tscInvalidSQLErrMsg(pCmd->payload, "create table only from super table is allowed", sToken.z);
}
SSchema *pTagSchema = tscGetTableTagSchema(pSTableMeterMetaInfo->pTableMeta);
STableComInfo tinfo = tscGetTableInfo(pSTableMeterMetaInfo->pTableMeta);
SSchema *pTagSchema = tscGetTableTagSchema(pSTableMetaInfo->pTableMeta);
STableComInfo tinfo = tscGetTableInfo(pSTableMetaInfo->pTableMeta);
index = 0;
sToken = tStrGetToken(sql, &index, false, 0, NULL);
......@@ -840,7 +840,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql) {
SParsedDataColInfo spd = {0};
uint8_t numOfTags = tscGetNumOfTags(pSTableMeterMetaInfo->pTableMeta);
uint8_t numOfTags = tscGetNumOfTags(pSTableMetaInfo->pTableMeta);
spd.numOfCols = numOfTags;
// if specify some tags column
......@@ -1406,39 +1406,38 @@ typedef struct SImportFileSupport {
FILE *fp;
} SImportFileSupport;
static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int code) {
static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRows) {
assert(param != NULL && tres != NULL);
char * tokenBuf = NULL;
size_t n = 0;
ssize_t readLen = 0;
char * line = NULL;
int32_t count = 0;
int32_t maxRows = 0;
FILE * fp = NULL;
SSqlObj *pSql = tres;
SSqlCmd *pCmd = &pSql->cmd;
SImportFileSupport *pSupporter = (SImportFileSupport *) param;
SImportFileSupport *pSupporter = (SImportFileSupport *)param;
SSqlObj *pParentSql = pSupporter->pSql;
FILE *fp = pSupporter->fp;
if (taos_errno(pSql) != TSDB_CODE_SUCCESS) { // handle error
assert(taos_errno(pSql) == code);
do {
if (code == TSDB_CODE_TDB_TABLE_RECONFIGURE) {
assert(pSql->res.numOfRows == 0);
int32_t errc = fseek(fp, 0, SEEK_SET);
if (errc < 0) {
tscError("%p failed to seek SEEK_SET since:%s", pSql, tstrerror(errno));
} else {
break;
}
}
taos_free_result(pSql);
tfree(pSupporter);
fclose(fp);
pParentSql->res.code = code;
tscAsyncResultOnError(pParentSql);
return;
} while (0);
fp = pSupporter->fp;
int32_t code = pSql->res.code;
// retry parse data from file and import data from the begining again
if (code == TSDB_CODE_TDB_TABLE_RECONFIGURE) {
assert(pSql->res.numOfRows == 0);
int32_t ret = fseek(fp, 0, SEEK_SET);
if (ret < 0) {
tscError("%p failed to seek SEEK_SET since:%s", pSql, tstrerror(errno));
code = TAOS_SYSTEM_ERROR(errno);
goto _error;
}
} else if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
// accumulate the total submit records
......@@ -1452,28 +1451,32 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int code) {
SParsedDataColInfo spd = {.numOfCols = tinfo.numOfColumns};
tscSetAssignedColumnInfo(&spd, pSchema, tinfo.numOfColumns);
size_t n = 0;
ssize_t readLen = 0;
char * line = NULL;
int32_t count = 0;
int32_t maxRows = 0;
tfree(pCmd->pTableNameList);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
if (pCmd->pTableBlockHashList == NULL) {
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
if (pCmd->pTableBlockHashList == NULL) {
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
}
}
STableDataBlocks *pTableDataBlock = NULL;
int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE,
sizeof(SSubmitBlk), tinfo.rowSize, pTableMetaInfo->name, pTableMeta, &pTableDataBlock, NULL);
int32_t ret =
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
tinfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pTableDataBlock, NULL);
if (ret != TSDB_CODE_SUCCESS) {
// return ret;
pParentSql->res.code = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
}
tscAllocateMemIfNeed(pTableDataBlock, tinfo.rowSize, &maxRows);
char *tokenBuf = calloc(1, 4096);
tokenBuf = calloc(1, TSDB_MAX_BYTES_PER_ROW);
if (tokenBuf == NULL) {
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
}
while ((readLen = tgetline(&line, &n, fp)) != -1) {
if (('\r' == line[readLen - 1]) || ('\n' == line[readLen - 1])) {
......@@ -1501,30 +1504,42 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int code) {
}
tfree(tokenBuf);
free(line);
tfree(line);
pParentSql->res.code = code;
if (code == TSDB_CODE_SUCCESS) {
if (count > 0) {
code = doPackSendDataBlock(pSql, count, pTableDataBlock);
if (code == TSDB_CODE_SUCCESS) {
return;
} else {
goto _error;
}
} else {
taos_free_result(pSql);
tfree(pSupporter);
fclose(fp);
if (count > 0) {
code = doPackSendDataBlock(pSql, count, pTableDataBlock);
if (code != TSDB_CODE_SUCCESS) {
pParentSql->res.code = code;
tscAsyncResultOnError(pParentSql);
pParentSql->fp = pParentSql->fetchFp;
// all data has been sent to vnode, call user function
int32_t v = (code != TSDB_CODE_SUCCESS) ? code : (int32_t)pParentSql->res.numOfRows;
(*pParentSql->fp)(pParentSql->param, pParentSql, v);
return;
}
}
} else {
taos_free_result(pSql);
tfree(pSupporter);
fclose(fp);
pParentSql->fp = pParentSql->fetchFp;
_error:
tfree(tokenBuf);
tfree(line);
taos_free_result(pSql);
tfree(pSupporter);
fclose(fp);
// all data has been sent to vnode, call user function
int32_t v = (pParentSql->res.code != TSDB_CODE_SUCCESS) ? pParentSql->res.code : (int32_t)pParentSql->res.numOfRows;
(*pParentSql->fp)(pParentSql->param, pParentSql, v);
}
tscAsyncResultOnError(pParentSql);
}
void tscProcessMultiVnodesImportFromFile(SSqlObj *pSql) {
void tscImportDataFromFile(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
if (pCmd->command != TSDB_SQL_INSERT) {
return;
......@@ -1543,12 +1558,11 @@ void tscProcessMultiVnodesImportFromFile(SSqlObj *pSql) {
tfree(pSupporter);
tscAsyncResultOnError(pSql);
return;
}
pSupporter->pSql = pSql;
pSupporter->fp = fp;
pSupporter->fp = fp;
parseFileSendDataBlock(pSupporter, pNew, 0);
parseFileSendDataBlock(pSupporter, pNew, TSDB_CODE_SUCCESS);
}
......@@ -707,7 +707,7 @@ static int insertStmtBindParam(STscStmt* stmt, TAOS_BIND* bind) {
int32_t ret =
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
pTableMeta->tableInfo.rowSize, pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
if (ret != 0) {
// todo handle error
}
......@@ -790,7 +790,7 @@ static int insertStmtExecute(STscStmt* stmt) {
int32_t ret =
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
pTableMeta->tableInfo.rowSize, pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
assert(ret == 0);
pBlock->size = sizeof(SSubmitBlk) + pCmd->batchSize * pBlock->rowSize;
SSubmitBlk* pBlk = (SSubmitBlk*) pBlock->pData;
......
This diff has been collapsed.
......@@ -106,6 +106,7 @@ STableMeta* tscCreateTableMetaFromMsg(STableMetaMsg* pTableMetaMsg) {
pTableMeta->sversion = pTableMetaMsg->sversion;
pTableMeta->tversion = pTableMetaMsg->tversion;
tstrncpy(pTableMeta->sTableName, pTableMetaMsg->sTableName, TSDB_TABLE_FNAME_LEN);
memcpy(pTableMeta->schema, pTableMetaMsg->schema, schemaSize);
......
This diff has been collapsed.
......@@ -995,7 +995,8 @@ static int tscParseTblNameList(SSqlObj *pSql, const char *tblNameList, int32_t t
return code;
}
if (payloadLen + strlen(pTableMetaInfo->name) + 128 >= pCmd->allocSize) {
int32_t xlen = tNameLen(&pTableMetaInfo->name);
if (payloadLen + xlen + 128 >= pCmd->allocSize) {
char *pNewMem = realloc(pCmd->payload, pCmd->allocSize + tblListLen);
if (pNewMem == NULL) {
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
......@@ -1008,7 +1009,9 @@ static int tscParseTblNameList(SSqlObj *pSql, const char *tblNameList, int32_t t
pMsg = pCmd->payload;
}
payloadLen += sprintf(pMsg + payloadLen, "%s,", pTableMetaInfo->name);
char n[TSDB_TABLE_FNAME_LEN] = {0};
tNameExtractFullName(&pTableMetaInfo->name, n);
payloadLen += sprintf(pMsg + payloadLen, "%s,", n);
}
*(pMsg + payloadLen) = '\0';
......
......@@ -104,7 +104,7 @@ static void doLaunchQuery(void* param, TAOS_RES* tres, int32_t code) {
// failed to get table Meta or vgroup list, retry in 10sec.
if (code == TSDB_CODE_SUCCESS) {
tscTansformSQLFuncForSTableQuery(pQueryInfo);
tscDebug("%p stream:%p, start stream query on:%s", pSql, pStream, pTableMetaInfo->name);
tscDebug("%p stream:%p, start stream query on:%s", pSql, pStream, tNameGetTableName(&pTableMetaInfo->name));
pSql->fp = tscProcessStreamQueryCallback;
pSql->fetchFp = tscProcessStreamQueryCallback;
......@@ -191,8 +191,9 @@ static void tscProcessStreamQueryCallback(void *param, TAOS_RES *tres, int numOf
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pStream->pSql->cmd, 0, 0);
char* name = pTableMetaInfo->name;
taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
assert(0);
// char* name = pTableMetaInfo->name;
// taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
pTableMetaInfo->vgroupList = tscVgroupInfoClear(pTableMetaInfo->vgroupList);
tscSetRetryTimer(pStream, pStream->pSql, retryDelay);
......@@ -291,8 +292,8 @@ static void tscProcessStreamRetrieveResult(void *param, TAOS_RES *res, int numOf
pStream->stime += 1;
}
tscDebug("%p stream:%p, query on:%s, fetch result completed, fetched rows:%" PRId64, pSql, pStream, pTableMetaInfo->name,
pStream->numOfRes);
// tscDebug("%p stream:%p, query on:%s, fetch result completed, fetched rows:%" PRId64, pSql, pStream, pTableMetaInfo->name,
// pStream->numOfRes);
tfree(pTableMetaInfo->pTableMeta);
......@@ -555,8 +556,8 @@ static void tscCreateStream(void *param, TAOS_RES *res, int code) {
taosTmrReset(tscProcessStreamTimer, (int32_t)starttime, pStream, tscTmr, &pStream->pTimer);
tscDebug("%p stream:%p is opened, query on:%s, interval:%" PRId64 ", sliding:%" PRId64 ", first launched in:%" PRId64 ", sql:%s", pSql,
pStream, pTableMetaInfo->name, pStream->interval.interval, pStream->interval.sliding, starttime, pSql->sqlstr);
// tscDebug("%p stream:%p is opened, query on:%s, interval:%" PRId64 ", sliding:%" PRId64 ", first launched in:%" PRId64 ", sql:%s", pSql,
// pStream, pTableMetaInfo->name, pStream->interval.interval, pStream->interval.sliding, starttime, pSql->sqlstr);
}
void tscSetStreamDestTable(SSqlStream* pStream, const char* dstTable) {
......
......@@ -534,7 +534,7 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
size_t numOfCols = taosArrayGetSize(pQueryInfo->colList);
tscDebug("%p subquery:%p tableIndex:%d, vgroupIndex:%d, type:%d, exprInfo:%" PRIzu ", colList:%" PRIzu ", fieldsInfo:%d, name:%s",
pSql, pNew, 0, pTableMetaInfo->vgroupIndex, pQueryInfo->type, taosArrayGetSize(pQueryInfo->exprList),
numOfCols, pQueryInfo->fieldsInfo.numOfOutput, pTableMetaInfo->name);
numOfCols, pQueryInfo->fieldsInfo.numOfOutput, tNameGetTableName(&pTableMetaInfo->name));
}
//prepare the subqueries object failed, abort
......@@ -730,7 +730,7 @@ static void issueTSCompQuery(SSqlObj* pSql, SJoinSupporter* pSupporter, SSqlObj*
"%p subquery:%p tableIndex:%d, vgroupIndex:%d, numOfVgroups:%d, type:%d, ts_comp query to retrieve timestamps, "
"numOfExpr:%" PRIzu ", colList:%" PRIzu ", numOfOutputFields:%d, name:%s",
pParent, pSql, 0, pTableMetaInfo->vgroupIndex, pTableMetaInfo->vgroupList->numOfVgroups, pQueryInfo->type,
tscSqlExprNumOfExprs(pQueryInfo), numOfCols, pQueryInfo->fieldsInfo.numOfOutput, pTableMetaInfo->name);
tscSqlExprNumOfExprs(pQueryInfo), numOfCols, pQueryInfo->fieldsInfo.numOfOutput, tNameGetTableName(&pTableMetaInfo->name));
tscProcessSql(pSql);
}
......@@ -951,10 +951,10 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
tscBuildVgroupTableInfo(pParentSql, pTableMetaInfo2, s2);
SSqlObj* psub1 = pParentSql->pSubs[0];
((SJoinSupporter*)psub1->param)->pVgroupTables = tscVgroupTableInfoClone(pTableMetaInfo1->pVgroupTables);
((SJoinSupporter*)psub1->param)->pVgroupTables = tscVgroupTableInfoDup(pTableMetaInfo1->pVgroupTables);
SSqlObj* psub2 = pParentSql->pSubs[1];
((SJoinSupporter*)psub2->param)->pVgroupTables = tscVgroupTableInfoClone(pTableMetaInfo2->pVgroupTables);
((SJoinSupporter*)psub2->param)->pVgroupTables = tscVgroupTableInfoDup(pTableMetaInfo2->pVgroupTables);
pParentSql->subState.numOfSub = 2;
......@@ -1636,7 +1636,7 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
"%p subquery:%p tableIndex:%d, vgroupIndex:%d, type:%d, transfer to tid_tag query to retrieve (tableId, tags), "
"exprInfo:%" PRIzu ", colList:%" PRIzu ", fieldsInfo:%d, tagIndex:%d, name:%s",
pSql, pNew, tableIndex, pTableMetaInfo->vgroupIndex, pNewQueryInfo->type, tscSqlExprNumOfExprs(pNewQueryInfo),
numOfCols, pNewQueryInfo->fieldsInfo.numOfOutput, colIndex.columnIndex, pNewQueryInfo->pTableMetaInfo[0]->name);
numOfCols, pNewQueryInfo->fieldsInfo.numOfOutput, colIndex.columnIndex, tNameGetTableName(&pNewQueryInfo->pTableMetaInfo[0]->name));
} else {
SSchema colSchema = {.type = TSDB_DATA_TYPE_BINARY, .bytes = 1};
SColumnIndex colIndex = {0, PRIMARYKEY_TIMESTAMP_COL_INDEX};
......@@ -1671,7 +1671,7 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
"%p subquery:%p tableIndex:%d, vgroupIndex:%d, type:%u, transfer to ts_comp query to retrieve timestamps, "
"exprInfo:%" PRIzu ", colList:%" PRIzu ", fieldsInfo:%d, name:%s",
pSql, pNew, tableIndex, pTableMetaInfo->vgroupIndex, pNewQueryInfo->type, tscSqlExprNumOfExprs(pNewQueryInfo),
numOfCols, pNewQueryInfo->fieldsInfo.numOfOutput, pNewQueryInfo->pTableMetaInfo[0]->name);
numOfCols, pNewQueryInfo->fieldsInfo.numOfOutput, tNameGetTableName(&pNewQueryInfo->pTableMetaInfo[0]->name));
}
} else {
assert(0);
......@@ -2133,7 +2133,7 @@ static void tscAllDataRetrievedFromDnode(SRetrieveSupport *trsupport, SSqlObj* p
SQueryInfo *pPQueryInfo = tscGetQueryInfoDetail(&pParentSql->cmd, 0);
tscClearInterpInfo(pPQueryInfo);
tscCreateLocalReducer(trsupport->pExtMemBuffer, pState->numOfSub, pDesc, trsupport->pFinalColModel, trsupport->pFFColModel, pParentSql);
tscCreateLocalMerger(trsupport->pExtMemBuffer, pState->numOfSub, pDesc, trsupport->pFinalColModel, trsupport->pFFColModel, pParentSql);
tscDebug("%p build loser tree completed", pParentSql);
pParentSql->res.precision = pSql->res.precision;
......@@ -2421,7 +2421,7 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
tscFreeQueryInfo(&pSql->cmd);
SQueryInfo* pQueryInfo = tscGetQueryInfoDetailSafely(&pSql->cmd, 0);
STableMetaInfo* pMasterTableMetaInfo = tscGetTableMetaInfoFromCmd(&pParentObj->cmd, pSql->cmd.clauseIndex, 0);
tscAddTableMetaInfo(pQueryInfo, pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
tscAddTableMetaInfo(pQueryInfo, &pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
subquerySetState(pSql, &pParentObj->subState, i, 0);
......@@ -2434,7 +2434,8 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
tscDebug("%p cleanup %d tableMeta in hashTable", pParentObj, pParentObj->cmd.numOfTables);
for(int32_t i = 0; i < pParentObj->cmd.numOfTables; ++i) {
char* name = pParentObj->cmd.pTableNameList[i];
char name[TSDB_TABLE_FNAME_LEN] = {0};
tNameExtractFullName(pParentObj->cmd.pTableNameList[i], name);
taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
}
......
......@@ -89,21 +89,6 @@ bool tscQueryTags(SQueryInfo* pQueryInfo) {
return true;
}
// todo refactor, extract methods and move the common module
void tscGetDBInfoFromTableFullName(char* tableId, char* db) {
char* st = strstr(tableId, TS_PATH_DELIMITER);
if (st != NULL) {
char* end = strstr(st + 1, TS_PATH_DELIMITER);
if (end != NULL) {
memcpy(db, tableId, (end - tableId));
db[end - tableId] = 0;
return;
}
}
db[0] = 0;
}
bool tscIsTwoStageSTableQuery(SQueryInfo* pQueryInfo, int32_t tableIndex) {
if (pQueryInfo == NULL) {
return false;
......@@ -420,7 +405,7 @@ void tscResetSqlCmdObj(SSqlCmd* pCmd) {
}
void tscFreeSqlResult(SSqlObj* pSql) {
tscDestroyLocalReducer(pSql);
tscDestroyLocalMerger(pSql);
SSqlRes* pRes = &pSql->res;
tscDestroyResPointerInfo(pRes);
......@@ -612,15 +597,13 @@ int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) {
// todo refactor
// set the correct table meta object, the table meta has been locked in pDataBlocks, so it must be in the cache
if (pTableMetaInfo->pTableMeta != pDataBlock->pTableMeta) {
tstrncpy(pTableMetaInfo->name, pDataBlock->tableName, sizeof(pTableMetaInfo->name));
tNameAssign(&pTableMetaInfo->name, &pDataBlock->tableName);
if (pTableMetaInfo->pTableMeta != NULL) {
tfree(pTableMetaInfo->pTableMeta);
}
pTableMetaInfo->pTableMeta = tscTableMetaClone(pDataBlock->pTableMeta);
} else {
assert(strncmp(pTableMetaInfo->name, pDataBlock->tableName, tListLen(pDataBlock->tableName)) == 0);
pTableMetaInfo->pTableMeta = tscTableMetaDup(pDataBlock->pTableMeta);
}
/*
......@@ -655,7 +638,7 @@ int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) {
* @param dataBlocks
* @return
*/
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, const char* name,
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, SName* name,
STableMeta* pTableMeta, STableDataBlocks** dataBlocks) {
STableDataBlocks* dataBuf = (STableDataBlocks*)calloc(1, sizeof(STableDataBlocks));
if (dataBuf == NULL) {
......@@ -683,18 +666,18 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff
dataBuf->size = startOffset;
dataBuf->tsSource = -1;
tstrncpy(dataBuf->tableName, name, sizeof(dataBuf->tableName));
tNameAssign(&dataBuf->tableName, name);
//Here we keep the tableMeta to avoid it to be remove by other threads.
dataBuf->pTableMeta = tscTableMetaClone(pTableMeta);
dataBuf->pTableMeta = tscTableMetaDup(pTableMeta);
assert(initialSize > 0 && pTableMeta != NULL && dataBuf->pTableMeta != NULL);
*dataBlocks = dataBuf;
return TSDB_CODE_SUCCESS;
}
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize, const char* tableId, STableMeta* pTableMeta,
STableDataBlocks** dataBlocks, SArray* pBlockList) {
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize,
SName* name, STableMeta* pTableMeta, STableDataBlocks** dataBlocks, SArray* pBlockList) {
*dataBlocks = NULL;
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pHashList, (const char*)&id, sizeof(id));
if (t1 != NULL) {
......@@ -702,7 +685,7 @@ int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, i
}
if (*dataBlocks == NULL) {
int32_t ret = tscCreateDataBlock((size_t)size, rowSize, startOffset, tableId, pTableMeta, dataBlocks);
int32_t ret = tscCreateDataBlock((size_t)size, rowSize, startOffset, name, pTableMeta, dataBlocks);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
......@@ -803,7 +786,7 @@ static void extractTableNameList(SSqlCmd* pCmd, bool freeBlockMap) {
int32_t i = 0;
while(p1) {
STableDataBlocks* pBlocks = *p1;
pCmd->pTableNameList[i++] = strndup(pBlocks->tableName, TSDB_TABLE_FNAME_LEN);
pCmd->pTableNameList[i++] = tNameDup(&pBlocks->tableName);
p1 = taosHashIterate(pCmd->pTableBlockHashList, p1);
}
......@@ -828,7 +811,7 @@ int32_t tscMergeTableDataBlocks(SSqlObj* pSql, bool freeBlockMap) {
STableDataBlocks* dataBuf = NULL;
int32_t ret = tscGetDataBlockFromList(pVnodeDataBlockHashList, pOneTableBlock->vgId, TSDB_PAYLOAD_SIZE,
INSERT_HEAD_SIZE, 0, pOneTableBlock->tableName, pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList);
INSERT_HEAD_SIZE, 0, &pOneTableBlock->tableName, pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList);
if (ret != TSDB_CODE_SUCCESS) {
tscError("%p failed to prepare the data block buffer for merging table data, code:%d", pSql, ret);
taosHashCleanup(pVnodeDataBlockHashList);
......@@ -861,8 +844,8 @@ int32_t tscMergeTableDataBlocks(SSqlObj* pSql, bool freeBlockMap) {
tscSortRemoveDataBlockDupRows(pOneTableBlock);
char* ekey = (char*)pBlocks->data + pOneTableBlock->rowSize*(pBlocks->numOfRows-1);
tscDebug("%p name:%s, sid:%d rows:%d sversion:%d skey:%" PRId64 ", ekey:%" PRId64, pSql, pOneTableBlock->tableName,
tscDebug("%p name:%s, name:%d rows:%d sversion:%d skey:%" PRId64 ", ekey:%" PRId64, pSql, tNameGetTableName(&pOneTableBlock->tableName),
pBlocks->tid, pBlocks->numOfRows, pBlocks->sversion, GET_INT64_VAL(pBlocks->data), GET_INT64_VAL(ekey));
int32_t len = pBlocks->numOfRows * (pOneTableBlock->rowSize + expandSize) + sizeof(STColumn) * tscGetNumOfColumns(pOneTableBlock->pTableMeta);
......@@ -1310,7 +1293,7 @@ SColumn* tscColumnClone(const SColumn* src) {
dst->colIndex = src->colIndex;
dst->numOfFilters = src->numOfFilters;
dst->filterInfo = tscFilterInfoClone(src->filterInfo, src->numOfFilters);
dst->filterInfo = tFilterInfoDup(src->filterInfo, src->numOfFilters);
return dst;
}
......@@ -1816,10 +1799,10 @@ void tscVgroupTableCopy(SVgroupTableInfo* info, SVgroupTableInfo* pInfo) {
info->vgInfo.epAddr[j].fqdn = strdup(pInfo->vgInfo.epAddr[j].fqdn);
}
info->itemList = taosArrayClone(pInfo->itemList);
info->itemList = taosArrayDup(pInfo->itemList);
}
SArray* tscVgroupTableInfoClone(SArray* pVgroupTables) {
SArray* tscVgroupTableInfoDup(SArray* pVgroupTables) {
if (pVgroupTables == NULL) {
return NULL;
}
......@@ -1850,7 +1833,7 @@ void clearAllTableMetaInfo(SQueryInfo* pQueryInfo) {
tfree(pQueryInfo->pTableMetaInfo);
}
STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, const char* name, STableMeta* pTableMeta,
STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, SName* name, STableMeta* pTableMeta,
SVgroupsInfo* vgroupList, SArray* pTagCols, SArray* pVgroupTables) {
void* pAlloc = realloc(pQueryInfo->pTableMetaInfo, (pQueryInfo->numOfTables + 1) * POINTER_BYTES);
if (pAlloc == NULL) {
......@@ -1868,7 +1851,7 @@ STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, const char* name, ST
pQueryInfo->pTableMetaInfo[pQueryInfo->numOfTables] = pTableMetaInfo;
if (name != NULL) {
tstrncpy(pTableMetaInfo->name, name, sizeof(pTableMetaInfo->name));
tNameAssign(&pTableMetaInfo->name, name);
}
pTableMetaInfo->pTableMeta = pTableMeta;
......@@ -1887,7 +1870,7 @@ STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, const char* name, ST
tscColumnListCopy(pTableMetaInfo->tagColList, pTagCols, -1);
}
pTableMetaInfo->pVgroupTables = tscVgroupTableInfoClone(pVgroupTables);
pTableMetaInfo->pVgroupTables = tscVgroupTableInfoDup(pVgroupTables);
pQueryInfo->numOfTables += 1;
return pTableMetaInfo;
......@@ -1965,7 +1948,7 @@ SSqlObj* createSimpleSubObj(SSqlObj* pSql, __async_cb_func_t fp, void* param, in
assert(pSql->cmd.clauseIndex == 0);
STableMetaInfo* pMasterTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, pSql->cmd.clauseIndex, 0);
tscAddTableMetaInfo(pQueryInfo, pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
tscAddTableMetaInfo(pQueryInfo, &pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
registerSqlObj(pNew);
return pNew;
......@@ -2070,7 +2053,7 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
pNewQueryInfo->groupbyExpr = pQueryInfo->groupbyExpr;
if (pQueryInfo->groupbyExpr.columnInfo != NULL) {
pNewQueryInfo->groupbyExpr.columnInfo = taosArrayClone(pQueryInfo->groupbyExpr.columnInfo);
pNewQueryInfo->groupbyExpr.columnInfo = taosArrayDup(pQueryInfo->groupbyExpr.columnInfo);
if (pNewQueryInfo->groupbyExpr.columnInfo == NULL) {
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
......@@ -2121,27 +2104,26 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
pNew->param = param;
pNew->maxRetry = TSDB_MAX_REPLICA;
char* name = pTableMetaInfo->name;
STableMetaInfo* pFinalInfo = NULL;
if (pPrevSql == NULL) {
STableMeta* pTableMeta = tscTableMetaClone(pTableMetaInfo->pTableMeta);
STableMeta* pTableMeta = tscTableMetaDup(pTableMetaInfo->pTableMeta);
assert(pTableMeta != NULL);
pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, name, pTableMeta, pTableMetaInfo->vgroupList,
pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, &pTableMetaInfo->name, pTableMeta, pTableMetaInfo->vgroupList,
pTableMetaInfo->tagColList, pTableMetaInfo->pVgroupTables);
} else { // transfer the ownership of pTableMeta to the newly create sql object.
STableMetaInfo* pPrevInfo = tscGetTableMetaInfoFromCmd(&pPrevSql->cmd, pPrevSql->cmd.clauseIndex, 0);
STableMeta* pPrevTableMeta = tscTableMetaClone(pPrevInfo->pTableMeta);
STableMeta* pPrevTableMeta = tscTableMetaDup(pPrevInfo->pTableMeta);
SVgroupsInfo* pVgroupsInfo = pPrevInfo->vgroupList;
pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, name, pPrevTableMeta, pVgroupsInfo, pTableMetaInfo->tagColList,
pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, &pTableMetaInfo->name, pPrevTableMeta, pVgroupsInfo, pTableMetaInfo->tagColList,
pTableMetaInfo->pVgroupTables);
}
// this case cannot be happened
if (pFinalInfo->pTableMeta == NULL) {
tscError("%p new subquery failed since no tableMeta, name:%s", pSql, name);
tscError("%p new subquery failed since no tableMeta, name:%s", pSql, tNameGetTableName(&pTableMetaInfo->name));
if (pPrevSql != NULL) { // pass the previous error to client
assert(pPrevSql->res.code != TSDB_CODE_SUCCESS);
......@@ -2166,7 +2148,7 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
"%p new subquery:%p, tableIndex:%d, vgroupIndex:%d, type:%d, exprInfo:%" PRIzu ", colList:%" PRIzu ","
"fieldInfo:%d, name:%s, qrang:%" PRId64 " - %" PRId64 " order:%d, limit:%" PRId64,
pSql, pNew, tableIndex, pTableMetaInfo->vgroupIndex, pNewQueryInfo->type, tscSqlExprNumOfExprs(pNewQueryInfo),
size, pNewQueryInfo->fieldsInfo.numOfOutput, pFinalInfo->name, pNewQueryInfo->window.skey,
size, pNewQueryInfo->fieldsInfo.numOfOutput, tNameGetTableName(&pFinalInfo->name), pNewQueryInfo->window.skey,
pNewQueryInfo->window.ekey, pNewQueryInfo->order.order, pNewQueryInfo->limit.limit);
tscPrintSelectClause(pNew, 0);
......@@ -2203,7 +2185,7 @@ void tscDoQuery(SSqlObj* pSql) {
}
if (pCmd->dataSourceType == DATA_FROM_DATA_FILE) {
tscProcessMultiVnodesImportFromFile(pSql);
tscImportDataFromFile(pSql);
} else {
SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);
uint16_t type = pQueryInfo->type;
......@@ -2303,7 +2285,6 @@ int32_t tscSQLSyntaxErrMsg(char* msg, const char* additionalInfo, const char* s
}
return TSDB_CODE_TSC_SQL_SYNTAX_ERROR;
}
int32_t tscInvalidSQLErrMsg(char* msg, const char* additionalInfo, const char* sql) {
......@@ -2701,7 +2682,7 @@ uint32_t tscGetTableMetaMaxSize() {
return sizeof(STableMeta) + TSDB_MAX_COLUMNS * sizeof(SSchema);
}
STableMeta* tscTableMetaClone(STableMeta* pTableMeta) {
STableMeta* tscTableMetaDup(STableMeta* pTableMeta) {
assert(pTableMeta != NULL);
uint32_t size = tscGetTableMetaSize(pTableMeta);
STableMeta* p = calloc(1, size);
......
......@@ -21,6 +21,20 @@ typedef struct SColumnInfoData {
void* pData; // the corresponding block data in memory
} SColumnInfoData;
#define TSDB_DB_NAME_T 1
#define TSDB_TABLE_NAME_T 2
#define T_NAME_ACCT 0x1u
#define T_NAME_DB 0x2u
#define T_NAME_TABLE 0x4u
typedef struct SName {
uint8_t type; //db_name_t, table_name_t
char acctId[TSDB_ACCT_ID_LEN];
char dbname[TSDB_DB_NAME_LEN];
char tname[TSDB_TABLE_NAME_LEN];
} SName;
void extractTableName(const char *tableId, char *name);
char* extractDBName(const char *tableId, char *name);
......@@ -35,9 +49,9 @@ SSchema tGetUserSpecifiedColumnSchema(tVariant* pVal, SStrToken* exprStr, const
bool tscValidateTableNameLength(size_t len);
SColumnFilterInfo* tscFilterInfoClone(const SColumnFilterInfo* src, int32_t numOfFilters);
SColumnFilterInfo* tFilterInfoDup(const SColumnFilterInfo* src, int32_t numOfFilters);
SSchema tscGetTbnameColumnSchema();
SSchema tGetTbnameColumnSchema();
/**
* check if the schema is valid or not, including following aspects:
......@@ -51,6 +65,28 @@ SSchema tscGetTbnameColumnSchema();
* @param numOfCols
* @return
*/
bool isValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags);
bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags);
int32_t tNameExtractFullName(const SName* name, char* dst);
int32_t tNameLen(const SName* name);
SName* tNameDup(const SName* name);
bool tIsValidName(const SName* name);
const char* tNameGetTableName(const SName* name);
int32_t tNameGetDbName(const SName* name, char* dst);
int32_t tNameGetFullDbName(const SName* name, char* dst);
bool tNameIsEmpty(const SName* name);
void tNameAssign(SName* dst, const SName* src);
int32_t tNameFromString(SName* dst, const char* str, uint32_t type);
int32_t tNameSetAcctId(SName* dst, const char* acct);
int32_t tNameSetDbName(SName* dst, const char* acct, SStrToken* dbToken);
#endif // TDENGINE_NAME_H
......@@ -3,31 +3,12 @@
#include "tname.h"
#include "tstoken.h"
#include "ttokendef.h"
#include "tvariant.h"
#define VALIDNUMOFCOLS(x) ((x) >= TSDB_MIN_COLUMNS && (x) <= TSDB_MAX_COLUMNS)
#define VALIDNUMOFCOLS(x) ((x) >= TSDB_MIN_COLUMNS && (x) <= TSDB_MAX_COLUMNS)
#define VALIDNUMOFTAGS(x) ((x) >= 0 && (x) <= TSDB_MAX_TAGS)
// todo refactor
UNUSED_FUNC static FORCE_INLINE const char* skipSegments(const char* input, char delim, int32_t num) {
for (int32_t i = 0; i < num; ++i) {
while (*input != 0 && *input++ != delim) {
};
}
return input;
}
UNUSED_FUNC static FORCE_INLINE size_t copy(char* dst, const char* src, char delimiter) {
size_t len = 0;
while (*src != delimiter && *src != 0) {
*dst++ = *src++;
len++;
}
return len;
}
#define VALID_NAME_TYPE(x) ((x) == TSDB_DB_NAME_T || (x) == TSDB_TABLE_NAME_T)
void extractTableName(const char* tableId, char* name) {
size_t s1 = strcspn(tableId, &TS_PATH_DELIMITER[0]);
......@@ -85,7 +66,7 @@ bool tscValidateTableNameLength(size_t len) {
return len < TSDB_TABLE_NAME_LEN;
}
SColumnFilterInfo* tscFilterInfoClone(const SColumnFilterInfo* src, int32_t numOfFilters) {
SColumnFilterInfo* tFilterInfoDup(const SColumnFilterInfo* src, int32_t numOfFilters) {
if (numOfFilters == 0) {
assert(src == NULL);
return NULL;
......@@ -200,7 +181,7 @@ void extractTableNameFromToken(SStrToken* pToken, SStrToken* pTable) {
}
}
SSchema tscGetTbnameColumnSchema() {
SSchema tGetTbnameColumnSchema() {
struct SSchema s = {
.colId = TSDB_TBNAME_COLUMN_INDEX,
.type = TSDB_DATA_TYPE_BINARY,
......@@ -248,7 +229,7 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen
return rowLen <= maxLen;
}
bool isValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags) {
bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags) {
if (!VALIDNUMOFCOLS(numOfCols)) {
return false;
}
......@@ -272,3 +253,179 @@ bool isValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags
return true;
}
int32_t tNameExtractFullName(const SName* name, char* dst) {
assert(name != NULL && dst != NULL);
// invalid full name format, abort
if (!tIsValidName(name)) {
return -1;
}
int32_t len = snprintf(dst, TSDB_ACCT_ID_LEN + 1 + TSDB_DB_NAME_LEN, "%s.%s", name->acctId, name->dbname);
size_t tnameLen = strlen(name->tname);
if (tnameLen > 0) {
assert(name->type == TSDB_TABLE_NAME_T);
dst[len] = TS_PATH_DELIMITER[0];
memcpy(dst + len + 1, name->tname, tnameLen);
dst[len + tnameLen + 1] = 0;
}
return 0;
}
int32_t tNameLen(const SName* name) {
assert(name != NULL);
int32_t len = (int32_t) strlen(name->acctId);
int32_t len1 = (int32_t) strlen(name->dbname);
int32_t len2 = (int32_t) strlen(name->tname);
if (name->type == TSDB_DB_NAME_T) {
assert(len2 == 0);
return len + len1 + TS_PATH_DELIMITER_LEN;
} else {
assert(len2 > 0);
return len + len1 + len2 + TS_PATH_DELIMITER_LEN * 2;
}
}
bool tIsValidName(const SName* name) {
assert(name != NULL);
if (!VALID_NAME_TYPE(name->type)) {
return false;
}
if (strlen(name->acctId) <= 0) {
return false;
}
if (name->type == TSDB_DB_NAME_T) {
return strlen(name->dbname) > 0;
} else {
return strlen(name->dbname) > 0 && strlen(name->tname) > 0;
}
}
SName* tNameDup(const SName* name) {
assert(name != NULL);
SName* p = calloc(1, sizeof(SName));
memcpy(p, name, sizeof(SName));
return p;
}
int32_t tNameGetDbName(const SName* name, char* dst) {
assert(name != NULL && dst != NULL);
strncpy(dst, name->dbname, tListLen(name->dbname));
return 0;
}
int32_t tNameGetFullDbName(const SName* name, char* dst) {
assert(name != NULL && dst != NULL);
snprintf(dst, TSDB_ACCT_ID_LEN + TS_PATH_DELIMITER_LEN + TSDB_DB_NAME_LEN,
"%s.%s", name->acctId, name->dbname);
return 0;
}
bool tNameIsEmpty(const SName* name) {
assert(name != NULL);
return name->type == 0 || strlen(name->acctId) <= 0;
}
const char* tNameGetTableName(const SName* name) {
assert(name != NULL && name->type == TSDB_TABLE_NAME_T);
return &name->tname[0];
}
void tNameAssign(SName* dst, const SName* src) {
memcpy(dst, src, sizeof(SName));
}
int32_t tNameSetDbName(SName* dst, const char* acct, SStrToken* dbToken) {
assert(dst != NULL && dbToken != NULL && acct != NULL);
// too long account id or too long db name
if (strlen(acct) >= tListLen(dst->acctId) || dbToken->n >= tListLen(dst->dbname)) {
return -1;
}
dst->type = TSDB_DB_NAME_T;
tstrncpy(dst->acctId, acct, tListLen(dst->acctId));
tstrncpy(dst->dbname, dbToken->z, dbToken->n + 1);
return 0;
}
int32_t tNameSetAcctId(SName* dst, const char* acct) {
assert(dst != NULL && acct != NULL);
// too long account id
if (strlen(acct) >= tListLen(dst->acctId)) {
return -1;
}
tstrncpy(dst->acctId, acct, tListLen(dst->acctId));
return 0;
}
int32_t tNameFromString(SName* dst, const char* str, uint32_t type) {
assert(dst != NULL && str != NULL && strlen(str) > 0);
char* p = NULL;
if ((type & T_NAME_ACCT) == T_NAME_ACCT) {
p = strstr(str, TS_PATH_DELIMITER);
if (p == NULL) {
return -1;
}
int32_t len = (int32_t)(p - str);
// empty or too long account id
if (len >= tListLen(dst->acctId) || len == 0) {
return -1;
}
memcpy (dst->acctId, str, len);
dst->acctId[len] = 0;
}
if ((type & T_NAME_DB) == T_NAME_DB) {
dst->type = TSDB_DB_NAME_T;
char* start = (char*)((p == NULL)? str:(p+1));
int32_t len = 0;
p = strstr(start, TS_PATH_DELIMITER);
if (p == NULL) {
len = (int32_t) strlen(start);
} else {
len = (int32_t) (p - start);
}
// empty or too long db name
if (len >= tListLen(dst->dbname) || len == 0) {
return -1;
}
memcpy (dst->dbname, start, len);
dst->dbname[len] = 0;
}
if ((type & T_NAME_TABLE) == T_NAME_TABLE) {
dst->type = TSDB_TABLE_NAME_T;
char* start = (char*) ((p == NULL)? str: (p+1));
int32_t len = (int32_t) strlen(start);
// empty or too long table name
if (len >= tListLen(dst->tname) || len == 0) {
return -1;
}
memcpy (dst->tname, start, len);
dst->tname[len] = 0;
}
return 0;
}
......@@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED)
ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME}
POST_BUILD
COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.16-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.17-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMENT "build jdbc driver")
ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME})
......
......@@ -5,7 +5,7 @@
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.16</version>
<version>2.0.17</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
......
......@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.16</version>
<version>2.0.17</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
<url>https://github.com/taosdata/TDengine/tree/master/src/connector/jdbc</url>
......
......@@ -24,7 +24,6 @@ import java.sql.SQLException;
*/
public class CatalogResultSet extends TSDBResultSetWrapper {
public CatalogResultSet(ResultSet resultSet) {
super.setOriginalResultSet(resultSet);
}
......
/***************************************************************************
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*****************************************************************************/
package com.taosdata.jdbc;
import java.sql.ResultSet;
/*
* TDengine only supports a subset of standard SQL, so this implementation of the
* standard JDBC API contains a number of adjustments made for compatibility.
*/
public class GetColumnsResultSet extends TSDBResultSetWrapper {
private String catalog;
private String schemaPattern;
private String tableNamePattern;
private String columnNamePattern;
public GetColumnsResultSet(ResultSet resultSet, String catalog, String schemaPattern, String tableNamePattern, String columnNamePattern) {
super.setOriginalResultSet(resultSet);
this.catalog = catalog;
this.schemaPattern = schemaPattern;
this.tableNamePattern = tableNamePattern;
this.columnNamePattern = columnNamePattern;
}
@Override
public String getString(int columnIndex) {
switch (columnIndex) {
case 1:
return catalog;
case 2:
return null;
case 3:
return tableNamePattern;
default:
return null;
}
}
}
......@@ -620,6 +620,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
ResultSet tables = stmt.executeQuery("show tables");
while (tables.next()) {
TSDBResultSetRowData rowData = new TSDBResultSetRowData(10);
rowData.setString(0, dbname);
rowData.setString(2, tables.getString("table_name"));
rowData.setString(3, "TABLE");
rowData.setString(4, "");
......@@ -629,6 +630,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
ResultSet stables = stmt.executeQuery("show stables");
while (stables.next()) {
TSDBResultSetRowData rowData = new TSDBResultSetRowData(10);
rowData.setString(0, dbname);
rowData.setString(2, stables.getString("name"));
rowData.setString(3, "TABLE");
rowData.setString(4, "STABLE");
......
/***************************************************************************
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*****************************************************************************/
package com.taosdata.jdbc;
import java.sql.ParameterMetaData;
import java.sql.SQLException;
public class TSDBParameterMetaData implements ParameterMetaData {
@Override
public int getParameterCount() throws SQLException {
return 0;
}
@Override
public int isNullable(int param) throws SQLException {
return 0;
}
@Override
public boolean isSigned(int param) throws SQLException {
return false;
}
@Override
public int getPrecision(int param) throws SQLException {
return 0;
}
@Override
public int getScale(int param) throws SQLException {
return 0;
}
@Override
public int getParameterType(int param) throws SQLException {
return 0;
}
@Override
public String getParameterTypeName(int param) throws SQLException {
return null;
}
@Override
public String getParameterClassName(int param) throws SQLException {
return null;
}
@Override
public int getParameterMode(int param) throws SQLException {
return 0;
}
@Override
public <T> T unwrap(Class<T> iface) throws SQLException {
return null;
}
@Override
public boolean isWrapperFor(Class<?> iface) throws SQLException {
return false;
}
}
/***************************************************************************
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*****************************************************************************/
package com.taosdata.jdbc;
public interface TSDBSubscribeCallBack {
void invoke(TSDBResultSet resultSet);
}
......@@ -44,6 +44,8 @@ public class RestfulDriver extends AbstractTaosDriver {
String result = HttpClientPoolUtil.execute(loginUrl);
JSONObject jsonResult = JSON.parseObject(result);
String status = jsonResult.getString("status");
String token = jsonResult.getString("desc");
HttpClientPoolUtil.token = token;
if (!status.equals("succ")) {
throw new SQLException(jsonResult.getString("desc"));
}
......
......@@ -23,6 +23,7 @@ import java.nio.charset.Charset;
public class HttpClientPoolUtil {
public static PoolingHttpClientConnectionManager cm = null;
public static CloseableHttpClient httpClient = null;
public static String token = "cm9vdDp0YW9zZGF0YQ==";
/**
* 默认content 类型
*/
......@@ -61,9 +62,7 @@ public class HttpClientPoolUtil {
try {
return Long.parseLong(value) * 1000;
} catch (Exception e) {
new Exception(
"format KeepAlive timeout exception, exception:" + e.toString())
.printStackTrace();
new Exception("format KeepAlive timeout exception, exception:" + e.toString()).printStackTrace();
}
}
}
......@@ -96,7 +95,7 @@ public class HttpClientPoolUtil {
initPools();
}
method = (HttpEntityEnclosingRequestBase) getRequest(uri, HttpPost.METHOD_NAME, DEFAULT_CONTENT_TYPE, 0);
method.setHeader("Authorization", "Basic cm9vdDp0YW9zZGF0YQ==");
method.setHeader("Authorization", "Taosd " + token);
method.setHeader("Content-Type", "text/plain");
method.setEntity(new StringEntity(data, Charset.forName("UTF-8")));
HttpContext context = HttpClientContext.create();
......
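
With this change the RESTful connector logs in once, keeps the `desc` field of the login response as a token, and sends it on every request as `Authorization: Taosd <token>` instead of a hard-coded Basic credential. The sketch below shows what an equivalent request could look like from C with libcurl; the `/rest/sql` path and the function name are assumptions for illustration only, while the header format, content type, and port 6041 come from the code above.

```c
#include <stdio.h>
#include <curl/curl.h>

/* Send one SQL statement over the RESTful interface, authenticating with a
 * previously obtained token (the "desc" field of the login response). */
static int rest_query(const char *host, const char *token, const char *sql) {
  CURL *curl = curl_easy_init();
  if (curl == NULL) return -1;

  char url[256];
  snprintf(url, sizeof(url), "http://%s:6041/rest/sql", host);     /* assumed endpoint path */

  char auth[512];
  snprintf(auth, sizeof(auth), "Authorization: Taosd %s", token);  /* matches the header set above */

  struct curl_slist *headers = NULL;
  headers = curl_slist_append(headers, auth);
  headers = curl_slist_append(headers, "Content-Type: text/plain");

  curl_easy_setopt(curl, CURLOPT_URL, url);
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, sql);

  CURLcode rc = curl_easy_perform(curl);

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return (rc == CURLE_OK) ? 0 : -1;
}
```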
......@@ -642,6 +642,7 @@ public class TSDBDatabaseMetaDataTest {
ResultSet tables = metaData.getTables("log", "", null, null);
ResultSetMetaData metaData = tables.getMetaData();
while (tables.next()) {
System.out.print(metaData.getColumnLabel(1) + ":" + tables.getString(1) + "\t");
System.out.print(metaData.getColumnLabel(3) + ":" + tables.getString(3) + "\t");
System.out.print(metaData.getColumnLabel(4) + ":" + tables.getString(4) + "\t");
System.out.print(metaData.getColumnLabel(5) + ":" + tables.getString(5) + "\n");
......
package com.taosdata.jdbc.rs;
import org.junit.Before;
import org.junit.Test;
import java.sql.*;
public class AuthenticationTest {
// private static final String host = "127.0.0.1";
private static final String host = "master";
private static final String user = "root";
private static final String password = "123456";
private Connection conn;
@Test
public void test() {
// change password
try {
conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=taosdata");
Statement stmt = conn.createStatement();
stmt.execute("alter user " + user + " pass '" + password + "'");
stmt.close();
conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
// use new to login and execute query
try {
conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=" + password);
Statement stmt = conn.createStatement();
stmt.execute("show databases");
ResultSet rs = stmt.getResultSet();
ResultSetMetaData meta = rs.getMetaData();
while (rs.next()) {
for (int i = 1; i <= meta.getColumnCount(); i++) {
System.out.print(meta.getColumnLabel(i) + ":" + rs.getString(i) + "\t");
}
System.out.println();
}
} catch (SQLException e) {
e.printStackTrace();
}
// change password back
try {
conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=" + password);
Statement stmt = conn.createStatement();
stmt.execute("alter user " + user + " pass 'taosdata'");
stmt.close();
conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
@Before
public void before() {
try {
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
} catch (ClassNotFoundException e) {
e.printStackTrace();
}
}
}
......@@ -6,6 +6,7 @@ import org.junit.Test;
import java.sql.*;
public class RestfulDriverTest {
private static final String host = "master";
@Test
public void connect() {
......@@ -15,9 +16,9 @@ public class RestfulDriverTest {
@Test
public void acceptsURL() throws SQLException {
Driver driver = new RestfulDriver();
boolean isAccept = driver.acceptsURL("jdbc:TAOS-RS://master:6041");
boolean isAccept = driver.acceptsURL("jdbc:TAOS-RS://" + host + ":6041");
Assert.assertTrue(isAccept);
isAccept = driver.acceptsURL("jdbc:TAOS://master:6041");
isAccept = driver.acceptsURL("jdbc:TAOS://" + host + ":6041");
Assert.assertFalse(isAccept);
}
......@@ -26,6 +27,9 @@ public class RestfulDriverTest {
Driver driver = new RestfulDriver();
final String url = "";
DriverPropertyInfo[] propertyInfo = driver.getPropertyInfo(url, null);
for (DriverPropertyInfo prop : propertyInfo) {
System.out.println(prop);
}
}
@Test
......
package com.taosdata.jdbc.rs;
import org.junit.*;
import org.junit.runners.MethodSorters;
......@@ -10,12 +9,13 @@ import java.util.Random;
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class RestfulJDBCTest {
private static final String host = "master";
private Connection connection;
@Before
public void before() throws ClassNotFoundException, SQLException {
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
connection = DriverManager.getConnection("jdbc:TAOS-RS://master:6041/restful_test?user=root&password=taosdata");
connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata");
}
@After
......
......@@ -21,4 +21,5 @@ public class SqlSyntaxValidatorTest {
Assert.assertTrue(SqlSyntaxValidator.isUseSql("drop database test"));
Assert.assertTrue(SqlSyntaxValidator.isUseSql("drop database if exist test"));
}
}
\ No newline at end of file
......@@ -291,7 +291,7 @@ int32_t dnodeInitTelemetry() {
int32_t code = pthread_create(&tsTelemetryThread, &attr, telemetryThread, NULL);
pthread_attr_destroy(&attr);
if (code != 0) {
dTrace("failed to create telemetry thread, reason:%s", strerror(errno));
dTrace("failed to create telemetry thread, reason:%s", strerror(code));
}
dInfo("dnode telemetry is initialized");
......
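
The corrected log call reflects that `pthread_create` reports failure through its return value and does not set `errno`, so the message should format `strerror(code)` rather than `strerror(errno)`. A minimal standalone illustration of the pattern, assuming nothing beyond POSIX threads:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg) { (void)arg; return NULL; }

int main(void) {
  pthread_t tid;
  int code = pthread_create(&tid, NULL, worker, NULL);
  if (code != 0) {
    // pthread_create returns the error number directly; errno is not touched
    fprintf(stderr, "failed to create thread, reason:%s\n", strerror(code));
    return 1;
  }
  pthread_join(tid, NULL);
  return 0;
}
```

The same correction is applied again further down for the TCP accept thread in `taosInitTcpServer`.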
......@@ -268,8 +268,7 @@ typedef struct {
typedef struct {
int32_t len; // one create table message
char tableFname[TSDB_TABLE_FNAME_LEN];
char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN];
char tableName[TSDB_TABLE_FNAME_LEN];
int8_t igExists;
int8_t getMeta;
int16_t numOfTags;
......@@ -285,7 +284,7 @@ typedef struct {
} SCMCreateTableMsg;
typedef struct {
char tableFname[TSDB_TABLE_FNAME_LEN];
char name[TSDB_TABLE_FNAME_LEN];
int8_t igNotExists;
} SCMDropTableMsg;
......
......@@ -227,9 +227,6 @@
#define TK_SPACE 300
#define TK_COMMENT 301
#define TK_ILLEGAL 302
......
......@@ -4507,6 +4507,7 @@ void setParaFromArg(){
strncpy(g_Dbs.db[0].superTbls[0].sTblName, "meters", MAX_TB_NAME_SIZE);
g_Dbs.db[0].superTbls[0].childTblCount = g_args.num_of_tables;
g_Dbs.threadCount = g_args.num_of_threads;
g_Dbs.threadCountByCreateTbl = 1;
g_Dbs.queryMode = g_args.mode;
g_Dbs.db[0].superTbls[0].autoCreateTable = PRE_CREATE_SUBTBL;
......
......@@ -32,7 +32,7 @@ int32_t mnodeInitDbs();
void mnodeCleanupDbs();
int64_t mnodeGetDbNum();
SDbObj *mnodeGetDb(char *db);
SDbObj *mnodeGetDbByTableId(char *db);
SDbObj *mnodeGetDbByTableName(char *db);
void * mnodeGetNextDb(void *pIter, SDbObj **pDb);
void mnodeCancelGetNextDb(void *pIter);
void mnodeIncDbRef(SDbObj *pDb);
......
......@@ -199,18 +199,13 @@ void mnodeDecDbRef(SDbObj *pDb) {
return sdbDecRef(tsDbSdb, pDb);
}
SDbObj *mnodeGetDbByTableId(char *tableId) {
char db[TSDB_TABLE_FNAME_LEN], *pos;
// tableId format should be : acct.db.table
pos = strstr(tableId, TS_PATH_DELIMITER);
assert(NULL != pos);
pos = strstr(pos + 1, TS_PATH_DELIMITER);
assert(NULL != pos);
memset(db, 0, sizeof(db));
strncpy(db, tableId, pos - tableId);
SDbObj *mnodeGetDbByTableName(char *tableName) {
SName name = {0};
tNameFromString(&name, tableName, T_NAME_ACCT|T_NAME_DB|T_NAME_TABLE);
// validate the tableName?
char db[TSDB_TABLE_FNAME_LEN] = {0};
tNameGetFullDbName(&name, db);
return mnodeGetDb(db);
}
......
......@@ -297,7 +297,7 @@ static int32_t mnodeChildTableActionRestored() {
pIter = mnodeGetNextChildTable(pIter, &pTable);
if (pTable == NULL) break;
SDbObj *pDb = mnodeGetDbByTableId(pTable->info.tableId);
SDbObj *pDb = mnodeGetDbByTableName(pTable->info.tableId);
if (pDb == NULL || pDb->status != TSDB_DB_STATUS_READY) {
mError("ctable:%s, failed to get db or db in dropping, discard it", pTable->info.tableId);
SSdbRow desc = {.type = SDB_OPER_LOCAL, .pObj = pTable, .pTable = tsChildTableSdb};
......@@ -443,7 +443,7 @@ static int32_t mnodeSuperTableActionDestroy(SSdbRow *pRow) {
static int32_t mnodeSuperTableActionInsert(SSdbRow *pRow) {
SSTableObj *pStable = pRow->pObj;
SDbObj *pDb = mnodeGetDbByTableId(pStable->info.tableId);
SDbObj *pDb = mnodeGetDbByTableName(pStable->info.tableId);
if (pDb != NULL && pDb->status == TSDB_DB_STATUS_READY) {
mnodeAddSuperTableIntoDb(pDb);
}
......@@ -455,7 +455,7 @@ static int32_t mnodeSuperTableActionInsert(SSdbRow *pRow) {
static int32_t mnodeSuperTableActionDelete(SSdbRow *pRow) {
SSTableObj *pStable = pRow->pObj;
SDbObj *pDb = mnodeGetDbByTableId(pStable->info.tableId);
SDbObj *pDb = mnodeGetDbByTableName(pStable->info.tableId);
if (pDb != NULL) {
mnodeRemoveSuperTableFromDb(pDb);
mnodeDropAllChildTablesInStable((SSTableObj *)pStable);
......@@ -748,9 +748,12 @@ void mnodeDestroySubMsg(SMnodeMsg *pSubMsg) {
}
static int32_t mnodeValidateCreateTableMsg(SCreateTableMsg *pCreateTable, SMnodeMsg *pMsg) {
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDb(pCreateTable->db);
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to create, db not selected", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableFname);
pMsg->pDb = mnodeGetDbByTableName(pCreateTable->tableName);
}
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to create, db not selected", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableName);
return TSDB_CODE_MND_DB_NOT_SELECTED;
}
......@@ -759,28 +762,28 @@ static int32_t mnodeValidateCreateTableMsg(SCreateTableMsg *pCreateTable, SMnode
return TSDB_CODE_MND_DB_IN_DROPPING;
}
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pCreateTable->tableFname);
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pCreateTable->tableName);
if (pMsg->pTable != NULL && pMsg->retry == 0) {
if (pCreateTable->getMeta) {
mDebug("msg:%p, app:%p table:%s, continue to get meta", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableFname);
mDebug("msg:%p, app:%p table:%s, continue to get meta", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableName);
return mnodeGetChildTableMeta(pMsg);
} else if (pCreateTable->igExists) {
mDebug("msg:%p, app:%p table:%s, is already exist", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableFname);
mDebug("msg:%p, app:%p table:%s, is already exist", pMsg, pMsg->rpcMsg.ahandle, pCreateTable->tableName);
return TSDB_CODE_SUCCESS;
} else {
mError("msg:%p, app:%p table:%s, failed to create, table already exist", pMsg, pMsg->rpcMsg.ahandle,
pCreateTable->tableFname);
pCreateTable->tableName);
return TSDB_CODE_MND_TABLE_ALREADY_EXIST;
}
}
if (pCreateTable->numOfTags != 0) {
mDebug("msg:%p, app:%p table:%s, create stable msg is received from thandle:%p", pMsg, pMsg->rpcMsg.ahandle,
pCreateTable->tableFname, pMsg->rpcMsg.handle);
pCreateTable->tableName, pMsg->rpcMsg.handle);
return mnodeProcessCreateSuperTableMsg(pMsg);
} else {
mDebug("msg:%p, app:%p table:%s, create ctable msg is received from thandle:%p", pMsg, pMsg->rpcMsg.ahandle,
pCreateTable->tableFname, pMsg->rpcMsg.handle);
pCreateTable->tableName, pMsg->rpcMsg.handle);
return mnodeProcessCreateChildTableMsg(pMsg);
}
}
......@@ -860,9 +863,12 @@ static int32_t mnodeProcessCreateTableMsg(SMnodeMsg *pMsg) {
}
SCreateTableMsg *p = (SCreateTableMsg*)((char*) pCreate + sizeof(SCMCreateTableMsg));
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDb(p->db);
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to create, db not selected", pMsg, pMsg->rpcMsg.ahandle, p->tableFname);
pMsg->pDb = mnodeGetDbByTableName(p->tableName);
}
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to create, db not selected", pMsg, pMsg->rpcMsg.ahandle, p->tableName);
return TSDB_CODE_MND_DB_NOT_SELECTED;
}
......@@ -871,37 +877,37 @@ static int32_t mnodeProcessCreateTableMsg(SMnodeMsg *pMsg) {
return TSDB_CODE_MND_DB_IN_DROPPING;
}
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(p->tableFname);
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(p->tableName);
if (pMsg->pTable != NULL && pMsg->retry == 0) {
if (p->getMeta) {
mDebug("msg:%p, app:%p table:%s, continue to get meta", pMsg, pMsg->rpcMsg.ahandle, p->tableFname);
mDebug("msg:%p, app:%p table:%s, continue to get meta", pMsg, pMsg->rpcMsg.ahandle, p->tableName);
return mnodeGetChildTableMeta(pMsg);
} else if (p->igExists) {
mDebug("msg:%p, app:%p table:%s, is already exist", pMsg, pMsg->rpcMsg.ahandle, p->tableFname);
mDebug("msg:%p, app:%p table:%s, is already exist", pMsg, pMsg->rpcMsg.ahandle, p->tableName);
return TSDB_CODE_SUCCESS;
} else {
mError("msg:%p, app:%p table:%s, failed to create, table already exist", pMsg, pMsg->rpcMsg.ahandle, p->tableFname);
mError("msg:%p, app:%p table:%s, failed to create, table already exist", pMsg, pMsg->rpcMsg.ahandle, p->tableName);
return TSDB_CODE_MND_TABLE_ALREADY_EXIST;
}
}
if (p->numOfTags != 0) {
mDebug("msg:%p, app:%p table:%s, create stable msg is received from thandle:%p", pMsg, pMsg->rpcMsg.ahandle,
p->tableFname, pMsg->rpcMsg.handle);
p->tableName, pMsg->rpcMsg.handle);
return mnodeProcessCreateSuperTableMsg(pMsg);
} else {
mDebug("msg:%p, app:%p table:%s, create ctable msg is received from thandle:%p", pMsg, pMsg->rpcMsg.ahandle,
p->tableFname, pMsg->rpcMsg.handle);
p->tableName, pMsg->rpcMsg.handle);
return mnodeProcessCreateChildTableMsg(pMsg);
}
}
static int32_t mnodeProcessDropTableMsg(SMnodeMsg *pMsg) {
SCMDropTableMsg *pDrop = pMsg->rpcMsg.pCont;
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableId(pDrop->tableFname);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableName(pDrop->name);
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to drop table, db not selected or db in dropping", pMsg,
pMsg->rpcMsg.ahandle, pDrop->tableFname);
pMsg->rpcMsg.ahandle, pDrop->name);
return TSDB_CODE_MND_DB_NOT_SELECTED;
}
......@@ -912,17 +918,17 @@ static int32_t mnodeProcessDropTableMsg(SMnodeMsg *pMsg) {
if (mnodeCheckIsMonitorDB(pMsg->pDb->name, tsMonitorDbName)) {
mError("msg:%p, app:%p table:%s, failed to drop table, in monitor database", pMsg, pMsg->rpcMsg.ahandle,
pDrop->tableFname);
pDrop->name);
return TSDB_CODE_MND_MONITOR_DB_FORBIDDEN;
}
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pDrop->tableFname);
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pDrop->name);
if (pMsg->pTable == NULL) {
if (pDrop->igNotExists) {
mDebug("msg:%p, app:%p table:%s is not exist, treat as success", pMsg, pMsg->rpcMsg.ahandle, pDrop->tableFname);
mDebug("msg:%p, app:%p table:%s is not exist, treat as success", pMsg, pMsg->rpcMsg.ahandle, pDrop->name);
return TSDB_CODE_SUCCESS;
} else {
mError("msg:%p, app:%p table:%s, failed to drop, table not exist", pMsg, pMsg->rpcMsg.ahandle, pDrop->tableFname);
mError("msg:%p, app:%p table:%s, failed to drop, table not exist", pMsg, pMsg->rpcMsg.ahandle, pDrop->name);
return TSDB_CODE_MND_INVALID_TABLE_NAME;
}
}
......@@ -930,12 +936,12 @@ static int32_t mnodeProcessDropTableMsg(SMnodeMsg *pMsg) {
if (pMsg->pTable->type == TSDB_SUPER_TABLE) {
SSTableObj *pSTable = (SSTableObj *)pMsg->pTable;
mInfo("msg:%p, app:%p table:%s, start to drop stable, uid:%" PRIu64 ", numOfChildTables:%d, sizeOfVgList:%d", pMsg,
pMsg->rpcMsg.ahandle, pDrop->tableFname, pSTable->uid, pSTable->numOfTables, taosHashGetSize(pSTable->vgHash));
pMsg->rpcMsg.ahandle, pDrop->name, pSTable->uid, pSTable->numOfTables, taosHashGetSize(pSTable->vgHash));
return mnodeProcessDropSuperTableMsg(pMsg);
} else {
SCTableObj *pCTable = (SCTableObj *)pMsg->pTable;
mInfo("msg:%p, app:%p table:%s, start to drop ctable, vgId:%d tid:%d uid:%" PRIu64, pMsg, pMsg->rpcMsg.ahandle,
pDrop->tableFname, pCTable->vgId, pCTable->tid, pCTable->uid);
pDrop->name, pCTable->vgId, pCTable->tid, pCTable->uid);
return mnodeProcessDropChildTableMsg(pMsg);
}
}
......@@ -946,7 +952,7 @@ static int32_t mnodeProcessTableMetaMsg(SMnodeMsg *pMsg) {
mDebug("msg:%p, app:%p table:%s, table meta msg is received from thandle:%p, createFlag:%d", pMsg, pMsg->rpcMsg.ahandle,
pInfo->tableFname, pMsg->rpcMsg.handle, pInfo->createFlag);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableId(pInfo->tableFname);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableName(pInfo->tableFname);
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to get table meta, db not selected", pMsg, pMsg->rpcMsg.ahandle,
pInfo->tableFname);
......@@ -1006,12 +1012,12 @@ static int32_t mnodeProcessCreateSuperTableMsg(SMnodeMsg *pMsg) {
SSTableObj * pStable = calloc(1, sizeof(SSTableObj));
if (pStable == NULL) {
mError("msg:%p, app:%p table:%s, failed to create, no enough memory", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mError("msg:%p, app:%p table:%s, failed to create, no enough memory", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return TSDB_CODE_MND_OUT_OF_MEMORY;
}
int64_t us = taosGetTimestampUs();
pStable->info.tableId = strdup(pCreate->tableFname);
pStable->info.tableId = strdup(pCreate->tableName);
pStable->info.type = TSDB_SUPER_TABLE;
pStable->createdTime = taosGetTimestampMs();
pStable->uid = (us << 24) + ((sdbGetVersion() & ((1ul << 16) - 1ul)) << 8) + (taosRand() & ((1ul << 8) - 1ul));
......@@ -1025,14 +1031,14 @@ static int32_t mnodeProcessCreateSuperTableMsg(SMnodeMsg *pMsg) {
pStable->schema = (SSchema *)calloc(1, schemaSize);
if (pStable->schema == NULL) {
free(pStable);
mError("msg:%p, app:%p table:%s, failed to create, no schema input", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mError("msg:%p, app:%p table:%s, failed to create, no schema input", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return TSDB_CODE_MND_INVALID_TABLE_NAME;
}
memcpy(pStable->schema, pCreate->schema, numOfCols * sizeof(SSchema));
if (pStable->numOfColumns > TSDB_MAX_COLUMNS || pStable->numOfTags > TSDB_MAX_TAGS) {
mError("msg:%p, app:%p table:%s, failed to create, too many columns", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mError("msg:%p, app:%p table:%s, failed to create, too many columns", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return TSDB_CODE_MND_INVALID_TABLE_NAME;
}
......@@ -1044,8 +1050,8 @@ static int32_t mnodeProcessCreateSuperTableMsg(SMnodeMsg *pMsg) {
tschema[col].bytes = htons(tschema[col].bytes);
}
if (!isValidSchema(pStable->schema, pStable->numOfColumns, pStable->numOfTags)) {
mError("msg:%p, app:%p table:%s, failed to create table, invalid schema", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
if (!tIsValidSchema(pStable->schema, pStable->numOfColumns, pStable->numOfTags)) {
mError("msg:%p, app:%p table:%s, failed to create table, invalid schema", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return TSDB_CODE_MND_INVALID_CREATE_TABLE_MSG;
}
......@@ -1065,7 +1071,7 @@ static int32_t mnodeProcessCreateSuperTableMsg(SMnodeMsg *pMsg) {
if (code != TSDB_CODE_SUCCESS && code != TSDB_CODE_MND_ACTION_IN_PROGRESS) {
mnodeDestroySuperTable(pStable);
pMsg->pTable = NULL;
mError("msg:%p, app:%p table:%s, failed to create, sdb error", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mError("msg:%p, app:%p table:%s, failed to create, sdb error", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
}
return code;
......@@ -1907,12 +1913,12 @@ static int32_t mnodeDoCreateChildTable(SMnodeMsg *pMsg, int32_t tid) {
SCTableObj *pTable = calloc(1, sizeof(SCTableObj));
if (pTable == NULL) {
mError("msg:%p, app:%p table:%s, failed to alloc memory", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mError("msg:%p, app:%p table:%s, failed to alloc memory", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return TSDB_CODE_MND_OUT_OF_MEMORY;
}
pTable->info.type = (pCreate->numOfColumns == 0)? TSDB_CHILD_TABLE:TSDB_NORMAL_TABLE;
pTable->info.tableId = strdup(pCreate->tableFname);
pTable->info.tableId = strdup(pCreate->tableName);
pTable->createdTime = taosGetTimestampMs();
pTable->tid = tid;
pTable->vgId = pVgroup->vgId;
......@@ -1928,7 +1934,7 @@ static int32_t mnodeDoCreateChildTable(SMnodeMsg *pMsg, int32_t tid) {
size_t prefixLen = tableIdPrefix(pMsg->pDb->name, prefix, 64);
if (0 != strncasecmp(prefix, stableName, prefixLen)) {
mError("msg:%p, app:%p table:%s, corresponding super table:%s not in this db", pMsg, pMsg->rpcMsg.ahandle,
pCreate->tableFname, stableName);
pCreate->tableName, stableName);
mnodeDestroyChildTable(pTable);
return TSDB_CODE_TDB_INVALID_CREATE_TB_MSG;
}
......@@ -1936,7 +1942,7 @@ static int32_t mnodeDoCreateChildTable(SMnodeMsg *pMsg, int32_t tid) {
if (pMsg->pSTable == NULL) pMsg->pSTable = mnodeGetSuperTable(stableName);
if (pMsg->pSTable == NULL) {
mError("msg:%p, app:%p table:%s, corresponding super table:%s does not exist", pMsg, pMsg->rpcMsg.ahandle,
pCreate->tableFname, stableName);
pCreate->tableName, stableName);
mnodeDestroyChildTable(pTable);
return TSDB_CODE_MND_INVALID_TABLE_NAME;
}
......@@ -2003,7 +2009,7 @@ static int32_t mnodeDoCreateChildTable(SMnodeMsg *pMsg, int32_t tid) {
if (code != TSDB_CODE_SUCCESS && code != TSDB_CODE_MND_ACTION_IN_PROGRESS) {
mnodeDestroyChildTable(pTable);
pMsg->pTable = NULL;
mError("msg:%p, app:%p table:%s, failed to create, reason:%s", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname,
mError("msg:%p, app:%p table:%s, failed to create, reason:%s", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName,
tstrerror(code));
} else {
mDebug("msg:%p, app:%p table:%s, allocated in vgroup, vgId:%d sid:%d uid:%" PRIu64, pMsg, pMsg->rpcMsg.ahandle,
......@@ -2020,7 +2026,7 @@ static int32_t mnodeProcessCreateChildTableMsg(SMnodeMsg *pMsg) {
int32_t code = grantCheck(TSDB_GRANT_TIMESERIES);
if (code != TSDB_CODE_SUCCESS) {
mError("msg:%p, app:%p table:%s, failed to create, grant timeseries failed", pMsg, pMsg->rpcMsg.ahandle,
pCreate->tableFname);
pCreate->tableName);
return code;
}
......@@ -2031,7 +2037,7 @@ static int32_t mnodeProcessCreateChildTableMsg(SMnodeMsg *pMsg) {
code = mnodeGetAvailableVgroup(pMsg, &pVgroup, &tid);
if (code != TSDB_CODE_SUCCESS) {
mDebug("msg:%p, app:%p table:%s, failed to get available vgroup, reason:%s", pMsg, pMsg->rpcMsg.ahandle,
pCreate->tableFname, tstrerror(code));
pCreate->tableName, tstrerror(code));
return code;
}
......@@ -2045,15 +2051,15 @@ static int32_t mnodeProcessCreateChildTableMsg(SMnodeMsg *pMsg) {
return mnodeDoCreateChildTable(pMsg, tid);
}
} else {
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pCreate->tableFname);
if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pCreate->tableName);
}
if (pMsg->pTable == NULL) {
mError("msg:%p, app:%p table:%s, object not found, retry:%d reason:%s", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname, pMsg->retry,
mError("msg:%p, app:%p table:%s, object not found, retry:%d reason:%s", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName, pMsg->retry,
tstrerror(terrno));
return terrno;
} else {
mDebug("msg:%p, app:%p table:%s, send create msg to vnode again", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableFname);
mDebug("msg:%p, app:%p table:%s, send create msg to vnode again", pMsg, pMsg->rpcMsg.ahandle, pCreate->tableName);
return mnodeDoCreateChildTableFp(pMsg);
}
}
......@@ -2398,8 +2404,7 @@ static int32_t mnodeAutoCreateChildTable(SMnodeMsg *pMsg) {
SCreateTableMsg* pCreate = (SCreateTableMsg*) ((char*) pCreateMsg + sizeof(SCMCreateTableMsg));
size_t size = tListLen(pInfo->tableFname);
tstrncpy(pCreate->tableFname, pInfo->tableFname, size);
tstrncpy(pCreate->db, pMsg->pDb->name, sizeof(pCreate->db));
tstrncpy(pCreate->tableName, pInfo->tableFname, size);
pCreate->igExists = 1;
pCreate->getMeta = 1;
......@@ -2767,7 +2772,7 @@ static int32_t mnodeProcessMultiTableMetaMsg(SMnodeMsg *pMsg) {
SCTableObj *pTable = mnodeGetChildTable(tableId);
if (pTable == NULL) continue;
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableId(tableId);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableName(tableId);
if (pMsg->pDb == NULL || pMsg->pDb->status != TSDB_DB_STATUS_READY) {
mnodeDecTableRef(pTable);
continue;
......@@ -2988,7 +2993,7 @@ static int32_t mnodeProcessAlterTableMsg(SMnodeMsg *pMsg) {
mDebug("msg:%p, app:%p table:%s, alter table msg is received from thandle:%p", pMsg, pMsg->rpcMsg.ahandle,
pAlter->tableFname, pMsg->rpcMsg.handle);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableId(pAlter->tableFname);
if (pMsg->pDb == NULL) pMsg->pDb = mnodeGetDbByTableName(pAlter->tableFname);
if (pMsg->pDb == NULL) {
mError("msg:%p, app:%p table:%s, failed to alter table, db not selected", pMsg, pMsg->rpcMsg.ahandle, pAlter->tableFname);
return TSDB_CODE_MND_DB_NOT_SELECTED;
......
......@@ -39,7 +39,8 @@
#define HTTP_GC_TARGET_SIZE 512
#define HTTP_WRITE_RETRY_TIMES 500
#define HTTP_WRITE_WAIT_TIME_MS 5
#define HTTP_SESSION_ID_LEN (TSDB_USER_LEN + TSDB_KEY_LEN)
#define HTTP_PASSWORD_LEN TSDB_UNI_LEN
#define HTTP_SESSION_ID_LEN (TSDB_USER_LEN + HTTP_PASSWORD_LEN)
typedef enum HttpReqType {
HTTP_REQTYPE_OTHERS = 0,
......@@ -147,7 +148,7 @@ typedef struct HttpContext {
uint8_t parsed;
char ipstr[22];
char user[TSDB_USER_LEN]; // parsed from auth token or login message
char pass[TSDB_KEY_LEN];
char pass[HTTP_PASSWORD_LEN];
TAOS * taos;
void * ppContext;
HttpSession *session;
......
......@@ -51,7 +51,7 @@ int32_t httpParseBasicAuthToken(HttpContext *pContext, char *token, int32_t len)
char *password = user + 1;
int32_t pass_len = (int32_t)((base64 + outlen) - password);
if (pass_len < 1 || pass_len >= TSDB_KEY_LEN) {
if (pass_len < 1 || pass_len >= HTTP_PASSWORD_LEN) {
httpError("context:%p, fd:%d, basic token:%s parse password error", pContext, pContext->fd, token);
free(base64);
return -1;
......@@ -73,7 +73,7 @@ int32_t httpParseTaosdAuthToken(HttpContext *pContext, char *token, int32_t len)
if (base64) free(base64);
return -1;
}
if (outlen != (TSDB_USER_LEN + TSDB_KEY_LEN)) {
if (outlen != (TSDB_USER_LEN + HTTP_PASSWORD_LEN)) {
httpError("context:%p, fd:%d, taosd token:%s length error", pContext, pContext->fd, token);
free(base64);
return -1;
......@@ -103,8 +103,8 @@ int32_t httpGenTaosdAuthToken(HttpContext *pContext, char *token, int32_t maxLen
size = sizeof(pContext->pass);
tstrncpy(buffer + sizeof(pContext->user), pContext->pass, size);
char *encrypt = taosDesEncode(KEY_DES_4, buffer, TSDB_USER_LEN + TSDB_KEY_LEN);
char *base64 = base64_encode((const unsigned char *)encrypt, TSDB_USER_LEN + TSDB_KEY_LEN);
char *encrypt = taosDesEncode(KEY_DES_4, buffer, TSDB_USER_LEN + HTTP_PASSWORD_LEN);
char *base64 = base64_encode((const unsigned char *)encrypt, TSDB_USER_LEN + HTTP_PASSWORD_LEN);
size_t len = strlen(base64);
tstrncpy(token, base64, len + 1);
......
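
The length checks in this file work because the decoded token is a fixed-width concatenation: the user name occupies the first `TSDB_USER_LEN` bytes and the password the following `HTTP_PASSWORD_LEN` bytes, so a valid token always decodes to exactly `TSDB_USER_LEN + HTTP_PASSWORD_LEN` bytes. Below is a standalone sketch of that packing, with illustrative sizes standing in for the real macros and the default `root`/`taosdata` credentials used purely as sample values:

```c
#include <stdio.h>
#include <string.h>

#define USER_LEN 24   /* stand-in for TSDB_USER_LEN     */
#define PASS_LEN 24   /* stand-in for HTTP_PASSWORD_LEN */

int main(void) {
  char buffer[USER_LEN + PASS_LEN] = {0};

  /* pack: user fills the first USER_LEN bytes, password the next PASS_LEN bytes */
  strncpy(buffer, "root", USER_LEN - 1);
  strncpy(buffer + USER_LEN, "taosdata", PASS_LEN - 1);

  /* unpack: a token that decodes to any other length is rejected by the parser above */
  char user[USER_LEN] = {0}, pass[PASS_LEN] = {0};
  memcpy(user, buffer, USER_LEN - 1);
  memcpy(pass, buffer + USER_LEN, PASS_LEN - 1);
  printf("user=%s pass=%s total=%zu bytes\n", user, pass, sizeof(buffer));
  return 0;
}
```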
......@@ -59,11 +59,11 @@ bool gcGetUserFromUrl(HttpContext* pContext) {
bool gcGetPassFromUrl(HttpContext* pContext) {
HttpParser* pParser = pContext->parser;
if (pParser->path[GC_PASS_URL_POS].pos >= TSDB_KEY_LEN || pParser->path[GC_PASS_URL_POS].pos <= 0) {
if (pParser->path[GC_PASS_URL_POS].pos >= HTTP_PASSWORD_LEN || pParser->path[GC_PASS_URL_POS].pos <= 0) {
return false;
}
tstrncpy(pContext->pass, pParser->path[GC_PASS_URL_POS].str, TSDB_KEY_LEN);
tstrncpy(pContext->pass, pParser->path[GC_PASS_URL_POS].str, HTTP_PASSWORD_LEN);
return true;
}
......
......@@ -72,11 +72,11 @@ bool restGetUserFromUrl(HttpContext* pContext) {
bool restGetPassFromUrl(HttpContext* pContext) {
HttpParser* pParser = pContext->parser;
if (pParser->path[REST_PASS_URL_POS].pos >= TSDB_KEY_LEN || pParser->path[REST_PASS_URL_POS].pos <= 0) {
if (pParser->path[REST_PASS_URL_POS].pos >= HTTP_PASSWORD_LEN || pParser->path[REST_PASS_URL_POS].pos <= 0) {
return false;
}
tstrncpy(pContext->pass, pParser->path[REST_PASS_URL_POS].str, TSDB_KEY_LEN);
tstrncpy(pContext->pass, pParser->path[REST_PASS_URL_POS].str, HTTP_PASSWORD_LEN);
return true;
}
......
......@@ -60,7 +60,10 @@ void httpCleanUpConnect() {
HttpServer *pServer = &tsHttpServer;
if (pServer->pThreads == NULL) return;
pthread_join(pServer->thread, NULL);
if (taosCheckPthreadValid(pServer->thread)) {
pthread_join(pServer->thread, NULL);
}
for (int32_t i = 0; i < pServer->numOfThreads; ++i) {
HttpThread* pThread = pServer->pThreads + i;
if (pThread != NULL) {
......
......@@ -324,7 +324,7 @@ bool tgGetUserFromUrl(HttpContext *pContext) {
bool tgGetPassFromUrl(HttpContext *pContext) {
HttpParser *pParser = pContext->parser;
if (pParser->path[TG_PASS_URL_POS].pos >= TSDB_KEY_LEN || pParser->path[TG_PASS_URL_POS].pos <= 0) {
if (pParser->path[TG_PASS_URL_POS].pos >= HTTP_PASSWORD_LEN || pParser->path[TG_PASS_URL_POS].pos <= 0) {
return false;
}
......
......@@ -246,7 +246,10 @@ void monStopSystem() {
void monCleanupSystem() {
tsMonitor.quiting = 1;
monStopSystem();
pthread_join(tsMonitor.thread, NULL);
if (taosCheckPthreadValid(tsMonitor.thread)) {
pthread_join(tsMonitor.thread, NULL);
}
if (tsMonitor.conn != NULL) {
taos_close(tsMonitor.conn);
tsMonitor.conn = NULL;
......
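
Both this cleanup path and the HTTP server cleanup above now join the thread only when the handle is valid, because joining a thread that was never successfully created is undefined behavior. A standalone sketch of the same guard, using an explicit flag in place of the internal `taosCheckPthreadValid` helper (all names here are illustrative):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static void *refresh_loop(void *arg) { (void)arg; return NULL; }

typedef struct {
  pthread_t thread;
  bool      started;   /* plays the role of taosCheckPthreadValid() here */
} worker_t;

static void worker_start(worker_t *w) {
  int code = pthread_create(&w->thread, NULL, refresh_loop, NULL);
  w->started = (code == 0);
  if (code != 0) fprintf(stderr, "start failed: %s\n", strerror(code));
}

static void worker_cleanup(worker_t *w) {
  /* only join a thread that was actually created */
  if (w->started) {
    pthread_join(w->thread, NULL);
    w->started = false;
  }
}

int main(void) {
  worker_t w = {0};
  worker_start(&w);
  worker_cleanup(&w);   /* safe whether or not the thread started */
  return 0;
}
```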
......@@ -96,16 +96,16 @@ typedef struct SCreateTableSQL {
SQuerySQL *pSelect;
} SCreateTableSQL;
typedef struct SAlterTableSQL {
typedef struct SAlterTableInfo {
SStrToken name;
int16_t tableType;
int16_t type;
STagData tagData;
SArray *pAddColumns; // SArray<TAOS_FIELD>
SArray *varList; // set t=val or: change src dst, SArray<tVariantListItem>
} SAlterTableSQL;
} SAlterTableInfo;
typedef struct SCreateDBInfo {
typedef struct SCreateDbInfo {
SStrToken dbname;
int32_t replica;
int32_t cacheBlockSize;
......@@ -123,11 +123,10 @@ typedef struct SCreateDBInfo {
bool ignoreExists;
int8_t update;
int8_t cachelast;
SArray *keep;
} SCreateDBInfo;
SArray *keep;
} SCreateDbInfo;
typedef struct SCreateAcctSQL {
typedef struct SCreateAcctInfo {
int32_t maxUsers;
int32_t maxDbs;
int32_t maxTimeSeries;
......@@ -137,7 +136,7 @@ typedef struct SCreateAcctSQL {
int64_t maxQueryTime;
int32_t maxConnections;
SStrToken stat;
} SCreateAcctSQL;
} SCreateAcctInfo;
typedef struct SShowInfo {
uint8_t showType;
......@@ -152,23 +151,18 @@ typedef struct SUserInfo {
int16_t type;
} SUserInfo;
typedef struct tDCLSQL {
int32_t nTokens; /* Number of expressions on the list */
int32_t nAlloc; /* Number of entries allocated below */
SStrToken *a; /* one entry for element */
bool existsCheck;
typedef struct SMiscInfo {
SArray *a; // SArray<SStrToken>
bool existsCheck;
int16_t tableType;
SUserInfo user;
union {
SCreateDBInfo dbOpt;
SCreateAcctSQL acctOpt;
SShowInfo showOpt;
SStrToken ip;
SCreateDbInfo dbOpt;
SCreateAcctInfo acctOpt;
SShowInfo showOpt;
SStrToken id;
};
SUserInfo user;
} tDCLSQL;
} SMiscInfo;
typedef struct SSubclauseInfo { // "UNION" multiple select sub-clause
SQuerySQL **pClause;
......@@ -178,15 +172,13 @@ typedef struct SSubclauseInfo { // "UNION" multiple select sub-clause
typedef struct SSqlInfo {
int32_t type;
bool valid;
SSubclauseInfo subclauseInfo;
char msg[256];
union {
SCreateTableSQL *pCreateTableInfo;
SAlterTableSQL *pAlterInfo;
tDCLSQL *pDCLInfo;
SCreateTableSQL *pCreateTableInfo;
SAlterTableInfo *pAlterInfo;
SMiscInfo *pMiscInfo;
};
SSubclauseInfo subclauseInfo;
char pzErrMsg[256];
} SSqlInfo;
typedef struct tSQLExpr {
......@@ -252,7 +244,7 @@ SCreateTableSQL *tSetCreateSqlElems(SArray *pCols, SArray *pTags, SQuerySQL *pSe
void tSqlExprNodeDestroy(tSQLExpr *pExpr);
SAlterTableSQL * tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type, int16_t tableTable);
SAlterTableInfo * tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type, int16_t tableTable);
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists);
void destroyAllSelectClause(SSubclauseInfo *pSql);
......@@ -272,16 +264,14 @@ void setDCLSQLElems(SSqlInfo *pInfo, int32_t type, int32_t nParams, ...);
void setDropDbTableInfo(SSqlInfo *pInfo, int32_t type, SStrToken* pToken, SStrToken* existsCheck,int16_t tableType);
void setShowOptions(SSqlInfo *pInfo, int32_t type, SStrToken* prefix, SStrToken* pPatterns);
tDCLSQL *tTokenListAppend(tDCLSQL *pTokenList, SStrToken *pToken);
void setCreateDBSQL(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDBInfo *pDB, SStrToken *pIgExists);
void setCreateDbInfo(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDbInfo *pDB, SStrToken *pIgExists);
void setCreateAcctSql(SSqlInfo *pInfo, int32_t type, SStrToken *pName, SStrToken *pPwd, SCreateAcctSQL *pAcctInfo);
void setCreateAcctSql(SSqlInfo *pInfo, int32_t type, SStrToken *pName, SStrToken *pPwd, SCreateAcctInfo *pAcctInfo);
void setCreateUserSql(SSqlInfo *pInfo, SStrToken *pName, SStrToken *pPasswd);
void setKillSql(SSqlInfo *pInfo, int32_t type, SStrToken *ip);
void setAlterUserSql(SSqlInfo *pInfo, int16_t type, SStrToken *pName, SStrToken* pPwd, SStrToken *pPrivilege);
void setDefaultCreateDbOption(SCreateDBInfo *pDBInfo);
void setDefaultCreateDbOption(SCreateDbInfo *pDBInfo);
// prefix show db.tables;
void setDbName(SStrToken *pCpxName, SStrToken *pDb);
......
......@@ -36,7 +36,7 @@
%syntax_error {
pInfo->valid = false;
int32_t outputBufLen = tListLen(pInfo->pzErrMsg);
int32_t outputBufLen = tListLen(pInfo->msg);
int32_t len = 0;
if(TOKEN.z) {
......@@ -46,13 +46,13 @@
if (sqlLen + sizeof(msg)/sizeof(msg[0]) + 1 > outputBufLen) {
char tmpstr[128] = {0};
memcpy(tmpstr, &TOKEN.z[0], sizeof(tmpstr)/sizeof(tmpstr[0]) - 1);
len = sprintf(pInfo->pzErrMsg, msg, tmpstr);
len = sprintf(pInfo->msg, msg, tmpstr);
} else {
len = sprintf(pInfo->pzErrMsg, msg, &TOKEN.z[0]);
len = sprintf(pInfo->msg, msg, &TOKEN.z[0]);
}
} else {
len = sprintf(pInfo->pzErrMsg, "Incomplete SQL statement");
len = sprintf(pInfo->msg, "Incomplete SQL statement");
}
assert(len <= outputBufLen);
......@@ -216,7 +216,7 @@ conns(Y) ::= CONNS INTEGER(X). { Y = X; }
state(Y) ::= . { Y.n = 0; }
state(Y) ::= STATE ids(X). { Y = X; }
%type acct_optr {SCreateAcctSQL}
%type acct_optr {SCreateAcctInfo}
acct_optr(Y) ::= pps(C) tseries(D) storage(P) streams(F) qtime(Q) dbs(E) users(K) conns(L) state(M). {
Y.maxUsers = (K.n>0)?atoi(K.z):-1;
Y.maxDbs = (E.n>0)?atoi(E.z):-1;
......@@ -248,7 +248,7 @@ prec(Y) ::= PRECISION STRING(X). { Y = X; }
update(Y) ::= UPDATE INTEGER(X). { Y = X; }
cachelast(Y) ::= CACHELAST INTEGER(X). { Y = X; }
%type db_optr {SCreateDBInfo}
%type db_optr {SCreateDbInfo}
db_optr(Y) ::= . {setDefaultCreateDbOption(&Y);}
db_optr(Y) ::= db_optr(Z) cache(X). { Y = Z; Y.cacheBlockSize = strtol(X.z, NULL, 10); }
......@@ -267,7 +267,7 @@ db_optr(Y) ::= db_optr(Z) keep(X). { Y = Z; Y.keep = X; }
db_optr(Y) ::= db_optr(Z) update(X). { Y = Z; Y.update = strtol(X.z, NULL, 10); }
db_optr(Y) ::= db_optr(Z) cachelast(X). { Y = Z; Y.cachelast = strtol(X.z, NULL, 10); }
%type alter_db_optr {SCreateDBInfo}
%type alter_db_optr {SCreateDbInfo}
alter_db_optr(Y) ::= . { setDefaultCreateDbOption(&Y);}
alter_db_optr(Y) ::= alter_db_optr(Z) replica(X). { Y = Z; Y.replica = strtol(X.z, NULL, 10); }
......@@ -692,7 +692,7 @@ cmd ::= RESET QUERY CACHE. { setDCLSQLElems(pInfo, TSDB_SQL_RESET_CACHE, 0);}
///////////////////////////////////ALTER TABLE statement//////////////////////////////////
cmd ::= ALTER TABLE ids(X) cpxName(F) ADD COLUMN columnlist(A). {
X.n += F.n;
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -702,14 +702,14 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) DROP COLUMN ids(A). {
toTSDBType(A.type);
SArray* K = tVariantListAppendToken(NULL, &A, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
//////////////////////////////////ALTER TAGS statement/////////////////////////////////////
cmd ::= ALTER TABLE ids(X) cpxName(Y) ADD TAG columnlist(A). {
X.n += Y.n;
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER TABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
......@@ -718,7 +718,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
toTSDBType(Y.type);
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -731,7 +731,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) CHANGE TAG ids(Y) ids(Z). {
toTSDBType(Z.type);
A = tVariantListAppendToken(A, &Z, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -742,7 +742,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) SET TAG ids(Y) EQ tagitem(Z). {
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
A = tVariantListAppend(A, &Z, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_UPDATE_TAG_VAL, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_UPDATE_TAG_VAL, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -750,7 +750,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) SET TAG ids(Y) EQ tagitem(Z). {
///////////////////////////////////ALTER STABLE statement//////////////////////////////////
cmd ::= ALTER STABLE ids(X) cpxName(F) ADD COLUMN columnlist(A). {
X.n += F.n;
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, TSDB_SUPER_TABLE);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -760,14 +760,14 @@ cmd ::= ALTER STABLE ids(X) cpxName(F) DROP COLUMN ids(A). {
toTSDBType(A.type);
SArray* K = tVariantListAppendToken(NULL, &A, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, TSDB_SUPER_TABLE);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
//////////////////////////////////ALTER TAGS statement/////////////////////////////////////
cmd ::= ALTER STABLE ids(X) cpxName(Y) ADD TAG columnlist(A). {
X.n += Y.n;
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, TSDB_SUPER_TABLE);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER STABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
......@@ -776,7 +776,7 @@ cmd ::= ALTER STABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
toTSDBType(Y.type);
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, TSDB_SUPER_TABLE);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -789,7 +789,7 @@ cmd ::= ALTER STABLE ids(X) cpxName(F) CHANGE TAG ids(Y) ids(Z). {
toTSDBType(Z.type);
A = tVariantListAppendToken(A, &Z, -1);
SAlterTableSQL* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, TSDB_SUPER_TABLE);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......
......@@ -407,7 +407,7 @@ tExprNode* exprTreeFromTableName(const char* tbnameCond) {
SSchema* pSchema = exception_calloc(1, sizeof(SSchema));
left->pSchema = pSchema;
*pSchema = tscGetTbnameColumnSchema();
*pSchema = tGetTbnameColumnSchema();
tExprNode* right = exception_calloc(1, sizeof(tExprNode));
expr->_node.pRight = right;
......
......@@ -1875,6 +1875,7 @@ static int32_t setCtxTagColumnInfo(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx
return TSDB_CODE_SUCCESS;
}
// todo refactor
static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int16_t order) {
qDebug("QInfo:%p setup runtime env", GET_QINFO_ADDR(pRuntimeEnv));
SQuery *pQuery = pRuntimeEnv->pQuery;
......@@ -3849,11 +3850,6 @@ void setExecutionContext(SQInfo *pQInfo, int32_t groupIndex, TSKEY nextKey) {
// lastKey needs to be updated
pTableQueryInfo->lastKey = nextKey;
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (pRuntimeEnv->prevGroupId != INT32_MIN && pRuntimeEnv->prevGroupId == groupIndex) {
return;
}
......@@ -4779,19 +4775,17 @@ static void enableExecutionForNextTable(SQueryRuntimeEnv *pRuntimeEnv) {
}
}
// TODO refactor: setAdditionalInfo
static FORCE_INLINE void setEnvForEachBlock(SQInfo* pQInfo, STableQueryInfo* pTableQueryInfo, SDataBlockInfo* pBlockInfo) {
SQueryRuntimeEnv* pRuntimeEnv = &pQInfo->runtimeEnv;
SQuery* pQuery = pQInfo->runtimeEnv.pQuery;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
TSKEY nextKey = pBlockInfo->window.skey;
setIntervalQueryRange(pQInfo, nextKey);
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
setIntervalQueryRange(pQInfo, pBlockInfo->window.skey);
} else { // non-interval query
setExecutionContext(pQInfo, pTableQueryInfo->groupIndex, pBlockInfo->window.ekey + step);
}
......@@ -4814,7 +4808,7 @@ static void doTableQueryInfoTimeWindowCheck(SQuery* pQuery, STableQueryInfo* pTa
static int64_t scanMultiTableDataBlocks(SQInfo *pQInfo) {
SQueryRuntimeEnv *pRuntimeEnv = &pQInfo->runtimeEnv;
SQuery* pQuery = pRuntimeEnv->pQuery;
SQueryCostInfo* summary = &pRuntimeEnv->summary;
SQueryCostInfo* summary = &pRuntimeEnv->summary;
int64_t st = taosGetTimestampMs();
......@@ -5047,7 +5041,7 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
STsdbQueryCond cond = createTsdbQueryCond(pQuery, &pQuery->window);
SArray *g1 = taosArrayInit(1, POINTER_BYTES);
SArray *tx = taosArrayClone(group);
SArray *tx = taosArrayDup(group);
taosArrayPush(g1, &tx);
STableGroupInfo gp = {.numOfTables = taosArrayGetSize(tx), .pGroupList = g1};
......@@ -5107,7 +5101,7 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
STsdbQueryCond cond = createTsdbQueryCond(pQuery, &pQuery->window);
SArray *g1 = taosArrayInit(1, POINTER_BYTES);
SArray *tx = taosArrayClone(group);
SArray *tx = taosArrayDup(group);
taosArrayPush(g1, &tx);
STableGroupInfo gp = {.numOfTables = taosArrayGetSize(tx), .pGroupList = g1};
......@@ -5471,7 +5465,7 @@ static void doRestoreContext(SQInfo *pQInfo) {
SET_MASTER_SCAN_FLAG(pRuntimeEnv);
}
static void doCloseAllTimeWindowAfterScan(SQInfo *pQInfo) {
static void doCloseAllTimeWindow(SQInfo *pQInfo) {
SQuery *pQuery = pQInfo->runtimeEnv.pQuery;
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
......@@ -5523,7 +5517,7 @@ static void multiTableQueryProcess(SQInfo *pQInfo) {
}
// close all time window results
doCloseAllTimeWindowAfterScan(pQInfo);
doCloseAllTimeWindow(pQInfo);
if (needReverseScan(pQuery)) {
int32_t code = doSaveContext(pQInfo);
......@@ -5848,8 +5842,7 @@ static void stableQueryImpl(SQInfo *pQInfo) {
(isFixedOutputQuery(pRuntimeEnv) && (!isPointInterpoQuery(pQuery)) && (!pRuntimeEnv->groupbyColumn))) {
multiTableQueryProcess(pQInfo);
} else {
assert((pQuery->checkResultBuf == 1 && pQuery->interval.interval == 0) || isPointInterpoQuery(pQuery) ||
pRuntimeEnv->groupbyColumn);
assert(pQuery->checkResultBuf == 1 || isPointInterpoQuery(pQuery) || pRuntimeEnv->groupbyColumn);
sequentialTableProcess(pQInfo);
}
......@@ -6377,7 +6370,7 @@ static int32_t createQueryFuncExprFromMsg(SQueryTableMsg *pQueryMsg, int32_t num
if (functId == TSDB_FUNC_TOP || functId == TSDB_FUNC_BOTTOM) {
int32_t j = getColumnIndexInSource(pQueryMsg, &pExprs[i].base, pTagCols);
if (j < 0 || j >= pQueryMsg->numOfCols) {
assert(0);
return TSDB_CODE_QRY_INVALID_MSG;
} else {
SColumnInfo *pCol = &pQueryMsg->colList[j];
int32_t ret =
......@@ -6576,7 +6569,7 @@ static SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SSqlGroupbyExpr *pGrou
int32_t srcSize = 0;
for (int16_t i = 0; i < numOfCols; ++i) {
pQuery->colList[i] = pQueryMsg->colList[i];
pQuery->colList[i].filters = tscFilterInfoClone(pQueryMsg->colList[i].filters, pQuery->colList[i].numOfFilters);
pQuery->colList[i].filters = tFilterInfoDup(pQueryMsg->colList[i].filters, pQuery->colList[i].numOfFilters);
srcSize += pQuery->colList[i].bytes;
}
......@@ -6642,7 +6635,7 @@ static SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SSqlGroupbyExpr *pGrou
pQInfo->runtimeEnv.summary.tableInfoSize += (pTableGroupInfo->numOfTables * sizeof(STableQueryInfo));
pQInfo->runtimeEnv.pResultRowHashTable = taosHashInit(pTableGroupInfo->numOfTables, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
pQInfo->runtimeEnv.keyBuf = malloc(TSDB_MAX_BYTES_PER_ROW);
pQInfo->runtimeEnv.keyBuf = malloc(TSDB_MAX_BYTES_PER_ROW); // todo opt size
pQInfo->runtimeEnv.pool = initResultRowPool(getResultRowSize(&pQInfo->runtimeEnv));
pQInfo->runtimeEnv.prevRow = malloc(POINTER_BYTES * pQuery->numOfCols + srcSize);
......
This diff has been collapsed.
This diff has been collapsed.
......@@ -154,7 +154,7 @@ void *taosInitTcpServer(uint32_t ip, uint16_t port, char *label, int numOfThread
if (code == 0) {
code = pthread_create(&pServerObj->thread, &thattr, taosAcceptTcpConnection, (void *)pServerObj);
if (code != 0) {
tError("%s failed to create TCP accept thread(%s)", label, strerror(errno));
tError("%s failed to create TCP accept thread(%s)", label, strerror(code));
}
}
......
......@@ -1388,8 +1388,8 @@ static void doMergeTwoLevelData(STsdbQueryHandle* pQueryHandle, STableCheckInfo*
break;
}
if (((tsArray[pos] > pQueryHandle->window.ekey || pos > endPos) && ASCENDING_TRAVERSE(pQueryHandle->order)) ||
((tsArray[pos] < pQueryHandle->window.ekey || pos < endPos) && !ASCENDING_TRAVERSE(pQueryHandle->order))) {
if (((pos > endPos || tsArray[pos] > pQueryHandle->window.ekey) && ASCENDING_TRAVERSE(pQueryHandle->order)) ||
((pos < endPos || tsArray[pos] < pQueryHandle->window.ekey) && !ASCENDING_TRAVERSE(pQueryHandle->order))) {
break;
}
......
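The reordered condition above places the `pos > endPos` / `pos < endPos` bound check before the `tsArray[pos]` comparison, so short-circuit evaluation keeps the array from being indexed once `pos` has run past the scan range. A hedged sketch of the same idea with illustrative names (not the engine's actual types):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true when the scan has left the query window.
 * The bound check comes first so ts[pos] is only read for a valid pos. */
static bool pastWindow(const int64_t *ts, int32_t pos, int32_t endPos,
                       int64_t ekey, bool ascending) {
  if (ascending) {
    return pos > endPos || ts[pos] > ekey;
  }
  return pos < endPos || ts[pos] < ekey;
}

int main(void) {
  int64_t ts[] = {1, 2, 3};
  /* pos == 3 is out of range; short-circuiting avoids reading ts[3]. */
  return pastWindow(ts, 3, 2, 10, true) ? 0 : 1;
}
```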
......@@ -110,7 +110,7 @@ void taosArrayCopy(SArray* pDst, const SArray* pSrc);
* clone a new array
* @param pSrc
*/
SArray* taosArrayClone(const SArray* pSrc);
SArray* taosArrayDup(const SArray* pSrc);
/**
* clear the array (remove all element)
......
......@@ -165,7 +165,7 @@ void taosArrayCopy(SArray* pDst, const SArray* pSrc) {
pDst->size = pSrc->size;
}
SArray* taosArrayClone(const SArray* pSrc) {
SArray* taosArrayDup(const SArray* pSrc) {
assert(pSrc != NULL);
if (pSrc->size == 0) { // empty array list
......
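The two hunks above rename `taosArrayClone` to `taosArrayDup` in both the header and the implementation, and callers such as the group-list setup earlier in this commit are updated to match. As a rough illustration only (this is not the library's code), a typical dup routine allocates a new array with the same element size and copies the payload so the caller gets an independent copy:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
  size_t size;      /* number of elements in use */
  size_t elemSize;  /* bytes per element         */
  void  *pData;     /* contiguous element storage */
} MiniArray;

static MiniArray *miniArrayDup(const MiniArray *src) {
  if (src == NULL || src->size == 0) return NULL;

  MiniArray *dst = malloc(sizeof(MiniArray));
  if (dst == NULL) return NULL;

  dst->size = src->size;
  dst->elemSize = src->elemSize;
  dst->pData = malloc(src->size * src->elemSize);
  if (dst->pData == NULL) { free(dst); return NULL; }

  memcpy(dst->pData, src->pData, src->size * src->elemSize);
  return dst;
}

int main(void) {
  int data[] = {1, 2, 3};
  MiniArray src = {3, sizeof(int), data};
  MiniArray *copy = miniArrayDup(&src);
  if (copy != NULL) { free(copy->pData); free(copy); }
  return 0;
}
```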
......@@ -510,7 +510,9 @@ void taosCacheCleanup(SCacheObj *pCacheObj) {
}
pCacheObj->deleting = 1;
pthread_join(pCacheObj->refreshWorker, NULL);
if (taosCheckPthreadValid(pCacheObj->refreshWorker)) {
pthread_join(pCacheObj->refreshWorker, NULL);
}
uInfo("cache:%s will be cleaned up", pCacheObj->name);
doCleanupDataCache(pCacheObj);
......
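The cache-cleanup hunk wraps `pthread_join` in `taosCheckPthreadValid`, since joining a refresh worker that was never created is undefined behavior. A minimal sketch of the same guard with illustrative names (not the actual cache structures):

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
  pthread_t refreshWorker;
  bool      workerStarted;  /* set to true only after pthread_create succeeds */
} Cache;

static void cacheCleanup(Cache *cache) {
  if (cache->workerStarted) {
    pthread_join(cache->refreshWorker, NULL);
  }
  /* ... free the remaining cache resources here ... */
}

int main(void) {
  Cache cache = { .workerStarted = false };  /* never started: join is skipped */
  cacheCleanup(&cache);
  return 0;
}
```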
......@@ -201,6 +201,8 @@ int32_t vnodeOpen(int32_t vgId) {
pthread_mutex_init(&pVnode->statusMutex, NULL);
vnodeSetInitStatus(pVnode);
tsdbIncCommitRef(pVnode->vgId);
int32_t code = vnodeReadCfg(pVnode);
if (code != TSDB_CODE_SUCCESS) {
vnodeCleanUp(pVnode);
......@@ -297,7 +299,6 @@ int32_t vnodeOpen(int32_t vgId) {
pVnode->events = NULL;
vDebug("vgId:%d, vnode is opened in %s, pVnode:%p", pVnode->vgId, rootDir, pVnode);
tsdbIncCommitRef(pVnode->vgId);
vnodeAddIntoHash(pVnode);
......
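The vnodeOpen hunk moves `tsdbIncCommitRef` ahead of `vnodeReadCfg` and drops the later call, plausibly so the commit reference is already held before the first step that can fail and route through `vnodeCleanUp`. A generic acquire-before-fallible-step sketch (illustrative names, not the vnode code):

```c
#include <stdio.h>

static int refCount = 0;

static void incRef(void) { ++refCount; }
static void decRef(void) { --refCount; }

/* Stand-in for a step that may fail (e.g. reading a config file). */
static int readCfg(int id) { return (id > 0) ? 0 : -1; }

/* Cleanup releases whatever open acquired, including the reference. */
static void cleanUp(void) { decRef(); }

static int openResource(int id) {
  incRef();                 /* acquire before any fallible step      */
  if (readCfg(id) != 0) {
    cleanUp();              /* failure path releases exactly one ref */
    return -1;
  }
  return 0;
}

int main(void) {
  printf("open ok=%d refCount=%d\n", openResource(1), refCount);
  return 0;
}
```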
......@@ -67,9 +67,9 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.16</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
<version>2.0.17</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>
<!-- fastjson -->
<dependency>
......
......@@ -3,6 +3,6 @@ PROJECT(TDengine)
IF (TD_LINUX)
INCLUDE_DIRECTORIES(. ${TD_COMMUNITY_DIR}/src/inc ${TD_COMMUNITY_DIR}/src/client/inc ${TD_COMMUNITY_DIR}/inc)
AUX_SOURCE_DIRECTORY(. SRC)
ADD_EXECUTABLE(demo demo.c)
ADD_EXECUTABLE(demo apitest.c)
TARGET_LINK_LIBRARIES(demo taos_static trpc tutil pthread )
ENDIF ()
......@@ -18,6 +18,8 @@ import time
import random
import requests
import argparse
import datetime
import string
from requests.auth import HTTPBasicAuth
func_list=['avg','count','twa','sum','stddev','leastsquares','min',
'max','first','last','top','bottom','percentile','apercentile',
......@@ -31,7 +33,7 @@ condition_list=[
'fill(null)'
]
where_list = ['_c0>now-10d',' <50'," like \'%a%\'"]
where_list = ['_c0>now-10d',' <50','like',' is null']
class ConcurrentInquiry:
# def __init__(self,ts=1500000001000,host='127.0.0.1',user='root',password='taosdata',dbname='test',
# stb_prefix='st',subtb_prefix='t',n_Therads=10,r_Therads=10,probabilities=0.05,loop=5,
......@@ -54,13 +56,15 @@ class ConcurrentInquiry:
self.subtb_stru_list=[]
self.stb_tag_list=[]
self.subtb_tag_list=[]
self.probabilities = [probabilities,1-probabilities]
self.ifjoin = [0,1]
self.probabilities = [1-probabilities,probabilities]
self.ifjoin = [1,0]
self.loop = loop
self.stableNum = stableNum
self.subtableNum = subtableNum
self.insertRows = insertRows
self.mix_table = mix_table
self.max_ts = datetime.datetime.now()
self.min_ts = datetime.datetime.now() - datetime.timedelta(days=5)
def SetThreadsNum(self,num):
self.numOfTherads=num
......@@ -103,6 +107,14 @@ class ConcurrentInquiry:
self.subtb_stru_list.append(tb)
self.subtb_tag_list.append(tag)
def get_timespan(self,cl): #get the time span (first super table only)
sql = 'select first(_c0),last(_c0) from ' + self.dbname + '.' + self.stb_list[0] + ';'
print(sql)
cl.execute(sql)
for data in cl:
self.max_ts = data[1]
self.min_ts = data[0]
def get_full(self): #fetch all tables and their schemas
host = self.host
user = self.user
......@@ -118,6 +130,7 @@ class ConcurrentInquiry:
self.r_subtb_list(cl,i)
self.r_stb_stru(cl)
self.r_subtb_stru(cl)
self.get_timespan(cl)
cl.close()
conn.close()
......@@ -127,9 +140,21 @@ class ConcurrentInquiry:
for i in range(random.randint(0,len(tlist))):
c = random.choice(where_list)
if c == '_c0>now-10d':
l.append(c)
rdate = self.min_ts + (self.max_ts - self.min_ts)/10 * random.randint(-11,11)
conlist = ' _c0 ' + random.choice(['<','>','>=','<=','<>']) + "'" + str(rdate) + "'"
if self.random_pick():
l.append(conlist)
else: l.append(c)
elif '<50' in c:
conlist = ' ' + random.choice(tlist) + random.choice(['<','>','>=','<=','<>']) + str(random.randrange(-100,100))
l.append(conlist)
elif 'is null' in c:
conlist = ' ' + random.choice(tlist) + random.choice([' is null',' is not null'])
l.append(conlist)
else:
l.append(random.choice(tlist)+c)
s_all = string.ascii_letters
conlist = ' ' + random.choice(tlist) + " like \'%" + random.choice(s_all) + "%\' "
l.append(conlist)
return 'where '+random.choice([' and ',' or ']).join(l)
def con_interval(self,tlist,col_list,tag_list):
......@@ -195,8 +220,10 @@ class ConcurrentInquiry:
if bool(random.getrandbits(1)):
pick_func+=alias
sel_col_list.append(pick_func)
sql=sql+','.join(sel_col_list) #select col & func
if col_rand == 0:
sql = sql + '*'
else:
sql=sql+','.join(sel_col_list) #select col & func
if self.mix_table == 0:
sql = sql + ' from '+random.choice(self.stb_list+self.subtb_list)+' '
elif self.mix_table == 1:
......@@ -262,7 +289,26 @@ class ConcurrentInquiry:
else:
sel_col_tag.append('t1.' + str(random.choice(col_list[0] + tag_list[0])))
sel_col_tag.append('t2.' + str(random.choice(col_list[1] + tag_list[1])))
sql += ','.join(sel_col_tag)
sel_col_list = []
random.shuffle(func_list)
if self.random_pick():
loop = 0
for i,j in zip(sel_col_tag,func_list): #decide which function is applied to each queried col
alias = ' as '+ 'taos%d ' % loop
loop += 1
pick_func = ''
if j == 'leastsquares':
pick_func=j+'('+i+',1,1)'
elif j == 'top' or j == 'bottom' or j == 'percentile' or j == 'apercentile':
pick_func=j+'('+i+',1)'
else:
pick_func=j+'('+i+')'
if bool(random.getrandbits(1)):
pick_func+=alias
sel_col_list.append(pick_func)
sql += ','.join(sel_col_list)
else:
sql += ','.join(sel_col_tag)
sql = sql + ' from '+ str(tbname[0]) +' t1,' + str(tbname[1]) + ' t2 ' #select col & func
join_section = None
......@@ -274,7 +320,6 @@ class ConcurrentInquiry:
else:
temp = random.choices(col_intersection+tag_intersection)
join_section = temp.pop()
print(random.choices(col_intersection))
sql += 'where t1._c0 = t2._c0 and ' + 't1.' + str(join_section) + '=t2.' + str(join_section)
return sql
......
......@@ -7,7 +7,7 @@ system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
print ========= start dnode1 as master
system sh/exec.sh -n dnode1 -s start
sleep 3000
sleep 2000
sql connect
print ======== step1
......