Commit 885012b1 authored by liuyq-617

Merge branch 'develop' into test/testcase

......@@ -171,6 +171,8 @@ matrix:
- build-essential
- cmake
- binutils-2.26
- unixodbc
- unixodbc-dev
env:
- DESC="trusty/gcc-4.8/bintuils-2.26 build"
......@@ -198,6 +200,8 @@ matrix:
packages:
- build-essential
- cmake
- unixodbc
- unixodbc-dev
before_script:
- export TZ=Asia/Harbin
......@@ -252,6 +256,8 @@ matrix:
packages:
- build-essential
- cmake
- unixodbc
- unixodbc-dev
env:
- DESC="arm64 xenial build"
......@@ -280,6 +286,7 @@ matrix:
addons:
homebrew:
- cmake
- unixodbc
script:
- cd ${TRAVIS_BUILD_DIR}
......
......@@ -17,6 +17,7 @@ SET(TD_MQTT FALSE)
SET(TD_TSDB_PLUGINS FALSE)
SET(TD_STORAGE FALSE)
SET(TD_TOPIC FALSE)
SET(TD_MODULE FALSE)
SET(TD_COVER FALSE)
SET(TD_MEM_CHECK FALSE)
......
......@@ -258,10 +258,16 @@ TDengine 社区生态中也有一些非常友好的第三方连接器,可以
TDengine 的测试框架和所有测试例全部开源。
点击[这里](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
点击 [这里](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
# 成为社区贡献者
点击[这里](https://www.taosdata.com/cn/contributor/),了解如何成为 TDengine 的贡献者。
#加入技术交流群
TDengine官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小T为好友,即可入群。
点击 [这里](https://www.taosdata.com/cn/contributor/),了解如何成为 TDengine 的贡献者。
# 加入技术交流群
TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小T为好友,即可入群。
# [谁在使用TDengine](https://github.com/taosdata/TDengine/issues/2432)
欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
......@@ -31,7 +31,7 @@ For user manual, system design and architecture, engineering blogs, refer to [TD
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 3.5 or higher versions in the project directory.
To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
## Install tools
......@@ -250,3 +250,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
Add WeChat “tdengine” to join the group, where you can communicate with other users.
# [User List](https://github.com/taosdata/TDengine/issues/2432)
If you are using TDengine and feel it helps or you'd like to do some contributions, please add your company to [user list](https://github.com/taosdata/TDengine/issues/2432) and let us know your needs.
......@@ -29,6 +29,10 @@ IF (TD_TOPIC)
ADD_DEFINITIONS(-D_TOPIC)
ENDIF ()
IF (TD_MODULE)
ADD_DEFINITIONS(-D_MODULE)
ENDIF ()
IF (TD_GODLL)
ADD_DEFINITIONS(-D_TD_GO_DLL_)
ENDIF ()
......
......@@ -17,6 +17,14 @@ ELSEIF (${TOPIC} MATCHES "false")
MESSAGE(STATUS "Build without topic plugins")
ENDIF ()
IF (${TD_MODULE} MATCHES "true")
SET(TD_MODULE TRUE)
MESSAGE(STATUS "Build with module plugins")
ELSEIF (${TD_MODULE} MATCHES "false")
SET(TD_MODULE FALSE)
MESSAGE(STATUS "Build without module plugins")
ENDIF ()
IF (${COVER} MATCHES "true")
SET(TD_COVER TRUE)
MESSAGE(STATUS "Build with test coverage")
......
......@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
#INSTALL(TARGETS taos RUNTIME DESTINATION driver)
#INSTALL(TARGETS shell RUNTIME DESTINATION .)
IF (TD_MVN_INSTALLED)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.22-dist.jar DESTINATION connector/jdbc)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.25-dist.jar DESTINATION connector/jdbc)
ENDIF ()
ELSEIF (TD_DARWIN)
SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
......
......@@ -4,7 +4,7 @@ PROJECT(TDengine)
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "2.0.17.0")
SET(TD_VER_NUMBER "2.0.19.0")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -120,7 +120,7 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [TDengine性能对比测试工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [IDEA数据库管理工具可视化使用TDengine](https://www.taosdata.com/blog/2020/08/27/1767.html)
* [基于eletron开发的跨平台TDengine图形化管理工具](https://github.com/skye0207/TDengineGUI)
* [DataX,支持TDengine的离线数据采集/同步工具](https://github.com/alibaba/DataX)
* [DataX,支持TDengine的离线数据采集/同步工具](https://github.com/wgzhao/DataX)(文档:[读取插件](https://github.com/wgzhao/DataX/blob/master/docs/src/main/sphinx/reader/tdenginereader.md)、[写入插件](https://github.com/wgzhao/DataX/blob/master/docs/src/main/sphinx/writer/tdenginewriter.md))
## TDengine与其他数据库的对比测试
......
......@@ -101,7 +101,7 @@ $ taos -h 192.168.0.1 -s "use db; show tables;"
### 运行SQL命令脚本
TDengine终端可以通过`source`命令来运行SQL命令脚本.
TDengine 终端可以通过 `source` 命令来运行 SQL 命令脚本.
```mysql
taos> source <filename>;
......@@ -109,10 +109,10 @@ taos> source <filename>;
### Shell小技巧
- 可以使用上下光标键查看已经历史输入的命令
- 修改用户密码。在shell中使用alter user命令
- 可以使用上下光标键查看历史输入的指令
- 修改用户密码。在 shell 中使用 alter user 指令
- ctrl+c 中止正在进行中的查询
- 执行`RESET QUERY CACHE`清空本地缓存的表的schema
- 执行 `RESET QUERY CACHE` 清空本地缓存的表 schema
## <a class="anchor" id="demo"></a>TDengine 极速体验
......@@ -179,19 +179,20 @@ taos> select avg(f1), max(f2), min(f3) from test.t10 interval(10s);
### TDengine服务器支持的平台列表
| | **CentOS** **6/7/8** | **Ubuntu** **16/18/20** | **Other Linux** | **统信****UOS** | **银河****/****中标麒麟** | **凝思** **V60/V80** |
| -------------- | --------------------- | ------------------------ | --------------- | --------------- | ------------------------- | --------------------- |
| X64 | ● | ● | | ○ | ● | ● |
| 树莓派ARM32 | | ● | ● | | | |
| 龙芯MIPS64 | | | ● | | | |
| 鲲鹏 ARM64 | | ○ | ○ | | ● | |
| 申威 Alpha64 | | | ○ | ● | | |
| 飞腾ARM64 | | ○优麒麟 | | | | |
| 海光X64 | ● | ● | ● | ○ | ● | ● |
| 瑞芯微ARM64/32 | | | ○ | | | |
| 全志ARM64/32 | | | ○ | | | |
| 炬力ARM64/32 | | | ○ | | | |
| TI ARM32 | | | ○ | | | |
| | **CentOS 6/7/8** | **Ubuntu 16/18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
| -------------- | --------------------- | ------------------------ | --------------- | --------------- | ------------------------- | --------------------- | --------------------- |
| X64 | ● | ● | | ○ | ● | ● | ● |
| 树莓派 ARM32 | | ● | ● | | | | |
| 龙芯 MIPS64 | | | ● | | | | |
| 鲲鹏 ARM64 | | ○ | ○ | | ● | | |
| 申威 Alpha64 | | | ○ | ● | | | |
| 飞腾 ARM64 | | ○ 优麒麟 | | | | | |
| 海光 X64 | ● | ● | ● | ○ | ● | ● | |
| 瑞芯微 ARM64/32 | | | ○ | | | | |
| 全志 ARM64/32 | | | ○ | | | | |
| 炬力 ARM64/32 | | | ○ | | | | |
| TI ARM32 | | | ○ | | | | |
| 华为云 ARM64 | | | | | | | ● |
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
......@@ -203,7 +204,7 @@ taos> select avg(f1), max(f2), min(f3) from test.t10 interval(10s);
对照矩阵如下:
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS ** **龙芯** | **Alpha ** **申威** | **X64 ** **海光** |
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
| ----------- | --------------- | --------- | --------- | --------------- | --------- | --------- | ------------------- | -------------------- | ------------------ |
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** | **Linux** |
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● | ● |
......@@ -211,7 +212,7 @@ taos> select avg(f1), max(f2), min(f3) from test.t10 interval(10s);
| **Python** | ● | ● | ● | ○ | ● | ● | ● | -- | ● |
| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- | -- |
| **C#** | ○ | ● | ● | ○ | ○ | ○ | ○ | -- | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● | ● |
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
......
......@@ -285,7 +285,7 @@ JDBC连接器可能报错的错误码包括3种:JDBC driver本身的报错(
* https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java
* https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h
### 订阅
### <a class="anchor" id="subscribe"></a>订阅
#### 创建
......@@ -451,7 +451,8 @@ Query OK, 1 row(s) in set (0.000141s)
| taos-jdbcdriver 版本 | TDengine 版本 | JDK 版本 |
| -------------------- | ----------------- | -------- |
| 2.0.12 及以上 | 2.0.8.0 及以上 | 1.8.x |
| 2.0.22 | 2.0.18.0 及以上 | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.0 | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.2 | 1.6.1.x 及以上 | 1.8.x |
......@@ -470,9 +471,11 @@ TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT, TINYINT | java.lang.Short |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY, NCHAR | java.lang.String |
| BINARY | byte array |
| NCHAR | java.lang.String |
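A minimal, hedged sketch of how the mapping above surfaces through JDBC getters (SMALLINT→Short, TINYINT→Byte, BINARY→byte array, NCHAR→String); the table `demo.sensor` and its column names are invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class TypeMappingExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS://localhost:6030/demo?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ts, i8, i16, bin, nch FROM demo.sensor LIMIT 1")) {
            while (rs.next()) {
                Timestamp ts = rs.getTimestamp("ts"); // TIMESTAMP -> java.sql.Timestamp
                byte i8 = rs.getByte("i8");           // TINYINT   -> java.lang.Byte
                short i16 = rs.getShort("i16");       // SMALLINT  -> java.lang.Short
                byte[] bin = rs.getBytes("bin");      // BINARY    -> byte array
                String nch = rs.getString("nch");     // NCHAR     -> java.lang.String
                System.out.printf("%s %d %d %d bytes %s%n", ts, i8, i16, bin.length, nch);
            }
        }
    }
}
```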
......
......@@ -14,7 +14,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
| **Python** | ● | ● | ● | ○ | ● | ● | ○ | -- | ○ |
| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- | -- |
| **C#** | ○ | ● | ● | ○ | ○ | ○ | ○ | -- | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ○ | ○ | ○ |
其中 ● 表示经过官方测试验证, ○ 表示非官方测试验证。
......@@ -23,7 +23,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
* 在没有安装TDengine服务端软件的系统中使用连接器(除RESTful外)访问 TDengine 数据库,需要安装相应版本的客户端安装包来使应用驱动(Linux系统中文件名为libtaos.so,Windows系统中为taos.dll)被安装在系统中,否则会产生无法找到相应库文件的错误。
* 所有执行 SQL 语句的 API,例如 C/C++ Connector 中的 `taos_query`、`taos_query_a`、`taos_subscribe` 等,以及其它语言中与它们对应的 API,每次都只能执行一条 SQL 语句,如果实际参数中包含了多条语句,它们的行为是未定义的(单条执行的示例见下文)。
* 升级到TDengine到2.0.8.0版本的用户,必须更新JDBC连接TDengine必须升级taos-jdbcdriver到2.0.12及以上。
* 升级 TDengine 到 2.0.8.0 及以上版本的用户,在使用 JDBC 连接 TDengine 时,必须将 taos-jdbcdriver 升级到 2.0.12 及以上。详细的版本依赖关系请参见 [taos-jdbcdriver 文档](https://www.taosdata.com/cn/documentation/connector/java#version)。
* 无论选用何种编程语言的连接器,2.0 及以上版本的 TDengine 推荐数据库应用的每个线程都建立一个独立的连接,或基于线程建立连接池,以避免连接内的“USE statement”状态量在线程之间相互干扰(但连接的查询和写入操作都是线程安全的)。
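Illustrating the single-statement rule noted above, the hedged sketch below splits a small script on semicolons and hands each statement to the driver separately; the naive split assumes no semicolons inside string literals, and the database and table names are invented:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SingleStatementExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS://localhost:6030/?user=root&password=taosdata";
        String script = "CREATE DATABASE IF NOT EXISTS demo;"
                + "CREATE TABLE IF NOT EXISTS demo.t1 (ts TIMESTAMP, v INT);"
                + "INSERT INTO demo.t1 VALUES (NOW, 1);";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Pass one statement per call; handing the whole script to a single
            // call is undefined behaviour, as the note above says.
            for (String sql : script.split(";")) {
                if (!sql.trim().isEmpty()) {
                    stmt.execute(sql);
                }
            }
        }
    }
}
```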
## <a class="anchor" id="driver"></a>安装连接器驱动步骤
......
......@@ -111,9 +111,10 @@ taos>
**提示:**
- 任何已经加入集群在线的数据节点,都可以作为后续待加入节点的firstEP。
- firstEp这个参数仅仅在该数据节点首次加入集群时有作用,加入集群后,该数据节点会保存最新的mnode的End Point列表,不再依赖这个参数。
- 两个没有配置firstEp参数的数据节点dnode启动后,会独立运行起来。这个时候,无法将其中一个数据节点加入到另外一个数据节点,形成集群。**无法将两个独立的集群合并成为新的集群**
- 任何已经加入集群在线的数据节点,都可以作为后续待加入节点的 firstEP。
- firstEp 这个参数仅仅在该数据节点首次加入集群时有作用,加入集群后,该数据节点会保存最新的 mnode 的 End Point 列表,不再依赖这个参数。
- 接下来,配置文件中的 firstEp 参数就主要在客户端连接的时候使用了,例如 taos shell 如果不加参数,会默认连接由 firstEp 指定的节点。
- 两个没有配置 firstEp 参数的数据节点 dnode 启动后,会独立运行起来。这个时候,无法将其中一个数据节点加入到另外一个数据节点,形成集群。**无法将两个独立的集群合并成为新的集群**
## <a class="anchor" id="management"></a>数据节点管理
......
......@@ -6,19 +6,27 @@
### 内存需求
每个 DB 可以创建固定数目的 vgroup,默认与 CPU 核数相同,可通过 maxVgroupsPerDb 配置;vgroup 中的每个副本会是一个 vnode;每个 vnode 会占用固定大小的内存(大小与数据库的配置参数 blocks 和 cache 有关);每个 Table 会占用与标签总长度有关的内存;此外,系统会有一些固定的内存开销。因此,每个 DB 需要的系统内存可通过如下公式计算:
每个 Database 可以创建固定数目的 vgroup,默认与 CPU 核数相同,可通过 maxVgroupsPerDb 配置;vgroup 中的每个副本会是一个 vnode;每个 vnode 会占用固定大小的内存(大小与数据库的配置参数 blocks 和 cache 有关);每个 Table 会占用与标签总长度有关的内存;此外,系统会有一些固定的内存开销。因此,每个 DB 需要的系统内存可通过如下公式计算:
```
Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
```
示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,假设有 10 万张表,标签总长度是 256 字节,则总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,并且一个 DB 中有 10 万张表,标签总长度是 256 字节,则这个 DB 总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
注意:从这个公式计算得到的内存容量,应理解为系统的“必要需求”,而不是“内存总数”。在实际运行的生产系统中,由于操作系统缓存、资源管理调度等方面的需要,内存规划应当在计算结果的基础上保留一定冗余,以维持系统状态和系统性能的稳定性。
在实际的系统运维中,我们通常会更关心 TDengine 服务进程(taosd)会占用的内存量。
```
taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
```
其中:
1. “vnode 内存”指的是集群中所有的 Database 存储分摊到当前 taosd 节点上所占用的内存资源。可以按上文“Database Memory Size”计算公式估算每个 DB 的内存占用量进行加总,再按集群中总共的 TDengine 节点数做平均(如果设置为多副本,则还需要乘以对应的副本倍数)。
2. “mnode 内存”指的是集群中管理节点所占用的资源。如果一个 taosd 节点上分布有 mnode 管理节点,则内存消耗还需要增加“0.2KB * 集群中数据表总数”。
3. “查询内存”指的是服务端处理查询请求时所需要占用的内存。单条查询语句至少会占用“0.2KB * 查询涉及的数据表总数”的内存量。
实际运行的系统往往会根据数据特点的不同,将数据存放在不同的 DB 里。因此做规划时,也需要考虑
注意:以上内存估算方法,主要讲解了系统的“必须内存需求”,而不是“内存总数上限”。在实际运行的生产环境中,由于操作系统缓存、资源管理调度等方面的原因,内存规划应当在估算结果的基础上保留一定冗余,以维持系统状态和系统性能的稳定性。并且,生产环境通常会配置系统资源的监控工具,以便及时发现硬件资源的紧缺情况
如果内存充裕,可以加大 Blocks 的配置,这样更多数据将保存在内存里,提高查询速度。
最后,如果内存充裕,可以考虑加大 Blocks 的配置,这样更多数据将保存在内存里,提高查询速度。
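Read as a worked example rather than an exact accounting, the short sketch below (class and method names are invented for illustration) reproduces the per-database estimate above in Java with the same figures: a 4-core host, cache 16 MB, blocks 6, 100,000 tables and 256-byte tags.

```java
public class MemoryEstimate {
    // Rough per-database memory estimate in MB, mirroring the formula above:
    // maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSize + 0.5KB)
    static double databaseMemoryMb(int maxVgroupsPerDb, int blocks, int cacheMb,
                                   long numOfTables, int tagSizeBytes) {
        double vnodePart = maxVgroupsPerDb * (blocks * cacheMb + 10.0);           // MB
        double tablePart = numOfTables * (tagSizeBytes / 1024.0 + 0.5) / 1000.0;  // KB -> MB, as in the example
        return vnodePart + tablePart;
    }

    public static void main(String[] args) {
        // 4-core host -> 4 vgroups by default, cache 16 MB, blocks 6,
        // 100,000 tables in this DB, 256-byte tags per table
        double mb = databaseMemoryMb(4, 6, 16, 100_000, 256);
        System.out.printf("estimated per-DB memory: %.0f MB%n", mb); // ~499 MB
    }
}
```

The taosd total described above would then add the mnode and query-memory terms on top of this per-DB figure.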
### CPU 需求
......@@ -112,17 +120,17 @@ taosd -C
不同应用场景的数据往往具有不同的数据特征,比如保留天数、副本数、采集频次、记录大小、采集点的数量、压缩等都可完全不同。为获得在存储上的最高效率,TDengine提供如下存储相关的系统配置参数:
- days:一个数据文件存储数据的时间跨度,单位为天,默认值:10。
- keep:数据库中数据保留的天数,单位为天,默认值:3650。
- keep:数据库中数据保留的天数,单位为天,默认值:3650。(可通过 alter database 修改)
- minRows:文件块中记录的最小条数,单位为条,默认值:100。
- maxRows:文件块中记录的最大条数,单位为条,默认值:4096。
- comp:文件压缩标志位,0:关闭;1:一阶段压缩;2:两阶段压缩。默认值:2。
- comp:文件压缩标志位,0:关闭;1:一阶段压缩;2:两阶段压缩。默认值:2。(可通过 alter database 修改)
- walLevel:WAL级别。1:写wal,但不执行fsync;2:写wal, 而且执行fsync。默认值:1。
- fsync:当wal设置为2时,执行fsync的周期。设置为0,表示每次写入,立即执行fsync。单位为毫秒,默认值:3000。
- cache:内存块的大小,单位为兆字节(MB),默认值:16。
- blocks:每个VNODE(TSDB)中有多少cache大小的内存块。因此一个VNODE的用的内存大小粗略为(cache * blocks)。单位为块,默认值:4。
- replica:副本个数,取值范围:1-3。单位为个,默认值:1
- precision:时间戳精度标识,ms表示毫秒,us表示微秒。默认值:ms
- cacheLast:是否在内存中缓存子表 last_row,0:关闭;1:开启。默认值:0。(从 2.0.11 版本开始支持此参数)
- blocks:每个VNODE(TSDB)中有多少cache大小的内存块。因此一个VNODE的用的内存大小粗略为(cache * blocks)。单位为块,默认值:4。(可通过 alter database 修改)
- replica:副本个数,取值范围:1-3。单位为个,默认值:1。(可通过 alter database 修改)
- precision:时间戳精度标识,ms表示毫秒,us表示微秒。默认值:ms
- cacheLast:是否在内存中缓存子表 last_row,0:关闭;1:开启。默认值:0。(可通过 alter database 修改)(从 2.0.11 版本开始支持此参数)
对于一个应用场景,可能有多种数据特征的数据并存,最佳的设计是将具有相同数据特征的表放在一个库里,这样一个应用有多个库,而每个库可以配置不同的存储参数,从而保证系统有最优的性能。TDengine允许应用在创建库时指定上述存储参数,如果指定,该参数就将覆盖对应的系统配置参数。举例,有下述SQL:
......
......@@ -29,21 +29,21 @@ taos> DESCRIBE meters;
## <a class="anchor" id="data-type"></a>支持的数据类型
使用TDengine,最重要的是时间戳。创建并插入记录、查询历史记录的时候,均需要指定时间戳。时间戳有如下规则:
使用 TDengine,最重要的是时间戳。创建并插入记录、查询历史记录的时候,均需要指定时间戳。时间戳有如下规则:
- 时间格式为```YYYY-MM-DD HH:mm:ss.MS```, 默认时间分辨率为毫秒。比如:```2017-08-12 18:25:58.128```
- 内部函数now是服务器的当前时间
- 插入记录时,如果时间戳为now,插入数据时使用服务器当前时间
- Epoch Time: 时间戳也可以是一个长整数,表示从1970-01-01 08:00:00.000开始的毫秒数
- 时间可以加减,比如 now-2h,表明查询时刻向前推2个小时(最近2小时)。 数字后面的时间单位可以是 a(毫秒)、s(秒)、 m(分)、h(小时)、d(天)、w(周)。 比如select * from t1 where ts > now-2w and ts <= now-1w, 表示查询两周前整整一周的数据。 在指定降频操作(down sampling)的时间窗口(interval)时,时间单位还可以使用 n(自然月) 和 y(自然年)。
- 时间格式为 ```YYYY-MM-DD HH:mm:ss.MS```默认时间分辨率为毫秒。比如:```2017-08-12 18:25:58.128```
- 内部函数 now 是客户端的当前时间
- 插入记录时,如果时间戳为 now,插入数据时使用提交这条记录的客户端的当前时间
- Epoch Time:时间戳也可以是一个长整数,表示从 1970-01-01 08:00:00.000 开始的毫秒数
- 时间可以加减,比如 now-2h,表明查询时刻向前推 2 个小时(最近 2 小时)。数字后面的时间单位可以是 u(微秒)、a(毫秒)、s(秒)、m(分)、h(小时)、d(天)、w(周)。 比如 `select * from t1 where ts > now-2w and ts <= now-1w`,表示查询两周前整整一周的数据。在指定降频操作(down sampling)的时间窗口(interval)时,时间单位还可以使用 n(自然月) 和 y(自然年)。
TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMicrosecond就可支持微秒。
TDengine 缺省的时间戳是毫秒精度,但通过修改配置参数 enableMicrosecond 就可以支持微秒。
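As a hedged sketch of the timestamp rules above, the example below inserts one record with an explicit Epoch-millisecond value and then queries the last two hours with now-2h; the table name and column layout are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimestampExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS://localhost:6030/demo?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Epoch time: a long counting milliseconds since the Unix epoch
            // (1970-01-01 08:00:00.000 in UTC+8, as described above).
            long epochMs = System.currentTimeMillis();
            stmt.executeUpdate("INSERT INTO demo.t1 VALUES (" + epochMs + ", 23.5)");

            // Relative time: now-2h means two hours before the moment the query runs.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM demo.t1 WHERE ts > now-2h")) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp(1) + " " + rs.getDouble(2));
                }
            }
        }
    }
}
```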
在TDengine中,普通表的数据模型中可使用以下10种数据类型。
在TDengine中,普通表的数据模型中可使用以下 10 种数据类型。
| | 类型 | Bytes | 说明 |
| ---- | :-------: | ------ | ------------------------------------------------------------ |
| 1 | TIMESTAMP | 8 | 时间戳。缺省精度毫秒,可支持微秒。从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始,计时不能早于该时间。 |
| 1 | TIMESTAMP | 8 | 时间戳。缺省精度毫秒,可支持微秒。从格林威治时间 1970-01-01 00:00:00.000 (UTC/GMT) 开始,计时不能早于该时间。(从 2.0.18 版本开始,已经去除了这一时间范围限制) |
| 2 | INT | 4 | 整型,范围 [-2^31+1, 2^31-1], -2^31 用作 NULL |
| 3 | BIGINT | 8 | 长整型,范围 [-2^63+1, 2^63-1], -2^63 用于 NULL |
| 4 | FLOAT | 4 | 浮点型,有效位数 6-7,范围 [-3.4E38, 3.4E38] |
......@@ -249,7 +249,7 @@ TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMic
3) TAGS 列名不能为预留关键字;
4) TAGS 最多允许128个,至少1个,总长度不超过16k个字符
4) TAGS 最多允许 128 个,至少 1 个,总长度不超过 16 KB
- **删除超级表**
......
......@@ -156,3 +156,13 @@ ALTER LOCAL RESETLOG;
```
其含义是,清空本机所有由客户端生成的日志文件。
## <a class="anchor" id="timezone"></a>18. 时间戳的时区信息是怎样处理的?
TDengine 中时间戳的时区总是由客户端进行处理,而与服务端无关。具体来说,客户端会对 SQL 语句中的时间戳进行时区转换,转为 UTC 时区(即 Unix 时间戳——Unix Timestamp)再交由服务端进行写入和查询;在读取数据时,服务端也是采用 UTC 时区提供原始数据,客户端收到后再根据本地设置,把时间戳转换为本地系统所要求的时区进行显示。
客户端在处理时间戳字符串时,会采取如下逻辑:
1. 在未做特殊设置的情况下,客户端默认使用所在操作系统的时区设置。
2. 如果在 taos.cfg 中设置了 timezone 参数,则客户端会以这个配置文件中的设置为准。
3. 如果在 C/C++/Java/Python 等各种编程语言的 Connector Driver 中,在建立数据库连接时显式指定了 timezone,那么会以这个指定的时区设置为准。例如 Java Connector 的 JDBC URL 中就有 timezone 参数。
4. 在书写 SQL 语句时,也可以直接使用 Unix 时间戳(例如 `1554984068000`)或带有时区的时间戳字符串,也即以 RFC 3339 格式(例如 `2013-04-12T15:52:01.123+08:00`)或 ISO-8601 格式(例如 `2013-04-12T15:52:01.123+0800`)来书写时间戳,此时这些时间戳的取值将不再受其他时区设置的影响。
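As a rough illustration of rules 3 and 4 above, the sketch below passes timezone in the JDBC URL and writes one row with an RFC 3339 timestamp; the host, database and table names are placeholders, and only the timezone parameter itself is taken from the Java connector behavior described above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TimezoneExample {
    public static void main(String[] args) throws Exception {
        // A timezone given in the JDBC URL overrides taos.cfg and the OS setting (rule 3).
        String url = "jdbc:TAOS://localhost:6030/demo?user=root&password=taosdata&timezone=UTC-8";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // An RFC 3339 timestamp carries its own offset, so it is not affected
            // by the client timezone setting (rule 4).
            stmt.executeUpdate(
                "INSERT INTO demo.t1 VALUES ('2013-04-12T15:52:01.123+08:00', 42)");
        }
    }
}
```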
......@@ -29,6 +29,9 @@
# number of threads per CPU core
# numOfThreadsPerCore 1.0
# number of threads to commit cache data
# numOfCommitThreads 4
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
......
......@@ -120,7 +120,7 @@ function clean_service_on_systemd() {
if [ "$verMode" == "cluster" ]; then
nginx_service_config="${service_config_dir}/${nginx_service_name}.service"
if [ -d ${bin_dir}/web ]; then
if [ -d ${install_nginxd_dir} ]; then
if systemctl is-active --quiet ${nginx_service_name}; then
echo "Nginx for TDengine is running, stopping it..."
${csudo} systemctl stop ${nginx_service_name} &> /dev/null || echo &> /dev/null
......
name: tdengine
base: core18
version: '2.0.17.0'
version: '2.0.19.0'
icon: snap/gui/t-dengine.svg
summary: an open-source big data platform designed and optimized for IoT.
description: |
......@@ -72,7 +72,7 @@ parts:
- usr/bin/taosd
- usr/bin/taos
- usr/bin/taosdemo
- usr/lib/libtaos.so.2.0.17.0
- usr/lib/libtaos.so.2.0.19.0
- usr/lib/libtaos.so.1
- usr/lib/libtaos.so
......
......@@ -19,6 +19,6 @@ ADD_SUBDIRECTORY(tsdb)
ADD_SUBDIRECTORY(wal)
ADD_SUBDIRECTORY(cq)
ADD_SUBDIRECTORY(dnode)
#ADD_SUBDIRECTORY(connector/odbc)
ADD_SUBDIRECTORY(connector/odbc)
ADD_SUBDIRECTORY(connector/jdbc)
......@@ -36,19 +36,6 @@ extern "C" {
#define UTIL_TABLE_IS_NORMAL_TABLE(metaInfo)\
(!(UTIL_TABLE_IS_SUPER_TABLE(metaInfo) || UTIL_TABLE_IS_CHILD_TABLE(metaInfo)))
typedef struct SParsedColElem {
int16_t colIndex;
uint16_t offset;
} SParsedColElem;
typedef struct SParsedDataColInfo {
int16_t numOfCols;
int16_t numOfAssignedCols;
SParsedColElem elems[TSDB_MAX_COLUMNS];
bool hasVal[TSDB_MAX_COLUMNS];
} SParsedDataColInfo;
#pragma pack(push,1)
// this struct is transferred as binary, padding two bytes to avoid
// an 'uid' whose low bytes are 0xff being recognized as NULL,
......@@ -83,6 +70,22 @@ typedef struct SJoinSupporter {
SArray* pVgroupTables;
} SJoinSupporter;
typedef struct SMergeCtx {
SJoinSupporter* p;
int32_t idx;
SArray* res;
int8_t compared;
}SMergeCtx;
typedef struct SMergeTsCtx {
SJoinSupporter* p;
STSBuf* res;
int64_t numOfInput;
int8_t compared;
}SMergeTsCtx;
typedef struct SVgroupTableInfo {
SVgroupInfo vgInfo;
SArray* itemList; //SArray<STableIdInfo>
......@@ -102,6 +105,8 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff
void tscDestroyDataBlock(STableDataBlocks* pDataBlock, bool removeMeta);
void tscSortRemoveDataBlockDupRows(STableDataBlocks* dataBuf);
void tscDestroyBoundColumnInfo(SParsedDataColInfo* pColInfo);
SParamInfo* tscAddParamToDataBlock(STableDataBlocks* pDataBlock, char type, uint8_t timePrec, int16_t bytes,
uint32_t offset);
......@@ -124,6 +129,8 @@ bool tscIsPointInterpQuery(SQueryInfo* pQueryInfo);
bool tscIsTWAQuery(SQueryInfo* pQueryInfo);
bool tscIsSecondStageQuery(SQueryInfo* pQueryInfo);
bool tscGroupbyColumn(SQueryInfo* pQueryInfo);
bool tscIsTopbotQuery(SQueryInfo* pQueryInfo);
int32_t tscGetTopbotQueryParam(SQueryInfo* pQueryInfo);
bool tscNonOrderedProjectionQueryOnSTable(SQueryInfo *pQueryInfo, int32_t tableIndex);
bool tscOrderedProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableIndex);
......@@ -183,6 +190,7 @@ int32_t tscSqlExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deep
void tscSqlExprInfoDestroy(SArray* pExprInfo);
SColumn* tscColumnClone(const SColumn* src);
bool tscColumnExists(SArray* pColumnList, SColumnIndex* pColIndex);
SColumn* tscColumnListInsert(SArray* pColList, SColumnIndex* colIndex);
SArray* tscColumnListClone(const SArray* src, int16_t tableIndex);
void tscColumnListDestroy(SArray* pColList);
......
......@@ -142,15 +142,15 @@ typedef struct SCond {
} SCond;
typedef struct SJoinNode {
char tableName[TSDB_TABLE_FNAME_LEN];
uint64_t uid;
int16_t tagColId;
SArray* tsJoin;
SArray* tagJoin;
} SJoinNode;
typedef struct SJoinInfo {
bool hasJoin;
SJoinNode left;
SJoinNode right;
SJoinNode* joinTables[TSDB_MAX_JOIN_TABLE_NUM];
} SJoinInfo;
typedef struct STagCond {
......@@ -175,6 +175,19 @@ typedef struct SParamInfo {
uint32_t offset;
} SParamInfo;
typedef struct SBoundColumn {
bool hasVal; // denote if current column has bound or not
int32_t offset; // all column offset value
} SBoundColumn;
typedef struct SParsedDataColInfo {
int16_t numOfCols;
int16_t numOfBound;
int32_t *boundedColumns;
SBoundColumn *cols;
} SParsedDataColInfo;
typedef struct STableDataBlocks {
SName tableName;
int8_t tsSource; // where does the UNIX timestamp come from, server or client
......@@ -189,6 +202,8 @@ typedef struct STableDataBlocks {
STableMeta *pTableMeta; // the tableMeta of current table, the table meta will be used during submit, keep a ref to avoid to be removed from cache
char *pData;
SParsedDataColInfo boundColumnInfo;
// for parameter ('?') binding
uint32_t numOfAllocedParams;
uint32_t numOfParams;
......@@ -284,7 +299,7 @@ typedef struct {
char * pRsp;
int32_t rspType;
int32_t rspLen;
uint64_t qhandle;
uint64_t qId;
int64_t useconds;
int64_t offset; // offset value from vnode during projection query of stable
int32_t row;
......@@ -367,7 +382,7 @@ typedef struct SSqlObj {
int64_t svgroupRid;
int64_t squeryLock;
int32_t retryReason; // previous error code
struct SSqlObj *prev, *next;
int64_t self;
} SSqlObj;
......@@ -425,6 +440,7 @@ void tscRestoreFuncForSTableQuery(SQueryInfo *pQueryInfo);
int32_t tscCreateResPointerInfo(SSqlRes *pRes, SQueryInfo *pQueryInfo);
void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo);
void destroyTableNameList(SSqlCmd* pCmd);
void tscResetSqlCmd(SSqlCmd *pCmd, bool removeMeta);
......@@ -439,6 +455,8 @@ void tscFreeSqlResult(SSqlObj *pSql);
* @param pObj
*/
void tscFreeSqlObj(SSqlObj *pSql);
void tscFreeSubobj(SSqlObj* pSql);
void tscFreeRegisteredSqlObj(void *pSql);
void tscCloseTscObj(void *pObj);
......@@ -460,6 +478,7 @@ char* tscGetSqlStr(SSqlObj* pSql);
bool tscIsQueryWithLimit(SSqlObj* pSql);
bool tscHasReachLimitation(SQueryInfo *pQueryInfo, SSqlRes *pRes);
void tscSetBoundColumnInfo(SParsedDataColInfo *pColInfo, SSchema *pSchema, int32_t numOfCols);
char *tscGetErrorMsgPayload(SSqlCmd *pCmd);
......
......@@ -160,8 +160,8 @@ static void tscProcessAsyncRetrieveImpl(void *param, TAOS_RES *tres, int numOfRo
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
if ((pRes->qhandle == 0 || numOfRows != 0) && pCmd->command < TSDB_SQL_LOCAL) {
if (pRes->qhandle == 0 && numOfRows != 0) {
if ((pRes->qId == 0 || numOfRows != 0) && pCmd->command < TSDB_SQL_LOCAL) {
if (pRes->qId == 0 && numOfRows != 0) {
tscError("qhandle is NULL");
} else {
pRes->code = numOfRows;
......@@ -208,7 +208,7 @@ void taos_fetch_rows_a(TAOS_RES *taosa, __async_cb_func_t fp, void *param) {
pSql->fetchFp = fp;
pSql->fp = tscAsyncFetchRowsProxy;
if (pRes->qhandle == 0) {
if (pRes->qId == 0) {
tscError("qhandle is NULL");
pRes->code = TSDB_CODE_TSC_INVALID_QHANDLE;
pSql->param = param;
......@@ -310,9 +310,50 @@ void tscAsyncResultOnError(SSqlObj* pSql) {
taosScheduleTask(tscQhandle, &schedMsg);
}
int tscSendMsgToServer(SSqlObj *pSql);
static int32_t updateMetaBeforeRetryQuery(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo, SQueryInfo* pQueryInfo) {
// handle the invalid table error code for super table.
// update the pExpr info, colList info, number of table columns
// TODO Re-parse this sql and issue the corresponding subquery as an alternative for this case.
if (pSql->retryReason == TSDB_CODE_TDB_INVALID_TABLE_ID) {
int32_t numOfExprs = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
int32_t numOfCols = tscGetNumOfColumns(pTableMetaInfo->pTableMeta);
int32_t numOfTags = tscGetNumOfTags(pTableMetaInfo->pTableMeta);
int tscSendMsgToServer(SSqlObj *pSql);
SSchema *pSchema = tscGetTableSchema(pTableMetaInfo->pTableMeta);
for (int32_t i = 0; i < numOfExprs; ++i) {
SSqlExpr *pExpr = tscSqlExprGet(pQueryInfo, i);
pExpr->uid = pTableMetaInfo->pTableMeta->id.uid;
if (pExpr->colInfo.colIndex >= 0) {
int32_t index = pExpr->colInfo.colIndex;
if ((TSDB_COL_IS_NORMAL_COL(pExpr->colInfo.flag) && index >= numOfCols) ||
(TSDB_COL_IS_TAG(pExpr->colInfo.flag) && (index < numOfCols || index >= (numOfCols + numOfTags)))) {
return pSql->retryReason;
}
if ((pSchema[pExpr->colInfo.colIndex].colId != pExpr->colInfo.colId) &&
strcasecmp(pExpr->colInfo.name, pSchema[pExpr->colInfo.colIndex].name) != 0) {
return pSql->retryReason;
}
}
}
// validate the table columns information
for (int32_t i = 0; i < taosArrayGetSize(pQueryInfo->colList); ++i) {
SColumn *pCol = taosArrayGetP(pQueryInfo->colList, i);
if (pCol->colIndex.columnIndex >= numOfCols) {
return pSql->retryReason;
}
}
} else {
// do nothing
}
return TSDB_CODE_SUCCESS;
}
void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
SSqlObj* pSql = (SSqlObj*)taosAcquireRef(tscObjRef, (int64_t)param);
......@@ -339,7 +380,8 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY|TSDB_QUERY_TYPE_SUBQUERY|TSDB_QUERY_TYPE_TAG_FILTER_QUERY))) {
tscDebug("%p update local table meta, continue to process sql and send the corresponding query", pSql);
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
code = tscGetTableMeta(pSql, pTableMetaInfo);
assert(code == TSDB_CODE_TSC_ACTION_IN_PROGRESS || code == TSDB_CODE_SUCCESS);
......@@ -349,6 +391,10 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
}
assert((tscGetNumOfTags(pTableMetaInfo->pTableMeta) != 0));
code = updateMetaBeforeRetryQuery(pSql, pTableMetaInfo, pQueryInfo);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
// tscProcessSql can add error into async res
tscProcessSql(pSql);
......@@ -459,10 +505,7 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
return;
_error:
if (code != TSDB_CODE_SUCCESS) {
pSql->res.code = code;
pRes->code = code;
tscAsyncResultOnError(pSql);
}
taosReleaseRef(tscObjRef, pSql->self);
}
......@@ -309,7 +309,7 @@ TAOS_ROW tscFetchRow(void *param) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
if (pRes->qhandle == 0 ||
if (pRes->qId == 0 ||
pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
pCmd->command == TSDB_SQL_INSERT) {
return NULL;
......@@ -905,7 +905,7 @@ int tscProcessLocalCmd(SSqlObj *pSql) {
* set the qhandle to be 1 in order to pass the qhandle check, and to call partial release function to
* free allocated resources and remove the SqlObj from sql query linked list
*/
pRes->qhandle = 0x1;
pRes->qId = 0x1;
pRes->numOfRows = 0;
} else if (pCmd->command == TSDB_SQL_SHOW_CREATE_TABLE) {
pRes->code = tscProcessShowCreateTable(pSql);
......
......@@ -338,11 +338,20 @@ void tscCreateLocalMerger(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrde
pReducer->resColModel->capacity = pReducer->nResultBufSize;
pReducer->finalModel = pFFModel;
int32_t expandFactor = 1;
if (finalmodel->rowSize > 0) {
bool topBotQuery = tscIsTopbotQuery(pQueryInfo);
if (topBotQuery) {
expandFactor = tscGetTopbotQueryParam(pQueryInfo);
pReducer->resColModel->capacity /= (finalmodel->rowSize * expandFactor);
pReducer->resColModel->capacity *= expandFactor;
} else {
pReducer->resColModel->capacity /= finalmodel->rowSize;
}
}
assert(finalmodel->rowSize > 0 && finalmodel->rowSize <= pReducer->rowSize);
pReducer->pFinalRes = calloc(1, pReducer->rowSize * pReducer->resColModel->capacity);
if (pReducer->pTempBuffer == NULL || pReducer->discardData == NULL || pReducer->pResultBuf == NULL ||
......@@ -1150,9 +1159,10 @@ static void fillMultiRowsOfTagsVal(SQueryInfo *pQueryInfo, int32_t numOfRes, SLo
memset(buf, 0, (size_t)maxBufSize);
memcpy(buf, pCtx->pOutput, (size_t)pCtx->outputBytes);
char* next = pCtx->pOutput;
for (int32_t i = 0; i < inc; ++i) {
pCtx->pOutput += pCtx->outputBytes;
memcpy(pCtx->pOutput, buf, (size_t)pCtx->outputBytes);
next += pCtx->outputBytes;
memcpy(next, buf, (size_t)pCtx->outputBytes);
}
}
......@@ -1440,6 +1450,11 @@ int32_t tscDoLocalMerge(SSqlObj *pSql) {
SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);
tFilePage *tmpBuffer = pLocalMerge->pTempBuffer;
int32_t remain = 1;
if (tscIsTopbotQuery(pQueryInfo)) {
remain = tscGetTopbotQueryParam(pQueryInfo);
}
if (doHandleLastRemainData(pSql)) {
return TSDB_CODE_SUCCESS;
}
......@@ -1528,7 +1543,7 @@ int32_t tscDoLocalMerge(SSqlObj *pSql) {
* if the previous group does NOT generate any result (pResBuf->num == 0),
* continue to process results instead of return results.
*/
if ((!sameGroup && pResBuf->num > 0) || (pResBuf->num == pLocalMerge->resColModel->capacity)) {
if ((!sameGroup && pResBuf->num > 0) || (pResBuf->num + remain >= pLocalMerge->resColModel->capacity)) {
// does not belong to the same group
bool notSkipped = genFinalResults(pSql, pLocalMerge, !sameGroup);
......@@ -1607,7 +1622,7 @@ void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen)
tscDestroyLocalMerger(pObj);
}
pRes->qhandle = 1; // hack to pass the safety check in fetch_row function
pRes->qId = 1; // hack to pass the safety check in fetch_row function
pRes->numOfRows = 0;
pRes->row = 0;
......
This diff is collapsed.
......@@ -261,7 +261,7 @@ static int doBindParam(char* data, SParamInfo* param, TAOS_BIND* bind) {
return TSDB_CODE_SUCCESS;
}
if (1) {
if (0) {
// allow user bind param data with different type
union {
int8_t v1;
......@@ -903,7 +903,7 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
pRes->qhandle = 0;
pRes->qId = 0;
pRes->numOfRows = 1;
strtolower(pSql->sqlstr, sql);
......@@ -1057,14 +1057,28 @@ int taos_stmt_get_param(TAOS_STMT *stmt, int idx, int *type, int *bytes) {
}
if (pStmt->isInsert) {
SSqlObj* pSql = pStmt->pSql;
SSqlCmd *pCmd = &pSql->cmd;
STableDataBlocks* pBlock = taosArrayGetP(pCmd->pDataBlocks, 0);
SSqlCmd* pCmd = &pStmt->pSql->cmd;
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
if (pCmd->pTableBlockHashList == NULL) {
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
}
STableDataBlocks* pBlock = NULL;
assert(pCmd->numOfParams == pBlock->numOfParams);
if (idx < 0 || idx >= pBlock->numOfParams) return -1;
int32_t ret =
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
if (ret != 0) {
// todo handle error
}
if (idx<0 || idx>=pBlock->numOfParams) {
tscError("param %d: out of range", idx);
abort();
}
SParamInfo* param = pBlock->params + idx;
SParamInfo* param = &pBlock->params[idx];
if (type) *type = param->type;
if (bytes) *bytes = param->bytes;
......
......@@ -249,8 +249,8 @@ int tscBuildQueryStreamDesc(void *pMsg, STscObj *pObj) {
pQdesc->stime = htobe64(pSql->stime);
pQdesc->queryId = htonl(pSql->queryId);
//pQdesc->useconds = htobe64(pSql->res.useconds);
pQdesc->useconds = htobe64(now - pSql->stime);
pQdesc->qHandle = htobe64(pSql->res.qhandle);
pQdesc->useconds = htobe64(now - pSql->stime); // use local time instead of server rsp elapsed time
pQdesc->qHandle = htobe64(pSql->res.qId);
pHeartbeat->numOfQueries++;
pQdesc++;
......
This diff is collapsed.
......@@ -350,8 +350,8 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
taosMsleep(duration);
}
pSql->retryReason = rpcMsg->code;
rpcMsg->code = tscRenewTableMeta(pSql, 0);
// if there is an error occurring, proceed to the following error handling procedure.
if (rpcMsg->code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
taosReleaseRef(tscObjRef, handle);
......@@ -508,7 +508,7 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex);
pRetrieveMsg->free = htons(pQueryInfo->type);
pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
pRetrieveMsg->qId = htobe64(pSql->res.qId);
// todo valid the vgroupId at the client side
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
......@@ -520,7 +520,7 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
assert(pVgroupInfo->vgroups[vgIndex].vgId > 0 && vgIndex < pTableMetaInfo->vgroupList->numOfVgroups);
pRetrieveMsg->header.vgId = htonl(pVgroupInfo->vgroups[vgIndex].vgId);
tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qhandle:%" PRIX64, pSql, pVgroupInfo->vgroups[vgIndex].vgId, vgIndex, pSql->res.qhandle);
tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qhandle:%" PRIX64, pSql, pVgroupInfo->vgroups[vgIndex].vgId, vgIndex, pSql->res.qId);
} else {
int32_t numOfVgroups = (int32_t)taosArrayGetSize(pTableMetaInfo->pVgroupTables);
assert(vgIndex >= 0 && vgIndex < numOfVgroups);
......@@ -528,12 +528,12 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SVgroupTableInfo* pTableIdList = taosArrayGet(pTableMetaInfo->pVgroupTables, vgIndex);
pRetrieveMsg->header.vgId = htonl(pTableIdList->vgInfo.vgId);
tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qhandle:%" PRIX64, pSql, pTableIdList->vgInfo.vgId, vgIndex, pSql->res.qhandle);
tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qId:%" PRIu64, pSql, pTableIdList->vgInfo.vgId, vgIndex, pSql->res.qId);
}
} else {
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
pRetrieveMsg->header.vgId = htonl(pTableMeta->vgId);
tscDebug("%p build fetch msg from only one vgroup, vgId:%d, qhandle:%" PRIX64, pSql, pTableMeta->vgId, pSql->res.qhandle);
tscDebug("%p build fetch msg from only one vgroup, vgId:%d, qId:%" PRIu64, pSql, pTableMeta->vgId, pSql->res.qId);
}
pSql->cmd.payloadLen = sizeof(SRetrieveTableMsg);
......@@ -613,7 +613,7 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql, int32_t clauseIndex) {
tableSerialize + sqlLen + 4096 + pQueryInfo->bufLen;
}
static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char *pMsg) {
static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char *pMsg, int32_t *succeed) {
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, pSql->cmd.clauseIndex, 0);
TSKEY dfltKey = htobe64(pQueryMsg->window.skey);
......@@ -626,9 +626,14 @@ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char
assert(index >= 0);
SVgroupInfo* pVgroupInfo = NULL;
if (pTableMetaInfo->vgroupList->numOfVgroups > 0) {
if (pTableMetaInfo->vgroupList && pTableMetaInfo->vgroupList->numOfVgroups > 0) {
assert(index < pTableMetaInfo->vgroupList->numOfVgroups);
pVgroupInfo = &pTableMetaInfo->vgroupList->vgroups[index];
} else {
tscError("%p No vgroup info found", pSql);
*succeed = 0;
return pMsg;
}
vgId = pVgroupInfo->vgId;
......@@ -948,8 +953,13 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
pQueryMsg->secondStageOutput = 0;
}
int32_t succeed = 1;
// serialize the table info (sid, uid, tags)
pMsg = doSerializeTableInfo(pQueryMsg, pSql, pMsg);
pMsg = doSerializeTableInfo(pQueryMsg, pSql, pMsg, &succeed);
if (succeed == 0) {
return TSDB_CODE_TSC_APP_ERROR;
}
SSqlGroupbyExpr *pGroupbyExpr = &pQueryInfo->groupbyExpr;
if (pGroupbyExpr->numOfGroupCols > 0) {
......@@ -1284,6 +1294,23 @@ int32_t tscBuildUseDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
return TSDB_CODE_SUCCESS;
}
int32_t tscBuildSyncDbReplicaMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
SSqlCmd *pCmd = &pSql->cmd;
pCmd->payloadLen = sizeof(SSyncDbMsg);
if (TSDB_CODE_SUCCESS != tscAllocPayload(pCmd, pCmd->payloadLen)) {
tscError("%p failed to malloc for query msg", pSql);
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
SSyncDbMsg *pSyncMsg = (SSyncDbMsg *)pCmd->payload;
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
tNameExtractFullName(&pTableMetaInfo->name, pSyncMsg->db);
pCmd->msgType = TSDB_MSG_TYPE_CM_SYNC_DB;
return TSDB_CODE_SUCCESS;
}
int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
STscObj *pObj = pSql->pTscObj;
SSqlCmd *pCmd = &pSql->cmd;
......@@ -1556,7 +1583,7 @@ int tscBuildRetrieveFromMgmtMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(pCmd, 0);
SRetrieveTableMsg *pRetrieveMsg = (SRetrieveTableMsg*)pCmd->payload;
pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
pRetrieveMsg->qId = htobe64(pSql->res.qId);
pRetrieveMsg->free = htons(pQueryInfo->type);
return TSDB_CODE_SUCCESS;
......@@ -2064,6 +2091,10 @@ int tscProcessSTableVgroupRsp(SSqlObj *pSql) {
assert(pInfo->vgroupList != NULL);
pInfo->vgroupList->numOfVgroups = pVgroupMsg->numOfVgroups;
if (pInfo->vgroupList->numOfVgroups <= 0) {
//tfree(pInfo->vgroupList);
tscError("%p empty vgroup info", pSql);
} else {
for (int32_t j = 0; j < pInfo->vgroupList->numOfVgroups; ++j) {
//just init, no need to lock
SVgroupInfo *pVgroups = &pInfo->vgroupList->vgroups[j];
......@@ -2079,6 +2110,7 @@ int tscProcessSTableVgroupRsp(SSqlObj *pSql) {
pVgroups->epAddr[k].fqdn = strndup(vmsg->epAddr[k].fqdn, tListLen(vmsg->epAddr[k].fqdn));
}
}
}
pMsg += size;
}
......@@ -2102,7 +2134,7 @@ int tscProcessShowRsp(SSqlObj *pSql) {
pShow = (SShowRsp *)pRes->pRsp;
pShow->qhandle = htobe64(pShow->qhandle);
pRes->qhandle = pShow->qhandle;
pRes->qId = pShow->qhandle;
tscResetForNextRetrieve(pRes);
pMetaMsg = &(pShow->tableMeta);
......@@ -2284,11 +2316,12 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
SSqlRes *pRes = &pSql->res;
SQueryTableRsp *pQuery = (SQueryTableRsp *)pRes->pRsp;
pQuery->qhandle = htobe64(pQuery->qhandle);
pRes->qhandle = pQuery->qhandle;
pQuery->qId = htobe64(pQuery->qId);
pRes->qId = pQuery->qId;
pRes->data = NULL;
tscResetForNextRetrieve(pRes);
tscDebug("%p query rsp received, qId:%"PRIu64, pSql, pRes->qId);
return 0;
}
......@@ -2346,7 +2379,8 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
}
pRes->row = 0;
tscDebug("%p numOfRows:%d, offset:%" PRId64 ", complete:%d", pSql, pRes->numOfRows, pRes->offset, pRes->completed);
tscDebug("%p numOfRows:%d, offset:%" PRId64 ", complete:%d, qId:%"PRIu64, pSql, pRes->numOfRows, pRes->offset,
pRes->completed, pRes->qId);
return 0;
}
......@@ -2559,6 +2593,7 @@ void tscInitMsgsFp() {
tscBuildMsg[TSDB_SQL_DROP_USER] = tscBuildDropUserAcctMsg;
tscBuildMsg[TSDB_SQL_DROP_ACCT] = tscBuildDropUserAcctMsg;
tscBuildMsg[TSDB_SQL_DROP_DB] = tscBuildDropDbMsg;
tscBuildMsg[TSDB_SQL_SYNC_DB_REPLICA] = tscBuildSyncDbReplicaMsg;
tscBuildMsg[TSDB_SQL_DROP_TABLE] = tscBuildDropTableMsg;
tscBuildMsg[TSDB_SQL_ALTER_USER] = tscBuildUserMsg;
tscBuildMsg[TSDB_SQL_CREATE_DNODE] = tscBuildCreateDnodeMsg;
......@@ -2612,7 +2647,6 @@ void tscInitMsgsFp() {
tscProcessMsgRsp[TSDB_SQL_SHOW_CREATE_TABLE] = tscProcessShowCreateRsp;
tscProcessMsgRsp[TSDB_SQL_SHOW_CREATE_DATABASE] = tscProcessShowCreateRsp;
tscKeepConn[TSDB_SQL_SHOW] = 1;
tscKeepConn[TSDB_SQL_RETRIEVE] = 1;
tscKeepConn[TSDB_SQL_SELECT] = 1;
......
......@@ -476,7 +476,7 @@ TAOS_ROW taos_fetch_row(TAOS_RES *res) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
if (pRes->qhandle == 0 ||
if (pRes->qId == 0 ||
pRes->code == TSDB_CODE_TSC_QUERY_CANCELLED ||
pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
pCmd->command == TSDB_SQL_INSERT) {
......@@ -508,7 +508,7 @@ int taos_fetch_block(TAOS_RES *res, TAOS_ROW *rows) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
if (pRes->qhandle == 0 ||
if (pRes->qId == 0 ||
pRes->code == TSDB_CODE_TSC_QUERY_CANCELLED ||
pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
pCmd->command == TSDB_SQL_INSERT) {
......@@ -554,7 +554,7 @@ static bool tscKillQueryInDnode(SSqlObj* pSql) {
SSqlCmd* pCmd = &pSql->cmd;
SSqlRes* pRes = &pSql->res;
if (pRes == NULL || pRes->qhandle == 0) {
if (pRes == NULL || pRes->qId == 0) {
return true;
}
......@@ -1050,7 +1050,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
* If qhandle is NOT set 0, the function of taos_free_result() will send message to server by calling tscProcessSql()
* to free connection, which may cause segment fault, when the parse phase is not even successfully executed.
*/
pRes->qhandle = 0;
pRes->qId = 0;
free(str);
if (code != TSDB_CODE_SUCCESS) {
......
......@@ -299,6 +299,7 @@ static void tscProcessStreamRetrieveResult(void *param, TAOS_RES *res, int numOf
tfree(pTableMetaInfo->pTableMeta);
tscFreeSqlResult(pSql);
tscFreeSubobj(pSql);
tfree(pSql->pSubs);
pSql->subState.numOfSub = 0;
pTableMetaInfo->vgroupList = tscVgroupInfoClear(pTableMetaInfo->vgroupList);
......
......@@ -149,7 +149,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
}
strtolower(pSql->sqlstr, pSql->sqlstr);
pRes->qhandle = 0;
pRes->qId = 0;
pRes->numOfRows = 1;
code = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
......@@ -448,7 +448,7 @@ SSqlObj* recreateSqlObj(SSub* pSub) {
return NULL;
}
pRes->qhandle = 0;
pRes->qId = 0;
pRes->numOfRows = 1;
int code = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
......@@ -546,7 +546,7 @@ TAOS_RES *taos_consume(TAOS_SUB *tsub) {
uint32_t type = pQueryInfo->type;
tscFreeSqlResult(pSql);
pRes->numOfRows = 1;
pRes->qhandle = 0;
pRes->qId = 0;
pSql->cmd.command = TSDB_SQL_SELECT;
pQueryInfo->type = type;
......
This diff is collapsed.
......@@ -271,6 +271,41 @@ bool tscIsTWAQuery(SQueryInfo* pQueryInfo) {
return false;
}
bool tscIsTopbotQuery(SQueryInfo* pQueryInfo) {
size_t numOfExprs = tscSqlExprNumOfExprs(pQueryInfo);
for (int32_t i = 0; i < numOfExprs; ++i) {
SSqlExpr* pExpr = tscSqlExprGet(pQueryInfo, i);
if (pExpr == NULL) {
continue;
}
int32_t functionId = pExpr->functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM) {
return true;
}
}
return false;
}
int32_t tscGetTopbotQueryParam(SQueryInfo* pQueryInfo) {
size_t numOfExprs = tscSqlExprNumOfExprs(pQueryInfo);
for (int32_t i = 0; i < numOfExprs; ++i) {
SSqlExpr* pExpr = tscSqlExprGet(pQueryInfo, i);
if (pExpr == NULL) {
continue;
}
int32_t functionId = pExpr->functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM) {
return (int32_t) pExpr->param[0].i64;
}
}
return 0;
}
void tscClearInterpInfo(SQueryInfo* pQueryInfo) {
if (!tscIsPointInterpQuery(pQueryInfo)) {
return;
......@@ -309,7 +344,7 @@ void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo) {
int32_t offset = 0;
for (int32_t i = 0; i < pRes->numOfCols; ++i) {
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
SInternalField* pInfo = (SInternalField*)TARRAY_GET_ELEM(pQueryInfo->fieldsInfo.internalField, i);
pRes->urow[i] = pRes->data + offset * pRes->numOfRows;
......@@ -415,6 +450,20 @@ void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeMeta) {
tfree(pCmd->pQueryInfo);
}
void destroyTableNameList(SSqlCmd* pCmd) {
if (pCmd->numOfTables == 0) {
assert(pCmd->pTableNameList == NULL);
return;
}
for(int32_t i = 0; i < pCmd->numOfTables; ++i) {
tfree(pCmd->pTableNameList[i]);
}
pCmd->numOfTables = 0;
tfree(pCmd->pTableNameList);
}
void tscResetSqlCmd(SSqlCmd* pCmd, bool removeMeta) {
pCmd->command = 0;
pCmd->numOfCols = 0;
......@@ -424,14 +473,7 @@ void tscResetSqlCmd(SSqlCmd* pCmd, bool removeMeta) {
pCmd->parseFinished = 0;
pCmd->autoCreated = 0;
for(int32_t i = 0; i < pCmd->numOfTables; ++i) {
if (pCmd->pTableNameList && pCmd->pTableNameList[i]) {
tfree(pCmd->pTableNameList[i]);
}
}
pCmd->numOfTables = 0;
tfree(pCmd->pTableNameList);
destroyTableNameList(pCmd);
pCmd->pTableBlockHashList = tscDestroyBlockHashTable(pCmd->pTableBlockHashList, removeMeta);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
......@@ -447,7 +489,7 @@ void tscFreeSqlResult(SSqlObj* pSql) {
memset(&pSql->res, 0, sizeof(SSqlRes));
}
static void tscFreeSubobj(SSqlObj* pSql) {
void tscFreeSubobj(SSqlObj* pSql) {
if (pSql->subState.numOfSub == 0) {
return;
}
......@@ -548,6 +590,11 @@ void tscFreeSqlObj(SSqlObj* pSql) {
free(pSql);
}
void tscDestroyBoundColumnInfo(SParsedDataColInfo* pColInfo) {
tfree(pColInfo->boundedColumns);
tfree(pColInfo->cols);
}
void tscDestroyDataBlock(STableDataBlocks* pDataBlock, bool removeMeta) {
if (pDataBlock == NULL) {
return;
......@@ -568,6 +615,7 @@ void tscDestroyDataBlock(STableDataBlocks* pDataBlock, bool removeMeta) {
taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
}
tscDestroyBoundColumnInfo(&pDataBlock->boundColumnInfo);
tfree(pDataBlock);
}
......@@ -678,7 +726,7 @@ int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) {
* @param dataBlocks
* @return
*/
int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, SName* name,
int32_t tscCreateDataBlock(size_t defaultSize, int32_t rowSize, int32_t startOffset, SName* name,
STableMeta* pTableMeta, STableDataBlocks** dataBlocks) {
STableDataBlocks* dataBuf = (STableDataBlocks*)calloc(1, sizeof(STableDataBlocks));
if (dataBuf == NULL) {
......@@ -686,10 +734,12 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
dataBuf->nAllocSize = (uint32_t)initialSize;
dataBuf->headerSize = startOffset; // the header size will always be the startOffset value, reserved for the subumit block header
dataBuf->nAllocSize = (uint32_t)defaultSize;
dataBuf->headerSize = startOffset;
// the header size will always be the startOffset value, reserved for the submit block header
if (dataBuf->nAllocSize <= dataBuf->headerSize) {
dataBuf->nAllocSize = dataBuf->headerSize*2;
dataBuf->nAllocSize = dataBuf->headerSize * 2;
}
dataBuf->pData = calloc(1, dataBuf->nAllocSize);
......@@ -699,25 +749,31 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
//Here we keep the tableMeta to prevent it from being removed by other threads.
dataBuf->pTableMeta = tscTableMetaDup(pTableMeta);
SParsedDataColInfo* pColInfo = &dataBuf->boundColumnInfo;
SSchema* pSchema = tscGetTableSchema(dataBuf->pTableMeta);
tscSetBoundColumnInfo(pColInfo, pSchema, dataBuf->pTableMeta->tableInfo.numOfColumns);
dataBuf->ordered = true;
dataBuf->prevTS = INT64_MIN;
dataBuf->rowSize = rowSize;
dataBuf->size = startOffset;
dataBuf->tsSource = -1;
dataBuf->vgId = dataBuf->pTableMeta->vgId;
tNameAssign(&dataBuf->tableName, name);
//Here we keep the tableMeta to prevent it from being removed by other threads.
dataBuf->pTableMeta = tscTableMetaDup(pTableMeta);
assert(initialSize > 0 && pTableMeta != NULL && dataBuf->pTableMeta != NULL);
assert(defaultSize > 0 && pTableMeta != NULL && dataBuf->pTableMeta != NULL);
*dataBlocks = dataBuf;
return TSDB_CODE_SUCCESS;
}
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize,
SName* name, STableMeta* pTableMeta, STableDataBlocks** dataBlocks, SArray* pBlockList) {
SName* name, STableMeta* pTableMeta, STableDataBlocks** dataBlocks,
SArray* pBlockList) {
*dataBlocks = NULL;
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pHashList, (const char*)&id, sizeof(id));
if (t1 != NULL) {
......@@ -826,6 +882,8 @@ static void extractTableNameList(SSqlCmd* pCmd, bool freeBlockMap) {
int32_t i = 0;
while(p1) {
STableDataBlocks* pBlocks = *p1;
tfree(pCmd->pTableNameList[i]);
pCmd->pTableNameList[i++] = tNameDup(&pBlocks->tableName);
p1 = taosHashIterate(pCmd->pTableBlockHashList, p1);
}
......@@ -942,7 +1000,7 @@ bool tscIsInsertData(char* sqlstr) {
int32_t index = 0;
do {
SStrToken t0 = tStrGetToken(sqlstr, &index, false, 0, NULL);
SStrToken t0 = tStrGetToken(sqlstr, &index, false);
if (t0.type != TK_LP) {
return t0.type == TK_INSERT || t0.type == TK_IMPORT;
}
......@@ -1279,6 +1337,34 @@ int32_t tscSqlExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepco
return 0;
}
bool tscColumnExists(SArray* pColumnList, SColumnIndex* pColIndex) {
// ignore the tbname columnIndex to be inserted into source list
if (pColIndex->columnIndex < 0) {
return false;
}
size_t numOfCols = taosArrayGetSize(pColumnList);
int16_t col = pColIndex->columnIndex;
int32_t i = 0;
while (i < numOfCols) {
SColumn* pCol = taosArrayGetP(pColumnList, i);
if ((pCol->colIndex.columnIndex != col) || (pCol->colIndex.tableIndex != pColIndex->tableIndex)) {
++i;
continue;
} else {
break;
}
}
if (i >= numOfCols || numOfCols == 0) {
return false;
}
return true;
}
SColumn* tscColumnListInsert(SArray* pColumnList, SColumnIndex* pColIndex) {
// ignore the tbname columnIndex to be inserted into source list
if (pColIndex->columnIndex < 0) {
......@@ -1583,7 +1669,25 @@ int32_t tscTagCondCopy(STagCond* dest, const STagCond* src) {
dest->tbnameCond.uid = src->tbnameCond.uid;
dest->tbnameCond.len = src->tbnameCond.len;
memcpy(&dest->joinInfo, &src->joinInfo, sizeof(SJoinInfo));
dest->joinInfo.hasJoin = src->joinInfo.hasJoin;
for (int32_t i = 0; i < TSDB_MAX_JOIN_TABLE_NUM; ++i) {
if (src->joinInfo.joinTables[i]) {
dest->joinInfo.joinTables[i] = calloc(1, sizeof(SJoinNode));
memcpy(dest->joinInfo.joinTables[i], src->joinInfo.joinTables[i], sizeof(SJoinNode));
if (src->joinInfo.joinTables[i]->tsJoin) {
dest->joinInfo.joinTables[i]->tsJoin = taosArrayDup(src->joinInfo.joinTables[i]->tsJoin);
}
if (src->joinInfo.joinTables[i]->tagJoin) {
dest->joinInfo.joinTables[i]->tagJoin = taosArrayDup(src->joinInfo.joinTables[i]->tagJoin);
}
}
}
dest->relType = src->relType;
if (src->pCond == NULL) {
......@@ -1629,6 +1733,23 @@ void tscTagCondRelease(STagCond* pTagCond) {
taosArrayDestroy(pTagCond->pCond);
}
for (int32_t i = 0; i < TSDB_MAX_JOIN_TABLE_NUM; ++i) {
SJoinNode *node = pTagCond->joinInfo.joinTables[i];
if (node == NULL) {
continue;
}
if (node->tsJoin != NULL) {
taosArrayDestroy(node->tsJoin);
}
if (node->tagJoin != NULL) {
taosArrayDestroy(node->tagJoin);
}
tfree(node);
}
memset(pTagCond, 0, sizeof(STagCond));
}
......@@ -2318,16 +2439,21 @@ void tscDoQuery(SSqlObj* pSql) {
}
int16_t tscGetJoinTagColIdByUid(STagCond* pTagCond, uint64_t uid) {
if (pTagCond->joinInfo.left.uid == uid) {
return pTagCond->joinInfo.left.tagColId;
} else if (pTagCond->joinInfo.right.uid == uid) {
return pTagCond->joinInfo.right.tagColId;
} else {
int32_t i = 0;
while (i < TSDB_MAX_JOIN_TABLE_NUM) {
SJoinNode* node = pTagCond->joinInfo.joinTables[i];
if (node && node->uid == uid) {
return node->tagColId;
}
i++;
}
assert(0);
return -1;
}
}
int16_t tscGetTagColIndexById(STableMeta* pTableMeta, int16_t colId) {
int32_t numOfTags = tscGetNumOfTags(pTableMeta);
......
......@@ -51,6 +51,7 @@ enum {
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_ACCT, "alter-acct" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_TABLE, "alter-table" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_DB, "alter-db" )
TSDB_DEFINE_SQL_TYPE(TSDB_SQL_SYNC_DB_REPLICA, "sync db-replica")
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_MNODE, "create-mnode" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_DROP_MNODE, "drop-mnode" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_DNODE, "create-dnode" )
......
......@@ -51,7 +51,7 @@ int32_t tsMaxShellConns = 50000;
int32_t tsMaxConnections = 5000;
int32_t tsShellActivityTimer = 3; // second
float tsNumOfThreadsPerCore = 1.0f;
int32_t tsNumOfCommitThreads = 1;
int32_t tsNumOfCommitThreads = 4;
float tsRatioOfQueryCores = 1.0f;
int8_t tsDaylight = 0;
char tsTimezone[TSDB_TIMEZONE_LEN] = {0};
......@@ -71,7 +71,7 @@ int32_t tsMaxBinaryDisplayWidth = 30;
int32_t tsCompressMsgSize = -1;
// client
int32_t tsMaxSQLStringLen = TSDB_MAX_SQL_LEN;
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
int8_t tsTscEnableRecordSql = 0;
// the maximum number of results for projection query on super table that are returned from
......
......@@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED)
ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME}
POST_BUILD
COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.22-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.25-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMENT "build jdbc driver")
ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME})
......
......@@ -5,7 +5,7 @@
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.22</version>
<version>2.0.25</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
......@@ -37,17 +37,6 @@
</developers>
<dependencies>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.2</version>
<exclusions>
<exclusion>
<groupId>*</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
......@@ -61,21 +50,20 @@
<artifactId>httpclient</artifactId>
<version>4.5.8</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.9</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.58</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>29.0-jre</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
......
......@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.22</version>
<version>2.0.25</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
<url>https://github.com/taosdata/TDengine/tree/master/src/connector/jdbc</url>
......@@ -43,7 +43,6 @@
<version>4.13</version>
<scope>test</scope>
</dependency>
<!-- for restful -->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
......@@ -55,10 +54,22 @@
<artifactId>fastjson</artifactId>
<version>1.2.58</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>29.0-jre</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<excludes>
<exclude>**/*.md</exclude>
</excludes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
......
......@@ -30,9 +30,12 @@ public abstract class AbstractConnection extends WrapperImpl implements Connecti
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
// do nothing
return sql;
}
@Override
public void setAutoCommit(boolean autoCommit) throws SQLException {
if (isClosed())
......@@ -448,7 +451,6 @@ public abstract class AbstractConnection extends WrapperImpl implements Connecti
if (isClosed)
throw (SQLClientInfoException) TSDBError.createSQLException(TSDBErrorNumbers.ERROR_SQLCLIENT_EXCEPTION_ON_CONNECTION_CLOSED);
for (Enumeration<Object> enumer = properties.keys(); enumer.hasMoreElements(); ) {
String name = (String) enumer.nextElement();
clientInfoProps.put(name, properties.getProperty(name));
......
package com.taosdata.jdbc;
import java.sql.ParameterMetaData;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;
public abstract class AbstractParameterMetaData extends WrapperImpl implements ParameterMetaData {
private final Object[] parameters;
public AbstractParameterMetaData(Object[] parameters) {
this.parameters = parameters;
}
@Override
public int getParameterCount() throws SQLException {
return parameters == null ? 0 : parameters.length;
}
@Override
public int isNullable(int param) throws SQLException {
return ParameterMetaData.parameterNullableUnknown;
}
@Override
public boolean isSigned(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
if (parameters[param - 1] instanceof Byte)
return true;
if (parameters[param - 1] instanceof Short)
return true;
if (parameters[param - 1] instanceof Integer)
return true;
if (parameters[param - 1] instanceof Long)
return true;
if (parameters[param - 1] instanceof Float)
return true;
if (parameters[param - 1] instanceof Double)
return true;
return false;
}
@Override
public int getPrecision(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
if (parameters[param - 1] instanceof String)
return ((String) parameters[param - 1]).length();
if (parameters[param - 1] instanceof byte[])
return ((byte[]) parameters[param - 1]).length;
return 0;
}
@Override
public int getScale(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return 0;
}
@Override
public int getParameterType(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
if (parameters[param - 1] instanceof Timestamp)
return Types.TIMESTAMP;
if (parameters[param - 1] instanceof Byte)
return Types.TINYINT;
if (parameters[param - 1] instanceof Short)
return Types.SMALLINT;
if (parameters[param - 1] instanceof Integer)
return Types.INTEGER;
if (parameters[param - 1] instanceof Long)
return Types.BIGINT;
if (parameters[param - 1] instanceof Float)
return Types.FLOAT;
if (parameters[param - 1] instanceof Double)
return Types.DOUBLE;
if (parameters[param - 1] instanceof String)
return Types.NCHAR;
if (parameters[param - 1] instanceof byte[])
return Types.BINARY;
if (parameters[param - 1] instanceof Boolean)
return Types.BOOLEAN;
return Types.OTHER;
}
@Override
public String getParameterTypeName(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
if (parameters[param - 1] instanceof Timestamp)
return TSDBConstants.jdbcType2TaosTypeName(Types.TIMESTAMP);
if (parameters[param - 1] instanceof Byte)
return TSDBConstants.jdbcType2TaosTypeName(Types.TINYINT);
if (parameters[param - 1] instanceof Short)
return TSDBConstants.jdbcType2TaosTypeName(Types.SMALLINT);
if (parameters[param - 1] instanceof Integer)
return TSDBConstants.jdbcType2TaosTypeName(Types.INTEGER);
if (parameters[param - 1] instanceof Long)
return TSDBConstants.jdbcType2TaosTypeName(Types.BIGINT);
if (parameters[param - 1] instanceof Float)
return TSDBConstants.jdbcType2TaosTypeName(Types.FLOAT);
if (parameters[param - 1] instanceof Double)
return TSDBConstants.jdbcType2TaosTypeName(Types.DOUBLE);
if (parameters[param - 1] instanceof String)
return TSDBConstants.jdbcType2TaosTypeName(Types.NCHAR);
if (parameters[param - 1] instanceof byte[])
return TSDBConstants.jdbcType2TaosTypeName(Types.BINARY);
if (parameters[param - 1] instanceof Boolean)
return TSDBConstants.jdbcType2TaosTypeName(Types.BOOLEAN);
return parameters[param - 1].getClass().getName();
}
@Override
public String getParameterClassName(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return parameters[param - 1].getClass().getName();
}
@Override
public int getParameterMode(int param) throws SQLException {
if (param < 1 || param > parameters.length)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return ParameterMetaData.parameterModeUnknown;
}
}
......@@ -11,6 +11,15 @@ import java.util.Map;
public abstract class AbstractResultSet extends WrapperImpl implements ResultSet {
private int fetchSize;
protected void checkAvailability(int columnIndex, int bounds) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
if (columnIndex < 1)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE, "Column Index out of range, " + columnIndex + " < 1");
if (columnIndex > bounds)
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE, "Column Index out of range, " + columnIndex + " > " + bounds);
}
@Override
public abstract boolean next() throws SQLException;
......@@ -46,38 +55,20 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
@Override
public abstract double getDouble(int columnIndex) throws SQLException;
@Deprecated
@Override
public BigDecimal getBigDecimal(int columnIndex, int scale) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getBigDecimal(columnIndex);
}
@Override
public byte[] getBytes(int columnIndex) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract byte[] getBytes(int columnIndex) throws SQLException;
@Override
public Date getDate(int columnIndex) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract Date getDate(int columnIndex) throws SQLException;
@Override
public Time getTime(int columnIndex) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract Time getTime(int columnIndex) throws SQLException;
@Override
public abstract Timestamp getTimestamp(int columnIndex) throws SQLException;
......@@ -147,9 +138,10 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
return getDouble(findColumn(columnLabel));
}
@Deprecated
@Override
public BigDecimal getBigDecimal(String columnLabel, int scale) throws SQLException {
return getBigDecimal(findColumn(columnLabel));
return getBigDecimal(findColumn(columnLabel), scale);
}
@Override
......@@ -214,12 +206,7 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
public abstract ResultSetMetaData getMetaData() throws SQLException;
@Override
public Object getObject(int columnIndex) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract Object getObject(int columnIndex) throws SQLException;
@Override
public Object getObject(String columnLabel) throws SQLException {
......@@ -243,12 +230,7 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
}
@Override
public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract BigDecimal getBigDecimal(int columnIndex) throws SQLException;
@Override
public BigDecimal getBigDecimal(String columnLabel) throws SQLException {
......@@ -718,9 +700,7 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
@Override
public Object getObject(String columnLabel, Map<String, Class<?>> map) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getObject(findColumn(columnLabel), map);
}
@Override
......@@ -760,9 +740,7 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
@Override
public Date getDate(String columnLabel, Calendar cal) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getDate(findColumn(columnLabel), cal);
}
@Override
......@@ -774,23 +752,15 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
@Override
public Time getTime(String columnLabel, Calendar cal) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getTime(findColumn(columnLabel), cal);
}
@Override
public Timestamp getTimestamp(int columnIndex, Calendar cal) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public abstract Timestamp getTimestamp(int columnIndex, Calendar cal) throws SQLException;
@Override
public Timestamp getTimestamp(String columnLabel, Calendar cal) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getTimestamp(findColumn(columnLabel), cal);
}
@Override
......@@ -1198,9 +1168,7 @@ public abstract class AbstractResultSet extends WrapperImpl implements ResultSet
@Override
public <T> T getObject(String columnLabel, Class<T> type) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
return getObject(findColumn(columnLabel), type);
}
}
......@@ -9,6 +9,8 @@ public abstract class AbstractStatement extends WrapperImpl implements Statement
protected List<String> batchedArgs;
private int fetchSize;
@Override
public abstract ResultSet executeQuery(String sql) throws SQLException;
......
package com.taosdata.jdbc;
import com.taosdata.jdbc.bean.TSDBPreparedParam;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* This class is used to precompile the SQL of TDengine insert or import operations.
*/
public class SavedPreparedStatement {
private TSDBPreparedStatement tsdbPreparedStatement;
/**
* sql param List
*/
private List<TSDBPreparedParam> sqlParamList;
/**
* init param parsed from the sql
*/
private TSDBPreparedParam initPreparedParam;
/**
* is table name dynamic in the prepared sql
*/
private boolean isTableNameDynamic;
/**
* insert or import sql template pattern; the template is the following:
* <p>
* insert/import into tableName [(field1, field2, ...)] [using stableName tags(?, ?, ...)] values(?, ?, ...) (?, ?, ...)
* <p>
* we split it into three parts:
* 1. prefix, insert/import
* 2. middle, tableName [(field1, field2, ...)] [using stableName tags(?, ?, ...)]
* 3. valueList, the content after values, for example (?, ?, ...) (?, ?, ...)
*/
private Pattern sqlPattern = Pattern.compile("(?s)(?i)^\\s*(INSERT|IMPORT)\\s+INTO\\s+((?<tablename>\\S+)\\s*(\\(.*\\))?\\s+(USING\\s+(?<stableName>\\S+)\\s+TAGS\\s*\\((?<tagValue>.+)\\))?)\\s*VALUES\\s*(?<valueList>\\(.*\\)).*");
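// Illustrative example (not part of the diff): for the prepared sql
//   insert into ? using weather tags(?) values(?, ?)
// the pattern captures prefix = "insert", tablename = "?" (so the table name is dynamic),
// stableName = "weather" and valueList = "(?, ?)"; the middle part "? using weather tags(?)"
// contributes 2 placeholders and the value list contributes 2 more.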
/**
* the raw sql template
*/
private String sql;
/**
* the prefix part of sql
*/
private String prefix;
/**
* the middle part of sql
*/
private String middle;
private int middleParamSize;
/**
* the valueList part of sql
*/
private String valueList;
private int valueListSize;
/**
* default param value
*/
private static final String DEFAULT_VALUE = "NULL";
private static final String PLACEHOLDER = "?";
private String tableName;
/**
* is the parameter add to batch list
*/
private boolean isAddBatch;
public SavedPreparedStatement(String sql, TSDBPreparedStatement tsdbPreparedStatement) throws SQLException {
this.sql = sql;
this.tsdbPreparedStatement = tsdbPreparedStatement;
this.sqlParamList = new ArrayList<>();
parsePreparedParam(this.sql);
}
/**
* parse the init param according to the sql
*
* @param sql
*/
private void parsePreparedParam(String sql) throws SQLException {
Matcher matcher = sqlPattern.matcher(sql);
if (matcher.find()) {
tableName = matcher.group("tablename");
if (tableName != null && PLACEHOLDER.equals(tableName)) {
// the table name is dynamic
this.isTableNameDynamic = true;
}
prefix = matcher.group(1);
middle = matcher.group(2);
valueList = matcher.group("valueList");
if (middle != null && !"".equals(middle)) {
middleParamSize = parsePlaceholder(middle);
}
if (valueList != null && !"".equals(valueList)) {
valueListSize = parsePlaceholder(valueList);
}
initPreparedParam = initDefaultParam(tableName, middleParamSize, valueListSize);
} else {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_SQL);
}
}
private TSDBPreparedParam initDefaultParam(String tableName, int middleParamSize, int valueListSize) {
TSDBPreparedParam tsdbPreparedParam = new TSDBPreparedParam(tableName);
tsdbPreparedParam.setMiddleParamList(getDefaultParamList(middleParamSize));
tsdbPreparedParam.setValueList(getDefaultParamList(valueListSize));
return tsdbPreparedParam;
}
/**
* generate the default param value list
*
* @param paramSize
* @return
*/
private List<Object> getDefaultParamList(int paramSize) {
List<Object> paramList = new ArrayList<>(paramSize);
if (paramSize > 0) {
for (int i = 0; i < paramSize; i++) {
paramList.add(i, DEFAULT_VALUE);
}
}
return paramList;
}
/**
* calculate the placeholder num
*
* @param value
* @return
*/
private int parsePlaceholder(String value) {
Pattern pattern = Pattern.compile("[?]");
Matcher matcher = pattern.matcher(value);
int result = 0;
while (matcher.find()) {
result++;
}
return result;
}
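// Illustrative example: parsePlaceholder("(?, ?, ?)") returns 3, since the method simply
// counts '?' characters in the given fragment.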
/**
* set current row params
*
* @param parameterIndex the first parameter is 1, the second is 2, ...
* @param x the parameter value
*/
public void setParam(int parameterIndex, Object x) throws SQLException {
int paramSize = this.middleParamSize + this.valueListSize;
String errorMsg = String.format("the parameterIndex %s out of the range [1, %s]", parameterIndex, paramSize);
if (parameterIndex < 1 || parameterIndex > paramSize) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE, errorMsg);
}
this.isAddBatch = false; //set isAddBatch to false
if (x == null) {
x = DEFAULT_VALUE; // set default null string
}
parameterIndex = parameterIndex - 1; // start from 0 in param list
if (this.middleParamSize > 0 && parameterIndex >= 0 && parameterIndex < this.middleParamSize) {
this.initPreparedParam.setMiddleParam(parameterIndex, x);
return;
}
if (this.valueListSize > 0 && parameterIndex >= this.middleParamSize && parameterIndex < paramSize) {
this.initPreparedParam.setValueParam(parameterIndex - this.middleParamSize, x);
return;
}
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE, errorMsg);
}
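// Illustrative mapping, continuing the example template above: with middleParamSize = 2 and
// valueListSize = 2, setParam(1, x) and setParam(2, x) fill the middle part (dynamic table
// name and tag), setParam(3, x) and setParam(4, x) fill value positions 0 and 1, and any
// index outside [1, 4] raises ERROR_PARAMETER_INDEX_OUT_RANGE.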
public void addBatch() {
addCurrentRowParamToList();
this.initPreparedParam = initDefaultParam(tableName, middleParamSize, valueListSize);
}
/**
* add current param to batch list
*/
private void addCurrentRowParamToList() {
if (initPreparedParam != null && (this.middleParamSize > 0 || this.valueListSize > 0)) {
this.sqlParamList.add(initPreparedParam); // add current param to batch list
}
this.isAddBatch = true;
}
/**
* execute the batched sql
*
* @return
* @throws SQLException
*/
public int[] executeBatch() throws SQLException {
int result = executeBatchInternal();
return new int[]{result};
}
public int executeBatchInternal() throws SQLException {
if (!isAddBatch) {
addCurrentRowParamToList(); // add current param to batch list
}
//1. generate batch sql
String sql = generateExecuteSql();
//2. execute batch sql
int result = executeSql(sql);
//3. clear batch param list
this.sqlParamList.clear();
return result;
}
/**
* generate the batch sql
*
* @return
*/
private String generateExecuteSql() {
StringBuilder stringBuilder = new StringBuilder();
stringBuilder.append(prefix);
stringBuilder.append(" into ");
if (!isTableNameDynamic) {
// tablename will not need to be replaced
String middleValue = replaceMiddleListParam(middle, sqlParamList);
stringBuilder.append(middleValue);
stringBuilder.append(" values");
stringBuilder.append(replaceValueListParam(valueList, sqlParamList));
} else {
// need to replace tablename
if (sqlParamList.size() > 0) {
TSDBPreparedParam firstPreparedParam = sqlParamList.get(0);
//replace middle part and value part of first row
String firstRow = replaceMiddleAndValuePart(firstPreparedParam);
stringBuilder.append(firstRow);
//the first param in the middleParamList is the tableName
String lastTableName = firstPreparedParam.getMiddleParamList().get(0).toString();
if (sqlParamList.size() > 1) {
for (int i = 1; i < sqlParamList.size(); i++) {
TSDBPreparedParam currentParam = sqlParamList.get(i);
String currentTableName = currentParam.getMiddleParamList().get(0).toString();
if (lastTableName.equalsIgnoreCase(currentTableName)) {
// the table name is the same as in the last row, so only the value part needs to be appended
String values = replaceTemplateParam(valueList, currentParam.getValueList());
stringBuilder.append(values);
} else {
// the table name differs from the last row,
// so both the middle part and the value part need to be replaced
String row = replaceMiddleAndValuePart(currentParam);
stringBuilder.append(row);
lastTableName = currentTableName;
}
}
}
} else {
stringBuilder.append(middle);
stringBuilder.append(" values");
stringBuilder.append(valueList);
}
}
return stringBuilder.toString();
}
/**
* replace the middle and value part
*
* @param tsdbPreparedParam
* @return
*/
private String replaceMiddleAndValuePart(TSDBPreparedParam tsdbPreparedParam) {
StringBuilder stringBuilder = new StringBuilder(" ");
String middlePart = replaceTemplateParam(middle, tsdbPreparedParam.getMiddleParamList());
stringBuilder.append(middlePart);
stringBuilder.append(" values ");
String valuePart = replaceTemplateParam(valueList, tsdbPreparedParam.getValueList());
stringBuilder.append(valuePart);
stringBuilder.append(" ");
return stringBuilder.toString();
}
/**
* replace the placeholder of the middle part of sql template with TSDBPreparedParam list
*
* @param template
* @param sqlParamList
* @return
*/
private String replaceMiddleListParam(String template, List<TSDBPreparedParam> sqlParamList) {
if (sqlParamList.size() > 0) {
// because when the subTableName is static, tags set after the first setTag are ignored
return replaceTemplateParam(template, sqlParamList.get(0).getMiddleParamList());
}
return template;
}
/**
* replace the placeholder of the template with TSDBPreparedParam list
*
* @param template
* @param sqlParamList
* @return
*/
private String replaceValueListParam(String template, List<TSDBPreparedParam> sqlParamList) {
StringBuilder stringBuilder = new StringBuilder();
if (sqlParamList.size() > 0) {
for (TSDBPreparedParam tsdbPreparedParam : sqlParamList) {
String tmp = replaceTemplateParam(template, tsdbPreparedParam.getValueList());
stringBuilder.append(tmp);
}
} else {
stringBuilder.append(template);
}
return stringBuilder.toString();
}
/**
* replace the placeholder of the template with paramList
*
* @param template
* @param paramList
* @return
*/
private String replaceTemplateParam(String template, List<Object> paramList) {
if (paramList.size() > 0) {
String tmp = template;
for (int i = 0; i < paramList.size(); ++i) {
String paraStr = getParamString(paramList.get(i));
tmp = tmp.replaceFirst("[" + PLACEHOLDER + "]", paraStr);
}
return tmp;
} else {
return template;
}
}
/**
* get the string of param object
*
* @param paramObj
* @return
*/
private String getParamString(Object paramObj) {
String paraStr = paramObj.toString();
if (paramObj instanceof Timestamp || (paramObj instanceof String && !DEFAULT_VALUE.equalsIgnoreCase(paraStr))) {
paraStr = "'" + paraStr + "'";
}
return paraStr;
}
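// Illustrative results: getParamString(new Timestamp(System.currentTimeMillis())) and
// getParamString("beijing") are returned single-quoted (e.g. 'beijing'), while numeric
// values and the NULL default are emitted as-is.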
private int executeSql(String sql) throws SQLException {
return tsdbPreparedStatement.executeUpdate(sql);
}
}
......@@ -27,10 +27,6 @@ public class TSDBConnection extends AbstractConnection {
return this.batchFetch;
}
public void setBatchFetch(Boolean batchFetch) {
this.batchFetch = batchFetch;
}
public TSDBConnection(Properties info, TSDBDatabaseMetaData meta) throws SQLException {
this.databaseMetaData = meta;
connect(info.getProperty(TSDBDriver.PROPERTY_KEY_HOST),
......@@ -61,9 +57,7 @@ public class TSDBConnection extends AbstractConnection {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
}
TSDBStatement statement = new TSDBStatement(this, this.connector);
statement.setConnection(this);
return statement;
return new TSDBStatement(this, this.connector);
}
public TSDBSubscribe subscribe(String topic, String sql, boolean restart) throws SQLException {
......@@ -79,10 +73,8 @@ public class TSDBConnection extends AbstractConnection {
}
public PreparedStatement prepareStatement(String sql) throws SQLException {
if (isClosed()) {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
}
return new TSDBPreparedStatement(this, this.connector, sql);
}
......@@ -104,11 +96,4 @@ public class TSDBConnection extends AbstractConnection {
return this.databaseMetaData;
}
public Statement createStatement(int resultSetType, int resultSetConcurrency) throws SQLException {
if (isClosed()) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
}
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
}
\ No newline at end of file
......@@ -29,45 +29,25 @@ public class TSDBJNIConnector {
private static volatile Boolean isInitialized = false;
private TaosInfo taosInfo = TaosInfo.getInstance();
// Connection pointer used in C
private long taos = TSDBConstants.JNI_NULL_POINTER;
// result set status in current connection
private boolean isResultsetClosed = true;
private int affectedRows = -1;
static {
System.loadLibrary("taos");
System.out.println("java.library.path:" + System.getProperty("java.library.path"));
}
/**
* Connection pointer used in C
*/
private long taos = TSDBConstants.JNI_NULL_POINTER;
/**
* Result set pointer for the current connection
*/
// private long taosResultSetPointer = TSDBConstants.JNI_NULL_POINTER;
/**
* result set status in current connection
*/
private boolean isResultsetClosed = true;
private int affectedRows = -1;
/**
* Whether the connection is closed
*/
public boolean isClosed() {
return this.taos == TSDBConstants.JNI_NULL_POINTER;
}
/**
* Returns the status of last result set in current connection
*/
public boolean isResultsetClosed() {
return this.isResultsetClosed;
}
/**
* Initialize static variables in JNI to optimize performance
*/
public static void init(String configDir, String locale, String charset, String timezone) throws SQLWarning {
synchronized (isInitialized) {
if (!isInitialized) {
......@@ -93,11 +73,6 @@ public class TSDBJNIConnector {
public static native String getTsCharset();
/**
* Get connection pointer
*
* @throws SQLException
*/
public boolean connect(String host, int port, String dbName, String user, String password) throws SQLException {
if (this.taos != TSDBConstants.JNI_NULL_POINTER) {
// this.closeConnectionImp(this.taos);
......@@ -185,13 +160,6 @@ public class TSDBJNIConnector {
private native String getErrMsgImp(long pSql);
/**
* Get resultset pointer
* Each connection should have a single open result set at a time
*/
// public long getResultSet() {
// return taosResultSetPointer;
// }
private native long getResultSetImp(long connection, long pSql);
public boolean isUpdateQuery(long pSql) {
......@@ -231,6 +199,7 @@ public class TSDBJNIConnector {
// }
// return resCode;
// }
private native int freeResultSetImp(long connection, long result);
/**
......@@ -323,8 +292,7 @@ public class TSDBJNIConnector {
* Validate if a <I>create table</I> sql statement is correct without actually creating that table
*/
public boolean validateCreateTableSql(String sql) {
long connection = taos;
int res = validateCreateTableSqlImp(connection, sql.getBytes());
int res = validateCreateTableSqlImp(taos, sql.getBytes());
return res == 0;
}
......
package com.taosdata.jdbc;
public class TSDBParameterMetaData extends AbstractParameterMetaData {
public TSDBParameterMetaData(Object[] parameters) {
super(parameters);
}
}
......@@ -22,7 +22,7 @@ import java.util.List;
public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaData {
List<ColumnMetaData> colMetaDataList = null;
List<ColumnMetaData> colMetaDataList;
public TSDBResultSetMetaData(List<ColumnMetaData> metaDataList) {
this.colMetaDataList = metaDataList;
......@@ -52,6 +52,9 @@ public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaD
}
public int isNullable(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
if (column == 1) {
return columnNoNulls;
}
......@@ -59,6 +62,9 @@ public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaD
}
public boolean isSigned(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
ColumnMetaData meta = this.colMetaDataList.get(column - 1);
switch (meta.getColType()) {
case TSDBConstants.TSDB_DATA_TYPE_TINYINT:
......@@ -74,22 +80,37 @@ public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaD
}
public int getColumnDisplaySize(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return colMetaDataList.get(column - 1).getColSize();
}
public String getColumnLabel(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return colMetaDataList.get(column - 1).getColName();
}
public String getColumnName(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
return colMetaDataList.get(column - 1).getColName();
}
public String getSchemaName(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
}
public int getPrecision(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
ColumnMetaData columnMetaData = this.colMetaDataList.get(column - 1);
switch (columnMetaData.getColType()) {
case TSDBConstants.TSDB_DATA_TYPE_FLOAT:
......@@ -105,6 +126,9 @@ public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaD
}
public int getScale(int column) throws SQLException {
if (column < 1 || column > colMetaDataList.size())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_PARAMETER_INDEX_OUT_RANGE);
ColumnMetaData meta = this.colMetaDataList.get(column - 1);
switch (meta.getColType()) {
case TSDBConstants.TSDB_DATA_TYPE_FLOAT:
......
......@@ -21,18 +21,13 @@ import java.util.ArrayList;
import java.util.Collections;
public class TSDBResultSetRowData {
private ArrayList<Object> data = null;
private ArrayList<Object> data;
private int colSize = 0;
public TSDBResultSetRowData(int colSize) {
this.setColSize(colSize);
}
public TSDBResultSetRowData() {
this.data = new ArrayList<>();
this.setColSize(0);
}
public void clear() {
if (this.data != null) {
this.data.clear();
......@@ -71,9 +66,9 @@ public class TSDBResultSetRowData {
case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP:
case TSDBConstants.TSDB_DATA_TYPE_BIGINT:
return ((Long) obj) == 1L ? Boolean.TRUE : Boolean.FALSE;
default:
return false;
}
return Boolean.TRUE;
}
public void setByte(int col, byte value) {
......@@ -198,7 +193,7 @@ public class TSDBResultSetRowData {
data.set(col, value);
}
public float getFloat(int col, int srcType) throws SQLException {
public float getFloat(int col, int srcType) {
Object obj = data.get(col);
switch (srcType) {
......@@ -226,7 +221,7 @@ public class TSDBResultSetRowData {
data.set(col, value);
}
public double getDouble(int col, int srcType) throws SQLException {
public double getDouble(int col, int srcType) {
Object obj = data.get(col);
switch (srcType) {
......@@ -267,9 +262,8 @@ public class TSDBResultSetRowData {
*
* @param col column index
* @return
* @throws SQLException
*/
public String getString(int col, int srcType) throws SQLException {
public String getString(int col, int srcType) {
switch (srcType) {
case TSDBConstants.TSDB_DATA_TYPE_BINARY:
case TSDBConstants.TSDB_DATA_TYPE_NCHAR:
......@@ -305,11 +299,11 @@ public class TSDBResultSetRowData {
}
public void setTimestamp(int col, long ts) {
data.set(col, ts);
data.set(col, new Timestamp(ts));
}
public Timestamp getTimestamp(int col) {
return new Timestamp((Long) data.get(col));
return (Timestamp) data.get(col);
}
public Object get(int col) {
......@@ -320,7 +314,7 @@ public class TSDBResultSetRowData {
return colSize;
}
public void setColSize(int colSize) {
private void setColSize(int colSize) {
this.colSize = colSize;
this.clear();
}
......
package com.taosdata.jdbc.bean;
import java.util.List;
/**
* tdengine batch insert or import param object
*/
public class TSDBPreparedParam {
/**
* table name; if the sTable name is not null, this is the sub-table name.
*/
private String tableName;
/**
* sub middle param list
*/
private List<Object> middleParamList;
/**
* value list
*/
private List<Object> valueList;
public TSDBPreparedParam(String tableName) {
this.tableName = tableName;
}
public String getTableName() {
return tableName;
}
public void setTableName(String tableName) {
this.tableName = tableName;
}
public List<Object> getMiddleParamList() {
return middleParamList;
}
public void setMiddleParamList(List<Object> middleParamList) {
this.middleParamList = middleParamList;
}
public void setMiddleParam(int parameterIndex, Object x) {
this.middleParamList.set(parameterIndex, x);
}
public List<Object> getValueList() {
return valueList;
}
public void setValueList(List<Object> valueList) {
this.valueList = valueList;
}
public void setValueParam(int parameterIndex, Object x) {
this.valueList.set(parameterIndex, x);
}
}
package com.taosdata.jdbc.rs;
import com.taosdata.jdbc.*;
import java.sql.*;
import com.taosdata.jdbc.AbstractConnection;
import com.taosdata.jdbc.TSDBDriver;
import com.taosdata.jdbc.TSDBError;
import com.taosdata.jdbc.TSDBErrorNumbers;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
public class RestfulConnection extends AbstractConnection {
......@@ -55,7 +61,6 @@ public class RestfulConnection extends AbstractConnection {
public DatabaseMetaData getMetaData() throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
;
return this.metadata;
}
......
......@@ -17,7 +17,7 @@ public class RestfulDriver extends AbstractDriver {
static {
try {
DriverManager.registerDriver(new RestfulDriver());
java.sql.DriverManager.registerDriver(new RestfulDriver());
} catch (SQLException e) {
throw TSDBError.createRuntimeException(TSDBErrorNumbers.ERROR_URL_NOT_SET, e);
}
......
package com.taosdata.jdbc.rs;
import com.taosdata.jdbc.AbstractParameterMetaData;
public class RestfulParameterMetaData extends AbstractParameterMetaData {
RestfulParameterMetaData(Object[] parameters) {
super(parameters);
}
}
package com.taosdata.jdbc.utils;
public class OSUtils {
private static final String OS = System.getProperty("os.name").toLowerCase();
public static boolean isWindows() {
return OS.indexOf("win") >= 0;
}
public static boolean isMac() {
return OS.indexOf("mac") >= 0;
}
public static boolean isLinux() {
return OS.indexOf("nux") >= 0;
}
}
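// Illustrative behaviour: with os.name "Windows 10" isWindows() returns true, with
// "Mac OS X" isMac() returns true, and with "Linux" isLinux() returns true; the checks
// are plain lowercase substring matches on "win", "mac" and "nux".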
......@@ -233,7 +233,7 @@ public class TSDBConnectionTest {
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
}
@Test(expected = SQLFeatureNotSupportedException.class)
......
......@@ -14,7 +14,6 @@ import java.util.Random;
public class InsertDbwithoutUseDbTest {
private static String host = "127.0.0.1";
// private static String host = "master";
private static Properties properties;
private static Random random = new Random(System.currentTimeMillis());
......
(The remaining file diffs in this commit are collapsed and not shown.)