Commit 48d70094 authored by liuyq-617

Merge branch 'develop' into test/jenkins

@@ -84,6 +84,7 @@ TDengine is an efficient platform for storing, querying and analyzing time-series big data, designed…
- [Data Export](https://www.taosdata.com/cn/documentation20/administrator/#数据导出): export data table by table from the shell, or use the taosdump tool for various kinds of export
- [System Monitoring](https://www.taosdata.com/cn/documentation20/administrator/#系统监控): inspect the system's current connections, queries, stream computations, logs, events, etc.
- [Directory Layout](https://www.taosdata.com/cn/documentation20/administrator/#文件目录结构): where TDengine data files, configuration files, etc. are located
- [Parameter Limits and Reserved Keywords](https://www.taosdata.com/cn/documentation20/administrator/#参数限制和保留关键字): list of TDengine's parameter limits and reserved keywords

## [TAOS SQL](https://www.taosdata.com/cn/documentation20/taos-sql)
@@ -128,4 +129,4 @@ TDengine is an efficient platform for storing, querying and analyzing time-series big data, designed…
## [Training and FAQ](https://www.taosdata.com/cn/faq)

- [FAQ](https://www.taosdata.com/cn/documentation20/faq): frequently asked questions and answers
- [Use Cases](https://www.taosdata.com/cn/blog/?categories=4): examples that illustrate how to use TDengine
\ No newline at end of file
@@ -2,7 +2,35 @@

## Quick Installation

The TDengine software consists of three parts: the server, the client, and the alarm module. At present the 2.0 server can only be installed and run on Linux; support for Windows, macOS and other systems will be added later.

**Application driver**

If your application runs on Windows or Linux, it can connect to the server through the C/C++/C#/Java/Python/Go/Node.js interfaces. If it runs on macOS, it can currently connect to the server only through the RESTful interface.

**CPU**

Supported CPU architectures are X64/ARM64/MIPS64/Alpha64; ARM32, RISC-V and other architectures will be supported later. You can install either from [source](https://www.taosdata.com/cn/getting-started/#通过源码安装) or from an [installation package](https://www.taosdata.com/cn/getting-started/#通过安装包安装), as needed.

**Server**

The TDengine server currently runs on the following platforms:

|                    | **CentOS 6/7/8** | **Ubuntu 16/18/20** | **Other Linux**  | **Win64/32** | **macOS** | **UnionTech UOS** | **Kylin (Galaxy/NeoKylin)** | **Linx V60/V80** |
| ------------------ | ---------------- | ------------------- | ---------------- | ------------ | --------- | ----------------- | --------------------------- | ---------------- |
| X64                | ●                | ●                   |                  | ○/○          | ○         | ○                 | ●                           | ●                |
| Raspberry Pi ARM32 |                  | ●                   | ●                |              |           |                   |                             |                  |
| Loongson MIPS64    |                  |                     | ●                |              |           |                   |                             |                  |
| Kunpeng ARM64      |                  | ○                   | ○                |              |           |                   | ●                           |                  |
| Sunway Alpha64     |                  |                     | ○                |              |           | ●                 |                             |                  |
| Phytium ARM64      |                  |                     | ○ (Ubuntu Kylin) |              |           |                   |                             |                  |
| Hygon X64          | ●                | ●                   | ●                |              |           | ○                 | ●                           | ●                |
| Rockchip ARM64/32  |                  |                     | ○                |              |           |                   |                             |                  |
| Allwinner ARM64/32 |                  |                     | ○                |              |           |                   |                             |                  |
| Actions ARM64/32   |                  |                     | ○                |              |           |                   |                             |                  |
| TI ARM32           |                  |                     | ○                |              |           |                   |                             |                  |
### Install from Source
@@ -16,28 +44,27 @@ The TDengine software consists of three parts: the server, the client, and the alarm module; at present the 2.0…

For the server we provide three kinds of installation packages; choose whichever suits your needs. Installing TDengine is very simple: it takes only a few seconds from download to a successful installation.

- TDengine-server-2.0.9.0-Linux-x64.rpm (4.2M)
- TDengine-server-2.0.9.0-Linux-x64.deb (2.7M)
- TDengine-server-2.0.9.0-Linux-x64.tar.gz (4.5M)

The client installation packages are as follows:

- TDengine-client-2.0.9.0-Linux-x64.tar.gz (3.0M)
- TDengine-client-2.0.9.0-Windows-x64.exe (2.8M)
- TDengine-client-2.0.9.0-Windows-x86.exe (2.8M)

The Linux installation package for the alarm module is as follows (see [how to use the alarm module](https://github.com/taosdata/TDengine/blob/master/alert/README_cn.md)):

- TDengine-alert-2.0.9.0-Linux-x64.tar.gz (8.1M)

Currently, TDengine supports installation on Linux systems that use [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process and service management. Run the `which systemctl` command to check whether the `systemd` package is present on your system:

```cmd
which systemctl
```

If the `systemd` package is not present on your system, consider [installing TDengine from source](#通过源码安装).

For the detailed installation procedure, see <a href="https://www.taosdata.com/blog/2019/08/09/566.html">Install and Uninstall TDengine Using Different Packages</a>.
## Easy Launch
@@ -54,12 +81,14 @@ systemctl status taosd
```

If the TDengine service is working properly, you can access and explore TDengine through its command-line program `taos`.

**Note:**

- The systemctl command requires _root_ privileges; if you are not the _root_ user, prefix the command with sudo.
- To gather feedback and improve the product, TDengine collects basic usage information. You can turn this off by setting the configuration parameter telemetryReporting in the system configuration file taos.cfg to 0.

If `systemd` is not supported on your system, you can also start the TDengine service by running /usr/local/taos/bin/taosd manually.

## TDengine Command-Line Program

To launch the TDengine command-line program, simply run `taos` in a Linux terminal.
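Once inside the shell, you can issue SQL statements directly. A minimal illustrative session (the database and table names below are made up for this sketch, not taken from the original document):

```mysql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES ('2019-07-15 00:00:00.000', 10);
SELECT * FROM t;
```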
......
...@@ -6,12 +6,12 @@ TDengine采用关系型数据模型,需要建库、建表。因此对于一个 ...@@ -6,12 +6,12 @@ TDengine采用关系型数据模型,需要建库、建表。因此对于一个
## Create a Database

Different kinds of data collection points often have different data characteristics: how frequently data is sampled, how long it must be kept, how many replicas to store, the size of data blocks, whether updates are allowed, and so on. So that TDengine can work at maximum efficiency in every scenario, TDengine recommends creating tables with different data characteristics in different databases, because each database can be configured with its own storage policy. When creating a database, in addition to the standard SQL options, the application can specify the retention period, the number of replicas, the number of memory blocks, the time precision, the maximum and minimum numbers of records per file block, whether to compress, how many days one data file covers, and many other parameters. For example:

```cmd
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 4 UPDATE 1;
```

The statement above creates a database named power whose data is kept for 365 days (data older than 365 days is deleted automatically), with one data file every 10 days, 4 memory blocks, and updates allowed. For detailed syntax and parameters, see <a href="https://www.taosdata.com/cn/documentation20/taos-sql/">TAOS SQL</a>.

After creating the database, use the SQL command USE to switch to it, for example:
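A minimal sketch of such a statement (assuming the power database created above; the example that follows in the original document is elided here):

```mysql
USE power;
```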
......
@@ -2,7 +2,7 @@

This document describes the syntax rules supported by TAOS SQL, its main query capabilities, the supported SQL query functions, and common usage tips. Readers are expected to have a basic knowledge of SQL.

TAOS SQL is the main tool for writing data into and querying data from TDengine. To help users get started quickly, TAOS SQL follows, to a certain degree, the style and conventions of standard SQL. Strictly speaking, however, TAOS SQL is not, and does not try to be, an implementation of the SQL standard. In addition, because TDengine does not support deleting the time-series structured data it manages, TAOS SQL provides no data-deletion functionality.

The SQL syntax in this chapter follows these conventions:
@@ -57,18 +57,29 @@ TDengine's default timestamp precision is milliseconds, but it can be changed through the configuration parameter enableMic…

## Database Management

- **Create a database**
```mysql
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [UPDATE 1];
```
Notes:

1) KEEP specifies how many days the database keeps its data; the default is 3650 days (10 years), and data older than the limit is deleted automatically;

2) UPDATE indicates that the database supports updating records that share the same timestamp (see the example below);

3) the maximum length of a database name is 33;

4) the maximum length of a single SQL statement is 65480 characters;

5) the database has many more storage-related configuration parameters; see System Administration.
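As an illustration of note 2 (a hedged sketch; the database, table and values are hypothetical), a later write with the same timestamp overwrites the earlier one when the database was created with UPDATE 1:

```mysql
CREATE DATABASE IF NOT EXISTS demo KEEP 365 UPDATE 1;
USE demo;
CREATE TABLE t1 (ts TIMESTAMP, val INT);
INSERT INTO t1 VALUES ('2020-01-01 00:00:00.000', 1);
-- same timestamp: with UPDATE 1 this overwrites val = 1; without it the new row would be discarded
INSERT INTO t1 VALUES ('2020-01-01 00:00:00.000', 2);
SELECT * FROM t1;
```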
- **Show current system parameters**

```mysql
SHOW VARIABLES;
```

- **Use a database**

```mysql
USE db_name;
```
@@ -143,6 +154,12 @@ TDengine's default timestamp precision is milliseconds, but it can be changed through the configuration parameter enableMic…

Show information about all data tables in the current database. Note: wildcards can be used in LIKE to match table names. Wildcard matching: 1) '%' (percent sign) matches zero or more characters; 2) '_' (underscore) matches exactly one character.
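For example (a sketch with a hypothetical name pattern), to list the tables whose names start with meters:

```mysql
SHOW TABLES LIKE 'meters%';
```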
- **Change the display width online**
```mysql
SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
- **Get a table's schema**

```mysql
......
@@ -96,6 +96,7 @@ The TDengine background service is provided by taosd; it can be configured in the file taos.cfg…

- maxSQLLength: the maximum allowed length of a single SQL statement. Default: 65380 bytes.
- telemetryReporting: whether TDengine is allowed to collect and report basic usage information; 0 means not allowed, 1 means allowed. Default: 1.
- stream: whether continuous queries (stream computing) are enabled; 0 means disabled, 1 means enabled. Default: 1.
- queryBufferSize: the amount of memory reserved for all concurrent queries. As a rule of thumb, multiply the maximum number of concurrent queries the application may issue by the number of tables, then multiply by 170. The unit is bytes. (A worked example follows below.)
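For instance (an illustrative calculation, not a sizing recommendation): with at most 10 concurrent queries over 10,000 tables, the reservation would be 10 × 10,000 × 170 bytes = 17,000,000 bytes, roughly 17 MB.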
**Note:** for ports, TDengine uses 13 consecutive TCP and UDP port numbers starting from serverPort; be sure to open them in the firewall. With the default configuration this means opening the 13 ports from 6030 to 6042, for both TCP and UDP.
@@ -156,6 +157,9 @@ TDengine's interactive front-end client application is taos, which shares the same…

Client configuration parameters

- firstEp: the end point of the first taosd instance in the cluster that taos actively connects to at startup. Default: localhost:6030.
- secondEp: if firstEp cannot be reached when taos starts, it tries to connect to secondEp instead.
- locale

  Default: obtained dynamically from the system; if automatic detection fails, the user must set it in the configuration file or through the API.
@@ -229,7 +233,7 @@ TDengine's interactive front-end client application is taos, which shares the same…
- maxBinaryDisplayWidth

  The maximum display width of binary and nchar fields in the shell; anything beyond this limit is hidden. Default: 30. This option can be changed dynamically in the shell with the command set max_binary_display_width nn.
## User Management
@@ -264,7 +268,7 @@ SHOW USERS;
```

Show all users.

**Note:** in the SQL syntax, < > marks a part that the user must supply; do not type the < > characters themselves.
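For instance (an illustrative sketch; the user name and password are made up, and the CREATE USER ... PASS syntax is assumed from this user-management section), a placeholder form such as CREATE USER <user_name> PASS '<password>' would be typed as:

```mysql
CREATE USER app_writer PASS 'taosdata123';
```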
## Data Import
@@ -413,3 +417,65 @@ All TDengine executables are stored under _/usr/local/taos/bin_ by default…

You can configure different data and log directories by modifying the system configuration file taos.cfg.
## TDengine Parameter Limits and Reserved Keywords

- Database name: must not contain "." or special characters; at most 32 characters
- Table name: must not contain "." or special characters; together with the database name, at most 192 characters
- Column name: must not contain special characters; at most 64 characters
- Number of columns per table: at most 1024
- Maximum record length: at most 16 KB, including the 8-byte timestamp
- Default maximum string length of a single SQL statement: 65480 bytes
- Number of database replicas: at most 3
- User name: at most 20 bytes
- User password: at most 15 bytes
- Number of tags: at most 128
- Total length of all tags: at most 16 KB
- Number of records: limited only by storage space
- Number of tables: limited only by the number of nodes
- Number of databases: limited only by the number of nodes
- Number of virtual nodes per database: at most 64

TDengine currently has nearly 200 internal reserved keywords. Regardless of case, they cannot be used as database names, table names, STable names, data column names, tag column names, and so on. The keywords are listed below:
| Keyword List | | | | |
| ---------- | ----------- | ------------ | ---------- | --------- |
| ABLOCKS | CONNECTION | GT | MINUS | SHOW |
| ABORT | CONNECTIONS | ID | MNODES | SLASH |
| ACCOUNT | COPY | IF | MODULES | SLIDING |
| ACCOUNTS | COUNT | IGNORE | NCHAR | SMALLINT |
| ADD | CREATE | IMMEDIATE | NE | SPREAD |
| AFTER | CTIME | IMPORT | NONE | STAR |
| ALL | DATABASE | IN | NOT | STATEMENT |
| ALTER | DATABASES | INITIALLY | NOTNULL | STDDEV |
| AND | DAYS | INSERT | NOW | STREAM |
| AS | DEFERRED | INSTEAD | OF | STREAMS |
| ASC | DELIMITERS | INTEGER | OFFSET | STRING |
| ATTACH | DESC | INTERVAL | OR | SUM |
| AVG | DESCRIBE | INTO | ORDER | TABLE |
| BEFORE | DETACH | IP | PASS | TABLES |
| BEGIN | DIFF | IS | PERCENTILE | TAG |
| BETWEEN | DIVIDE | ISNULL | PLUS | TAGS |
| BIGINT | DNODE | JOIN | PRAGMA | TBLOCKS |
| BINARY | DNODES | KEEP | PREV | TBNAME |
| BITAND | DOT | KEY | PRIVILEGE | TIMES |
| BITNOT | DOUBLE | KILL | QUERIES | TIMESTAMP |
| BITOR | DROP | LAST | QUERY | TINYINT |
| BOOL | EACH | LE | RAISE | TOP |
| BOTTOM | END | LEASTSQUARES | REM | TRIGGER |
| BY | EQ | LIKE | REPLACE | UMINUS |
| CACHE | EXISTS | LIMIT | REPLICA | UPLUS |
| CASCADE | EXPLAIN | LINEAR | RESET | USE |
| CHANGE | FAIL | LOCAL | RESTRICT | USER |
| CLOG | FILL | LP | ROW | USERS |
| CLUSTER | FIRST | LSHIFT | ROWS | USING |
| COLON | FLOAT | LT | RP | VALUES |
| COLUMN | FOR | MATCH | RSHIFT | VARIABLE |
| COMMA | FROM | MAX | SCORES | VGROUPS |
| COMP | GE | METRIC | SELECT | VIEW |
| CONCAT | GLOB | METRICS | SEMI | WAVG |
| CONFIGS | GRANTS | MIN | SET | WHERE |
| CONFLICT | GROUP | | | |
# Connectors

TDengine provides a rich set of application development interfaces, including C/C++, C#, Java, Python, Go, Node.js and RESTful, so that users can develop applications quickly.

![image-connector](../assets/connector.png)

TDengine's connectors currently support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha and development environments such as Linux/Win64/Win32. The support matrix is as follows:

|               | **CPU**     | **X64 64bit** |           |           | **X86 32bit** | **ARM64** | **ARM32** | **MIPS (Loongson)** | **Alpha (Sunway)** | **X64 (Hygon)** |
| ------------- | ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | ------------------- | ------------------ | --------------- |
|               | **OS**      | **Linux**     | **Win64** | **Win32** | **Win32**     | **Linux** | **Linux** | **Linux**           | **Linux**          | **Linux**       |
| **Connector** | **C/C++**   | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | ●                  | ●               |
|               | **JDBC**    | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | ●                  | ●               |
|               | **Python**  | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | --                 | ●               |
|               | **Go**      | ●             | ●         | ●         | ○             | ●         | ●         | ○                   | --                 | --              |
|               | **Node.js** | ●             | ●         | ○         | ○             | ●         | ●         | ○                   | --                 | --              |
|               | **C#**      | ○             | ●         | ●         | ○             | ○         | ○         | ○                   | --                 | --              |
|               | **RESTful** | ●             | ●         | ●         | ●             | ●         | ●         | ●                   | ●                  | ●               |

Note: all APIs that execute SQL statements, such as `taos_query`, `taos_query_a` and `taos_subscribe` in the C/C++ Connector and their counterparts in other languages, can execute only one SQL statement per call; if the actual argument contains multiple statements, the behavior is undefined.
@@ -664,23 +679,45 @@ import (
    _ "github.com/taosdata/driver-go/taosSql"
)
```

**Go 1.13 or later is recommended, with module support enabled:**
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.io,direct
```
### Common APIs

- `sql.Open(DRIVER_NAME string, dataSourceName string) *DB`

  This API opens a DB and returns an object of type *DB. Normally DRIVER_NAME is set to the string `taosSql` and dataSourceName to a string of the form `user:password@/tcp(host:port)/dbname`. If the client wants to access TDengine concurrently from multiple goroutines, it needs to create a separate sql.Open object in each goroutine and use that object to access TDengine.

  **Note**: when this API is created successfully, no permission or other checks have been performed yet; the connection is only really established, and user/password/host/port validated, when Query or Exec is actually executed. Also, because most of the driver implementation is pushed down into the libtaos library that taosSql depends on, sql.Open itself is very lightweight.

- `func (db *DB) Exec(query string, args ...interface{}) (Result, error)`

  Built-in method from the database/sql package; executes non-query SQL.

- `func (db *DB) Query(query string, args ...interface{}) (*Rows, error)`

  Built-in method from the database/sql package; executes query statements.

- `func (db *DB) Prepare(query string) (*Stmt, error)`

  Built-in method from the database/sql package; Prepare creates a prepared statement for later queries or executions.

- `func (s *Stmt) Exec(args ...interface{}) (Result, error)`

  Built-in method from the database/sql package; executes a prepared statement with the given arguments and returns a Result summarizing the effect of the statement.

- `func (s *Stmt) Query(args ...interface{}) (*Rows, error)`

  Built-in method from the database/sql package; Query executes a prepared query statement with the given arguments and returns the query results as a *Rows.

- `func (s *Stmt) Close() error`

  Built-in method from the database/sql package; Close closes the statement.
## Node.js Connector

TDengine also provides a Node.js connector. It can be installed with [npm](https://www.npmjs.com/) or built from the source code under *src/connector/nodejs/*. [The detailed installation steps are as follows](https://github.com/taosdata/tdengine/tree/master/src/connector/nodejs):
@@ -701,32 +738,6 @@ npm install td2.0-connector
- `make`
- a C compiler such as [GCC](https://gcc.gnu.org)
### macOS
- `python` (`v2.7` is recommended; `v3.x.x` is not supported yet)
- Xcode
- then, through Xcode, install
```
Command Line Tools
```
```
Xcode -> Preferences -> Locations
```
The tool can be found under that directory. Alternatively, run this in a terminal:
```
xcode-select --install
```
- After this step, `gcc` and `make` are installed.

### Windows

#### Installation Method 1
......
# Java Connector

The Java connector supports the following systems: Linux 64, Windows x64, and Windows x86.

To make things easy for Java applications, TDengine provides `taos-jdbcdriver`, an implementation that follows the JDBC standard (3.0) API specification. It can currently be searched for and downloaded from the [Sonatype Repository][1].

Because TDengine is developed in C, the taos-jdbcdriver driver package depends on the corresponding native library on the system.
@@ -24,7 +26,8 @@ TDengine's JDBC driver implementation stays as close as possible to the drivers of relational databases…

| taos-jdbcdriver version | TDengine version | JDK version |
| --- | --- | --- |
| 2.0.12 and later | 2.0.8.0 and later | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and later | 1.8.x |
| 1.0.2 | 1.6.1.x and later | 1.8.x |
| 1.0.1 | 1.6.1.x and later | 1.8.x |
......
@@ -117,7 +117,17 @@ Connection = DriverManager.getConnection(url, properties);

## 16. How do I migrate data?

TDengine uniquely identifies a machine by its hostname. When moving data files from machine A to machine B, pay attention to the following:

- For versions 2.0.0.0 through 2.0.6.x, reconfigure machine B's hostname to be the same as machine A's.
- For version 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN that corresponds to the dnodeId in dnodeEps.json, and restart. Make sure this file is exactly the same on all machines in the cluster.
- The storage formats of versions 1.x and 2.x are incompatible; you need to use a migration tool, or write your own application, to export and re-import the data.

## 17. How do I report an issue?

If the information in the FAQ does not help and you need technical support and assistance from the TDengine team, please package the contents of the following two directories:

1. /var/log/taos
......
@@ -24,7 +24,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,

- To improve write efficiency, write in batches: the more records per batch, the higher the insert efficiency. A single record, however, cannot exceed 16 KB, and the total length of one SQL statement cannot exceed 64 KB (configurable through the parameter maxSQLLength, up to a maximum of 1 MB). See the example after this list.
- TDengine supports concurrent writes from multiple threads. To push write throughput further, a client should write from 20 or more threads at the same time. Beyond a certain number of threads, however, throughput no longer improves and may even drop, because frequent thread switching adds overhead.
- For a given table, if a newly inserted record's timestamp already exists, the new record is discarded by default (that is, when the database was not created with UPDATE 1); in other words, timestamps must be unique within a table. If the application generates records automatically, it is quite likely to produce identical timestamps, so the number of records actually inserted can be smaller than the number the application tried to insert. If the database was created with the UPDATE 1 option, a new record with the same timestamp overwrites the existing one.
- The timestamp of written data must be greater than the current time minus the configuration parameter keep: if keep is set to 3650 days, data older than 3650 days cannot be written. It must also not be greater than the current time plus the configuration parameter days: if days is set to 2, data more than 2 days in the future cannot be written.
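A minimal sketch of a batched write (the table names and values follow the d1001 example above but are otherwise illustrative), inserting several records into two tables with a single statement:

```mysql
INSERT INTO d1001 VALUES
    (1538548685000, 10.3, 219, 0.31)
    (1538548695000, 12.6, 218, 0.33)
  d1002 VALUES
    (1538548696600, 10.8, 223, 0.29);
```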
## Writing Directly from Prometheus
......
@@ -29,8 +29,12 @@

# number of threads per CPU core
# numOfThreadsPerCore 1.0

# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
# tsRatioOfQueryCores 1.0

# number of management nodes in the system
# numOfMnodes 3
@@ -265,5 +269,5 @@

# enable/disable stream (continuous query)
# stream 1

# in the retrieve blocking model, only 50% of the query threads will be used in query processing in the dnode
# retrieveBlockingModel 0
@@ -36,7 +36,7 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql);
int32_t tscHandleMultivnodeInsert(SSqlObj *pSql);
int32_t tscHandleInsertRetry(SSqlObj* parent, SSqlObj* child);
void tscBuildResFromSubqueries(SSqlObj *pSql);
TAOS_ROW doSetResultRowData(SSqlObj *pSql);
......
@@ -110,11 +110,12 @@ SParamInfo* tscAddParamToDataBlock(STableDataBlocks* pDataBlock, char type, uint
                                   uint32_t offset);
void* tscDestroyBlockArrayList(SArray* pDataBlockList);
void* tscDestroyBlockHashTable(SHashObj* pBlockHashTable);
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock);
int32_t tscMergeTableDataBlocks(SSqlObj* pSql);
int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize, const char* tableId, STableMeta* pTableMeta,
                                STableDataBlocks** dataBlocks, SArray* pBlockList);
/**
 * for the projection query on metric or point interpolation query on metric,
@@ -275,6 +276,8 @@ void tscPrintSelectClause(SSqlObj* pSql, int32_t subClauseIndex);
bool hasMoreVnodesToTry(SSqlObj *pSql);
bool hasMoreClauseToTry(SSqlObj* pSql);
void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeFromCache);
void tscTryQueryNextVnode(SSqlObj *pSql, __async_cb_func_t fp);
void tscAsyncQuerySingleRowForNextVnode(void *param, TAOS_RES *tres, int numOfRows);
void tscTryQueryNextClause(SSqlObj* pSql, __async_cb_func_t fp);
......
@@ -37,40 +37,6 @@ extern "C" {
#include "qTsbuf.h"
#include "tcmdtype.h"
#if 0
static UNUSED_FUNC void *u_malloc (size_t __size) {
uint32_t v = rand();
if (v % 5000 <= 0) {
return NULL;
} else {
return malloc(__size);
}
}
static UNUSED_FUNC void* u_calloc(size_t num, size_t __size) {
uint32_t v = rand();
if (v % 5000 <= 0) {
return NULL;
} else {
return calloc(num, __size);
}
}
static UNUSED_FUNC void* u_realloc(void* p, size_t __size) {
uint32_t v = rand();
if (v % 5000 <= 0) {
return NULL;
} else {
return realloc(p, __size);
}
}
#define calloc u_calloc
#define malloc u_malloc
#define realloc u_realloc
#endif
// forward declaration
struct SSqlInfo;
struct SLocalReducer;
@@ -78,7 +44,7 @@ struct SLocalReducer;
// data source from sql string or from file
enum {
  DATA_FROM_SQL_STRING = 1,
  DATA_FROM_DATA_FILE  = 2,
};
typedef void (*__async_cb_func_t)(void *param, TAOS_RES *tres, int32_t numOfRows);
@@ -118,10 +84,10 @@ typedef struct STableMetaInfo {
   * 1. keep the vgroup index during the multi-vnode super table projection query
   * 2. keep the vgroup index for multi-vnode insertion
   */
  int32_t vgroupIndex;
  char    name[TSDB_TABLE_FNAME_LEN];      // (super) table name
  char    aliasName[TSDB_TABLE_NAME_LEN];  // alias name of table specified in query sql
  SArray *tagColList;                      // SArray<SColumn*>, involved tag columns
} STableMetaInfo;

/* the structure for sql function in select clause */
@@ -136,7 +102,7 @@ typedef struct SSqlExpr {
  int16_t  numOfParams;  // argument value of each function
  tVariant param[3];     // parameters are not more than 3
  int32_t  offset;       // sub result column value of arithmetic expression.
  int16_t  resColId;     // result column id
} SSqlExpr;

typedef struct SColumnIndex {
@@ -204,22 +170,17 @@ typedef struct SParamInfo {
} SParamInfo;

typedef struct STableDataBlocks {
  char        tableId[TSDB_TABLE_FNAME_LEN];
  int8_t      tsSource;     // where does the UNIX timestamp come from, server or client
  bool        ordered;      // if current rows are ordered or not
  int64_t     vgId;         // virtual group id
  int64_t     prevTS;       // previous timestamp, recorded to decide if the records array is ts ascending
  int32_t     numOfTables;  // number of tables in current submit block
  int32_t     rowSize;      // row size for current table
  uint32_t    nAllocSize;
  uint32_t    headerSize;   // header for table info (uid, tid, submit metadata)
  uint32_t    size;
  STableMeta *pTableMeta;   // the tableMeta of current table, the table meta will be used during submit, keep a ref to avoid to be removed from cache
  char       *pData;

  // for parameter ('?') binding
@@ -252,7 +213,7 @@ typedef struct SQueryInfo {
  int64_t clauseLimit;  // limit for current sub clause
  int64_t prjOffset;    // offset value in the original sql expression, only applied at client side
  int64_t vgroupLimit;  // table limit in case of super table projection query + global order + limit
  int32_t udColumnId;   // current user-defined constant output field column id, monotonically decreases from TSDB_UD_COLUMN_INDEX
  int16_t resColumnId;  // result column id
@@ -284,10 +245,14 @@ typedef struct {
  int32_t      numOfParams;
  int8_t       dataSourceType;       // load data from file or not
  int8_t       submitSchema;         // submit block is built with table schema
  STagData    *pTagData;             // NOTE: pTagData->data is used as a variant length array
  STableMeta **pTableMetaList;       // all involved tableMeta list of current insert sql statement.
  int32_t      numOfTables;
  SHashObj    *pTableBlockHashList;  // data block for each table
  SArray      *pDataBlocks;          // SArray<STableDataBlocks*>. Merged submit block for each vgroup
} SSqlCmd;

typedef struct SResRec {
......
@@ -365,6 +365,7 @@ void tscProcessFetchRow(SSchedMsg *pMsg) {
static void tscProcessAsyncError(SSchedMsg *pMsg) {
  void (*fp)() = pMsg->ahandle;
  terrno = *(int32_t*) pMsg->msg;
  tfree(pMsg->msg);
  (*fp)(pMsg->thandle, NULL, *(int32_t*)pMsg->msg);
}
@@ -410,52 +411,26 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
  if (code != TSDB_CODE_SUCCESS) {
    tscError("%p get %s failed, code:%s", pSql, msg, tstrerror(code));
    goto _error;
  }

  tscDebug("%p get %s successfully", pSql, msg);

  if (pSql->pStream == NULL) {
    SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);

    // check if it is a sub-query of super table query first, if true, enter another routine
    if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY|TSDB_QUERY_TYPE_TAG_FILTER_QUERY))) {
      tscDebug("%p update table meta in local cache, continue to process sql and send the corresponding query", pSql);

      STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
      code = tscGetTableMeta(pSql, pTableMetaInfo);
      assert(code == TSDB_CODE_TSC_ACTION_IN_PROGRESS || code == TSDB_CODE_SUCCESS);
      if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
        return;
      }

      assert((tscGetNumOfTags(pTableMetaInfo->pTableMeta) != 0));

      // tscProcessSql can add error into async res
      tscProcessSql(pSql);
      return;
@@ -465,16 +440,15 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
    STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
    code = tscGetTableMeta(pSql, pTableMetaInfo);

    assert(code == TSDB_CODE_TSC_ACTION_IN_PROGRESS || code == TSDB_CODE_SUCCESS);
    if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
      return;
    }

    assert(pCmd->command != TSDB_SQL_INSERT);

    if (pCmd->command == TSDB_SQL_SELECT) {
      tscDebug("%p redo parse sql string and proceed", pSql);
      pCmd->parseFinished = false;
      tscResetSqlCmdObj(pCmd, false);
@@ -486,17 +460,8 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
        goto _error;
      }

      tscProcessSql(pSql);
    } else { // in all other cases, simple retry
      tscProcessSql(pSql);
    }
@@ -551,6 +516,7 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
  if (!pSql->cmd.parseFinished) {
    tsParseSql(pSql, false);
  }

  (*pSql->fp)(pSql->param, pSql, code);
  return;
......
@@ -2589,10 +2589,11 @@ static void percentile_next_step(SQLFunctionCtx *pCtx) {
    // all data are null, set it completed
    if (pInfo->numOfElems == 0) {
      pResInfo->complete = true;
    } else {
      pInfo->pMemBucket = tMemBucketCreate(pCtx->inputBytes, pCtx->inputType, GET_DOUBLE_VAL(&pInfo->minval), GET_DOUBLE_VAL(&pInfo->maxval));
    }

    pInfo->stage += 1;
  } else {
    pResInfo->complete = true;
  }
@@ -3647,11 +3648,21 @@ static bool twa_function_setup(SQLFunctionCtx *pCtx) {
  SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx);
  STwaInfo *pInfo = GET_ROWCELL_INTERBUF(pResInfo);

  pInfo->p.key = INT64_MIN;
  pInfo->win   = TSWINDOW_INITIALIZER;
  return true;
}
static double twa_get_area(SPoint1 s, SPoint1 e) {
if ((s.val >= 0 && e.val >= 0)|| (s.val <=0 && e.val <= 0)) {
return (s.val + e.val) * (e.key - s.key) / 2;
}
double x = (s.key * e.val - e.key * s.val)/(e.val - s.val);
double val = (s.val * (x - s.key) + e.val * (e.key - x)) / 2;
return val;
}
static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t index, int32_t size) {
  int32_t notNullElems = 0;
  TSKEY *primaryKey = pCtx->ptsList;
@@ -3662,28 +3673,29 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
  int32_t i = index;
  int32_t step = GET_FORWARD_DIRECTION_FACTOR(pCtx->order);
  SPoint1* last = &pInfo->p;

  if (pCtx->start.key != INT64_MIN) {
    assert((pCtx->start.key < primaryKey[tsIndex + i] && pCtx->order == TSDB_ORDER_ASC) ||
           (pCtx->start.key > primaryKey[tsIndex + i] && pCtx->order == TSDB_ORDER_DESC));

    assert(last->key == INT64_MIN);

    last->key = primaryKey[tsIndex + i];
    GET_TYPED_DATA(last->val, double, pCtx->inputType, GET_INPUT_CHAR_INDEX(pCtx, index));

    pInfo->dOutput += twa_get_area(pCtx->start, *last);
    pInfo->hasResult = DATA_SET_FLAG;
    pInfo->win.skey = pCtx->start.key;
    notNullElems++;
    i += step;
  } else if (pInfo->p.key == INT64_MIN) {
    last->key = primaryKey[tsIndex + i];
    GET_TYPED_DATA(last->val, double, pCtx->inputType, GET_INPUT_CHAR_INDEX(pCtx, index));

    pInfo->hasResult = DATA_SET_FLAG;
    pInfo->win.skey = last->key;
    notNullElems++;
    i += step;
  }
@@ -3697,9 +3709,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3710,9 +3722,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3723,9 +3735,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3736,9 +3748,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = (double) val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3749,9 +3761,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3762,9 +3774,9 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + tsIndex], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3773,20 +3785,19 @@ static int32_t twa_function_impl(SQLFunctionCtx* pCtx, int32_t tsIndex, int32_t
  // the last interpolated time window value
  if (pCtx->end.key != INT64_MIN) {
    pInfo->dOutput += twa_get_area(pInfo->p, pCtx->end);
    pInfo->p = pCtx->end;
  }

  pInfo->win.ekey = pInfo->p.key;
  return notNullElems;
}
static void twa_function(SQLFunctionCtx *pCtx) {
  void *data = GET_INPUT_CHAR(pCtx);
  SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx);
  STwaInfo *pInfo = GET_ROWCELL_INTERBUF(pResInfo);

  // skip null value
  int32_t step = GET_FORWARD_DIRECTION_FACTOR(pCtx->order);
@@ -3807,6 +3818,7 @@ static void twa_function(SQLFunctionCtx *pCtx) {
  }
}
//TODO refactor
static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
  void *pData = GET_INPUT_CHAR_INDEX(pCtx, index);
  if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
@@ -3823,23 +3835,23 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
  int32_t size = pCtx->size;

  if (pCtx->start.key != INT64_MIN) {
    assert(pInfo->p.key == INT64_MIN);

    pInfo->p.key = primaryKey[index];
    GET_TYPED_DATA(pInfo->p.val, double, pCtx->inputType, GET_INPUT_CHAR_INDEX(pCtx, index));

    pInfo->dOutput += twa_get_area(pCtx->start, pInfo->p);

    pInfo->hasResult = DATA_SET_FLAG;
    pInfo->win.skey = pCtx->start.key;
    notNullElems++;
    i += 1;
  } else if (pInfo->p.key == INT64_MIN) {
    pInfo->p.key = primaryKey[index];
    GET_TYPED_DATA(pInfo->p.val, double, pCtx->inputType, GET_INPUT_CHAR_INDEX(pCtx, index));

    pInfo->hasResult = DATA_SET_FLAG;
    pInfo->win.skey = pInfo->p.key;
    notNullElems++;
    i += 1;
  }
@@ -3853,9 +3865,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3866,9 +3878,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3879,9 +3891,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3892,9 +3904,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = (double) val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);
        pInfo->p = st;
      }
      break;
    }
@@ -3905,9 +3917,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);//((val[i] + pInfo->p.val) / 2) * (primaryKey[i + index] - pInfo->p.key);
        pInfo->p = st;
      }
      break;
    }
@@ -3918,9 +3930,9 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
          continue;
        }

        SPoint1 st = {.key = primaryKey[i + index], .val = val[i]};
        pInfo->dOutput += twa_get_area(pInfo->p, st);//((val[i] + pInfo->p.val) / 2) * (primaryKey[i + index] - pInfo->p.key);
        pInfo->p = st;
      }
      break;
    }
@@ -3929,12 +3941,11 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
  // the last interpolated time window value
  if (pCtx->end.key != INT64_MIN) {
    pInfo->dOutput += twa_get_area(pInfo->p, pCtx->end);//((pInfo->p.val + pCtx->end.val) / 2) * (pCtx->end.key - pInfo->p.key);
    pInfo->p = pCtx->end;
  }

  pInfo->win.ekey = pInfo->p.key;

  SET_VAL(pCtx, notNullElems, 1);
@@ -3965,7 +3976,7 @@ static void twa_func_merge(SQLFunctionCtx *pCtx) {
    pBuf->dOutput += pInput->dOutput;
    pBuf->win = pInput->win;
    pBuf->p   = pInput->p;
  }

  SET_VAL(pCtx, numOfNotNull, 1);
@@ -3992,15 +4003,14 @@ void twa_function_finalizer(SQLFunctionCtx *pCtx) {
  SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx);
  STwaInfo *pInfo = (STwaInfo *)GET_ROWCELL_INTERBUF(pResInfo);

  if (pInfo->hasResult != DATA_SET_FLAG) {
    setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double));
    return;
  }

  assert(pInfo->win.ekey == pInfo->p.key && pInfo->hasResult == pResInfo->hasResult);
  if (pInfo->win.ekey == pInfo->win.skey) {
    *(double *)pCtx->aOutputBuf = pInfo->p.val;
  } else {
    *(double *)pCtx->aOutputBuf = pInfo->dOutput / (pInfo->win.ekey - pInfo->win.skey);
  }
......
@@ -726,10 +726,14 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
    SSqlExpr *pExpr = tscSqlExprGet(pQueryInfo, i);

    SSchema p1 = {0};
    if (pExpr->colInfo.colIndex == TSDB_TBNAME_COLUMN_INDEX) {
      p1 = tGetTableNameColumnSchema();
    } else if (TSDB_COL_IS_UD_COL(pExpr->colInfo.flag)) {
      p1.bytes = pExpr->resBytes;
      p1.type  = (uint8_t) pExpr->resType;
      tstrncpy(p1.name, pExpr->aliasName, tListLen(p1.name));
    } else {
      p1 = *tscGetTableColumnSchema(pTableMetaInfo->pTableMeta, pExpr->colInfo.colIndex);
    }

    int32_t inter = 0;
......
@@ -686,17 +686,14 @@ void tscSortRemoveDataBlockDupRows(STableDataBlocks *dataBuf) {
  }
}

static int32_t doParseInsertStatement(SSqlCmd* pCmd, char **str, SParsedDataColInfo *spd, int32_t *totalNum) {
  STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
  STableMeta     *pTableMeta = pTableMetaInfo->pTableMeta;
  STableComInfo   tinfo = tscGetTableInfo(pTableMeta);

  STableDataBlocks *dataBuf = NULL;
  int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
                                        sizeof(SSubmitBlk), tinfo.rowSize, pTableMetaInfo->name, pTableMeta, &dataBuf, NULL);
  if (ret != TSDB_CODE_SUCCESS) {
    return ret;
  }
@@ -1058,18 +1055,17 @@ int tsParseInsertSql(SSqlObj *pSql) {
    return code;
  }

  if (NULL == pCmd->pTableBlockHashList) {
    pCmd->pTableBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
    if (NULL == pCmd->pTableBlockHashList) {
      code = TSDB_CODE_TSC_OUT_OF_MEMORY;
      goto _clean;
    }
  } else {
    str = pCmd->curSql;
  }

  tscDebug("%p create data block list hashList:%p", pSql, pCmd->pTableBlockHashList);

  while (1) {
    int32_t index = 0;
@@ -1091,7 +1087,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
     */
    if (totalNum == 0) {
      code = TSDB_CODE_TSC_INVALID_SQL;
      goto _clean;
    } else {
      break;
    }
@@ -1104,11 +1100,11 @@ int tsParseInsertSql(SSqlObj *pSql) {
    // Check if the table name available or not
    if (validateTableName(sToken.z, sToken.n, &sTblToken) != TSDB_CODE_SUCCESS) {
      code = tscInvalidSQLErrMsg(pCmd->payload, "table name invalid", sToken.z);
      goto _clean;
    }

    if ((code = tscSetTableFullName(pTableMetaInfo, &sTblToken, pSql)) != TSDB_CODE_SUCCESS) {
      goto _clean;
    }

    if ((code = tscCheckIfCreateTable(&str, pSql)) != TSDB_CODE_SUCCESS) {
@@ -1122,12 +1118,12 @@ int tsParseInsertSql(SSqlObj *pSql) {
      tscError("%p async insert parse error, code:%s", pSql, tstrerror(code));
      pCmd->curSql = NULL;
      goto _clean;
    }

    if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
      code = tscInvalidSQLErrMsg(pCmd->payload, "insert data into super table is not supported", NULL);
      goto _clean;
    }

    index = 0;
@@ -1136,7 +1132,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
    if (sToken.n == 0) {
      code = tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES or FILE required", sToken.z);
      goto _clean;
    }

    STableComInfo tinfo = tscGetTableInfo(pTableMetaInfo->pTableMeta);
...@@ -1148,32 +1144,32 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1148,32 +1144,32 @@ int tsParseInsertSql(SSqlObj *pSql) {
tscSetAssignedColumnInfo(&spd, pSchema, tinfo.numOfColumns); tscSetAssignedColumnInfo(&spd, pSchema, tinfo.numOfColumns);
if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) { if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
/* /*
* app here insert data in different vnodes, so we need to set the following * app here insert data in different vnodes, so we need to set the following
* data in another submit procedure using async insert routines * data in another submit procedure using async insert routines
*/ */
code = doParseInsertStatement(pSql, pCmd->pTableList, &str, &spd, &totalNum); code = doParseInsertStatement(pCmd, &str, &spd, &totalNum);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
} else if (sToken.type == TK_FILE) { } else if (sToken.type == TK_FILE) {
if (validateDataSource(pCmd, DATA_FROM_DATA_FILE, sToken.z) != TSDB_CODE_SUCCESS) { if (validateDataSource(pCmd, DATA_FROM_DATA_FILE, sToken.z) != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
index = 0; index = 0;
sToken = tStrGetToken(str, &index, false, 0, NULL); sToken = tStrGetToken(str, &index, false, 0, NULL);
if (sToken.type != TK_STRING && sToken.type != TK_ID) { if (sToken.type != TK_STRING && sToken.type != TK_ID) {
code = tscInvalidSQLErrMsg(pCmd->payload, "file path is required following keyword FILE", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "file path is required following keyword FILE", sToken.z);
goto _error; goto _clean;
} }
str += index; str += index;
if (sToken.n == 0) { if (sToken.n == 0) {
code = tscInvalidSQLErrMsg(pCmd->payload, "file path is required following keyword FILE", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "file path is required following keyword FILE", sToken.z);
goto _error; goto _clean;
} }
strncpy(pCmd->payload, sToken.z, sToken.n); strncpy(pCmd->payload, sToken.z, sToken.n);
...@@ -1183,7 +1179,7 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1183,7 +1179,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
wordexp_t full_path; wordexp_t full_path;
if (wordexp(pCmd->payload, &full_path, 0) != 0) { if (wordexp(pCmd->payload, &full_path, 0) != 0) {
code = tscInvalidSQLErrMsg(pCmd->payload, "invalid filename", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "invalid filename", sToken.z);
goto _error; goto _clean;
} }
tstrncpy(pCmd->payload, full_path.we_wordv[0], pCmd->allocSize); tstrncpy(pCmd->payload, full_path.we_wordv[0], pCmd->allocSize);
...@@ -1195,7 +1191,7 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1195,7 +1191,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
SSchema * pSchema = tscGetTableSchema(pTableMeta); SSchema * pSchema = tscGetTableSchema(pTableMeta);
if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) { if (validateDataSource(pCmd, DATA_FROM_SQL_STRING, sToken.z) != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
SParsedDataColInfo spd = {0}; SParsedDataColInfo spd = {0};
...@@ -1230,7 +1226,7 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1230,7 +1226,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
if (spd.hasVal[t] == true) { if (spd.hasVal[t] == true) {
code = tscInvalidSQLErrMsg(pCmd->payload, "duplicated column name", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "duplicated column name", sToken.z);
goto _error; goto _clean;
} }
spd.hasVal[t] = true; spd.hasVal[t] = true;
...@@ -1241,13 +1237,13 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1241,13 +1237,13 @@ int tsParseInsertSql(SSqlObj *pSql) {
if (!findColumnIndex) { if (!findColumnIndex) {
code = tscInvalidSQLErrMsg(pCmd->payload, "invalid column name", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "invalid column name", sToken.z);
goto _error; goto _clean;
} }
} }
if (spd.numOfAssignedCols == 0 || spd.numOfAssignedCols > tinfo.numOfColumns) { if (spd.numOfAssignedCols == 0 || spd.numOfAssignedCols > tinfo.numOfColumns) {
code = tscInvalidSQLErrMsg(pCmd->payload, "column name expected", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "column name expected", sToken.z);
goto _error; goto _clean;
} }
index = 0; index = 0;
...@@ -1256,16 +1252,16 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1256,16 +1252,16 @@ int tsParseInsertSql(SSqlObj *pSql) {
if (sToken.type != TK_VALUES) { if (sToken.type != TK_VALUES) {
code = tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES is expected", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES is expected", sToken.z);
goto _error; goto _clean;
} }
code = doParseInsertStatement(pSql, pCmd->pTableList, &str, &spd, &totalNum); code = doParseInsertStatement(pCmd, &str, &spd, &totalNum);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
} else { } else {
code = tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES or FILE are required", sToken.z); code = tscInvalidSQLErrMsg(pCmd->payload, "keyword VALUES or FILE are required", sToken.z);
goto _error; goto _clean;
} }
} }
...@@ -1274,25 +1270,18 @@ int tsParseInsertSql(SSqlObj *pSql) { ...@@ -1274,25 +1270,18 @@ int tsParseInsertSql(SSqlObj *pSql) {
goto _clean; goto _clean;
} }
if (taosArrayGetSize(pCmd->pDataBlocks) > 0) { // merge according to vgId if (taosHashGetSize(pCmd->pTableBlockHashList) > 0) { // merge according to vgId
if ((code = tscMergeTableDataBlocks(pSql, pCmd->pDataBlocks)) != TSDB_CODE_SUCCESS) { if ((code = tscMergeTableDataBlocks(pSql)) != TSDB_CODE_SUCCESS) {
goto _error; goto _clean;
} }
} }
code = TSDB_CODE_SUCCESS; code = TSDB_CODE_SUCCESS;
goto _clean; goto _clean;
_error:
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
_clean: _clean:
taosHashCleanup(pCmd->pTableList); pCmd->curSql = NULL;
pCmd->pTableList = NULL;
pCmd->curSql = NULL;
pCmd->parseFinished = 1; pCmd->parseFinished = 1;
return code; return code;
} }
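Net effect of this function: the per-table submit blocks are now tracked in one hash table keyed by table uid (pCmd->pTableBlockHashList) instead of the old pTableList/pDataBlocks pair, and the separate _error label is gone because the cleanup path deliberately keeps the hash table alive for the async re-parse. A condensed sketch of the lookup, stitched together from the calls above (client internals such as SSqlCmd, STableDataBlocks and the taosHash helpers are assumed to be in scope; return-code checks omitted):

// sketch: resolve (or lazily create) the submit block for one table, keyed by its uid
if (pCmd->pTableBlockHashList == NULL) {
  pCmd->pTableBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
}

STableDataBlocks *dataBuf = NULL;
tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_DEFAULT_PAYLOAD_SIZE,
                        sizeof(SSubmitBlk), tinfo.rowSize, pTableMetaInfo->name, pTableMeta,
                        &dataBuf, NULL);  // NULL: no flat SArray is maintained during parsing any more

// repeated VALUES clauses for the same table land in the same dataBuf instead of adding a new array entry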
...@@ -1373,6 +1362,7 @@ int tsParseSql(SSqlObj *pSql, bool initial) { ...@@ -1373,6 +1362,7 @@ int tsParseSql(SSqlObj *pSql, bool initial) {
pSql->parseRetry++; pSql->parseRetry++;
ret = tscToSQLCmd(pSql, &SQLInfo); ret = tscToSQLCmd(pSql, &SQLInfo);
} }
SQLInfoDestroy(&SQLInfo); SQLInfoDestroy(&SQLInfo);
} }
...@@ -1399,7 +1389,7 @@ static int doPackSendDataBlock(SSqlObj *pSql, int32_t numOfRows, STableDataBlock ...@@ -1399,7 +1389,7 @@ static int doPackSendDataBlock(SSqlObj *pSql, int32_t numOfRows, STableDataBlock
return tscInvalidSQLErrMsg(pCmd->payload, "too many rows in sql, total number of rows should be less than 32767", NULL); return tscInvalidSQLErrMsg(pCmd->payload, "too many rows in sql, total number of rows should be less than 32767", NULL);
} }
if ((code = tscMergeTableDataBlocks(pSql, pCmd->pDataBlocks)) != TSDB_CODE_SUCCESS) { if ((code = tscMergeTableDataBlocks(pSql)) != TSDB_CODE_SUCCESS) {
return code; return code;
} }
...@@ -1456,18 +1446,21 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int code) { ...@@ -1456,18 +1446,21 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int code) {
int32_t count = 0; int32_t count = 0;
int32_t maxRows = 0; int32_t maxRows = 0;
tscDestroyBlockArrayList(pSql->cmd.pDataBlocks); tfree(pCmd->pTableMetaList);
pCmd->pDataBlocks = taosArrayInit(1, POINTER_BYTES); pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
if (pCmd->pTableBlockHashList == NULL) {
pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
}
STableDataBlocks *pTableDataBlock = NULL; STableDataBlocks *pTableDataBlock = NULL;
int32_t ret = tscCreateDataBlock(TSDB_PAYLOAD_SIZE, tinfo.rowSize, sizeof(SSubmitBlk), pTableMetaInfo->name, pTableMeta, &pTableDataBlock); int32_t ret = tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE,
sizeof(SSubmitBlk), tinfo.rowSize, pTableMetaInfo->name, pTableMeta, &pTableDataBlock, NULL);
if (ret != TSDB_CODE_SUCCESS) { if (ret != TSDB_CODE_SUCCESS) {
// return ret; // return ret;
} }
taosArrayPush(pCmd->pDataBlocks, &pTableDataBlock);
tscAllocateMemIfNeed(pTableDataBlock, tinfo.rowSize, &maxRows); tscAllocateMemIfNeed(pTableDataBlock, tinfo.rowSize, &maxRows);
char *tokenBuf = calloc(1, 4096); char *tokenBuf = calloc(1, 4096);
while ((readLen = tgetline(&line, &n, fp)) != -1) { while ((readLen = tgetline(&line, &n, fp)) != -1) {
...@@ -1529,8 +1522,6 @@ void tscProcessMultiVnodesImportFromFile(SSqlObj *pSql) { ...@@ -1529,8 +1522,6 @@ void tscProcessMultiVnodesImportFromFile(SSqlObj *pSql) {
SImportFileSupport *pSupporter = calloc(1, sizeof(SImportFileSupport)); SImportFileSupport *pSupporter = calloc(1, sizeof(SImportFileSupport));
SSqlObj *pNew = createSubqueryObj(pSql, 0, parseFileSendDataBlock, pSupporter, TSDB_SQL_INSERT, NULL); SSqlObj *pNew = createSubqueryObj(pSql, 0, parseFileSendDataBlock, pSupporter, TSDB_SQL_INSERT, NULL);
pNew->cmd.pDataBlocks = taosArrayInit(4, POINTER_BYTES);
pCmd->count = 1; pCmd->count = 1;
FILE *fp = fopen(pCmd->payload, "r"); FILE *fp = fopen(pCmd->payload, "r");
......
...@@ -800,9 +800,9 @@ static int insertStmtExecute(STscStmt* stmt) { ...@@ -800,9 +800,9 @@ static int insertStmtExecute(STscStmt* stmt) {
STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0); STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
assert(pCmd->numOfClause == 1); assert(pCmd->numOfClause == 1);
if (taosArrayGetSize(pCmd->pDataBlocks) > 0) { if (taosHashGetSize(pCmd->pTableBlockHashList) > 0) {
// merge according to vgid // merge according to vgid
int code = tscMergeTableDataBlocks(stmt->pSql, pCmd->pDataBlocks); int code = tscMergeTableDataBlocks(stmt->pSql);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
return code; return code;
} }
......
...@@ -1310,7 +1310,7 @@ static int32_t handleArithmeticExpr(SSqlCmd* pCmd, int32_t clauseIndex, int32_t ...@@ -1310,7 +1310,7 @@ static int32_t handleArithmeticExpr(SSqlCmd* pCmd, int32_t clauseIndex, int32_t
SColumnIndex index = {.tableIndex = tableIndex}; SColumnIndex index = {.tableIndex = tableIndex};
SSqlExpr* pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_ARITHM, &index, TSDB_DATA_TYPE_DOUBLE, sizeof(double), SSqlExpr* pExpr = tscSqlExprAppend(pQueryInfo, TSDB_FUNC_ARITHM, &index, TSDB_DATA_TYPE_DOUBLE, sizeof(double),
-1000, sizeof(double), false); getNewResColId(pQueryInfo), sizeof(double), false);
char* name = (pItem->aliasName != NULL)? pItem->aliasName:pItem->pNode->token.z; char* name = (pItem->aliasName != NULL)? pItem->aliasName:pItem->pNode->token.z;
size_t len = MIN(sizeof(pExpr->aliasName), pItem->pNode->token.n + 1); size_t len = MIN(sizeof(pExpr->aliasName), pItem->pNode->token.n + 1);
...@@ -5317,7 +5317,7 @@ int32_t parseLimitClause(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t clauseIn ...@@ -5317,7 +5317,7 @@ int32_t parseLimitClause(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t clauseIn
// keep original limitation value in globalLimit // keep original limitation value in globalLimit
pQueryInfo->clauseLimit = pQueryInfo->limit.limit; pQueryInfo->clauseLimit = pQueryInfo->limit.limit;
pQueryInfo->prjOffset = pQueryInfo->limit.offset; pQueryInfo->prjOffset = pQueryInfo->limit.offset;
pQueryInfo->tableLimit = -1; pQueryInfo->vgroupLimit = -1;
if (tscOrderedProjectionQueryOnSTable(pQueryInfo, 0)) { if (tscOrderedProjectionQueryOnSTable(pQueryInfo, 0)) {
/* /*
...@@ -5327,7 +5327,7 @@ int32_t parseLimitClause(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t clauseIn ...@@ -5327,7 +5327,7 @@ int32_t parseLimitClause(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t clauseIn
* than or equal to the value of limit. * than or equal to the value of limit.
*/ */
if (pQueryInfo->limit.limit > 0) { if (pQueryInfo->limit.limit > 0) {
pQueryInfo->tableLimit = pQueryInfo->limit.limit + pQueryInfo->limit.offset; pQueryInfo->vgroupLimit = pQueryInfo->limit.limit + pQueryInfo->limit.offset;
pQueryInfo->limit.limit = -1; pQueryInfo->limit.limit = -1;
} }
...@@ -5913,25 +5913,33 @@ int32_t doLocalQueryProcess(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SQuerySQL* pQ ...@@ -5913,25 +5913,33 @@ int32_t doLocalQueryProcess(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SQuerySQL* pQ
if (pExprList->nExpr != 1) { if (pExprList->nExpr != 1) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg1); return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg1);
} }
bool server_status = false;
tSQLExpr* pExpr = pExprList->a[0].pNode; tSQLExpr* pExpr = pExprList->a[0].pNode;
if (pExpr->operand.z == NULL) { if (pExpr->operand.z == NULL) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg2); //handle 'select 1'
} if (pExpr->token.n == 1 && 0 == strncasecmp(pExpr->token.z, "1", 1)) {
server_status = true;
} else {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg2);
}
}
// TODO redefine the function // TODO redefine the function
SDNodeDynConfOption functionsInfo[5] = {{"database()", 10}, SDNodeDynConfOption functionsInfo[5] = {{"database()", 10},
{"server_version()", 16}, {"server_version()", 16},
{"server_status()", 15}, {"server_status()", 15},
{"client_version()", 16}, {"client_version()", 16},
{"current_user()", 14}}; {"current_user()", 14}};
int32_t index = -1; int32_t index = -1;
for (int32_t i = 0; i < tListLen(functionsInfo); ++i) { if (server_status == true) {
if (strncasecmp(functionsInfo[i].name, pExpr->operand.z, functionsInfo[i].len) == 0 && index = 2;
functionsInfo[i].len == pExpr->operand.n) { } else {
index = i; for (int32_t i = 0; i < tListLen(functionsInfo); ++i) {
break; if (strncasecmp(functionsInfo[i].name, pExpr->operand.z, functionsInfo[i].len) == 0 &&
functionsInfo[i].len == pExpr->operand.n) {
index = i;
break;
}
} }
} }
......
...@@ -152,7 +152,13 @@ void tscProcessHeartBeatRsp(void *param, TAOS_RES *tres, int code) { ...@@ -152,7 +152,13 @@ void tscProcessHeartBeatRsp(void *param, TAOS_RES *tres, int code) {
SRpcEpSet * epSet = &pRsp->epSet; SRpcEpSet * epSet = &pRsp->epSet;
if (epSet->numOfEps > 0) { if (epSet->numOfEps > 0) {
tscEpSetHtons(epSet); tscEpSetHtons(epSet);
tscUpdateMgmtEpSet(pSql, epSet); if (!tscEpSetIsEqual(&pSql->pTscObj->tscCorMgmtEpSet->epSet, epSet)) {
tscTrace("%p updating epset: numOfEps: %d, inUse: %d", pSql, epSet->numOfEps, epSet->inUse);
for (int8_t i = 0; i < epSet->numOfEps; i++) {
tscTrace("endpoint %d: fqdn = %s, port=%d", i, epSet->fqdn[i], epSet->port[i]);
}
tscUpdateMgmtEpSet(pSql, epSet);
}
} }
pSql->pTscObj->connId = htonl(pRsp->connId); pSql->pTscObj->connId = htonl(pRsp->connId);
...@@ -280,19 +286,19 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) { ...@@ -280,19 +286,19 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
} }
int32_t cmd = pCmd->command; int32_t cmd = pCmd->command;
if ((cmd == TSDB_SQL_SELECT || cmd == TSDB_SQL_FETCH || cmd == TSDB_SQL_INSERT || cmd == TSDB_SQL_UPDATE_TAGS_VAL) &&
// set the flag to denote that sql string needs to be re-parsed and build submit block with table schema
if (cmd == TSDB_SQL_INSERT && rpcMsg->code == TSDB_CODE_TDB_TABLE_RECONFIGURE) {
pSql->cmd.submitSchema = 1;
}
if ((cmd == TSDB_SQL_SELECT || cmd == TSDB_SQL_FETCH || cmd == TSDB_SQL_UPDATE_TAGS_VAL) &&
(rpcMsg->code == TSDB_CODE_TDB_INVALID_TABLE_ID || (rpcMsg->code == TSDB_CODE_TDB_INVALID_TABLE_ID ||
rpcMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID || rpcMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID ||
rpcMsg->code == TSDB_CODE_RPC_NETWORK_UNAVAIL || rpcMsg->code == TSDB_CODE_RPC_NETWORK_UNAVAIL ||
rpcMsg->code == TSDB_CODE_APP_NOT_READY || rpcMsg->code == TSDB_CODE_APP_NOT_READY)) {
rpcMsg->code == TSDB_CODE_TDB_TABLE_RECONFIGURE)) {
tscWarn("%p it shall renew table meta, code:%s, retry:%d", pSql, tstrerror(rpcMsg->code), ++pSql->retry); tscWarn("%p it shall renew table meta, code:%s, retry:%d", pSql, tstrerror(rpcMsg->code), ++pSql->retry);
// set the flag to denote that sql string needs to be re-parsed and build submit block with table schema
if (rpcMsg->code == TSDB_CODE_TDB_TABLE_RECONFIGURE) {
pSql->cmd.submitSchema = 1;
}
pSql->res.code = rpcMsg->code; // keep the previous error code pSql->res.code = rpcMsg->code; // keep the previous error code
if (pSql->retry > pSql->maxRetry) { if (pSql->retry > pSql->maxRetry) {
tscError("%p max retry %d reached, give up", pSql, pSql->maxRetry); tscError("%p max retry %d reached, give up", pSql, pSql->maxRetry);
...@@ -451,10 +457,10 @@ int tscProcessSql(SSqlObj *pSql) { ...@@ -451,10 +457,10 @@ int tscProcessSql(SSqlObj *pSql) {
int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) { int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SRetrieveTableMsg *pRetrieveMsg = (SRetrieveTableMsg *) pSql->cmd.payload; SRetrieveTableMsg *pRetrieveMsg = (SRetrieveTableMsg *) pSql->cmd.payload;
pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex); SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex);
pRetrieveMsg->free = htons(pQueryInfo->type); pRetrieveMsg->free = htons(pQueryInfo->type);
pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
// todo valid the vgroupId at the client side // todo valid the vgroupId at the client side
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0); STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
...@@ -681,7 +687,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) { ...@@ -681,7 +687,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
pQueryMsg->tagNameRelType = htons(pQueryInfo->tagCond.relType); pQueryMsg->tagNameRelType = htons(pQueryInfo->tagCond.relType);
pQueryMsg->numOfTags = htonl(numOfTags); pQueryMsg->numOfTags = htonl(numOfTags);
pQueryMsg->queryType = htonl(pQueryInfo->type); pQueryMsg->queryType = htonl(pQueryInfo->type);
pQueryMsg->tableLimit = htobe64(pQueryInfo->tableLimit); pQueryMsg->vgroupLimit = htobe64(pQueryInfo->vgroupLimit);
size_t numOfOutput = tscSqlExprNumOfExprs(pQueryInfo); size_t numOfOutput = tscSqlExprNumOfExprs(pQueryInfo);
pQueryMsg->numOfOutput = htons((int16_t)numOfOutput); // this is the stage one output column number pQueryMsg->numOfOutput = htons((int16_t)numOfOutput); // this is the stage one output column number
...@@ -1394,6 +1400,43 @@ int tscBuildUpdateTagMsg(SSqlObj* pSql, SSqlInfo *pInfo) { ...@@ -1394,6 +1400,43 @@ int tscBuildUpdateTagMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
//int tscBuildCancelQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
// SCancelQueryMsg *pCancelMsg = (SCancelQueryMsg*) pSql->cmd.payload;
// pCancelMsg->qhandle = htobe64(pSql->res.qhandle);
//
// SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex);
// STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
//
// if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
// int32_t vgIndex = pTableMetaInfo->vgroupIndex;
// if (pTableMetaInfo->pVgroupTables == NULL) {
// SVgroupsInfo *pVgroupInfo = pTableMetaInfo->vgroupList;
// assert(pVgroupInfo->vgroups[vgIndex].vgId > 0 && vgIndex < pTableMetaInfo->vgroupList->numOfVgroups);
//
// pCancelMsg->header.vgId = htonl(pVgroupInfo->vgroups[vgIndex].vgId);
// tscDebug("%p build cancel query msg from vgId:%d, vgIndex:%d", pSql, pVgroupInfo->vgroups[vgIndex].vgId, vgIndex);
// } else {
// int32_t numOfVgroups = (int32_t)taosArrayGetSize(pTableMetaInfo->pVgroupTables);
// assert(vgIndex >= 0 && vgIndex < numOfVgroups);
//
// SVgroupTableInfo* pTableIdList = taosArrayGet(pTableMetaInfo->pVgroupTables, vgIndex);
//
// pCancelMsg->header.vgId = htonl(pTableIdList->vgInfo.vgId);
// tscDebug("%p build cancel query msg from vgId:%d, vgIndex:%d", pSql, pTableIdList->vgInfo.vgId, vgIndex);
// }
// } else {
// STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
// pCancelMsg->header.vgId = htonl(pTableMeta->vgroupInfo.vgId);
// tscDebug("%p build cancel query msg from only one vgroup, vgId:%d", pSql, pTableMeta->vgroupInfo.vgId);
// }
//
// pSql->cmd.payloadLen = sizeof(SCancelQueryMsg);
// pSql->cmd.msgType = TSDB_MSG_TYPE_CANCEL_QUERY;
//
// pCancelMsg->header.contLen = htonl(sizeof(SCancelQueryMsg));
// return TSDB_CODE_SUCCESS;
//}
int tscAlterDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) { int tscAlterDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SSqlCmd *pCmd = &pSql->cmd; SSqlCmd *pCmd = &pSql->cmd;
pCmd->payloadLen = sizeof(SAlterDbMsg); pCmd->payloadLen = sizeof(SAlterDbMsg);
......
...@@ -900,9 +900,9 @@ int taos_validate_sql(TAOS *taos, const char *sql) { ...@@ -900,9 +900,9 @@ int taos_validate_sql(TAOS *taos, const char *sql) {
strtolower(pSql->sqlstr, sql); strtolower(pSql->sqlstr, sql);
pCmd->curSql = NULL; pCmd->curSql = NULL;
if (NULL != pCmd->pTableList) { if (NULL != pCmd->pTableBlockHashList) {
taosHashCleanup(pCmd->pTableList); taosHashCleanup(pCmd->pTableBlockHashList);
pCmd->pTableList = NULL; pCmd->pTableBlockHashList = NULL;
} }
pSql->fp = asyncCallback; pSql->fp = asyncCallback;
......
...@@ -2149,6 +2149,38 @@ void tscRetrieveDataRes(void *param, TAOS_RES *tres, int code) { ...@@ -2149,6 +2149,38 @@ void tscRetrieveDataRes(void *param, TAOS_RES *tres, int code) {
} }
} }
static bool needRetryInsert(SSqlObj* pParentObj, int32_t numOfSub) {
if (pParentObj->retry > pParentObj->maxRetry) {
tscError("%p max retry reached, abort the retry effort", pParentObj)
return false;
}
for (int32_t i = 0; i < numOfSub; ++i) {
int32_t code = pParentObj->pSubs[i]->res.code;
if (code == TSDB_CODE_SUCCESS) {
continue;
}
if (code != TSDB_CODE_TDB_TABLE_RECONFIGURE && code != TSDB_CODE_TDB_INVALID_TABLE_ID &&
code != TSDB_CODE_VND_INVALID_VGROUP_ID && code != TSDB_CODE_RPC_NETWORK_UNAVAIL &&
code != TSDB_CODE_APP_NOT_READY) {
pParentObj->res.code = code;
return false;
}
}
return true;
}
static void doFreeInsertSupporter(SSqlObj* pSqlObj) {
assert(pSqlObj != NULL && pSqlObj->subState.numOfSub > 0);
for(int32_t i = 0; i < pSqlObj->subState.numOfSub; ++i) {
SSqlObj* pSql = pSqlObj->pSubs[i];
tfree(pSql->param);
}
}
static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows) { static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows) {
SInsertSupporter *pSupporter = (SInsertSupporter *)param; SInsertSupporter *pSupporter = (SInsertSupporter *)param;
SSqlObj* pParentObj = pSupporter->pSql; SSqlObj* pParentObj = pSupporter->pSql;
...@@ -2163,23 +2195,81 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows) ...@@ -2163,23 +2195,81 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
assert(pSql != NULL && pSql->res.code == numOfRows); assert(pSql != NULL && pSql->res.code == numOfRows);
pParentObj->res.code = pSql->res.code; pParentObj->res.code = pSql->res.code;
}
tfree(pSupporter); // set the flag in the parent sqlObj
if (pSql->cmd.submitSchema) {
pParentObj->cmd.submitSchema = 1;
}
}
if (atomic_sub_fetch_32(&pParentObj->subState.numOfRemain, 1) > 0) { if (atomic_sub_fetch_32(&pParentObj->subState.numOfRemain, 1) > 0) {
return; return;
} }
tscDebug("%p Async insertion completed, total inserted:%d", pParentObj, pParentObj->res.numOfRows);
// restore user defined fp // restore user defined fp
pParentObj->fp = pParentObj->fetchFp; pParentObj->fp = pParentObj->fetchFp;
int32_t numOfSub = pParentObj->subState.numOfSub;
if (pParentObj->res.code == TSDB_CODE_SUCCESS) {
tscDebug("%p Async insertion completed, total inserted:%d", pParentObj, pParentObj->res.numOfRows);
doFreeInsertSupporter(pParentObj);
// todo remove this parameter in async callback function definition.
// all data has been sent to vnode, call user function
int32_t v = (pParentObj->res.code != TSDB_CODE_SUCCESS) ? pParentObj->res.code : (int32_t)pParentObj->res.numOfRows;
(*pParentObj->fp)(pParentObj->param, pParentObj, v);
} else {
if (!needRetryInsert(pParentObj, numOfSub)) {
doFreeInsertSupporter(pParentObj);
tscQueueAsyncRes(pParentObj);
return;
}
// todo remove this parameter in async callback function definition. int32_t numOfFailed = 0;
// all data has been sent to vnode, call user function for(int32_t i = 0; i < numOfSub; ++i) {
int32_t v = (pParentObj->res.code != TSDB_CODE_SUCCESS) ? pParentObj->res.code : (int32_t)pParentObj->res.numOfRows; SSqlObj* pSql = pParentObj->pSubs[i];
(*pParentObj->fp)(pParentObj->param, pParentObj, v); if (pSql->res.code != TSDB_CODE_SUCCESS) {
numOfFailed += 1;
// clean up tableMeta in cache
tscFreeQueryInfo(&pSql->cmd, true);
SQueryInfo* pQueryInfo = tscGetQueryInfoDetailSafely(&pSql->cmd, 0);
STableMetaInfo* pMasterTableMetaInfo = tscGetTableMetaInfoFromCmd(&pParentObj->cmd, pSql->cmd.clauseIndex, 0);
tscAddTableMetaInfo(pQueryInfo, pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
tscDebug("%p, failed sub:%d, %p", pParentObj, i, pSql);
}
}
tscError("%p Async insertion completed, total inserted:%d rows, numOfFailed:%d, numOfTotal:%d", pParentObj,
pParentObj->res.numOfRows, numOfFailed, numOfSub);
tscDebug("%p cleanup %d tableMeta in cache", pParentObj, pParentObj->cmd.numOfTables);
for(int32_t i = 0; i < pParentObj->cmd.numOfTables; ++i) {
taosCacheRelease(tscMetaCache, (void**)&(pParentObj->cmd.pTableMetaList[i]), true);
}
pParentObj->cmd.parseFinished = false;
pParentObj->subState.numOfRemain = numOfFailed;
tscResetSqlCmdObj(&pParentObj->cmd, false);
// in case of insert, re-parse the sql string and build a new submit data block, for two reasons:
// 1. the table Id (tid & uid) may have been updated, so the submit block needs to be rebuilt accordingly.
// 2. the vnode may need the schema information along with the submit block to update its local table schema.
tscDebug("%p re-parse sql to generate submit data, retry:%d", pParentObj, pParentObj->retry++);
int32_t code = tsParseSql(pParentObj, true);
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) return;
if (code != TSDB_CODE_SUCCESS) {
pParentObj->res.code = code;
doFreeInsertSupporter(pParentObj);
tscQueueAsyncRes(pParentObj);
return;
}
tscDoQuery(pParentObj);
}
} }
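needRetryInsert() is the gate for the retry branch above: the parent statement is re-parsed and re-sent only while retry stays within maxRetry and every failed sub-query carries a retriable code; otherwise the error is surfaced immediately. A standalone toy model of that decision, with made-up error codes standing in for the real TSDB_CODE_* constants:

#include <stdbool.h>
#include <stdio.h>

enum { OK = 0, TABLE_RECONFIGURE, INVALID_TABLE_ID, INVALID_VGROUP_ID, NETWORK_UNAVAIL, APP_NOT_READY, SYNTAX_ERROR };

// same shape as needRetryInsert(): stop once maxRetry is exceeded or any sub-query failed hard
static bool need_retry(const int *codes, int numOfSub, int retry, int maxRetry) {
  if (retry > maxRetry) return false;
  for (int i = 0; i < numOfSub; ++i) {
    if (codes[i] == OK) continue;
    if (codes[i] != TABLE_RECONFIGURE && codes[i] != INVALID_TABLE_ID && codes[i] != INVALID_VGROUP_ID &&
        codes[i] != NETWORK_UNAVAIL && codes[i] != APP_NOT_READY) {
      return false;  // hard error: report it to the caller instead of retrying
    }
  }
  return true;
}

int main(void) {
  int codes[] = {OK, TABLE_RECONFIGURE, OK};
  printf("%d\n", need_retry(codes, 3, 1, 5));  // 1: only retriable failures, so re-parse and resend
  codes[1] = SYNTAX_ERROR;
  printf("%d\n", need_retry(codes, 3, 1, 5));  // 0: give up and surface the error
  return 0;
}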
/** /**
...@@ -2187,19 +2277,19 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows) ...@@ -2187,19 +2277,19 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
* @param pSql * @param pSql
* @return * @return
*/ */
int32_t tscHandleInsertRetry(SSqlObj* pSql) { int32_t tscHandleInsertRetry(SSqlObj* pParent, SSqlObj* pSql) {
assert(pSql != NULL && pSql->param != NULL); assert(pSql != NULL && pSql->param != NULL);
SSqlCmd* pCmd = &pSql->cmd; // SSqlCmd* pCmd = &pSql->cmd;
SSqlRes* pRes = &pSql->res; SSqlRes* pRes = &pSql->res;
SInsertSupporter* pSupporter = (SInsertSupporter*) pSql->param; SInsertSupporter* pSupporter = (SInsertSupporter*) pSql->param;
assert(pSupporter->index < pSupporter->pSql->subState.numOfSub); assert(pSupporter->index < pSupporter->pSql->subState.numOfSub);
STableDataBlocks* pTableDataBlock = taosArrayGetP(pCmd->pDataBlocks, pSupporter->index); STableDataBlocks* pTableDataBlock = taosArrayGetP(pParent->cmd.pDataBlocks, pSupporter->index);
int32_t code = tscCopyDataBlockToPayload(pSql, pTableDataBlock); int32_t code = tscCopyDataBlockToPayload(pSql, pTableDataBlock);
// free the data block created from insert sql string // free the data block created from insert sql string
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks); // pCmd->pDataBlocks = tscDestroyBlockArrayList(pParent->cmd.pDataBlocks);
if ((pRes->code = code)!= TSDB_CODE_SUCCESS) { if ((pRes->code = code)!= TSDB_CODE_SUCCESS) {
tscQueueAsyncRes(pSql); tscQueueAsyncRes(pSql);
...@@ -2213,6 +2303,20 @@ int32_t tscHandleMultivnodeInsert(SSqlObj *pSql) { ...@@ -2213,6 +2303,20 @@ int32_t tscHandleMultivnodeInsert(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd; SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res; SSqlRes *pRes = &pSql->res;
// this is a retry of a previously failed insert
if (pSql->pSubs != NULL) {
for(int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSub = pSql->pSubs[i];
tscDebug("%p sub:%p launch sub insert, orderOfSub:%d", pSql, pSub, i);
if (pSub->res.code != TSDB_CODE_SUCCESS) {
tscHandleInsertRetry(pSql, pSub);
}
}
return TSDB_CODE_SUCCESS;
}
pSql->subState.numOfSub = (uint16_t)taosArrayGetSize(pCmd->pDataBlocks); pSql->subState.numOfSub = (uint16_t)taosArrayGetSize(pCmd->pDataBlocks);
assert(pSql->subState.numOfSub > 0); assert(pSql->subState.numOfSub > 0);
......
...@@ -333,13 +333,15 @@ void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo) { ...@@ -333,13 +333,15 @@ void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo) {
if (isNull(p, TSDB_DATA_TYPE_NCHAR)) { if (isNull(p, TSDB_DATA_TYPE_NCHAR)) {
memcpy(dst, p, varDataTLen(p)); memcpy(dst, p, varDataTLen(p));
} else { } else if (varDataLen(p) > 0) {
int32_t length = taosUcs4ToMbs(varDataVal(p), varDataLen(p), varDataVal(dst)); int32_t length = taosUcs4ToMbs(varDataVal(p), varDataLen(p), varDataVal(dst));
varDataSetLen(dst, length); varDataSetLen(dst, length);
if (length == 0) { if (length == 0) {
tscError("charset:%s to %s. val:%s convert failed.", DEFAULT_UNICODE_ENCODEC, tsCharset, (char*)p); tscError("charset:%s to %s. val:%s convert failed.", DEFAULT_UNICODE_ENCODEC, tsCharset, (char*)p);
} }
} else {
varDataSetLen(dst, 0);
} }
p += pInfo->field.bytes; p += pInfo->field.bytes;
...@@ -377,7 +379,7 @@ static void tscDestroyResPointerInfo(SSqlRes* pRes) { ...@@ -377,7 +379,7 @@ static void tscDestroyResPointerInfo(SSqlRes* pRes) {
pRes->data = NULL; // pRes->data points to the buffer of pRsp, no need to free pRes->data = NULL; // pRes->data points to the buffer of pRsp, no need to free
} }
static void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeFromCache) { void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeFromCache) {
if (pCmd == NULL || pCmd->numOfClause == 0) { if (pCmd == NULL || pCmd->numOfClause == 0) {
return; return;
} }
...@@ -403,12 +405,12 @@ void tscResetSqlCmdObj(SSqlCmd* pCmd, bool removeFromCache) { ...@@ -403,12 +405,12 @@ void tscResetSqlCmdObj(SSqlCmd* pCmd, bool removeFromCache) {
pCmd->msgType = 0; pCmd->msgType = 0;
pCmd->parseFinished = 0; pCmd->parseFinished = 0;
pCmd->autoCreated = 0; pCmd->autoCreated = 0;
pCmd->numOfTables = 0;
taosHashCleanup(pCmd->pTableList);
pCmd->pTableList = NULL; tfree(pCmd->pTableMetaList);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
pCmd->pTableBlockHashList = tscDestroyBlockHashTable(pCmd->pTableBlockHashList);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
tscFreeQueryInfo(pCmd, removeFromCache); tscFreeQueryInfo(pCmd, removeFromCache);
} }
...@@ -575,6 +577,21 @@ void* tscDestroyBlockArrayList(SArray* pDataBlockList) { ...@@ -575,6 +577,21 @@ void* tscDestroyBlockArrayList(SArray* pDataBlockList) {
return NULL; return NULL;
} }
void* tscDestroyBlockHashTable(SHashObj* pBlockHashTable) {
if (pBlockHashTable == NULL) {
return NULL;
}
STableDataBlocks** p = taosHashIterate(pBlockHashTable, NULL);
while(p) {
tscDestroyDataBlock(*p);
p = taosHashIterate(pBlockHashTable, p);
}
taosHashCleanup(pBlockHashTable);
return NULL;
}
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) { int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) {
SSqlCmd* pCmd = &pSql->cmd; SSqlCmd* pCmd = &pSql->cmd;
assert(pDataBlock->pTableMeta != NULL); assert(pDataBlock->pTableMeta != NULL);
...@@ -671,9 +688,8 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff ...@@ -671,9 +688,8 @@ int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOff
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
int32_t tscGetDataBlockFromList(void* pHashList, SArray* pDataBlockList, int64_t id, int32_t size, int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, int32_t startOffset, int32_t rowSize, const char* tableId, STableMeta* pTableMeta,
int32_t startOffset, int32_t rowSize, const char* tableId, STableMeta* pTableMeta, STableDataBlocks** dataBlocks, SArray* pBlockList) {
STableDataBlocks** dataBlocks) {
*dataBlocks = NULL; *dataBlocks = NULL;
STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pHashList, (const char*)&id, sizeof(id)); STableDataBlocks** t1 = (STableDataBlocks**)taosHashGet(pHashList, (const char*)&id, sizeof(id));
...@@ -688,7 +704,9 @@ int32_t tscGetDataBlockFromList(void* pHashList, SArray* pDataBlockList, int64_t ...@@ -688,7 +704,9 @@ int32_t tscGetDataBlockFromList(void* pHashList, SArray* pDataBlockList, int64_t
} }
taosHashPut(pHashList, (const char*)&id, sizeof(int64_t), (char*)dataBlocks, POINTER_BYTES); taosHashPut(pHashList, (const char*)&id, sizeof(int64_t), (char*)dataBlocks, POINTER_BYTES);
taosArrayPush(pDataBlockList, dataBlocks); if (pBlockList) {
taosArrayPush(pBlockList, dataBlocks);
}
} }
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
...@@ -769,22 +787,37 @@ static int32_t getRowExpandSize(STableMeta* pTableMeta) { ...@@ -769,22 +787,37 @@ static int32_t getRowExpandSize(STableMeta* pTableMeta) {
return result; return result;
} }
int32_t tscMergeTableDataBlocks(SSqlObj* pSql, SArray* pTableDataBlockList) { static void extractTableMeta(SSqlCmd* pCmd) {
pCmd->numOfTables = (int32_t) taosHashGetSize(pCmd->pTableBlockHashList);
pCmd->pTableMetaList = calloc(pCmd->numOfTables, POINTER_BYTES);
STableDataBlocks **p1 = taosHashIterate(pCmd->pTableBlockHashList, NULL);
int32_t i = 0;
while(p1) {
STableDataBlocks* pBlocks = *p1;
pCmd->pTableMetaList[i++] = taosCacheTransfer(tscMetaCache, (void**) &pBlocks->pTableMeta);
p1 = taosHashIterate(pCmd->pTableBlockHashList, p1);
}
pCmd->pTableBlockHashList = tscDestroyBlockHashTable(pCmd->pTableBlockHashList);
}
int32_t tscMergeTableDataBlocks(SSqlObj* pSql) {
SSqlCmd* pCmd = &pSql->cmd; SSqlCmd* pCmd = &pSql->cmd;
void* pVnodeDataBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false); void* pVnodeDataBlockHashList = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
SArray* pVnodeDataBlockList = taosArrayInit(8, POINTER_BYTES); SArray* pVnodeDataBlockList = taosArrayInit(8, POINTER_BYTES);
size_t total = taosArrayGetSize(pTableDataBlockList); STableDataBlocks** p = taosHashIterate(pCmd->pTableBlockHashList, NULL);
for (int32_t i = 0; i < total; ++i) {
STableDataBlocks* pOneTableBlock = *p;
while(pOneTableBlock) {
// the maximum expanded size in bytes when row-wise data is converted to SDataRow format // the maximum expanded size in bytes when row-wise data is converted to SDataRow format
STableDataBlocks* pOneTableBlock = taosArrayGetP(pTableDataBlockList, i);
int32_t expandSize = getRowExpandSize(pOneTableBlock->pTableMeta); int32_t expandSize = getRowExpandSize(pOneTableBlock->pTableMeta);
STableDataBlocks* dataBuf = NULL; STableDataBlocks* dataBuf = NULL;
int32_t ret = int32_t ret = tscGetDataBlockFromList(pVnodeDataBlockHashList, pOneTableBlock->vgId, TSDB_PAYLOAD_SIZE,
tscGetDataBlockFromList(pVnodeDataBlockHashList, pVnodeDataBlockList, pOneTableBlock->vgId, TSDB_PAYLOAD_SIZE, tsInsertHeadSize, 0, pOneTableBlock->tableId, pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList);
tsInsertHeadSize, 0, pOneTableBlock->tableId, pOneTableBlock->pTableMeta, &dataBuf);
if (ret != TSDB_CODE_SUCCESS) { if (ret != TSDB_CODE_SUCCESS) {
tscError("%p failed to prepare the data block buffer for merging table data, code:%d", pSql, ret); tscError("%p failed to prepare the data block buffer for merging table data, code:%d", pSql, ret);
taosHashCleanup(pVnodeDataBlockHashList); taosHashCleanup(pVnodeDataBlockHashList);
...@@ -839,14 +872,19 @@ int32_t tscMergeTableDataBlocks(SSqlObj* pSql, SArray* pTableDataBlockList) { ...@@ -839,14 +872,19 @@ int32_t tscMergeTableDataBlocks(SSqlObj* pSql, SArray* pTableDataBlockList) {
// the length does not include the SSubmitBlk structure // the length does not include the SSubmitBlk structure
pBlocks->dataLen = htonl(finalLen); pBlocks->dataLen = htonl(finalLen);
dataBuf->numOfTables += 1; dataBuf->numOfTables += 1;
p = taosHashIterate(pCmd->pTableBlockHashList, p);
if (p == NULL) {
break;
}
pOneTableBlock = *p;
} }
tscDestroyBlockArrayList(pTableDataBlockList); extractTableMeta(pCmd);
// free the table data blocks; // free the table data blocks;
pCmd->pDataBlocks = pVnodeDataBlockList; pCmd->pDataBlocks = pVnodeDataBlockList;
// tscFreeUnusedDataBlocks(pCmd->pDataBlocks);
taosHashCleanup(pVnodeDataBlockHashList); taosHashCleanup(pVnodeDataBlockHashList);
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
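Two hash lookups now drive the insert path: parsing groups rows per table (key: table uid), and the merge above regroups those per-table blocks per vnode (key: vgId) so that exactly one submit message is built for each vnode; extractTableMeta() then transfers the table metas out before the table-level hash is destroyed. A condensed sketch of the vnode-level lookup, using only identifiers from this hunk (row copying and error handling omitted):

// sketch: fold every per-table block into a per-vnode block before building submit messages
STableDataBlocks **p = taosHashIterate(pCmd->pTableBlockHashList, NULL);
while (p != NULL) {
  STableDataBlocks *pOneTableBlock = *p;
  STableDataBlocks *dataBuf = NULL;
  // the key is the vgId this table lives in; the block is also appended to pVnodeDataBlockList
  tscGetDataBlockFromList(pVnodeDataBlockHashList, pOneTableBlock->vgId, TSDB_PAYLOAD_SIZE, tsInsertHeadSize, 0,
                          pOneTableBlock->tableId, pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList);
  // ... rows of pOneTableBlock are copied and sorted into dataBuf here (see the hunk above) ...
  p = taosHashIterate(pCmd->pTableBlockHashList, p);
}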
...@@ -2006,7 +2044,11 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, void (*fp)(), void ...@@ -2006,7 +2044,11 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, void (*fp)(), void
pnCmd->numOfClause = 0; pnCmd->numOfClause = 0;
pnCmd->clauseIndex = 0; pnCmd->clauseIndex = 0;
pnCmd->pDataBlocks = NULL; pnCmd->pDataBlocks = NULL;
pnCmd->numOfTables = 0;
pnCmd->parseFinished = 1; pnCmd->parseFinished = 1;
pnCmd->pTableMetaList = NULL;
pnCmd->pTableBlockHashList = NULL;
if (tscAddSubqueryInfo(pnCmd) != TSDB_CODE_SUCCESS) { if (tscAddSubqueryInfo(pnCmd) != TSDB_CODE_SUCCESS) {
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY; terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
...@@ -2023,6 +2065,7 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, void (*fp)(), void ...@@ -2023,6 +2065,7 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, void (*fp)(), void
pNewQueryInfo->limit = pQueryInfo->limit; pNewQueryInfo->limit = pQueryInfo->limit;
pNewQueryInfo->slimit = pQueryInfo->slimit; pNewQueryInfo->slimit = pQueryInfo->slimit;
pNewQueryInfo->order = pQueryInfo->order; pNewQueryInfo->order = pQueryInfo->order;
pNewQueryInfo->vgroupLimit = pQueryInfo->vgroupLimit;
pNewQueryInfo->tsBuf = NULL; pNewQueryInfo->tsBuf = NULL;
pNewQueryInfo->fillType = pQueryInfo->fillType; pNewQueryInfo->fillType = pQueryInfo->fillType;
pNewQueryInfo->fillVal = NULL; pNewQueryInfo->fillVal = NULL;
......
...@@ -36,7 +36,7 @@ enum { ...@@ -36,7 +36,7 @@ enum {
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_FETCH, "fetch" ) TSDB_DEFINE_SQL_TYPE( TSDB_SQL_FETCH, "fetch" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_INSERT, "insert" ) TSDB_DEFINE_SQL_TYPE( TSDB_SQL_INSERT, "insert" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_UPDATE_TAGS_VAL, "update-tag-val" ) TSDB_DEFINE_SQL_TYPE( TSDB_SQL_UPDATE_TAGS_VAL, "update-tag-val" )
// the SQL below is for mgmt node // the SQL below is for mgmt node
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_MGMT, "mgmt" ) TSDB_DEFINE_SQL_TYPE( TSDB_SQL_MGMT, "mgmt" )
TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_DB, "create-db" ) TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_DB, "create-db" )
......
...@@ -46,7 +46,7 @@ extern int32_t tsShellActivityTimer; ...@@ -46,7 +46,7 @@ extern int32_t tsShellActivityTimer;
extern uint32_t tsMaxTmrCtrl; extern uint32_t tsMaxTmrCtrl;
extern float tsNumOfThreadsPerCore; extern float tsNumOfThreadsPerCore;
extern int32_t tsNumOfCommitThreads; extern int32_t tsNumOfCommitThreads;
extern float tsRatioOfQueryThreads; // todo remove it extern float tsRatioOfQueryCores;
extern int8_t tsDaylight; extern int8_t tsDaylight;
extern char tsTimezone[]; extern char tsTimezone[];
extern char tsLocale[]; extern char tsLocale[];
...@@ -57,7 +57,7 @@ extern char tsTempDir[]; ...@@ -57,7 +57,7 @@ extern char tsTempDir[];
//query buffer management //query buffer management
extern int32_t tsQueryBufferSize; // maximum allowed usage buffer for each data node during query processing extern int32_t tsQueryBufferSize; // maximum allowed usage buffer for each data node during query processing
extern int32_t tsHalfCoresForQuery; // only 50% will be used in query processing extern int32_t tsRetrieveBlockingModel; // only 50% will be used in query processing
// client // client
extern int32_t tsTableMetaKeepTimer; extern int32_t tsTableMetaKeepTimer;
......
...@@ -52,7 +52,7 @@ int32_t tsMaxConnections = 5000; ...@@ -52,7 +52,7 @@ int32_t tsMaxConnections = 5000;
int32_t tsShellActivityTimer = 3; // second int32_t tsShellActivityTimer = 3; // second
float tsNumOfThreadsPerCore = 1.0f; float tsNumOfThreadsPerCore = 1.0f;
int32_t tsNumOfCommitThreads = 1; int32_t tsNumOfCommitThreads = 1;
float tsRatioOfQueryThreads = 0.5f; float tsRatioOfQueryCores = 1.0f;
int8_t tsDaylight = 0; int8_t tsDaylight = 0;
char tsTimezone[TSDB_TIMEZONE_LEN] = {0}; char tsTimezone[TSDB_TIMEZONE_LEN] = {0};
char tsLocale[TSDB_LOCALE_LEN] = {0}; char tsLocale[TSDB_LOCALE_LEN] = {0};
...@@ -107,8 +107,8 @@ int64_t tsMaxRetentWindow = 24 * 3600L; // maximum time window tolerance ...@@ -107,8 +107,8 @@ int64_t tsMaxRetentWindow = 24 * 3600L; // maximum time window tolerance
// positive value (in MB) // positive value (in MB)
int32_t tsQueryBufferSize = -1; int32_t tsQueryBufferSize = -1;
// only 50% cpu will be used in query processing in dnode // in retrieve blocking model, the retrieve threads will wait for the completion of the query processing.
int32_t tsHalfCoresForQuery = 0; int32_t tsRetrieveBlockingModel = 0;
// db parameters // db parameters
int32_t tsCacheBlockSize = TSDB_DEFAULT_CACHE_BLOCK_SIZE; int32_t tsCacheBlockSize = TSDB_DEFAULT_CACHE_BLOCK_SIZE;
...@@ -206,7 +206,7 @@ int32_t tsNumOfLogLines = 10000000; ...@@ -206,7 +206,7 @@ int32_t tsNumOfLogLines = 10000000;
int32_t mDebugFlag = 131; int32_t mDebugFlag = 131;
int32_t sdbDebugFlag = 131; int32_t sdbDebugFlag = 131;
int32_t dDebugFlag = 135; int32_t dDebugFlag = 135;
int32_t vDebugFlag = 131; int32_t vDebugFlag = 135;
int32_t cDebugFlag = 131; int32_t cDebugFlag = 131;
int32_t jniDebugFlag = 131; int32_t jniDebugFlag = 131;
int32_t odbcDebugFlag = 131; int32_t odbcDebugFlag = 131;
...@@ -444,12 +444,12 @@ static void doInitGlobalConfig(void) { ...@@ -444,12 +444,12 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE; cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg); taosInitConfigOption(cfg);
cfg.option = "ratioOfQueryThreads"; cfg.option = "ratioOfQueryCores";
cfg.ptr = &tsRatioOfQueryThreads; cfg.ptr = &tsRatioOfQueryCores;
cfg.valType = TAOS_CFG_VTYPE_FLOAT; cfg.valType = TAOS_CFG_VTYPE_FLOAT;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG; cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.minValue = 0.1f; cfg.minValue = 0.0f;
cfg.maxValue = 0.9f; cfg.maxValue = 2.0f;
cfg.ptrLength = 0; cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE; cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg); taosInitConfigOption(cfg);
...@@ -887,8 +887,8 @@ static void doInitGlobalConfig(void) { ...@@ -887,8 +887,8 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_BYTE; cfg.unitType = TAOS_CFG_UTYPE_BYTE;
taosInitConfigOption(cfg); taosInitConfigOption(cfg);
cfg.option = "halfCoresForQuery"; cfg.option = "retrieveBlockingModel";
cfg.ptr = &tsHalfCoresForQuery; cfg.ptr = &tsRetrieveBlockingModel;
cfg.valType = TAOS_CFG_VTYPE_INT32; cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW; cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = 0; cfg.minValue = 0;
......
...@@ -161,7 +161,7 @@ void cqStop(void *handle) { ...@@ -161,7 +161,7 @@ void cqStop(void *handle) {
return; return;
} }
SCqContext *pContext = handle; SCqContext *pContext = handle;
cInfo("vgId:%d, stop all CQs", pContext->vgId); cDebug("vgId:%d, stop all CQs", pContext->vgId);
if (pContext->dbConn == NULL || pContext->master == 0) return; if (pContext->dbConn == NULL || pContext->master == 0) return;
pthread_mutex_lock(&pContext->mutex); pthread_mutex_lock(&pContext->mutex);
......
...@@ -24,8 +24,10 @@ extern "C" { ...@@ -24,8 +24,10 @@ extern "C" {
int32_t dnodeInitVRead(); int32_t dnodeInitVRead();
void dnodeCleanupVRead(); void dnodeCleanupVRead();
void dnodeDispatchToVReadQueue(SRpcMsg *pMsg); void dnodeDispatchToVReadQueue(SRpcMsg *pMsg);
void * dnodeAllocVReadQueue(void *pVnode); void * dnodeAllocVQueryQueue(void *pVnode);
void dnodeFreeVReadQueue(void *pRqueue); void * dnodeAllocVFetchQueue(void *pVnode);
void dnodeFreeVQueryQueue(void *pQqueue);
void dnodeFreeVFetchQueue(void *pFqueue);
#ifdef __cplusplus #ifdef __cplusplus
} }
......
...@@ -130,7 +130,7 @@ static void dnodePrintEps(SDnodeEps *eps) { ...@@ -130,7 +130,7 @@ static void dnodePrintEps(SDnodeEps *eps) {
dDebug("print dnodeEp, dnodeNum:%d", eps->dnodeNum); dDebug("print dnodeEp, dnodeNum:%d", eps->dnodeNum);
for (int32_t i = 0; i < eps->dnodeNum; i++) { for (int32_t i = 0; i < eps->dnodeNum; i++) {
SDnodeEp *ep = &eps->dnodeEps[i]; SDnodeEp *ep = &eps->dnodeEps[i];
dDebug("dnodeId:%d, dnodeFqdn:%s dnodePort:%u", ep->dnodeId, ep->dnodeFqdn, ep->dnodePort); dDebug("dnode:%d, dnodeFqdn:%s dnodePort:%u", ep->dnodeId, ep->dnodeFqdn, ep->dnodePort);
} }
} }
......
...@@ -70,8 +70,7 @@ int32_t dnodeInitShell() { ...@@ -70,8 +70,7 @@ int32_t dnodeInitShell() {
dnodeProcessShellMsgFp[TSDB_MSG_TYPE_NETWORK_TEST] = dnodeSendStartupStep; dnodeProcessShellMsgFp[TSDB_MSG_TYPE_NETWORK_TEST] = dnodeSendStartupStep;
int32_t numOfThreads = tsNumOfCores * tsNumOfThreadsPerCore; int32_t numOfThreads = (tsNumOfCores * tsNumOfThreadsPerCore) / 2.0;
numOfThreads = (int32_t) ((1.0 - tsRatioOfQueryThreads) * numOfThreads / 2.0);
if (numOfThreads < 1) { if (numOfThreads < 1) {
numOfThreads = 1; numOfThreads = 1;
} }
......
...@@ -16,12 +16,15 @@ ...@@ -16,12 +16,15 @@
#define _DEFAULT_SOURCE #define _DEFAULT_SOURCE
#include "os.h" #include "os.h"
#include "tgrant.h" #include "tgrant.h"
#include "tconfig.h"
#include "dnodeMain.h" #include "dnodeMain.h"
static void signal_handler(int32_t signum, siginfo_t *sigInfo, void *context); static void signal_handler(int32_t signum, siginfo_t *sigInfo, void *context);
static tsem_t exitSem; static tsem_t exitSem;
int32_t main(int32_t argc, char *argv[]) { int32_t main(int32_t argc, char *argv[]) {
int dump_config = 0;
// Set global configuration file // Set global configuration file
for (int32_t i = 1; i < argc; ++i) { for (int32_t i = 1; i < argc; ++i) {
if (strcmp(argv[i], "-c") == 0) { if (strcmp(argv[i], "-c") == 0) {
...@@ -35,6 +38,8 @@ int32_t main(int32_t argc, char *argv[]) { ...@@ -35,6 +38,8 @@ int32_t main(int32_t argc, char *argv[]) {
printf("'-c' requires a parameter, default:%s\n", configDir); printf("'-c' requires a parameter, default:%s\n", configDir);
exit(EXIT_FAILURE); exit(EXIT_FAILURE);
} }
} else if (strcmp(argv[i], "-C") == 0) {
dump_config = 1;
} else if (strcmp(argv[i], "-V") == 0) { } else if (strcmp(argv[i], "-V") == 0) {
#ifdef _ACCT #ifdef _ACCT
char *versionStr = "enterprise"; char *versionStr = "enterprise";
...@@ -87,6 +92,20 @@ int32_t main(int32_t argc, char *argv[]) { ...@@ -87,6 +92,20 @@ int32_t main(int32_t argc, char *argv[]) {
#endif #endif
} }
if (0 != dump_config) {
tscEmbedded = 1;
taosInitGlobalCfg();
taosReadGlobalLogCfg();
if (!taosReadGlobalCfg()) {
printf("TDengine read global config failed");
exit(EXIT_FAILURE);
}
taosDumpGlobalCfg();
exit(EXIT_SUCCESS);
}
if (tsem_init(&exitSem, 0, 0) != 0) { if (tsem_init(&exitSem, 0, 0) != 0) {
printf("failed to create exit semphore\n"); printf("failed to create exit semphore\n");
exit(EXIT_FAILURE); exit(EXIT_FAILURE);
......
...@@ -16,6 +16,7 @@ ...@@ -16,6 +16,7 @@
#define _DEFAULT_SOURCE #define _DEFAULT_SOURCE
#include "os.h" #include "os.h"
#include "tqueue.h" #include "tqueue.h"
#include "tworker.h"
#include "dnodeVMgmt.h" #include "dnodeVMgmt.h"
typedef struct { typedef struct {
...@@ -23,9 +24,8 @@ typedef struct { ...@@ -23,9 +24,8 @@ typedef struct {
char pCont[]; char pCont[];
} SMgmtMsg; } SMgmtMsg;
static taos_qset tsMgmtQset = NULL; static SWorkerPool tsVMgmtWP;
static taos_queue tsMgmtQueue = NULL; static taos_queue tsVMgmtQueue = NULL;
static pthread_t tsQthread;
static void * dnodeProcessMgmtQueue(void *param); static void * dnodeProcessMgmtQueue(void *param);
static int32_t dnodeProcessCreateVnodeMsg(SRpcMsg *pMsg); static int32_t dnodeProcessCreateVnodeMsg(SRpcMsg *pMsg);
...@@ -47,45 +47,23 @@ int32_t dnodeInitVMgmt() { ...@@ -47,45 +47,23 @@ int32_t dnodeInitVMgmt() {
int32_t code = vnodeInitMgmt(); int32_t code = vnodeInitMgmt();
if (code != TSDB_CODE_SUCCESS) return -1; if (code != TSDB_CODE_SUCCESS) return -1;
tsMgmtQset = taosOpenQset(); tsVMgmtWP.name = "vmgmt";
if (tsMgmtQset == NULL) { tsVMgmtWP.workerFp = dnodeProcessMgmtQueue;
dError("failed to create the vmgmt queue set"); tsVMgmtWP.min = 1;
return -1; tsVMgmtWP.max = 1;
} if (tWorkerInit(&tsVMgmtWP) != 0) return -1;
tsMgmtQueue = taosOpenQueue();
if (tsMgmtQueue == NULL) {
dError("failed to create the vmgmt queue");
return -1;
}
taosAddIntoQset(tsMgmtQset, tsMgmtQueue, NULL);
pthread_attr_t thAttr; tsVMgmtQueue = tWorkerAllocQueue(&tsVMgmtWP, NULL);
pthread_attr_init(&thAttr);
pthread_attr_setdetachstate(&thAttr, PTHREAD_CREATE_JOINABLE);
code = pthread_create(&tsQthread, &thAttr, dnodeProcessMgmtQueue, NULL);
pthread_attr_destroy(&thAttr);
if (code != 0) {
dError("failed to create thread to process vmgmt queue, reason:%s", strerror(errno));
return -1;
}
dInfo("dnode vmgmt is initialized"); dInfo("dnode vmgmt is initialized");
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
void dnodeCleanupVMgmt() { void dnodeCleanupVMgmt() {
if (tsMgmtQset) taosQsetThreadResume(tsMgmtQset); tWorkerFreeQueue(&tsVMgmtWP, tsVMgmtQueue);
if (tsQthread) pthread_join(tsQthread, NULL); tWorkerCleanup(&tsVMgmtWP);
if (tsMgmtQueue) taosCloseQueue(tsMgmtQueue);
if (tsMgmtQset) taosCloseQset(tsMgmtQset);
tsMgmtQset = NULL;
tsMgmtQueue = NULL;
tsVMgmtQueue = NULL;
vnodeCleanupMgmt(); vnodeCleanupMgmt();
} }
...@@ -97,7 +75,7 @@ static int32_t dnodeWriteToMgmtQueue(SRpcMsg *pMsg) { ...@@ -97,7 +75,7 @@ static int32_t dnodeWriteToMgmtQueue(SRpcMsg *pMsg) {
pMgmt->rpcMsg = *pMsg; pMgmt->rpcMsg = *pMsg;
pMgmt->rpcMsg.pCont = pMgmt->pCont; pMgmt->rpcMsg.pCont = pMgmt->pCont;
memcpy(pMgmt->pCont, pMsg->pCont, pMsg->contLen); memcpy(pMgmt->pCont, pMsg->pCont, pMsg->contLen);
taosWriteQitem(tsMgmtQueue, TAOS_QTYPE_RPC, pMgmt); taosWriteQitem(tsVMgmtQueue, TAOS_QTYPE_RPC, pMgmt);
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
...@@ -112,16 +90,18 @@ void dnodeDispatchToVMgmtQueue(SRpcMsg *pMsg) { ...@@ -112,16 +90,18 @@ void dnodeDispatchToVMgmtQueue(SRpcMsg *pMsg) {
rpcFreeCont(pMsg->pCont); rpcFreeCont(pMsg->pCont);
} }
static void *dnodeProcessMgmtQueue(void *param) { static void *dnodeProcessMgmtQueue(void *wparam) {
SMgmtMsg *pMgmt; SWorker * pWorker = wparam;
SRpcMsg * pMsg; SWorkerPool *pPool = pWorker->pPool;
SRpcMsg rsp = {0}; SMgmtMsg * pMgmt;
int32_t qtype; SRpcMsg * pMsg;
void * handle; SRpcMsg rsp = {0};
int32_t qtype;
void * handle;
while (1) { while (1) {
if (taosReadQitemFromQset(tsMgmtQset, &qtype, (void **)&pMgmt, &handle) == 0) { if (taosReadQitemFromQset(pPool->qset, &qtype, (void **)&pMgmt, &handle) == 0) {
dDebug("qset:%p, dnode mgmt got no message from qset, exit", tsMgmtQset); dDebug("qdnode mgmt got no message from qset:%p, , exit", pPool->qset);
break; break;
} }
...@@ -218,7 +198,7 @@ static int32_t dnodeProcessCreateMnodeMsg(SRpcMsg *pMsg) { ...@@ -218,7 +198,7 @@ static int32_t dnodeProcessCreateMnodeMsg(SRpcMsg *pMsg) {
SCreateMnodeMsg *pCfg = pMsg->pCont; SCreateMnodeMsg *pCfg = pMsg->pCont;
pCfg->dnodeId = htonl(pCfg->dnodeId); pCfg->dnodeId = htonl(pCfg->dnodeId);
if (pCfg->dnodeId != dnodeGetDnodeId()) { if (pCfg->dnodeId != dnodeGetDnodeId()) {
dDebug("dnodeId:%d, in create mnode msg is not equal with saved dnodeId:%d", pCfg->dnodeId, dnodeGetDnodeId()); dDebug("dnode:%d, in create mnode msg is not equal with saved dnodeId:%d", pCfg->dnodeId, dnodeGetDnodeId());
return TSDB_CODE_MND_DNODE_ID_NOT_CONFIGURED; return TSDB_CODE_MND_DNODE_ID_NOT_CONFIGURED;
} }
...@@ -227,7 +207,7 @@ static int32_t dnodeProcessCreateMnodeMsg(SRpcMsg *pMsg) { ...@@ -227,7 +207,7 @@ static int32_t dnodeProcessCreateMnodeMsg(SRpcMsg *pMsg) {
return TSDB_CODE_MND_DNODE_EP_NOT_CONFIGURED; return TSDB_CODE_MND_DNODE_EP_NOT_CONFIGURED;
} }
dDebug("dnodeId:%d, create mnode msg is received from mnodes, numOfMnodes:%d", pCfg->dnodeId, pCfg->mnodes.mnodeNum); dDebug("dnode:%d, create mnode msg is received from mnodes, numOfMnodes:%d", pCfg->dnodeId, pCfg->mnodes.mnodeNum);
for (int i = 0; i < pCfg->mnodes.mnodeNum; ++i) { for (int i = 0; i < pCfg->mnodes.mnodeNum; ++i) {
pCfg->mnodes.mnodeInfos[i].mnodeId = htonl(pCfg->mnodes.mnodeInfos[i].mnodeId); pCfg->mnodes.mnodeInfos[i].mnodeId = htonl(pCfg->mnodes.mnodeInfos[i].mnodeId);
dDebug("mnode index:%d, mnode:%d:%s", i, pCfg->mnodes.mnodeInfos[i].mnodeId, pCfg->mnodes.mnodeInfos[i].mnodeEp); dDebug("mnode index:%d, mnode:%d:%s", i, pCfg->mnodes.mnodeInfos[i].mnodeId, pCfg->mnodes.mnodeInfos[i].mnodeEp);
......
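dnodeVMgmt (and dnodeVRead below) now delegate thread and qset bookkeeping to the shared tworker module instead of hand-rolling it per queue. A condensed, hypothetical example module showing the lifecycle with only the calls introduced in this commit (exampleInit, exampleCleanup and exampleProcessFn are made-up names):

#include "os.h"
#include "tqueue.h"
#include "tworker.h"

static SWorkerPool tsExampleWP;
static taos_queue  tsExampleQueue = NULL;

// worker loop, same shape as dnodeProcessMgmtQueue: drain the pool's qset until it is resumed for exit
static void *exampleProcessFn(void *wparam) {
  SWorker *pWorker = wparam;
  SWorkerPool *pPool = pWorker->pPool;
  int32_t qtype;
  void *pMsg, *ahandle;
  while (taosReadQitemFromQset(pPool->qset, &qtype, (void **)&pMsg, &ahandle) != 0) {
    // process pMsg here
  }
  return NULL;
}

int32_t exampleInit(void) {
  tsExampleWP.name     = "example";
  tsExampleWP.workerFp = exampleProcessFn;
  tsExampleWP.min      = 1;
  tsExampleWP.max      = 1;
  if (tWorkerInit(&tsExampleWP) != 0) return -1;

  tsExampleQueue = tWorkerAllocQueue(&tsExampleWP, NULL);  // NULL: no per-queue ahandle needed here
  return 0;
}

void exampleCleanup(void) {
  tWorkerFreeQueue(&tsExampleWP, tsExampleQueue);
  tWorkerCleanup(&tsExampleWP);
  tsExampleQueue = NULL;
}

Queues allocated this way are fed with taosWriteQitem(queue, TAOS_QTYPE_RPC, item), exactly as dnodeWriteToMgmtQueue does above.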
...@@ -16,66 +16,39 @@ ...@@ -16,66 +16,39 @@
#define _DEFAULT_SOURCE #define _DEFAULT_SOURCE
#include "os.h" #include "os.h"
#include "tqueue.h" #include "tqueue.h"
#include "tworker.h"
#include "dnodeVRead.h" #include "dnodeVRead.h"
typedef struct {
pthread_t thread; // thread
int32_t workerId; // worker ID
} SVReadWorker;
typedef struct {
int32_t max; // max number of workers
int32_t min; // min number of workers
int32_t num; // current number of workers
SVReadWorker * worker;
pthread_mutex_t mutex;
} SVReadWorkerPool;
static void *dnodeProcessReadQueue(void *pWorker); static void *dnodeProcessReadQueue(void *pWorker);
// module global variable // module global variable
static SVReadWorkerPool tsVReadWP; static SWorkerPool tsVQueryWP;
static taos_qset tsVReadQset; static SWorkerPool tsVFetchWP;
int32_t dnodeInitVRead() { int32_t dnodeInitVRead() {
tsVReadQset = taosOpenQset(); const int32_t maxFetchThreads = 4;
tsVReadWP.min = tsNumOfCores;
tsVReadWP.max = tsNumOfCores * tsNumOfThreadsPerCore;
if (tsVReadWP.max <= tsVReadWP.min * 2) tsVReadWP.max = 2 * tsVReadWP.min;
tsVReadWP.worker = calloc(sizeof(SVReadWorker), tsVReadWP.max);
pthread_mutex_init(&tsVReadWP.mutex, NULL);
if (tsVReadWP.worker == NULL) return -1;
for (int i = 0; i < tsVReadWP.max; ++i) {
SVReadWorker *pWorker = tsVReadWP.worker + i;
pWorker->workerId = i;
}
dInfo("dnode vread is initialized, min worker:%d max worker:%d", tsVReadWP.min, tsVReadWP.max); // calculate the available query thread
return 0; float threadsForQuery = MAX(tsNumOfCores * tsRatioOfQueryCores, 1);
}
void dnodeCleanupVRead() { tsVQueryWP.name = "vquery";
for (int i = 0; i < tsVReadWP.max; ++i) { tsVQueryWP.workerFp = dnodeProcessReadQueue;
SVReadWorker *pWorker = tsVReadWP.worker + i; tsVQueryWP.min = (int32_t) threadsForQuery;
if (pWorker->thread) { tsVQueryWP.max = tsVQueryWP.min;
taosQsetThreadResume(tsVReadQset); if (tWorkerInit(&tsVQueryWP) != 0) return -1;
}
}
for (int i = 0; i < tsVReadWP.max; ++i) { tsVFetchWP.name = "vfetch";
SVReadWorker *pWorker = tsVReadWP.worker + i; tsVFetchWP.workerFp = dnodeProcessReadQueue;
if (pWorker->thread) { tsVFetchWP.min = MIN(maxFetchThreads, tsNumOfCores);
pthread_join(pWorker->thread, NULL); tsVFetchWP.max = tsVFetchWP.min;
} if (tWorkerInit(&tsVFetchWP) != 0) return -1;
}
free(tsVReadWP.worker); return 0;
taosCloseQset(tsVReadQset); }
pthread_mutex_destroy(&tsVReadWP.mutex);
dInfo("dnode vread is closed"); void dnodeCleanupVRead() {
tWorkerCleanup(&tsVFetchWP);
tWorkerCleanup(&tsVQueryWP);
} }
void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) { void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) {
...@@ -88,6 +61,7 @@ void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) { ...@@ -88,6 +61,7 @@ void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) {
pHead->vgId = htonl(pHead->vgId); pHead->vgId = htonl(pHead->vgId);
pHead->contLen = htonl(pHead->contLen); pHead->contLen = htonl(pHead->contLen);
assert(pHead->contLen > 0);
void *pVnode = vnodeAcquire(pHead->vgId); void *pVnode = vnodeAcquire(pHead->vgId);
if (pVnode != NULL) { if (pVnode != NULL) {
int32_t code = vnodeWriteToRQueue(pVnode, pCont, pHead->contLen, TAOS_QTYPE_RPC, pMsg); int32_t code = vnodeWriteToRQueue(pVnode, pCont, pHead->contLen, TAOS_QTYPE_RPC, pMsg);
...@@ -107,43 +81,20 @@ void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) { ...@@ -107,43 +81,20 @@ void dnodeDispatchToVReadQueue(SRpcMsg *pMsg) {
rpcFreeCont(pMsg->pCont); rpcFreeCont(pMsg->pCont);
} }
void *dnodeAllocVReadQueue(void *pVnode) { void *dnodeAllocVQueryQueue(void *pVnode) {
pthread_mutex_lock(&tsVReadWP.mutex); return tWorkerAllocQueue(&tsVQueryWP, pVnode);
taos_queue queue = taosOpenQueue(); }
if (queue == NULL) {
pthread_mutex_unlock(&tsVReadWP.mutex);
return NULL;
}
taosAddIntoQset(tsVReadQset, queue, pVnode);
// spawn a thread to process queue
if (tsVReadWP.num < tsVReadWP.max) {
do {
SVReadWorker *pWorker = tsVReadWP.worker + tsVReadWP.num;
pthread_attr_t thAttr;
pthread_attr_init(&thAttr);
pthread_attr_setdetachstate(&thAttr, PTHREAD_CREATE_JOINABLE);
if (pthread_create(&pWorker->thread, &thAttr, dnodeProcessReadQueue, pWorker) != 0) {
dError("failed to create thread to process vread vqueue since %s", strerror(errno));
}
pthread_attr_destroy(&thAttr);
tsVReadWP.num++;
dDebug("dnode vread worker:%d is launched, total:%d", pWorker->workerId, tsVReadWP.num);
} while (tsVReadWP.num < tsVReadWP.min);
}
pthread_mutex_unlock(&tsVReadWP.mutex); void *dnodeAllocVFetchQueue(void *pVnode) {
dDebug("pVnode:%p, dnode vread queue:%p is allocated", pVnode, queue); return tWorkerAllocQueue(&tsVFetchWP, pVnode);
}
return queue; void dnodeFreeVQueryQueue(void *pQqueue) {
tWorkerFreeQueue(&tsVQueryWP, pQqueue);
} }
void dnodeFreeVReadQueue(void *pRqueue) { void dnodeFreeVFetchQueue(void *pFqueue) {
taosCloseQueue(pRqueue); tWorkerFreeQueue(&tsVFetchWP, pFqueue);
} }
void dnodeSendRpcVReadRsp(void *pVnode, SVReadMsg *pRead, int32_t code) { void dnodeSendRpcVReadRsp(void *pVnode, SVReadMsg *pRead, int32_t code) {
...@@ -160,18 +111,20 @@ void dnodeSendRpcVReadRsp(void *pVnode, SVReadMsg *pRead, int32_t code) { ...@@ -160,18 +111,20 @@ void dnodeSendRpcVReadRsp(void *pVnode, SVReadMsg *pRead, int32_t code) {
void dnodeDispatchNonRspMsg(void *pVnode, SVReadMsg *pRead, int32_t code) { void dnodeDispatchNonRspMsg(void *pVnode, SVReadMsg *pRead, int32_t code) {
} }
static void *dnodeProcessReadQueue(void *pWorker) { static void *dnodeProcessReadQueue(void *wparam) {
SVReadMsg *pRead; SWorker * pWorker = wparam;
int32_t qtype; SWorkerPool *pPool = pWorker->pPool;
void * pVnode; SVReadMsg * pRead;
int32_t qtype;
void * pVnode;
while (1) { while (1) {
if (taosReadQitemFromQset(tsVReadQset, &qtype, (void **)&pRead, &pVnode) == 0) { if (taosReadQitemFromQset(pPool->qset, &qtype, (void **)&pRead, &pVnode) == 0) {
dDebug("qset:%p dnode vread got no message from qset, exiting", tsVReadQset); dDebug("dnode vquery got no message from qset:%p, exiting", pPool->qset);
break; break;
} }
dTrace("msg:%p, app:%p type:%s will be processed in vread queue, qtype:%d", pRead, pRead->rpcAhandle, dTrace("msg:%p, app:%p type:%s will be processed in vquery queue, qtype:%d", pRead, pRead->rpcAhandle,
taosMsg[pRead->msgType], qtype); taosMsg[pRead->msgType], qtype);
int32_t code = vnodeProcessRead(pVnode, pRead); int32_t code = vnodeProcessRead(pVnode, pRead);
......
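The hunks above fold the module-local SVReadWorker/SVReadWorkerPool bookkeeping into the shared tworker abstraction: dnodeInitVRead now only sizes and initializes two pools, vquery (tsNumOfCores * tsRatioOfQueryCores threads) and vfetch (at most four threads), and dnodeCleanupVRead reduces to two tWorkerCleanup calls. Below is a minimal sketch of that init/cleanup shape; the SWorkerPoolSketch type, the idleWorker body and the thread counts are illustrative stand-ins, not the actual tworker implementation.

```c
/* Minimal sketch of the shared worker-pool pattern used above
 * (SWorkerPool / tWorkerInit / tWorkerCleanup). The queue handling is
 * reduced to a stop flag so the sketch stays self-contained. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct SWorkerPoolSketch {
  const char     *name;
  int             num;        /* number of workers */
  pthread_t      *threads;
  int             stop;       /* set by cleanup */
  pthread_mutex_t lock;
  pthread_cond_t  cond;
} SWorkerPoolSketch;

static void *idleWorker(void *arg) {
  SWorkerPoolSketch *pool = arg;
  pthread_mutex_lock(&pool->lock);
  while (!pool->stop) {       /* a real pool would pop items from a queue set here */
    pthread_cond_wait(&pool->cond, &pool->lock);
  }
  pthread_mutex_unlock(&pool->lock);
  return NULL;
}

static int workerPoolInit(SWorkerPoolSketch *pool, const char *name, int num) {
  pool->name = name;
  pool->num  = num;
  pool->stop = 0;
  pool->threads = calloc(num, sizeof(pthread_t));
  if (pool->threads == NULL) return -1;
  pthread_mutex_init(&pool->lock, NULL);
  pthread_cond_init(&pool->cond, NULL);
  for (int i = 0; i < num; ++i) {
    if (pthread_create(&pool->threads[i], NULL, idleWorker, pool) != 0) return -1;
  }
  printf("%s pool initialized with %d workers\n", name, num);
  return 0;
}

static void workerPoolCleanup(SWorkerPoolSketch *pool) {
  pthread_mutex_lock(&pool->lock);
  pool->stop = 1;
  pthread_cond_broadcast(&pool->cond);
  pthread_mutex_unlock(&pool->lock);
  for (int i = 0; i < pool->num; ++i) pthread_join(pool->threads[i], NULL);
  free(pool->threads);
}

int main(void) {
  SWorkerPoolSketch vquery, vfetch;
  workerPoolInit(&vquery, "vquery", 4);   /* ~ tsNumOfCores * tsRatioOfQueryCores */
  workerPoolInit(&vfetch, "vfetch", 4);   /* MIN(4, tsNumOfCores) */
  workerPoolCleanup(&vfetch);
  workerPoolCleanup(&vquery);
  return 0;
}
```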
...@@ -49,8 +49,10 @@ void *dnodeSendCfgTableToRecv(int32_t vgId, int32_t tid); ...@@ -49,8 +49,10 @@ void *dnodeSendCfgTableToRecv(int32_t vgId, int32_t tid);
void *dnodeAllocVWriteQueue(void *pVnode); void *dnodeAllocVWriteQueue(void *pVnode);
void dnodeFreeVWriteQueue(void *pWqueue); void dnodeFreeVWriteQueue(void *pWqueue);
void dnodeSendRpcVWriteRsp(void *pVnode, void *pWrite, int32_t code); void dnodeSendRpcVWriteRsp(void *pVnode, void *pWrite, int32_t code);
void *dnodeAllocVReadQueue(void *pVnode); void *dnodeAllocVQueryQueue(void *pVnode);
void dnodeFreeVReadQueue(void *pRqueue); void *dnodeAllocVFetchQueue(void *pVnode);
void dnodeFreeVQueryQueue(void *pQqueue);
void dnodeFreeVFetchQueue(void *pFqueue);
int32_t dnodeAllocateMPeerQueue(); int32_t dnodeAllocateMPeerQueue();
void dnodeFreeMPeerQueue(); void dnodeFreeMPeerQueue();
......
...@@ -206,9 +206,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_VND_NO_SUCH_FILE_OR_DIR, 0, 0x0507, "Missing da ...@@ -206,9 +206,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_VND_NO_SUCH_FILE_OR_DIR, 0, 0x0507, "Missing da
TAOS_DEFINE_ERROR(TSDB_CODE_VND_OUT_OF_MEMORY, 0, 0x0508, "Out of memory") TAOS_DEFINE_ERROR(TSDB_CODE_VND_OUT_OF_MEMORY, 0, 0x0508, "Out of memory")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_APP_ERROR, 0, 0x0509, "Unexpected generic error in vnode") TAOS_DEFINE_ERROR(TSDB_CODE_VND_APP_ERROR, 0, 0x0509, "Unexpected generic error in vnode")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_INVALID_VRESION_FILE, 0, 0x050A, "Invalid version file") TAOS_DEFINE_ERROR(TSDB_CODE_VND_INVALID_VRESION_FILE, 0, 0x050A, "Invalid version file")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_IS_FULL, 0, 0x050B, "Vnode memory is full because commit failed") TAOS_DEFINE_ERROR(TSDB_CODE_VND_IS_FULL, 0, 0x050B, "Database memory is full for commit failed")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_IS_FLOWCTRL, 0, 0x050C, "Database memory is full for waiting commit")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_NOT_SYNCED, 0, 0x0511, "Database suspended") TAOS_DEFINE_ERROR(TSDB_CODE_VND_NOT_SYNCED, 0, 0x0511, "Database suspended")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_NO_WRITE_AUTH, 0, 0x0512, "Write operation denied") TAOS_DEFINE_ERROR(TSDB_CODE_VND_NO_WRITE_AUTH, 0, 0x0512, "Database write operation denied")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_SYNCING, 0, 0x0513, "Database is syncing") TAOS_DEFINE_ERROR(TSDB_CODE_VND_SYNCING, 0, 0x0513, "Database is syncing")
// tsdb // tsdb
......
...@@ -473,7 +473,7 @@ typedef struct { ...@@ -473,7 +473,7 @@ typedef struct {
int16_t numOfGroupCols; // num of group by columns int16_t numOfGroupCols; // num of group by columns
int16_t orderByIdx; int16_t orderByIdx;
int16_t orderType; // used in group by xx order by xxx int16_t orderType; // used in group by xx order by xxx
int64_t tableLimit; // limit the number of rows for each table, used in order by + limit in stable projection query. int64_t vgroupLimit; // limit the number of rows for each table, used in order by + limit in stable projection query.
int16_t prjOrder; // global order in super table projection query. int16_t prjOrder; // global order in super table projection query.
int64_t limit; int64_t limit;
int64_t offset; int64_t offset;
......
...@@ -34,6 +34,7 @@ typedef struct { ...@@ -34,6 +34,7 @@ typedef struct {
void * rpcHandle; void * rpcHandle;
void * rpcAhandle; void * rpcAhandle;
void * qhandle; void * qhandle;
void * pVnode;
int8_t qtype; int8_t qtype;
int8_t msgType; int8_t msgType;
SRspRet rspRet; SRspRet rspRet;
......
...@@ -45,6 +45,7 @@ typedef struct SShellArguments { ...@@ -45,6 +45,7 @@ typedef struct SShellArguments {
char* timezone; char* timezone;
bool is_raw_time; bool is_raw_time;
bool is_use_passwd; bool is_use_passwd;
bool dump_config;
char file[TSDB_FILENAME_LEN]; char file[TSDB_FILENAME_LEN];
char dir[TSDB_FILENAME_LEN]; char dir[TSDB_FILENAME_LEN];
int threadNum; int threadNum;
......
...@@ -509,7 +509,9 @@ static int dumpResultToFile(const char* fname, TAOS_RES* tres) { ...@@ -509,7 +509,9 @@ static int dumpResultToFile(const char* fname, TAOS_RES* tres) {
static void shellPrintNChar(const char *str, int length, int width) { static void shellPrintNChar(const char *str, int length, int width) {
int pos = 0, cols = 0; wchar_t tail[3];
int pos = 0, cols = 0, totalCols = 0, tailLen = 0;
while (pos < length) { while (pos < length) {
wchar_t wc; wchar_t wc;
int bytes = mbtowc(&wc, str + pos, MB_CUR_MAX); int bytes = mbtowc(&wc, str + pos, MB_CUR_MAX);
...@@ -526,15 +528,44 @@ static void shellPrintNChar(const char *str, int length, int width) { ...@@ -526,15 +528,44 @@ static void shellPrintNChar(const char *str, int length, int width) {
#else #else
int w = wcwidth(wc); int w = wcwidth(wc);
#endif #endif
if (w > 0) { if (w <= 0) {
if (width > 0 && cols + w > width) { continue;
break; }
}
if (width <= 0) {
printf("%lc", wc);
continue;
}
totalCols += w;
if (totalCols > width) {
break;
}
if (totalCols <= (width - 3)) {
printf("%lc", wc); printf("%lc", wc);
cols += w; cols += w;
} else {
tail[tailLen] = wc;
tailLen++;
} }
} }
if (totalCols > width) {
// width could be 1 or 2, so printf("...") cannot be used
for (int i = 0; i < 3; i++) {
if (cols >= width) {
break;
}
putchar('.');
++cols;
}
} else {
for (int i = 0; i < tailLen; i++) {
printf("%lc", tail[i]);
}
cols = totalCols;
}
for (; cols < width; cols++) { for (; cols < width; cols++) {
putchar(' '); putchar(' ');
} }
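The rewritten loop above no longer cuts over-wide values off silently: characters are printed while they fit inside width - 3 columns, the last few wide characters are parked in tail[], and an explicit "..." (clamped to the column width) is emitted when the value overflows. A self-contained sketch of the same flow, assuming a UTF-8 locale; the helper name and sample string are illustrative.

```c
/* Sketch of the truncation behaviour added to shellPrintNChar(): print a
 * multi-byte string inside a fixed column width and end with "..." when it
 * does not fit. */
#define _XOPEN_SOURCE 700
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

static void printNCharSketch(const char *str, int length, int width) {
  wchar_t tail[3];
  int pos = 0, cols = 0, totalCols = 0, tailLen = 0;

  while (pos < length) {
    wchar_t wc;
    int bytes = mbtowc(&wc, str + pos, MB_CUR_MAX);
    if (bytes <= 0) break;
    pos += bytes;

    int w = wcwidth(wc);
    if (w <= 0) continue;

    if (width <= 0) {               /* unlimited width: just print */
      printf("%lc", wc);
      continue;
    }

    totalCols += w;
    if (totalCols > width) break;   /* truncation needed */

    if (totalCols <= width - 3) {   /* safely inside the column */
      printf("%lc", wc);
      cols += w;
    } else {                        /* keep the last few characters pending */
      tail[tailLen++] = wc;
    }
  }

  if (totalCols > width) {
    /* width may be 1 or 2, so emit at most `width` dots */
    for (int i = 0; i < 3 && cols < width; i++, cols++) putchar('.');
  } else {
    for (int i = 0; i < tailLen; i++) printf("%lc", tail[i]);
    cols = totalCols;
  }
  for (; cols < width; cols++) putchar(' ');
}

int main(void) {
  setlocale(LC_ALL, "");
  const char *s = "时序数据库TDengine";
  printNCharSketch(s, (int)strlen(s), 10);
  printf("|\n");
  return 0;
}
```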
...@@ -656,13 +687,21 @@ static int calcColWidth(TAOS_FIELD* field, int precision) { ...@@ -656,13 +687,21 @@ static int calcColWidth(TAOS_FIELD* field, int precision) {
return MAX(25, width); return MAX(25, width);
case TSDB_DATA_TYPE_BINARY: case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_NCHAR:
if (field->bytes > tsMaxBinaryDisplayWidth) { if (field->bytes > tsMaxBinaryDisplayWidth) {
return MAX(tsMaxBinaryDisplayWidth, width); return MAX(tsMaxBinaryDisplayWidth, width);
} else { } else {
return MAX(field->bytes, width); return MAX(field->bytes, width);
} }
case TSDB_DATA_TYPE_NCHAR: {
int16_t bytes = field->bytes * TSDB_NCHAR_SIZE;
if (bytes > tsMaxBinaryDisplayWidth) {
return MAX(tsMaxBinaryDisplayWidth, width);
} else {
return MAX(bytes, width);
}
}
case TSDB_DATA_TYPE_TIMESTAMP: case TSDB_DATA_TYPE_TIMESTAMP:
if (args.is_raw_time) { if (args.is_raw_time) {
return MAX(14, width); return MAX(14, width);
......
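The calcColWidth() change gives NCHAR columns their own branch: the declared length is first converted to bytes with TSDB_NCHAR_SIZE before being clamped to tsMaxBinaryDisplayWidth, so wide-character columns no longer get the narrower BINARY sizing. A compact sketch of the resulting rule; the enum values and constants below are illustrative stand-ins, not the real configuration values.

```c
/* Sketch of the split above: BINARY columns size the display column from
 * field->bytes directly, while NCHAR columns scale by TSDB_NCHAR_SIZE first. */
#include <stdio.h>

#define MAX(a, b) (((a) > (b)) ? (a) : (b))

enum { TYPE_BINARY = 8, TYPE_NCHAR = 10 };       /* stand-ins for TSDB_DATA_TYPE_* */

static const int NCHAR_SIZE_SKETCH       = 4;    /* assumed bytes per stored NCHAR */
static const int maxBinaryDisplayWidth   = 30;   /* shell display cap */

static int calcStringColWidth(int type, int fieldBytes, int minWidth) {
  int bytes = (type == TYPE_NCHAR) ? fieldBytes * NCHAR_SIZE_SKETCH : fieldBytes;
  if (bytes > maxBinaryDisplayWidth) {
    return MAX(maxBinaryDisplayWidth, minWidth);
  }
  return MAX(bytes, minWidth);
}

int main(void) {
  printf("binary(16): %d\n", calcStringColWidth(TYPE_BINARY, 16, 10));
  printf("nchar(16):  %d\n", calcStringColWidth(TYPE_NCHAR, 16, 10));
  return 0;
}
```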
...@@ -39,6 +39,7 @@ static struct argp_option options[] = { ...@@ -39,6 +39,7 @@ static struct argp_option options[] = {
{"user", 'u', "USER", 0, "The user name to use when connecting to the server."}, {"user", 'u', "USER", 0, "The user name to use when connecting to the server."},
{"user", 'A', "Auth", 0, "The user auth to use when connecting to the server."}, {"user", 'A', "Auth", 0, "The user auth to use when connecting to the server."},
{"config-dir", 'c', "CONFIG_DIR", 0, "Configuration directory."}, {"config-dir", 'c', "CONFIG_DIR", 0, "Configuration directory."},
{"dump-config", 'C', 0, 0, "Dump configuration."},
{"commands", 's', "COMMANDS", 0, "Commands to run without enter the shell."}, {"commands", 's', "COMMANDS", 0, "Commands to run without enter the shell."},
{"raw-time", 'r', 0, 0, "Output time as uint64_t."}, {"raw-time", 'r', 0, 0, "Output time as uint64_t."},
{"file", 'f', "FILE", 0, "Script to run without enter the shell."}, {"file", 'f', "FILE", 0, "Script to run without enter the shell."},
...@@ -96,6 +97,9 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) { ...@@ -96,6 +97,9 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
tstrncpy(configDir, full_path.we_wordv[0], TSDB_FILENAME_LEN); tstrncpy(configDir, full_path.we_wordv[0], TSDB_FILENAME_LEN);
wordfree(&full_path); wordfree(&full_path);
break; break;
case 'C':
arguments->dump_config = true;
break;
case 's': case 's':
arguments->commands = arg; arguments->commands = arg;
break; break;
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include "os.h" #include "os.h"
#include "shell.h" #include "shell.h"
#include "tconfig.h"
#include "tnettest.h" #include "tnettest.h"
pthread_t pid; pthread_t pid;
...@@ -58,6 +59,7 @@ SShellArguments args = { ...@@ -58,6 +59,7 @@ SShellArguments args = {
.timezone = NULL, .timezone = NULL,
.is_raw_time = false, .is_raw_time = false,
.is_use_passwd = false, .is_use_passwd = false,
.dump_config = false,
.file = "\0", .file = "\0",
.dir = "\0", .dir = "\0",
.threadNum = 5, .threadNum = 5,
...@@ -78,6 +80,19 @@ int main(int argc, char* argv[]) { ...@@ -78,6 +80,19 @@ int main(int argc, char* argv[]) {
shellParseArgument(argc, argv, &args); shellParseArgument(argc, argv, &args);
if (args.dump_config) {
taosInitGlobalCfg();
taosReadGlobalLogCfg();
if (!taosReadGlobalCfg()) {
printf("TDengine read global config failed");
exit(EXIT_FAILURE);
}
taosDumpGlobalCfg();
exit(0);
}
if (args.netTestRole && args.netTestRole[0] != 0) { if (args.netTestRole && args.netTestRole[0] != 0) {
taos_init(); taos_init();
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen); taosNetTest(args.netTestRole, args.host, args.port, args.pktLen);
......
...@@ -35,6 +35,8 @@ void printHelp() { ...@@ -35,6 +35,8 @@ void printHelp() {
printf("%s%s%s\n", indent, indent, "The user auth to use when connecting to the server."); printf("%s%s%s\n", indent, indent, "The user auth to use when connecting to the server.");
printf("%s%s\n", indent, "-c"); printf("%s%s\n", indent, "-c");
printf("%s%s%s\n", indent, indent, "Configuration directory."); printf("%s%s%s\n", indent, indent, "Configuration directory.");
printf("%s%s\n", indent, "-C");
printf("%s%s%s\n", indent, indent, "Dump configuration.");
printf("%s%s\n", indent, "-s"); printf("%s%s\n", indent, "-s");
printf("%s%s%s\n", indent, indent, "Commands to run without enter the shell."); printf("%s%s%s\n", indent, indent, "Commands to run without enter the shell.");
printf("%s%s\n", indent, "-r"); printf("%s%s\n", indent, "-r");
...@@ -104,6 +106,8 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { ...@@ -104,6 +106,8 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
fprintf(stderr, "Option -c requires an argument\n"); fprintf(stderr, "Option -c requires an argument\n");
exit(EXIT_FAILURE); exit(EXIT_FAILURE);
} }
} else if (strcmp(argv[i], "-C") == 0) {
arguments->dump_config = true;
} else if (strcmp(argv[i], "-s") == 0) { } else if (strcmp(argv[i], "-s") == 0) {
if (i < argc - 1) { if (i < argc - 1) {
arguments->commands = argv[++i]; arguments->commands = argv[++i];
......
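Taken together, the shell changes above add a -C / --dump-config switch: the argp table (Linux) and the hand-rolled parser both set dump_config, and main() short-circuits into taosInitGlobalCfg / taosReadGlobalLogCfg / taosReadGlobalCfg / taosDumpGlobalCfg before any connection is made. A minimal sketch of that early-exit shape, with placeholder functions standing in for the real taosd configuration API:

```c
/* Sketch of the `taos -C` path: when the flag is present the shell only loads
 * and dumps the configuration, then exits before the normal connect/REPL path.
 * readGlobalCfg() and dumpGlobalCfg() are placeholders for the taosInitGlobalCfg
 * and taosDumpGlobalCfg calls shown in the diff. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static bool readGlobalCfg(void) { printf("reading taos.cfg ...\n"); return true; }
static void dumpGlobalCfg(void) { printf("firstEp  localhost:6030\n"); }

int main(int argc, char *argv[]) {
  bool dump_config = false;

  for (int i = 1; i < argc; ++i) {
    if (strcmp(argv[i], "-C") == 0) dump_config = true;   /* mirrors shellParseArgument() */
  }

  if (dump_config) {
    if (!readGlobalCfg()) {
      printf("TDengine read global config failed\n");
      exit(EXIT_FAILURE);
    }
    dumpGlobalCfg();
    exit(0);            /* never reaches the interactive shell */
  }

  printf("starting interactive shell ...\n");
  return 0;
}
```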
...@@ -14,6 +14,9 @@ ...@@ -14,6 +14,9 @@
*/ */
#include <iconv.h> #include <iconv.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include "os.h" #include "os.h"
#include "taos.h" #include "taos.h"
#include "taosdef.h" #include "taosdef.h"
...@@ -366,6 +369,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) { ...@@ -366,6 +369,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
static struct argp argp = {options, parse_opt, args_doc, doc}; static struct argp argp = {options, parse_opt, args_doc, doc};
static resultStatistics g_resultStatistics = {0}; static resultStatistics g_resultStatistics = {0};
static FILE *g_fpOfResult = NULL; static FILE *g_fpOfResult = NULL;
static int g_numOfCores = 1;
int taosDumpOut(struct arguments *arguments); int taosDumpOut(struct arguments *arguments);
int taosDumpIn(struct arguments *arguments); int taosDumpIn(struct arguments *arguments);
...@@ -378,7 +382,7 @@ int32_t taosDumpTable(char *table, char *metric, struct arguments *arguments, FI ...@@ -378,7 +382,7 @@ int32_t taosDumpTable(char *table, char *metric, struct arguments *arguments, FI
int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS* taosCon, char* dbName); int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS* taosCon, char* dbName);
int taosCheckParam(struct arguments *arguments); int taosCheckParam(struct arguments *arguments);
void taosFreeDbInfos(); void taosFreeDbInfos();
static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfThread, char *dbName); static void taosStartDumpOutWorkThreads(void* taosCon, struct arguments* args, int32_t numOfThread, char *dbName);
struct arguments tsArguments = { struct arguments tsArguments = {
// connection option // connection option
...@@ -540,6 +544,8 @@ int main(int argc, char *argv[]) { ...@@ -540,6 +544,8 @@ int main(int argc, char *argv[]) {
} }
} }
g_numOfCores = (int32_t)sysconf(_SC_NPROCESSORS_ONLN);
time_t tTime = time(NULL); time_t tTime = time(NULL);
struct tm tm = *localtime(&tTime); struct tm tm = *localtime(&tTime);
...@@ -692,64 +698,97 @@ int32_t taosSaveTableOfMetricToTempFile(TAOS *taosCon, char* metric, struct argu ...@@ -692,64 +698,97 @@ int32_t taosSaveTableOfMetricToTempFile(TAOS *taosCon, char* metric, struct argu
sprintf(tmpCommand, "select tbname from %s", metric); sprintf(tmpCommand, "select tbname from %s", metric);
TAOS_RES *result = taos_query(taosCon, tmpCommand); TAOS_RES *res = taos_query(taosCon, tmpCommand);
int32_t code = taos_errno(result); int32_t code = taos_errno(res);
if (code != 0) { if (code != 0) {
fprintf(stderr, "failed to run command %s\n", tmpCommand); fprintf(stderr, "failed to run command %s\n", tmpCommand);
free(tmpCommand); free(tmpCommand);
taos_free_result(result); taos_free_result(res);
return -1; return -1;
} }
free(tmpCommand);
TAOS_FIELD *fields = taos_fetch_fields(result); char tmpBuf[TSDB_FILENAME_LEN + 1];
memset(tmpBuf, 0, TSDB_FILENAME_LEN);
sprintf(tmpBuf, ".select-tbname.tmp");
fd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (fd == -1) {
fprintf(stderr, "failed to open temp file: %s\n", tmpBuf);
taos_free_result(res);
return -1;
}
int32_t numOfTable = 0; TAOS_FIELD *fields = taos_fetch_fields(res);
int32_t numOfThread = *totalNumOfThread;
char tmpFileName[TSDB_FILENAME_LEN + 1];
while ((row = taos_fetch_row(result)) != NULL) {
if (0 == numOfTable) {
memset(tmpFileName, 0, TSDB_FILENAME_LEN);
sprintf(tmpFileName, ".tables.tmp.%d", numOfThread);
fd = open(tmpFileName, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (fd == -1) {
fprintf(stderr, "failed to open temp file: %s\n", tmpFileName);
taos_free_result(result);
for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpFileName, ".tables.tmp.%d", loopCnt);
(void)remove(tmpFileName);
}
free(tmpCommand);
return -1;
}
numOfThread++;
}
int32_t numOfTable = 0;
while ((row = taos_fetch_row(res)) != NULL) {
memset(&tableRecord, 0, sizeof(STableRecord)); memset(&tableRecord, 0, sizeof(STableRecord));
tstrncpy(tableRecord.name, (char *)row[0], fields[0].bytes); tstrncpy(tableRecord.name, (char *)row[0], fields[0].bytes);
tstrncpy(tableRecord.metric, metric, TSDB_TABLE_NAME_LEN); tstrncpy(tableRecord.metric, metric, TSDB_TABLE_NAME_LEN);
taosWrite(fd, &tableRecord, sizeof(STableRecord)); taosWrite(fd, &tableRecord, sizeof(STableRecord));
numOfTable++; numOfTable++;
}
taos_free_result(res);
lseek(fd, 0, SEEK_SET);
int maxThreads = arguments->thread_num;
int tableOfPerFile ;
if (numOfTable <= arguments->thread_num) {
tableOfPerFile = 1;
maxThreads = numOfTable;
} else {
tableOfPerFile = numOfTable / arguments->thread_num;
if (0 != numOfTable % arguments->thread_num) {
tableOfPerFile += 1;
}
}
if (numOfTable >= arguments->table_batch) { char* tblBuf = (char*)calloc(1, tableOfPerFile * sizeof(STableRecord));
numOfTable = 0; if (NULL == tblBuf){
fprintf(stderr, "failed to calloc %" PRIzu "\n", tableOfPerFile * sizeof(STableRecord));
close(fd);
return -1;
}
int32_t numOfThread = *totalNumOfThread;
int subFd = -1;
for (; numOfThread < maxThreads; numOfThread++) {
memset(tmpBuf, 0, TSDB_FILENAME_LEN);
sprintf(tmpBuf, ".tables.tmp.%d", numOfThread);
subFd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (subFd == -1) {
fprintf(stderr, "failed to open temp file: %s\n", tmpBuf);
for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
(void)remove(tmpBuf);
}
sprintf(tmpBuf, ".select-tbname.tmp");
(void)remove(tmpBuf);
close(fd); close(fd);
fd = -1; return -1;
} }
// read tableOfPerFile table records from fd and write them to subFd
ssize_t readLen = read(fd, tblBuf, tableOfPerFile * sizeof(STableRecord));
if (readLen <= 0) {
close(subFd);
break;
}
taosWrite(subFd, tblBuf, readLen);
close(subFd);
} }
sprintf(tmpBuf, ".select-tbname.tmp");
(void)remove(tmpBuf);
if (fd >= 0) { if (fd >= 0) {
close(fd); close(fd);
fd = -1; fd = -1;
} }
taos_free_result(result);
*totalNumOfThread = numOfThread; *totalNumOfThread = numOfThread;
free(tmpCommand);
return 0; return 0;
} }
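The rewritten hunk above stops rotating temp files on a per-table counter; instead it captures the whole tbname list into one .select-tbname.tmp file and then slices it into at most thread_num .tables.tmp.N files of roughly equal size. A small sketch of the slicing arithmetic; the table counts and file names here are only examples.

```c
/* Sketch of the split logic in taosSaveTableOfMetricToTempFile(): each of the
 * maxThreads per-thread files receives ceil(numOfTable / thread_num) records. */
#include <stdio.h>

int main(void) {
  int numOfTable = 10;     /* tables returned by "select tbname from <stb>" */
  int threadNum  = 4;      /* taosdump thread_num argument */

  int maxThreads = threadNum;
  int tableOfPerFile;
  if (numOfTable <= threadNum) {
    tableOfPerFile = 1;          /* one table per worker, fewer workers than requested */
    maxThreads     = numOfTable;
  } else {
    tableOfPerFile = numOfTable / threadNum;
    if (numOfTable % threadNum != 0) tableOfPerFile += 1;   /* ceiling division */
  }

  printf("workers:%d, tables per temp file:%d\n", maxThreads, tableOfPerFile);

  /* each worker i then reads ".tables.tmp.i" and dumps those tables */
  for (int i = 0, remaining = numOfTable; i < maxThreads; ++i) {
    int batch = remaining < tableOfPerFile ? remaining : tableOfPerFile;
    printf(".tables.tmp.%d gets %d table record(s)\n", i, batch);
    remaining -= batch;
  }
  return 0;
}
```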
...@@ -946,7 +985,7 @@ int taosDumpOut(struct arguments *arguments) { ...@@ -946,7 +985,7 @@ int taosDumpOut(struct arguments *arguments) {
} }
// start multi threads to dumpout // start multi threads to dumpout
taosStartDumpOutWorkThreads(arguments, totalNumOfThread, dbInfos[0]->name); taosStartDumpOutWorkThreads(taos, arguments, totalNumOfThread, dbInfos[0]->name);
char tmpFileName[TSDB_FILENAME_LEN + 1]; char tmpFileName[TSDB_FILENAME_LEN + 1];
_clean_tmp_file: _clean_tmp_file:
...@@ -1181,34 +1220,34 @@ void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1181,34 +1220,34 @@ void* taosDumpOutWorkThreadFp(void *arg)
STableRecord tableRecord; STableRecord tableRecord;
int fd; int fd;
char tmpFileName[TSDB_FILENAME_LEN*4] = {0}; char tmpBuf[TSDB_FILENAME_LEN*4] = {0};
sprintf(tmpFileName, ".tables.tmp.%d", pThread->threadIndex); sprintf(tmpBuf, ".tables.tmp.%d", pThread->threadIndex);
fd = open(tmpFileName, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH); fd = open(tmpBuf, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (fd == -1) { if (fd == -1) {
fprintf(stderr, "taosDumpTableFp() failed to open temp file: %s\n", tmpFileName); fprintf(stderr, "taosDumpTableFp() failed to open temp file: %s\n", tmpBuf);
return NULL; return NULL;
} }
FILE *fp = NULL; FILE *fp = NULL;
memset(tmpFileName, 0, TSDB_FILENAME_LEN + 128); memset(tmpBuf, 0, TSDB_FILENAME_LEN + 128);
if (tsArguments.outpath[0] != 0) { if (tsArguments.outpath[0] != 0) {
sprintf(tmpFileName, "%s/%s.tables.%d.sql", tsArguments.outpath, pThread->dbName, pThread->threadIndex); sprintf(tmpBuf, "%s/%s.tables.%d.sql", tsArguments.outpath, pThread->dbName, pThread->threadIndex);
} else { } else {
sprintf(tmpFileName, "%s.tables.%d.sql", pThread->dbName, pThread->threadIndex); sprintf(tmpBuf, "%s.tables.%d.sql", pThread->dbName, pThread->threadIndex);
} }
fp = fopen(tmpFileName, "w"); fp = fopen(tmpBuf, "w");
if (fp == NULL) { if (fp == NULL) {
fprintf(stderr, "failed to open file %s\n", tmpFileName); fprintf(stderr, "failed to open file %s\n", tmpBuf);
close(fd); close(fd);
return NULL; return NULL;
} }
memset(tmpFileName, 0, TSDB_FILENAME_LEN); memset(tmpBuf, 0, TSDB_FILENAME_LEN);
sprintf(tmpFileName, "use %s", pThread->dbName); sprintf(tmpBuf, "use %s", pThread->dbName);
TAOS_RES* tmpResult = taos_query(pThread->taosCon, tmpFileName); TAOS_RES* tmpResult = taos_query(pThread->taosCon, tmpBuf);
int32_t code = taos_errno(tmpResult); int32_t code = taos_errno(tmpResult);
if (code != 0) { if (code != 0) {
fprintf(stderr, "invalid database %s\n", pThread->dbName); fprintf(stderr, "invalid database %s\n", pThread->dbName);
...@@ -1218,6 +1257,9 @@ void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1218,6 +1257,9 @@ void* taosDumpOutWorkThreadFp(void *arg)
return NULL; return NULL;
} }
int fileNameIndex = 1;
int tablesInOneFile = 0;
int64_t lastRowsPrint = 5000000;
fprintf(fp, "USE %s;\n\n", pThread->dbName); fprintf(fp, "USE %s;\n\n", pThread->dbName);
while (1) { while (1) {
ssize_t readLen = read(fd, &tableRecord, sizeof(STableRecord)); ssize_t readLen = read(fd, &tableRecord, sizeof(STableRecord));
...@@ -1228,6 +1270,33 @@ void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1228,6 +1270,33 @@ void* taosDumpOutWorkThreadFp(void *arg)
// TODO: sum table count and table rows by self // TODO: sum table count and table rows by self
pThread->tablesOfDumpOut++; pThread->tablesOfDumpOut++;
pThread->rowsOfDumpOut += ret; pThread->rowsOfDumpOut += ret;
if (pThread->rowsOfDumpOut >= lastRowsPrint) {
printf(" %"PRId64 " rows already be dumpout from database %s\n", pThread->rowsOfDumpOut, pThread->dbName);
lastRowsPrint += 5000000;
}
tablesInOneFile++;
if (tablesInOneFile >= tsArguments.table_batch) {
fclose(fp);
tablesInOneFile = 0;
memset(tmpBuf, 0, TSDB_FILENAME_LEN + 128);
if (tsArguments.outpath[0] != 0) {
sprintf(tmpBuf, "%s/%s.tables.%d-%d.sql", tsArguments.outpath, pThread->dbName, pThread->threadIndex, fileNameIndex);
} else {
sprintf(tmpBuf, "%s.tables.%d-%d.sql", pThread->dbName, pThread->threadIndex, fileNameIndex);
}
fileNameIndex++;
fp = fopen(tmpBuf, "w");
if (fp == NULL) {
fprintf(stderr, "failed to open file %s\n", tmpBuf);
close(fd);
taos_free_result(tmpResult);
return NULL;
}
}
} }
} }
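Inside taosDumpOutWorkThreadFp() the table_batch option now controls output rotation rather than temp-file rotation: after every table_batch dumped tables the worker closes the current .sql file and opens <db>.tables.<thread>-<n>.sql, and a progress line is printed every five million rows. A sketch of just the rotation, with the actual dump replaced by a marker line; names and counts are illustrative.

```c
/* Sketch of the per-thread output rotation added above. */
#include <stdio.h>

int main(void) {
  const char *dbName    = "power";
  int threadIndex       = 0;
  int tableBatch        = 3;     /* taosdump table_batch argument */
  int totalTables       = 8;

  char  path[256];
  int   fileNameIndex   = 1;
  int   tablesInOneFile = 0;

  snprintf(path, sizeof(path), "%s.tables.%d.sql", dbName, threadIndex);
  FILE *fp = fopen(path, "w");
  if (fp == NULL) return 1;

  for (int t = 0; t < totalTables; ++t) {
    fprintf(fp, "-- dump of table %d\n", t);   /* stands in for taosDumpTable() */

    if (++tablesInOneFile >= tableBatch) {     /* rotate to the next output file */
      fclose(fp);
      tablesInOneFile = 0;
      snprintf(path, sizeof(path), "%s.tables.%d-%d.sql", dbName, threadIndex, fileNameIndex++);
      fp = fopen(path, "w");
      if (fp == NULL) return 1;
    }
  }
  fclose(fp);
  return 0;
}
```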
...@@ -1238,7 +1307,7 @@ void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1238,7 +1307,7 @@ void* taosDumpOutWorkThreadFp(void *arg)
return NULL; return NULL;
} }
static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfThread, char *dbName) static void taosStartDumpOutWorkThreads(void* taosCon, struct arguments* args, int32_t numOfThread, char *dbName)
{ {
pthread_attr_t thattr; pthread_attr_t thattr;
SThreadParaObj *threadObj = (SThreadParaObj *)calloc(numOfThread, sizeof(SThreadParaObj)); SThreadParaObj *threadObj = (SThreadParaObj *)calloc(numOfThread, sizeof(SThreadParaObj));
...@@ -1249,12 +1318,7 @@ static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfTh ...@@ -1249,12 +1318,7 @@ static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfTh
pThread->threadIndex = t; pThread->threadIndex = t;
pThread->totalThreads = numOfThread; pThread->totalThreads = numOfThread;
tstrncpy(pThread->dbName, dbName, TSDB_TABLE_NAME_LEN); tstrncpy(pThread->dbName, dbName, TSDB_TABLE_NAME_LEN);
pThread->taosCon = taos_connect(args->host, args->user, args->password, NULL, args->port); pThread->taosCon = taosCon;
if (pThread->taosCon == NULL) {
fprintf(stderr, "ERROR: thread:%d failed connect to TDengine, reason:%s\n", pThread->threadIndex, taos_errstr(NULL));
exit(0);
}
pthread_attr_init(&thattr); pthread_attr_init(&thattr);
pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE); pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
...@@ -1273,7 +1337,6 @@ static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfTh ...@@ -1273,7 +1337,6 @@ static void taosStartDumpOutWorkThreads(struct arguments* args, int32_t numOfTh
int64_t totalRowsOfDumpOut = 0; int64_t totalRowsOfDumpOut = 0;
int64_t totalChildTblsOfDumpOut = 0; int64_t totalChildTblsOfDumpOut = 0;
for (int32_t t = 0; t < numOfThread; ++t) { for (int32_t t = 0; t < numOfThread; ++t) {
taos_close(threadObj[t].taosCon);
totalChildTblsOfDumpOut += threadObj[t].tablesOfDumpOut; totalChildTblsOfDumpOut += threadObj[t].tablesOfDumpOut;
totalRowsOfDumpOut += threadObj[t].rowsOfDumpOut; totalRowsOfDumpOut += threadObj[t].rowsOfDumpOut;
} }
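The two hunks above (and the matching change in taosStartDumpInWorkThreads further down) drop the per-thread taos_connect()/taos_close() pair: the caller's connection is passed in and shared by every worker. A minimal pthread sketch of passing one shared handle through the worker parameter block; the connection is faked with a local int so the sketch stays standalone.

```c
/* Sketch of the connection change: workers reuse the caller's handle instead
 * of opening their own connection. */
#include <pthread.h>
#include <stdio.h>

typedef struct {
  int   threadIndex;
  void *taosCon;        /* shared handle, owned by the caller */
} SThreadParaSketch;

static void *workerFp(void *arg) {
  SThreadParaSketch *p = arg;
  printf("thread %d uses shared connection %p\n", p->threadIndex, p->taosCon);
  return NULL;
}

int main(void) {
  int fakeConnection = 42;               /* stands in for the taos_connect() result */
  enum { NUM_THREADS = 4 };

  pthread_t         tids[NUM_THREADS];
  SThreadParaSketch params[NUM_THREADS];

  for (int t = 0; t < NUM_THREADS; ++t) {
    params[t].threadIndex = t;
    params[t].taosCon     = &fakeConnection;   /* no per-thread connect/close */
    pthread_create(&tids[t], NULL, workerFp, &params[t]);
  }
  for (int t = 0; t < NUM_THREADS; ++t) pthread_join(tids[t], NULL);
  return 0;
}
```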
...@@ -1398,44 +1461,81 @@ int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *tao ...@@ -1398,44 +1461,81 @@ int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *tao
return -1; return -1;
} }
TAOS_FIELD *fields = taos_fetch_fields(res); char tmpBuf[TSDB_FILENAME_LEN + 1];
memset(tmpBuf, 0, TSDB_FILENAME_LEN);
int32_t numOfTable = 0; sprintf(tmpBuf, ".show-tables.tmp");
int32_t numOfThread = 0; fd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
char tmpFileName[TSDB_FILENAME_LEN + 1]; if (fd == -1) {
while ((row = taos_fetch_row(res)) != NULL) { fprintf(stderr, "failed to open temp file: %s\n", tmpBuf);
if (0 == numOfTable) { taos_free_result(res);
memset(tmpFileName, 0, TSDB_FILENAME_LEN); return -1;
sprintf(tmpFileName, ".tables.tmp.%d", numOfThread); }
fd = open(tmpFileName, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (fd == -1) {
fprintf(stderr, "failed to open temp file: %s\n", tmpFileName);
taos_free_result(res);
for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpFileName, ".tables.tmp.%d", loopCnt);
(void)remove(tmpFileName);
}
return -1;
}
numOfThread++; TAOS_FIELD *fields = taos_fetch_fields(res);
}
int32_t numOfTable = 0;
while ((row = taos_fetch_row(res)) != NULL) {
memset(&tableRecord, 0, sizeof(STableRecord)); memset(&tableRecord, 0, sizeof(STableRecord));
tstrncpy(tableRecord.name, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX], fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes); tstrncpy(tableRecord.name, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX], fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes);
tstrncpy(tableRecord.metric, (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX], fields[TSDB_SHOW_TABLES_METRIC_INDEX].bytes); tstrncpy(tableRecord.metric, (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX], fields[TSDB_SHOW_TABLES_METRIC_INDEX].bytes);
taosWrite(fd, &tableRecord, sizeof(STableRecord)); taosWrite(fd, &tableRecord, sizeof(STableRecord));
numOfTable++; numOfTable++;
}
taos_free_result(res);
lseek(fd, 0, SEEK_SET);
if (numOfTable >= arguments->table_batch) { int maxThreads = tsArguments.thread_num;
numOfTable = 0; int tableOfPerFile ;
if (numOfTable <= tsArguments.thread_num) {
tableOfPerFile = 1;
maxThreads = numOfTable;
} else {
tableOfPerFile = numOfTable / tsArguments.thread_num;
if (0 != numOfTable % tsArguments.thread_num) {
tableOfPerFile += 1;
}
}
char* tblBuf = (char*)calloc(1, tableOfPerFile * sizeof(STableRecord));
if (NULL == tblBuf){
fprintf(stderr, "failed to calloc %" PRIzu "\n", tableOfPerFile * sizeof(STableRecord));
close(fd);
return -1;
}
int32_t numOfThread = 0;
int subFd = -1;
for (numOfThread = 0; numOfThread < maxThreads; numOfThread++) {
memset(tmpBuf, 0, TSDB_FILENAME_LEN);
sprintf(tmpBuf, ".tables.tmp.%d", numOfThread);
subFd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (subFd == -1) {
fprintf(stderr, "failed to open temp file: %s\n", tmpBuf);
for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
(void)remove(tmpBuf);
}
sprintf(tmpBuf, ".show-tables.tmp");
(void)remove(tmpBuf);
close(fd); close(fd);
fd = -1; return -1;
} }
// read tableOfPerFile table records from fd and write them to subFd
ssize_t readLen = read(fd, tblBuf, tableOfPerFile * sizeof(STableRecord));
if (readLen <= 0) {
close(subFd);
break;
}
taosWrite(subFd, tblBuf, readLen);
close(subFd);
} }
sprintf(tmpBuf, ".show-tables.tmp");
(void)remove(tmpBuf);
if (fd >= 0) { if (fd >= 0) {
close(fd); close(fd);
fd = -1; fd = -1;
...@@ -1444,10 +1544,10 @@ int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *tao ...@@ -1444,10 +1544,10 @@ int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *tao
taos_free_result(res); taos_free_result(res);
// start multi threads to dumpout // start multi threads to dumpout
taosStartDumpOutWorkThreads(arguments, numOfThread, dbInfo->name); taosStartDumpOutWorkThreads(taosCon, arguments, numOfThread, dbInfo->name);
for (int loopCnt = 0; loopCnt < numOfThread; loopCnt++) { for (int loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpFileName, ".tables.tmp.%d", loopCnt); sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
(void)remove(tmpFileName); (void)remove(tmpBuf);
} }
return 0; return 0;
...@@ -1552,8 +1652,8 @@ void taosDumpCreateMTableClause(STableDef *tableDes, char *metric, int numOfCols ...@@ -1552,8 +1652,8 @@ void taosDumpCreateMTableClause(STableDef *tableDes, char *metric, int numOfCols
} }
int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS* taosCon, char* dbName) { int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS* taosCon, char* dbName) {
/* char temp[MAX_COMMAND_SIZE] = "\0"; */ int64_t lastRowsPrint = 5000000;
int64_t totalRows = 0; int64_t totalRows = 0;
int count = 0; int count = 0;
char *pstr = NULL; char *pstr = NULL;
TAOS_ROW row = NULL; TAOS_ROW row = NULL;
...@@ -1680,9 +1780,14 @@ int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS* ...@@ -1680,9 +1780,14 @@ int taosDumpTableData(FILE *fp, char *tbname, struct arguments *arguments, TAOS*
curr_sqlstr_len += sprintf(pstr + curr_sqlstr_len, ") "); curr_sqlstr_len += sprintf(pstr + curr_sqlstr_len, ") ");
totalRows++; totalRows++;
count++; count++;
fprintf(fp, "%s", tmpBuffer); fprintf(fp, "%s", tmpBuffer);
if (totalRows >= lastRowsPrint) {
printf(" %"PRId64 " rows already be dumpout from %s.%s\n", totalRows, dbName, tbname);
lastRowsPrint += 5000000;
}
total_sqlstr_len += curr_sqlstr_len; total_sqlstr_len += curr_sqlstr_len;
...@@ -2048,6 +2153,7 @@ int taosDumpInOneFile(TAOS * taos, FILE* fp, char* fcharset, char* encode, c ...@@ -2048,6 +2153,7 @@ int taosDumpInOneFile(TAOS * taos, FILE* fp, char* fcharset, char* encode, c
return -1; return -1;
} }
int lastRowsPrint = 5000000;
int lineNo = 0; int lineNo = 0;
while ((read_len = getline(&line, &line_len, fp)) != -1) { while ((read_len = getline(&line, &line_len, fp)) != -1) {
++lineNo; ++lineNo;
...@@ -2074,7 +2180,12 @@ int taosDumpInOneFile(TAOS * taos, FILE* fp, char* fcharset, char* encode, c ...@@ -2074,7 +2180,12 @@ int taosDumpInOneFile(TAOS * taos, FILE* fp, char* fcharset, char* encode, c
} }
memset(cmd, 0, TSDB_MAX_ALLOWED_SQL_LEN); memset(cmd, 0, TSDB_MAX_ALLOWED_SQL_LEN);
cmd_len = 0; cmd_len = 0;
if (lineNo >= lastRowsPrint) {
printf(" %d lines already be executed from file %s\n", lineNo, fileName);
lastRowsPrint += 5000000;
}
} }
tfree(cmd); tfree(cmd);
...@@ -2101,7 +2212,7 @@ void* taosDumpInWorkThreadFp(void *arg) ...@@ -2101,7 +2212,7 @@ void* taosDumpInWorkThreadFp(void *arg)
return NULL; return NULL;
} }
static void taosStartDumpInWorkThreads(struct arguments *args) static void taosStartDumpInWorkThreads(void* taosCon, struct arguments *args)
{ {
pthread_attr_t thattr; pthread_attr_t thattr;
SThreadParaObj *pThread; SThreadParaObj *pThread;
...@@ -2116,11 +2227,7 @@ static void taosStartDumpInWorkThreads(struct arguments *args) ...@@ -2116,11 +2227,7 @@ static void taosStartDumpInWorkThreads(struct arguments *args)
pThread = threadObj + t; pThread = threadObj + t;
pThread->threadIndex = t; pThread->threadIndex = t;
pThread->totalThreads = totalThreads; pThread->totalThreads = totalThreads;
pThread->taosCon = taos_connect(args->host, args->user, args->password, NULL, args->port); pThread->taosCon = taosCon;
if (pThread->taosCon == NULL) {
fprintf(stderr, "ERROR: thread:%d failed connect to TDengine, reason:%s\n", pThread->threadIndex, taos_errstr(NULL));
exit(0);
}
pthread_attr_init(&thattr); pthread_attr_init(&thattr);
pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE); pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
...@@ -2169,7 +2276,7 @@ int taosDumpIn(struct arguments *arguments) { ...@@ -2169,7 +2276,7 @@ int taosDumpIn(struct arguments *arguments) {
taosDumpInOneFile(taos, fp, tsfCharset, arguments->encode, tsDbSqlFile); taosDumpInOneFile(taos, fp, tsfCharset, arguments->encode, tsDbSqlFile);
} }
taosStartDumpInWorkThreads(arguments); taosStartDumpInWorkThreads(taos, arguments);
taos_close(taos); taos_close(taos);
taosFreeSQLFiles(); taosFreeSQLFiles();
......
...@@ -43,8 +43,8 @@ void mnodeIncMnodeRef(struct SMnodeObj *pMnode); ...@@ -43,8 +43,8 @@ void mnodeIncMnodeRef(struct SMnodeObj *pMnode);
void mnodeDecMnodeRef(struct SMnodeObj *pMnode); void mnodeDecMnodeRef(struct SMnodeObj *pMnode);
char * mnodeGetMnodeRoleStr(); char * mnodeGetMnodeRoleStr();
void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet); void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet, bool redirect);
void mnodeGetMnodeEpSetForShell(SRpcEpSet *epSet); void mnodeGetMnodeEpSetForShell(SRpcEpSet *epSet, bool redirect);
char* mnodeGetMnodeMasterEp(); char* mnodeGetMnodeMasterEp();
void mnodeGetMnodeInfos(void *mnodes); void mnodeGetMnodeInfos(void *mnodes);
......
...@@ -303,7 +303,7 @@ void mnodeUpdateDnode(SDnodeObj *pDnode) { ...@@ -303,7 +303,7 @@ void mnodeUpdateDnode(SDnodeObj *pDnode) {
int32_t code = sdbUpdateRow(&row); int32_t code = sdbUpdateRow(&row);
if (code != TSDB_CODE_SUCCESS && code != TSDB_CODE_MND_ACTION_IN_PROGRESS) { if (code != TSDB_CODE_SUCCESS && code != TSDB_CODE_MND_ACTION_IN_PROGRESS) {
mError("dnodeId:%d, failed update", pDnode->dnodeId); mError("dnode:%d, failed update", pDnode->dnodeId);
} }
} }
......
...@@ -273,14 +273,14 @@ void mnodeUpdateMnodeEpSet(SMInfos *pMinfos) { ...@@ -273,14 +273,14 @@ void mnodeUpdateMnodeEpSet(SMInfos *pMinfos) {
mnodeMnodeUnLock(); mnodeMnodeUnLock();
} }
void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet) { void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet, bool redirect) {
mnodeMnodeRdLock(); mnodeMnodeRdLock();
*epSet = tsMEpForPeer; *epSet = tsMEpForPeer;
mnodeMnodeUnLock(); mnodeMnodeUnLock();
mTrace("vgId:1, mnodes epSet for peer is returned, num:%d inUse:%d", tsMEpForPeer.numOfEps, tsMEpForPeer.inUse); mTrace("vgId:1, mnodes epSet for peer is returned, num:%d inUse:%d", tsMEpForPeer.numOfEps, tsMEpForPeer.inUse);
for (int32_t i = 0; i < epSet->numOfEps; ++i) { for (int32_t i = 0; i < epSet->numOfEps; ++i) {
if (strcmp(epSet->fqdn[i], tsLocalFqdn) == 0 && htons(epSet->port[i]) == tsServerPort + TSDB_PORT_DNODEDNODE) { if (redirect && strcmp(epSet->fqdn[i], tsLocalFqdn) == 0 && htons(epSet->port[i]) == tsServerPort + TSDB_PORT_DNODEDNODE) {
epSet->inUse = (i + 1) % epSet->numOfEps; epSet->inUse = (i + 1) % epSet->numOfEps;
mTrace("vgId:1, mnode:%d, for peer ep:%s:%u, set inUse to %d", i, epSet->fqdn[i], htons(epSet->port[i]), epSet->inUse); mTrace("vgId:1, mnode:%d, for peer ep:%s:%u, set inUse to %d", i, epSet->fqdn[i], htons(epSet->port[i]), epSet->inUse);
} else { } else {
...@@ -289,14 +289,14 @@ void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet) { ...@@ -289,14 +289,14 @@ void mnodeGetMnodeEpSetForPeer(SRpcEpSet *epSet) {
} }
} }
void mnodeGetMnodeEpSetForShell(SRpcEpSet *epSet) { void mnodeGetMnodeEpSetForShell(SRpcEpSet *epSet, bool redirect) {
mnodeMnodeRdLock(); mnodeMnodeRdLock();
*epSet = tsMEpForShell; *epSet = tsMEpForShell;
mnodeMnodeUnLock(); mnodeMnodeUnLock();
mTrace("vgId:1, mnodes epSet for shell is returned, num:%d inUse:%d", tsMEpForShell.numOfEps, tsMEpForShell.inUse); mTrace("vgId:1, mnodes epSet for shell is returned, num:%d inUse:%d", tsMEpForShell.numOfEps, tsMEpForShell.inUse);
for (int32_t i = 0; i < epSet->numOfEps; ++i) { for (int32_t i = 0; i < epSet->numOfEps; ++i) {
if (strcmp(epSet->fqdn[i], tsLocalFqdn) == 0 && htons(epSet->port[i]) == tsServerPort) { if (redirect && strcmp(epSet->fqdn[i], tsLocalFqdn) == 0 && htons(epSet->port[i]) == tsServerPort) {
epSet->inUse = (i + 1) % epSet->numOfEps; epSet->inUse = (i + 1) % epSet->numOfEps;
mTrace("vgId:1, mnode:%d, for shell ep:%s:%u, set inUse to %d", i, epSet->fqdn[i], htons(epSet->port[i]), epSet->inUse); mTrace("vgId:1, mnode:%d, for shell ep:%s:%u, set inUse to %d", i, epSet->fqdn[i], htons(epSet->port[i]), epSet->inUse);
} else { } else {
......
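The redirect flag added to mnodeGetMnodeEpSetForPeer()/ForShell() separates two kinds of callers: the not-master redirect paths pass true and keep rotating inUse away from the local endpoint, while the heartbeat and connect replies pass false and return the ep set unmodified. A simplified sketch of that distinction; the types and the "is this me" check are reduced to the minimum.

```c
/* Sketch of the `redirect` parameter: the ep set is always copied out, but the
 * "skip myself" rotation only happens when building a redirect response. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct {
  int  numOfEps;
  int  inUse;
  char fqdn[3][32];
} SEpSetSketch;

static void getMnodeEpSet(const SEpSetSketch *global, const char *localFqdn,
                          SEpSetSketch *out, bool redirect) {
  *out = *global;                                   /* plain copy in all cases */
  for (int i = 0; i < out->numOfEps; ++i) {
    if (redirect && strcmp(out->fqdn[i], localFqdn) == 0) {
      out->inUse = (i + 1) % out->numOfEps;         /* steer the client away from me */
    }
  }
}

int main(void) {
  SEpSetSketch global = {.numOfEps = 3, .inUse = 0,
                         .fqdn = {"mnode1", "mnode2", "mnode3"}};
  SEpSetSketch rsp;

  getMnodeEpSet(&global, "mnode1", &rsp, false);   /* heartbeat/connect reply */
  printf("no redirect: inUse=%d\n", rsp.inUse);

  getMnodeEpSet(&global, "mnode1", &rsp, true);    /* redirect from a non-master */
  printf("redirect:    inUse=%d\n", rsp.inUse);
  return 0;
}
```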
...@@ -54,7 +54,7 @@ int32_t mnodeProcessPeerReq(SMnodeMsg *pMsg) { ...@@ -54,7 +54,7 @@ int32_t mnodeProcessPeerReq(SMnodeMsg *pMsg) {
if (!sdbIsMaster()) { if (!sdbIsMaster()) {
SMnodeRsp *rpcRsp = &pMsg->rpcRsp; SMnodeRsp *rpcRsp = &pMsg->rpcRsp;
SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet)); SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet));
mnodeGetMnodeEpSetForPeer(epSet); mnodeGetMnodeEpSetForPeer(epSet, true);
rpcRsp->rsp = epSet; rpcRsp->rsp = epSet;
rpcRsp->len = sizeof(SRpcEpSet); rpcRsp->len = sizeof(SRpcEpSet);
......
...@@ -282,27 +282,34 @@ static int32_t mnodeRetrieveConns(SShowObj *pShow, char *data, int32_t rows, voi ...@@ -282,27 +282,34 @@ static int32_t mnodeRetrieveConns(SShowObj *pShow, char *data, int32_t rows, voi
// not thread safe, need optimized // not thread safe, need optimized
int32_t mnodeSaveQueryStreamList(SConnObj *pConn, SHeartBeatMsg *pHBMsg) { int32_t mnodeSaveQueryStreamList(SConnObj *pConn, SHeartBeatMsg *pHBMsg) {
pConn->numOfQueries = htonl(pHBMsg->numOfQueries); pConn->numOfQueries = 0;
if (pConn->numOfQueries > 0) { pConn->numOfStreams = 0;
int32_t numOfQueries = htonl(pHBMsg->numOfQueries);
int32_t numOfStreams = htonl(pHBMsg->numOfStreams);
if (numOfQueries > 0) {
if (pConn->pQueries == NULL) { if (pConn->pQueries == NULL) {
pConn->pQueries = calloc(sizeof(SQueryDesc), QUERY_STREAM_SAVE_SIZE); pConn->pQueries = calloc(sizeof(SQueryDesc), QUERY_STREAM_SAVE_SIZE);
} }
int32_t saveSize = MIN(QUERY_STREAM_SAVE_SIZE, pConn->numOfQueries) * sizeof(SQueryDesc); pConn->numOfQueries = MIN(QUERY_STREAM_SAVE_SIZE, numOfQueries);
int32_t saveSize = pConn->numOfQueries * sizeof(SQueryDesc);
if (saveSize > 0 && pConn->pQueries != NULL) { if (saveSize > 0 && pConn->pQueries != NULL) {
memcpy(pConn->pQueries, pHBMsg->pData, saveSize); memcpy(pConn->pQueries, pHBMsg->pData, saveSize);
} }
} }
pConn->numOfStreams = htonl(pHBMsg->numOfStreams); if (numOfStreams > 0) {
if (pConn->numOfStreams > 0) {
if (pConn->pStreams == NULL) { if (pConn->pStreams == NULL) {
pConn->pStreams = calloc(sizeof(SStreamDesc), QUERY_STREAM_SAVE_SIZE); pConn->pStreams = calloc(sizeof(SStreamDesc), QUERY_STREAM_SAVE_SIZE);
} }
int32_t saveSize = MIN(QUERY_STREAM_SAVE_SIZE, pConn->numOfStreams) * sizeof(SStreamDesc); pConn->numOfStreams = MIN(QUERY_STREAM_SAVE_SIZE, numOfStreams);
int32_t saveSize = pConn->numOfStreams * sizeof(SStreamDesc);
if (saveSize > 0 && pConn->pStreams != NULL) { if (saveSize > 0 && pConn->pStreams != NULL) {
memcpy(pConn->pStreams, pHBMsg->pData + pConn->numOfQueries * sizeof(SQueryDesc), saveSize); memcpy(pConn->pStreams, pHBMsg->pData + numOfQueries * sizeof(SQueryDesc), saveSize);
} }
} }
......
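The rewritten mnodeSaveQueryStreamList() fixes two things: the per-connection counters are reset up front and only set to the clamped (at most QUERY_STREAM_SAVE_SIZE) values, and the stream descriptors are located at numOfQueries * sizeof(SQueryDesc) into pData, i.e. using the client-reported count rather than the clamped one, so the copy no longer drifts when a connection reports more queries than can be saved. A self-contained sketch of that copy with placeholder descriptor types and sizes.

```c
/* Sketch of the clamped copy with the correct stream offset. */
#include <stdio.h>
#include <string.h>

#define QUERY_STREAM_SAVE_SIZE 16
#define MIN(a, b) (((a) < (b)) ? (a) : (b))

typedef struct { char sql[32]; } SQueryDescSketch;
typedef struct { char sql[32]; } SStreamDescSketch;

int main(void) {
  int numOfQueries = 20;   /* as reported by the client, already byte-swapped */
  int numOfStreams = 2;

  /* heartbeat payload: query descriptors first, then stream descriptors */
  char pData[20 * sizeof(SQueryDescSketch) + 2 * sizeof(SStreamDescSketch)];
  memset(pData, 0, sizeof(pData));

  SQueryDescSketch  savedQueries[QUERY_STREAM_SAVE_SIZE];
  SStreamDescSketch savedStreams[QUERY_STREAM_SAVE_SIZE];

  int savedNumOfQueries = MIN(QUERY_STREAM_SAVE_SIZE, numOfQueries);
  memcpy(savedQueries, pData, savedNumOfQueries * sizeof(SQueryDescSketch));

  int savedNumOfStreams = MIN(QUERY_STREAM_SAVE_SIZE, numOfStreams);
  /* offset uses the full numOfQueries, not the clamped count */
  memcpy(savedStreams, pData + numOfQueries * sizeof(SQueryDescSketch),
         savedNumOfStreams * sizeof(SStreamDescSketch));

  printf("saved %d queries and %d streams\n", savedNumOfQueries, savedNumOfStreams);
  return 0;
}
```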
...@@ -50,7 +50,7 @@ int32_t mnodeProcessRead(SMnodeMsg *pMsg) { ...@@ -50,7 +50,7 @@ int32_t mnodeProcessRead(SMnodeMsg *pMsg) {
if (!sdbIsMaster()) { if (!sdbIsMaster()) {
SMnodeRsp *rpcRsp = &pMsg->rpcRsp; SMnodeRsp *rpcRsp = &pMsg->rpcRsp;
SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet)); SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet));
mnodeGetMnodeEpSetForShell(epSet); mnodeGetMnodeEpSetForShell(epSet, true);
rpcRsp->rsp = epSet; rpcRsp->rsp = epSet;
rpcRsp->len = sizeof(SRpcEpSet); rpcRsp->len = sizeof(SRpcEpSet);
......
...@@ -282,7 +282,7 @@ static int32_t mnodeProcessHeartBeatMsg(SMnodeMsg *pMsg) { ...@@ -282,7 +282,7 @@ static int32_t mnodeProcessHeartBeatMsg(SMnodeMsg *pMsg) {
pRsp->onlineDnodes = htonl(mnodeGetOnlineDnodesNum()); pRsp->onlineDnodes = htonl(mnodeGetOnlineDnodesNum());
pRsp->totalDnodes = htonl(mnodeGetDnodesNum()); pRsp->totalDnodes = htonl(mnodeGetDnodesNum());
mnodeGetMnodeEpSetForShell(&pRsp->epSet); mnodeGetMnodeEpSetForShell(&pRsp->epSet, false);
pMsg->rpcRsp.rsp = pRsp; pMsg->rpcRsp.rsp = pRsp;
pMsg->rpcRsp.len = sizeof(SHeartBeatRsp); pMsg->rpcRsp.len = sizeof(SHeartBeatRsp);
...@@ -349,7 +349,7 @@ static int32_t mnodeProcessConnectMsg(SMnodeMsg *pMsg) { ...@@ -349,7 +349,7 @@ static int32_t mnodeProcessConnectMsg(SMnodeMsg *pMsg) {
pConnectRsp->writeAuth = pUser->writeAuth; pConnectRsp->writeAuth = pUser->writeAuth;
pConnectRsp->superAuth = pUser->superAuth; pConnectRsp->superAuth = pUser->superAuth;
mnodeGetMnodeEpSetForShell(&pConnectRsp->epSet); mnodeGetMnodeEpSetForShell(&pConnectRsp->epSet, false);
connect_over: connect_over:
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
......
...@@ -315,7 +315,8 @@ void mnodeUpdateVgroupStatus(SVgObj *pVgroup, SDnodeObj *pDnode, SVnodeLoad *pVl ...@@ -315,7 +315,8 @@ void mnodeUpdateVgroupStatus(SVgObj *pVgroup, SDnodeObj *pDnode, SVnodeLoad *pVl
for (int32_t i = 0; i < pVgroup->numOfVnodes; ++i) { for (int32_t i = 0; i < pVgroup->numOfVnodes; ++i) {
SVnodeGid *pVgid = &pVgroup->vnodeGid[i]; SVnodeGid *pVgid = &pVgroup->vnodeGid[i];
if (pVgid->pDnode == pDnode) { if (pVgid->pDnode == pDnode) {
mTrace("dnode:%d, receive status from dnode, vgId:%d status is %d:%s", pDnode->dnodeId, pVgroup->vgId, pVgid->role, syncRole[pVgid->role]); mTrace("dnode:%d, receive status from dnode, vgId:%d status:%s last:%s", pDnode->dnodeId, pVgroup->vgId,
syncRole[pVload->role], syncRole[pVgid->role]);
pVgid->role = pVload->role; pVgid->role = pVload->role;
if (pVload->role == TAOS_SYNC_ROLE_MASTER) { if (pVload->role == TAOS_SYNC_ROLE_MASTER) {
pVgroup->inUse = i; pVgroup->inUse = i;
......
...@@ -50,7 +50,7 @@ int32_t mnodeProcessWrite(SMnodeMsg *pMsg) { ...@@ -50,7 +50,7 @@ int32_t mnodeProcessWrite(SMnodeMsg *pMsg) {
if (!sdbIsMaster()) { if (!sdbIsMaster()) {
SMnodeRsp *rpcRsp = &pMsg->rpcRsp; SMnodeRsp *rpcRsp = &pMsg->rpcRsp;
SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet)); SRpcEpSet *epSet = rpcMallocCont(sizeof(SRpcEpSet));
mnodeGetMnodeEpSetForShell(epSet); mnodeGetMnodeEpSetForShell(epSet, true);
rpcRsp->rsp = epSet; rpcRsp->rsp = epSet;
rpcRsp->len = sizeof(SRpcEpSet); rpcRsp->len = sizeof(SRpcEpSet);
......
...@@ -140,6 +140,11 @@ typedef struct SQueryCostInfo { ...@@ -140,6 +140,11 @@ typedef struct SQueryCostInfo {
uint64_t numOfTimeWindows; uint64_t numOfTimeWindows;
} SQueryCostInfo; } SQueryCostInfo;
typedef struct {
int64_t vgroupLimit;
int64_t ts;
} SOrderedPrjQueryInfo;
typedef struct SQuery { typedef struct SQuery {
int16_t numOfCols; int16_t numOfCols;
int16_t numOfTags; int16_t numOfTags;
...@@ -167,6 +172,7 @@ typedef struct SQuery { ...@@ -167,6 +172,7 @@ typedef struct SQuery {
tFilePage** sdata; tFilePage** sdata;
STableQueryInfo* current; STableQueryInfo* current;
SOrderedPrjQueryInfo prjInfo; // limit value for each vgroup, only available in global order projection query.
SSingleColumnFilterInfo* pFilterInfo; SSingleColumnFilterInfo* pFilterInfo;
} SQuery; } SQuery;
...@@ -185,7 +191,7 @@ typedef struct SQueryRuntimeEnv { ...@@ -185,7 +191,7 @@ typedef struct SQueryRuntimeEnv {
void* pQueryHandle; void* pQueryHandle;
void* pSecQueryHandle; // another thread for void* pSecQueryHandle; // another thread for
bool stableQuery; // super table query or not bool stableQuery; // super table query or not
bool topBotQuery; // false bool topBotQuery; // TODO used bitwise flag
bool groupbyNormalCol; // denote if this is a groupby normal column query bool groupbyNormalCol; // denote if this is a groupby normal column query
bool hasTagResults; // if there are tag values in final result or not bool hasTagResults; // if there are tag values in final result or not
bool timeWindowInterpo;// if the time window start/end required interpolation bool timeWindowInterpo;// if the time window start/end required interpolation
...@@ -210,14 +216,13 @@ enum { ...@@ -210,14 +216,13 @@ enum {
typedef struct SQInfo { typedef struct SQInfo {
void* signature; void* signature;
int32_t code; // error code to returned to client int32_t code; // error code to returned to client
int64_t owner; // if it is in execution int64_t owner; // if it is in execution
void* tsdb; void* tsdb;
SMemRef memRef; SMemRef memRef;
int32_t vgId; int32_t vgId;
STableGroupInfo tableGroupInfo; // table <tid, last_key> list SArray<STableKeyInfo> STableGroupInfo tableGroupInfo; // table <tid, last_key> list SArray<STableKeyInfo>
STableGroupInfo tableqinfoGroupInfo; // this is a group array list, including SArray<STableQueryInfo*> structure STableGroupInfo tableqinfoGroupInfo; // this is a group array list, including SArray<STableQueryInfo*> structure
SQueryRuntimeEnv runtimeEnv; SQueryRuntimeEnv runtimeEnv;
// SArray* arrTableIdInfo;
SHashObj* arrTableIdInfo; SHashObj* arrTableIdInfo;
int32_t groupIndex; int32_t groupIndex;
...@@ -233,6 +238,7 @@ typedef struct SQInfo { ...@@ -233,6 +238,7 @@ typedef struct SQInfo {
tsem_t ready; tsem_t ready;
int32_t dataReady; // denote if query result is ready or not int32_t dataReady; // denote if query result is ready or not
void* rspContext; // response context void* rspContext; // response context
int64_t startExecTs; // start to exec timestamp
} SQInfo; } SQInfo;
#endif // TDENGINE_QUERYEXECUTOR_H #endif // TDENGINE_QUERYEXECUTOR_H
...@@ -250,10 +250,9 @@ enum { ...@@ -250,10 +250,9 @@ enum {
}; };
typedef struct STwaInfo { typedef struct STwaInfo {
TSKEY lastKey;
int8_t hasResult; // flag to denote has value int8_t hasResult; // flag to denote has value
double dOutput; double dOutput;
double lastValue; SPoint1 p;
STimeWindow win; STimeWindow win;
} STwaInfo; } STwaInfo;
......
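STwaInfo now carries the previous sample as an SPoint1 rather than separate lastKey/lastValue fields, which is the natural shape for a time-weighted average: the function integrates the piecewise-linear signal between consecutive points and divides by the elapsed time. A small sketch of that computation; the SPoint1 layout shown is an assumption inferred from the field name, not the definition in the query headers.

```c
/* Sketch of a time-weighted average over (timestamp, value) points using the
 * trapezoid rule between consecutive samples. */
#include <stdint.h>
#include <stdio.h>

typedef struct SPoint1 {
  int64_t key;   /* timestamp, e.g. in ms (assumed layout) */
  double  val;
} SPoint1;

static double twa(const SPoint1 *p, int num) {
  if (num < 2) return (num == 1) ? p[0].val : 0;

  double area = 0;
  for (int i = 1; i < num; ++i) {
    /* trapezoid between p[i-1] and p[i] */
    area += (p[i].val + p[i - 1].val) * 0.5 * (double)(p[i].key - p[i - 1].key);
  }
  return area / (double)(p[num - 1].key - p[0].key);
}

int main(void) {
  SPoint1 samples[] = {{0, 10.0}, {1000, 10.0}, {2000, 30.0}, {4000, 30.0}};
  printf("twa = %.2f\n", twa(samples, 4));   /* weighted toward the longer 30.0 stretch */
  return 0;
}
```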
...@@ -128,11 +128,14 @@ static UNUSED_FUNC void* u_realloc(void* p, size_t __size) { ...@@ -128,11 +128,14 @@ static UNUSED_FUNC void* u_realloc(void* p, size_t __size) {
#define CLEAR_QUERY_STATUS(q, st) ((q)->status &= (~(st))) #define CLEAR_QUERY_STATUS(q, st) ((q)->status &= (~(st)))
#define GET_NUM_OF_TABLEGROUP(q) taosArrayGetSize((q)->tableqinfoGroupInfo.pGroupList) #define GET_NUM_OF_TABLEGROUP(q) taosArrayGetSize((q)->tableqinfoGroupInfo.pGroupList)
#define GET_TABLEGROUP(q, _index) ((SArray*) taosArrayGetP((q)->tableqinfoGroupInfo.pGroupList, (_index))) #define GET_TABLEGROUP(q, _index) ((SArray*) taosArrayGetP((q)->tableqinfoGroupInfo.pGroupList, (_index)))
#define QUERY_IS_INTERVAL_QUERY(_q) ((_q)->interval.interval > 0)
static void setQueryStatus(SQuery *pQuery, int8_t status); static void setQueryStatus(SQuery *pQuery, int8_t status);
static void finalizeQueryResult(SQueryRuntimeEnv *pRuntimeEnv); static void finalizeQueryResult(SQueryRuntimeEnv *pRuntimeEnv);
#define QUERY_IS_INTERVAL_QUERY(_q) ((_q)->interval.interval > 0) static int32_t getMaximumIdleDurationSec() {
return tsShellActivityTimer * 2;
}
static void getNextTimeWindow(SQuery* pQuery, STimeWindow* tw) { static void getNextTimeWindow(SQuery* pQuery, STimeWindow* tw) {
int32_t factor = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order); int32_t factor = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
...@@ -700,65 +703,60 @@ static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_se ...@@ -700,65 +703,60 @@ static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_se
return forwardStep; return forwardStep;
} }
static int32_t updateResultRowCurrentIndex(SResultRowInfo* pWindowResInfo, TSKEY lastKey, bool ascQuery) {
int32_t i = 0;
int64_t skey = TSKEY_INITIAL_VAL;
int32_t numOfClosed = 0;
for (i = 0; i < pWindowResInfo->size; ++i) {
SResultRow *pResult = pWindowResInfo->pResult[i];
if (pResult->closed) {
numOfClosed += 1;
continue;
}
TSKEY ekey = pResult->win.ekey;
if ((ekey <= lastKey && ascQuery) || (pResult->win.skey >= lastKey && !ascQuery)) {
closeTimeWindow(pWindowResInfo, i);
} else {
skey = pResult->win.skey;
break;
}
}
// all windows are closed, set the last one to be the skey
if (skey == TSKEY_INITIAL_VAL) {
assert(i == pWindowResInfo->size);
pWindowResInfo->curIndex = pWindowResInfo->size - 1;
} else {
pWindowResInfo->curIndex = i;
pWindowResInfo->prevSKey = pWindowResInfo->pResult[pWindowResInfo->curIndex]->win.skey;
}
return numOfClosed;
}
/** /**
* NOTE: the query status only set for the first scan of master scan. * NOTE: the query status only set for the first scan of master scan.
*/ */
static int32_t doCheckQueryCompleted(SQueryRuntimeEnv *pRuntimeEnv, TSKEY lastKey, SResultRowInfo *pWindowResInfo) { static int32_t doCheckQueryCompleted(SQueryRuntimeEnv *pRuntimeEnv, TSKEY lastKey, SResultRowInfo *pWindowResInfo) {
SQuery *pQuery = pRuntimeEnv->pQuery; SQuery *pQuery = pRuntimeEnv->pQuery;
if (pRuntimeEnv->scanFlag != MASTER_SCAN) { if (pRuntimeEnv->scanFlag != MASTER_SCAN || pWindowResInfo->size == 0) {
return pWindowResInfo->size;
}
// for group by normal column query, close time window and return.
if (!QUERY_IS_INTERVAL_QUERY(pQuery)) {
closeAllTimeWindow(pWindowResInfo);
return pWindowResInfo->size; return pWindowResInfo->size;
} }
// no qualified results exist, abort check // no qualified results exist, abort check
int32_t numOfClosed = 0; int32_t numOfClosed = 0;
bool ascQuery = QUERY_IS_ASC_QUERY(pQuery);
if (pWindowResInfo->size == 0) {
return pWindowResInfo->size;
}
// query completed // query completed
if ((lastKey >= pQuery->current->win.ekey && QUERY_IS_ASC_QUERY(pQuery)) || if ((lastKey >= pQuery->current->win.ekey && ascQuery) || (lastKey <= pQuery->current->win.ekey && (!ascQuery))) {
(lastKey <= pQuery->current->win.ekey && !QUERY_IS_ASC_QUERY(pQuery))) {
closeAllTimeWindow(pWindowResInfo); closeAllTimeWindow(pWindowResInfo);
pWindowResInfo->curIndex = pWindowResInfo->size - 1; pWindowResInfo->curIndex = pWindowResInfo->size - 1;
setQueryStatus(pQuery, QUERY_COMPLETED | QUERY_RESBUF_FULL); setQueryStatus(pQuery, QUERY_COMPLETED | QUERY_RESBUF_FULL);
} else { // set the current index to be the last unclosed window } else { // set the current index to be the last unclosed window
int32_t i = 0; numOfClosed = updateResultRowCurrentIndex(pWindowResInfo, lastKey, ascQuery);
int64_t skey = TSKEY_INITIAL_VAL;
for (i = 0; i < pWindowResInfo->size; ++i) {
SResultRow *pResult = pWindowResInfo->pResult[i];
if (pResult->closed) {
numOfClosed += 1;
continue;
}
TSKEY ekey = pResult->win.ekey;
if ((ekey <= lastKey && QUERY_IS_ASC_QUERY(pQuery)) ||
(pResult->win.skey >= lastKey && !QUERY_IS_ASC_QUERY(pQuery))) {
closeTimeWindow(pWindowResInfo, i);
} else {
skey = pResult->win.skey;
break;
}
}
// all windows are closed, set the last one to be the skey
if (skey == TSKEY_INITIAL_VAL) {
assert(i == pWindowResInfo->size);
pWindowResInfo->curIndex = pWindowResInfo->size - 1;
} else {
pWindowResInfo->curIndex = i;
}
pWindowResInfo->prevSKey = pWindowResInfo->pResult[pWindowResInfo->curIndex]->win.skey;
// the number of completed slots are larger than the threshold, return current generated results to client. // the number of completed slots are larger than the threshold, return current generated results to client.
if (numOfClosed > pQuery->rec.threshold) { if (numOfClosed > pQuery->rec.threshold) {
...@@ -1047,24 +1045,6 @@ static void setNotInterpoWindowKey(SQLFunctionCtx* pCtx, int32_t numOfOutput, in ...@@ -1047,24 +1045,6 @@ static void setNotInterpoWindowKey(SQLFunctionCtx* pCtx, int32_t numOfOutput, in
} }
} }
//static double getTSWindowInterpoVal(SColumnInfoData* pColInfo, int16_t srcColIndex, int16_t rowIndex, TSKEY key, char** prevRow, TSKEY* tsCols, int32_t step) {
// TSKEY start = tsCols[rowIndex];
// TSKEY prevTs = (rowIndex == 0)? *(TSKEY *) prevRow[0] : tsCols[rowIndex - step];
//
// double v1 = 0, v2 = 0, v = 0;
// char *prevVal = (rowIndex == 0)? prevRow[srcColIndex] : ((char*)pColInfo->pData) + (rowIndex - step) * pColInfo->info.bytes;
//
// GET_TYPED_DATA(v1, double, pColInfo->info.type, (char *)prevVal);
// GET_TYPED_DATA(v2, double, pColInfo->info.type, (char *)pColInfo->pData + rowIndex * pColInfo->info.bytes);
//
// SPoint point1 = (SPoint){.key = prevTs, .val = &v1};
// SPoint point2 = (SPoint){.key = start, .val = &v2};
// SPoint point = (SPoint){.key = key, .val = &v};
// taosGetLinearInterpolationVal(TSDB_DATA_TYPE_DOUBLE, &point1, &point2, &point);
//
// return v;
//}
// window start key interpolation
static bool setTimeWindowInterpolationStartTs(SQueryRuntimeEnv* pRuntimeEnv, int32_t pos, int32_t numOfRows, SArray* pDataBlock, TSKEY* tsCols, STimeWindow* win) {
  SQuery* pQuery = pRuntimeEnv->pQuery;

@@ -1235,6 +1215,8 @@ static void blockwiseApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SDataStatis *
      if (interp) {
        setResultRowInterpo(pResult, RESULT_ROW_START_INTERP);
      }
    } else {
      setNotInterpoWindowKey(pRuntimeEnv->pCtx, pQuery->numOfOutput, RESULT_ROW_START_INTERP);
    }

    done = resultRowInterpolated(pResult, RESULT_ROW_END_INTERP);
@@ -1246,6 +1228,8 @@ static void blockwiseApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SDataStatis *
      if (interp) {
        setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
      }
    } else {
      setNotInterpoWindowKey(pRuntimeEnv->pCtx, pQuery->numOfOutput, RESULT_ROW_END_INTERP);
    }
  }

@@ -1286,6 +1270,8 @@ static void blockwiseApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SDataStatis *
      if (interp) {
        setResultRowInterpo(pResult, RESULT_ROW_START_INTERP);
      }
    } else {
      setNotInterpoWindowKey(pRuntimeEnv->pCtx, pQuery->numOfOutput, RESULT_ROW_START_INTERP);
    }

    done = resultRowInterpolated(pResult, RESULT_ROW_END_INTERP);
@@ -1296,6 +1282,8 @@ static void blockwiseApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SDataStatis *
      if (interp) {
        setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
      }
    } else {
      setNotInterpoWindowKey(pRuntimeEnv->pCtx, pQuery->numOfOutput, RESULT_ROW_END_INTERP);
    }
  }

@@ -1799,9 +1787,12 @@ static int32_t tableApplyFunctionsOnBlock(SQueryRuntimeEnv *pRuntimeEnv, SDataBl
  // interval query with limit applied
  int32_t numOfRes = 0;
  if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
    numOfRes = doCheckQueryCompleted(pRuntimeEnv, lastKey, pWindowResInfo);
  } else if (pRuntimeEnv->groupbyNormalCol) {
    closeAllTimeWindow(pWindowResInfo);
    numOfRes = pWindowResInfo->size;
  } else { // projection query
    numOfRes = (int32_t)getNumOfResult(pRuntimeEnv);

    // update the number of output result
@@ -2138,8 +2129,31 @@ static void teardownQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv) {
  pRuntimeEnv->pool = destroyResultRowPool(pRuntimeEnv->pool);
}

static bool needBuildResAfterQueryComplete(SQInfo* pQInfo) {
  return pQInfo->rspContext != NULL;
}

#define IS_QUERY_KILLED(_q) ((_q)->code == TSDB_CODE_TSC_QUERY_CANCELLED)
static bool isQueryKilled(SQInfo *pQInfo) {
if (IS_QUERY_KILLED(pQInfo)) {
return true;
}
// query has been executed more than tsShellActivityTimer, and the retrieve has not arrived
// abort current query execution.
if (pQInfo->owner != 0 && ((taosGetTimestampSec() - pQInfo->startExecTs) > getMaximumIdleDurationSec()) &&
(!needBuildResAfterQueryComplete(pQInfo))) {
assert(pQInfo->startExecTs != 0);
qDebug("QInfo:%p retrieve not arrive beyond %d sec, abort current query execution, start:%"PRId64", current:%d", pQInfo, 1,
pQInfo->startExecTs, taosGetTimestampSec());
return true;
}
return false;
}
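// Illustrative arithmetic only (helper name is hypothetical): the abort rule above kills a
// query once it has sat idle longer than the maximum idle duration without a pending retrieve,
// i.e. (now - startExecTs) exceeds the configured threshold while no response is awaited.
static bool queryIdleTooLong(int32_t nowSec, int32_t startExecSec, int32_t maxIdleSec, bool retrievePending) {
  return !retrievePending && (nowSec - startExecSec) > maxIdleSec;
}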
static void setQueryKilled(SQInfo *pQInfo) { pQInfo->code = TSDB_CODE_TSC_QUERY_CANCELLED;}

static bool isFixedOutputQuery(SQueryRuntimeEnv* pRuntimeEnv) {
@@ -2864,7 +2878,7 @@ static int64_t doScanAllDataBlocks(SQueryRuntimeEnv *pRuntimeEnv) {
  while (tsdbNextDataBlock(pQueryHandle)) {
    summary->totalBlocks += 1;

    if (isQueryKilled(GET_QINFO_ADDR(pRuntimeEnv))) {
      longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
    }
@@ -3432,7 +3446,7 @@ int32_t mergeIntoGroupResultImpl(SQInfo *pQInfo, SArray *pGroup) {
  int64_t startt = taosGetTimestampMs();

  while (1) {
    if (isQueryKilled(pQInfo)) {
      qDebug("QInfo:%p it is already killed, abort", pQInfo);
      tfree(pTableList);
@@ -4018,7 +4032,7 @@ void scanOneTableDataBlocks(SQueryRuntimeEnv *pRuntimeEnv, TSKEY start) {
           cond.twindow.skey, cond.twindow.ekey);

    // check if query is killed or not
    if (isQueryKilled(pQInfo)) {
      longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
    }
  }
@@ -4461,6 +4475,18 @@ static void stableApplyFunctionsOnBlock(SQueryRuntimeEnv *pRuntimeEnv, SDataBloc
  } else {
    blockwiseApplyFunctions(pRuntimeEnv, pStatis, pDataBlockInfo, pWindowResInfo, searchFn, pDataBlock);
  }
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
bool ascQuery = QUERY_IS_ASC_QUERY(pQuery);
// TODO refactor
if ((pTableQueryInfo->lastKey >= pTableQueryInfo->win.ekey && ascQuery) || (pTableQueryInfo->lastKey <= pTableQueryInfo->win.ekey && (!ascQuery))) {
closeAllTimeWindow(pWindowResInfo);
pWindowResInfo->curIndex = pWindowResInfo->size - 1;
} else {
updateResultRowCurrentIndex(pWindowResInfo, pTableQueryInfo->lastKey, ascQuery);
}
}
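  // For an interval query the block above decides, per scan direction, whether this table is
  // exhausted: an ascending scan is done once lastKey reaches win.ekey, a descending scan once
  // lastKey falls to win.ekey; only then can every time window be closed safely, otherwise the
  // current result-row index is merely advanced to lastKey.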
}

bool queryHasRemainResForTableQuery(SQueryRuntimeEnv* pRuntimeEnv) {
@@ -4675,7 +4701,7 @@ void skipBlocks(SQueryRuntimeEnv *pRuntimeEnv) {
  SDataBlockInfo blockInfo = SDATA_BLOCK_INITIALIZER;
  while (tsdbNextDataBlock(pQueryHandle)) {
    if (isQueryKilled(GET_QINFO_ADDR(pRuntimeEnv))) {
      longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
    }
@@ -5112,7 +5138,7 @@ static int64_t scanMultiTableDataBlocks(SQInfo *pQInfo) {
  while (tsdbNextDataBlock(pQueryHandle)) {
    summary->totalBlocks += 1;

    if (isQueryKilled(pQInfo)) {
      longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
    }
@@ -5479,19 +5505,25 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
    //      return;
    //    }

    if (pQuery->prjInfo.vgroupLimit != -1) {
      assert(pQuery->limit.limit == -1 && pQuery->limit.offset == 0);
    } else if (pQuery->limit.limit != -1) {
      assert(pQuery->prjInfo.vgroupLimit == -1);
    }

    bool hasMoreBlock = true;
    int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
    SQueryCostInfo *summary = &pRuntimeEnv->summary;
    while ((hasMoreBlock = tsdbNextDataBlock(pQueryHandle)) == true) {
      summary->totalBlocks += 1;

      if (isQueryKilled(pQInfo)) {
        longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
      }

      tsdbRetrieveDataBlockInfo(pQueryHandle, &blockInfo);
      STableQueryInfo **pTableQueryInfo =
          (STableQueryInfo **) taosHashGet(pQInfo->tableqinfoGroupInfo.map, &blockInfo.tid, sizeof(blockInfo.tid));
      if (pTableQueryInfo == NULL) {
        break;
      }
@@ -5503,6 +5535,25 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
        setTagVal(pRuntimeEnv, pQuery->current->pTable, pQInfo->tsdb);
      }
if (pQuery->prjInfo.vgroupLimit > 0 && pQuery->current->windowResInfo.size > pQuery->prjInfo.vgroupLimit) {
pQuery->current->lastKey =
QUERY_IS_ASC_QUERY(pQuery) ? blockInfo.window.ekey + step : blockInfo.window.skey + step;
continue;
}
// it is a super table ordered projection query, check for the number of output for each vgroup
if (pQuery->prjInfo.vgroupLimit > 0 && pQuery->rec.rows >= pQuery->prjInfo.vgroupLimit) {
if (QUERY_IS_ASC_QUERY(pQuery) && blockInfo.window.skey >= pQuery->prjInfo.ts) {
pQuery->current->lastKey =
QUERY_IS_ASC_QUERY(pQuery) ? blockInfo.window.ekey + step : blockInfo.window.skey + step;
continue;
} else if (!QUERY_IS_ASC_QUERY(pQuery) && blockInfo.window.ekey <= pQuery->prjInfo.ts) {
pQuery->current->lastKey =
QUERY_IS_ASC_QUERY(pQuery) ? blockInfo.window.ekey + step : blockInfo.window.skey + step;
continue;
}
}
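      // The two early-continue checks above implement the per-vgroup limit for ordered
      // super-table projection queries: once this table has already produced vgroupLimit rows,
      // or enough rows have been output and the block lies beyond the recorded boundary
      // timestamp prjInfo.ts, the whole data block is skipped and lastKey simply jumps past it.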
      uint32_t     status = 0;
      SDataStatis *pStatis = NULL;
      SArray      *pDataBlock = NULL;
@@ -5520,6 +5571,8 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
      }

      ensureOutputBuffer(pRuntimeEnv, &blockInfo);
      int64_t prev = getNumOfResult(pRuntimeEnv);

      pQuery->pos = QUERY_IS_ASC_QUERY(pQuery) ? 0 : blockInfo.rows - 1;
      int32_t numOfRes = tableApplyFunctionsOnBlock(pRuntimeEnv, &blockInfo, pStatis, binarySearchForKey, pDataBlock);
@@ -5530,17 +5583,30 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
      pQuery->rec.rows = getNumOfResult(pRuntimeEnv);
      int64_t inc = pQuery->rec.rows - prev;
      pQuery->current->windowResInfo.size += (int32_t) inc;

      // the flag may be set by tableApplyFunctionsOnBlock, clear it here
      CLEAR_QUERY_STATUS(pQuery, QUERY_COMPLETED);

      updateTableIdInfo(pQuery, pQInfo->arrTableIdInfo);

      if (pQuery->prjInfo.vgroupLimit >= 0) {
        if (((pQuery->rec.rows + pQuery->rec.total) < pQuery->prjInfo.vgroupLimit) || ((pQuery->rec.rows + pQuery->rec.total) > pQuery->prjInfo.vgroupLimit && prev < pQuery->prjInfo.vgroupLimit)) {
          if (QUERY_IS_ASC_QUERY(pQuery) && pQuery->prjInfo.ts < blockInfo.window.ekey) {
            pQuery->prjInfo.ts = blockInfo.window.ekey;
          } else if (!QUERY_IS_ASC_QUERY(pQuery) && pQuery->prjInfo.ts > blockInfo.window.skey) {
            pQuery->prjInfo.ts = blockInfo.window.skey;
          }
        }
      } else {
        // the limitation of output result is reached, set the query completed
        skipResults(pRuntimeEnv);
        if (limitResults(pRuntimeEnv)) {
          setQueryStatus(pQuery, QUERY_COMPLETED);
          SET_STABLE_QUERY_OVER(pQInfo);
          break;
        }
      }

      // while the output buffer is full or limit/offset is applied, query may be paused here
@@ -5582,7 +5648,7 @@ static void sequentialTableProcess(SQInfo *pQInfo) {
           1 == taosArrayGetSize(pQInfo->tableqinfoGroupInfo.pGroupList));

    while (pQInfo->tableIndex < pQInfo->tableqinfoGroupInfo.numOfTables) {
      if (isQueryKilled(pQInfo)) {
        longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
      }
@@ -5768,7 +5834,7 @@ static void multiTableQueryProcess(SQInfo *pQInfo) {
    qDebug("QInfo:%p master scan completed, elapsed time: %" PRId64 "ms, reverse scan start", pQInfo, el);

    // query error occurred or query is killed, abort current execution
    if (pQInfo->code != TSDB_CODE_SUCCESS || isQueryKilled(pQInfo)) {
      qDebug("QInfo:%p query killed or error occurred, code:%s, abort", pQInfo, tstrerror(pQInfo->code));
      longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
    }
@@ -5789,7 +5855,7 @@ static void multiTableQueryProcess(SQInfo *pQInfo) {
  setQueryStatus(pQuery, QUERY_COMPLETED);

  if (pQInfo->code != TSDB_CODE_SUCCESS || isQueryKilled(pQInfo)) {
    qDebug("QInfo:%p query killed or error occurred, code:%s, abort", pQInfo, tstrerror(pQInfo->code));
    //TODO finalizeQueryResult may cause SEGSEV, since the memory may not allocated yet, add a cleanup function instead
    longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
@@ -5905,7 +5971,7 @@ static void tableFixedOutputProcess(SQInfo *pQInfo, STableQueryInfo* pTableInfo)
  pQuery->rec.rows = getNumOfResult(pRuntimeEnv);
  doSecondaryArithmeticProcess(pQuery);

  if (isQueryKilled(pQInfo)) {
    longjmp(pRuntimeEnv->env, TSDB_CODE_TSC_QUERY_CANCELLED);
  }
@@ -6284,7 +6350,7 @@ static int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SArray **pTableIdList,
  pQueryMsg->interval.offset = htobe64(pQueryMsg->interval.offset);
  pQueryMsg->limit = htobe64(pQueryMsg->limit);
  pQueryMsg->offset = htobe64(pQueryMsg->offset);
  pQueryMsg->vgroupLimit = htobe64(pQueryMsg->vgroupLimit);

  pQueryMsg->order = htons(pQueryMsg->order);
  pQueryMsg->orderColId = htons(pQueryMsg->orderColId);
@@ -6885,6 +6951,8 @@ static SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SSqlGroupbyExpr *pGrou
  pQuery->fillType = pQueryMsg->fillType;
  pQuery->numOfTags = pQueryMsg->numOfTags;
  pQuery->tagColList = pTagCols;
  pQuery->prjInfo.vgroupLimit = pQueryMsg->vgroupLimit;
  pQuery->prjInfo.ts = (pQueryMsg->order == TSDB_ORDER_ASC)? INT64_MIN:INT64_MAX;

  pQuery->colList = calloc(numOfCols, sizeof(SSingleColumnFilterInfo));
  if (pQuery->colList == NULL) {
@@ -7479,7 +7547,7 @@ static bool doBuildResCheck(SQInfo* pQInfo) {
  pthread_mutex_lock(&pQInfo->lock);

  pQInfo->dataReady = QUERY_RESULT_READY;
  buildRes = needBuildResAfterQueryComplete(pQInfo);

  // clear qhandle owner, it must be in the secure area. other thread may run ahead before current, after it is
  // put into task to be executed.
@@ -7488,6 +7556,7 @@ static bool doBuildResCheck(SQInfo* pQInfo) {
  pthread_mutex_unlock(&pQInfo->lock);

  // used in retrieve blocking model.
  tsem_post(&pQInfo->ready);
  return buildRes;
}
@@ -7504,7 +7573,9 @@ bool qTableQuery(qinfo_t qinfo) {
    return false;
  }

  pQInfo->startExecTs = taosGetTimestampSec();

  if (isQueryKilled(pQInfo)) {
    qDebug("QInfo:%p it is already killed, abort", pQInfo);
    return doBuildResCheck(pQInfo);
  }
@@ -7536,7 +7607,7 @@ bool qTableQuery(qinfo_t qinfo) {
  }

  SQuery* pQuery = pRuntimeEnv->pQuery;
  if (isQueryKilled(pQInfo)) {
    qDebug("QInfo:%p query is killed", pQInfo);
  } else if (pQuery->rec.rows == 0) {
    qDebug("QInfo:%p over, %" PRIzu " tables queried, %"PRId64" rows are returned", pQInfo, pQInfo->tableqinfoGroupInfo.numOfTables, pQuery->rec.total);
@@ -7564,30 +7635,31 @@ int32_t qRetrieveQueryResultInfo(qinfo_t qinfo, bool* buildRes, void* pRspContex
  int32_t code = TSDB_CODE_SUCCESS;

  if (tsRetrieveBlockingModel) {
    pQInfo->rspContext = pRspContext;
    tsem_wait(&pQInfo->ready);
    *buildRes = true;
    code = pQInfo->code;
  } else {
    SQuery *pQuery = pQInfo->runtimeEnv.pQuery;

    pthread_mutex_lock(&pQInfo->lock);

    assert(pQInfo->rspContext == NULL);
    if (pQInfo->dataReady == QUERY_RESULT_READY) {
      *buildRes = true;
      qDebug("QInfo:%p retrieve result info, rowsize:%d, rows:%" PRId64 ", code:%s", pQInfo, pQuery->rowSize,
             pQuery->rec.rows, tstrerror(pQInfo->code));
    } else {
      *buildRes = false;
      qDebug("QInfo:%p retrieve req set query return result after paused", pQInfo);
      pQInfo->rspContext = pRspContext;
      assert(pQInfo->rspContext != NULL);
    }

    code = pQInfo->code;
    pthread_mutex_unlock(&pQInfo->lock);
  }

  return code;
}
@@ -7655,7 +7727,7 @@ int32_t qQueryCompleted(qinfo_t qinfo) {
  }

  SQuery* pQuery = pQInfo->runtimeEnv.pQuery;
  return isQueryKilled(pQInfo) || Q_STATUS_EQUAL(pQuery->status, QUERY_OVER);
}

int32_t qKillQuery(qinfo_t qinfo) {
@@ -7952,8 +8024,6 @@ void** qRegisterQInfo(void* pMgmt, uint64_t qInfo) {
    return NULL;
  }

  SQueryMgmt *pQueryMgmt = pMgmt;
  if (pQueryMgmt->qinfoPool == NULL) {
    qError("QInfo:%p failed to add qhandle into qMgmt, since qMgmt is closed", (void *)qInfo);
@@ -7969,7 +8039,8 @@ void** qRegisterQInfo(void* pMgmt, uint64_t qInfo) {
    return NULL;
  } else {
    TSDB_CACHE_PTR_TYPE handleVal = (TSDB_CACHE_PTR_TYPE) qInfo;
    void** handle = taosCachePut(pQueryMgmt->qinfoPool, &handleVal, sizeof(TSDB_CACHE_PTR_TYPE), &qInfo, sizeof(TSDB_CACHE_PTR_TYPE),
                                 (getMaximumIdleDurationSec()*1000));
    // pthread_mutex_unlock(&pQueryMgmt->lock);
    return handle;
......
@@ -21,6 +21,12 @@
#include "tcompare.h"
#include "tsqlfunction.h"
#define FLT_EQUAL(_x, _y) (fabs((_x) - (_y)) <= (4 * FLT_EPSILON))
#define FLT_GREATER(_x, _y) (!FLT_EQUAL((_x), (_y)) && ((_x) > (_y)))
#define FLT_LESS(_x, _y) (!FLT_EQUAL((_x), (_y)) && ((_x) < (_y)))
#define FLT_GREATEREQUAL(_x, _y) (FLT_EQUAL((_x), (_y)) || ((_x) > (_y)))
#define FLT_LESSEQUAL(_x, _y) (FLT_EQUAL((_x), (_y)) || ((_x) < (_y)))
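// Illustrative sketch (not part of this file): an exact == comparison fails once a decimal
// such as 0.1 goes through float storage while the filter bound is kept as a double, whereas
// the FLT_EQUAL tolerance above accepts the pair. Assumes <math.h>/<float.h> are available.
static bool floatEqualsFilterBound(float stored, double bound) {
  // stored = 0.1f promotes to 0.100000001490..., bound = 0.1 is 0.100000000000000005...,
  // so (stored == bound) is false while FLT_EQUAL(stored, bound) is true.
  return FLT_EQUAL(stored, bound);
}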
bool less_i8(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(int8_t *)minval < pFilter->filterInfo.upperBndi);
}
@@ -38,35 +44,35 @@ bool less_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
}

bool less_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_LESS(*(float*)minval, pFilter->filterInfo.upperBndd);
}

bool less_dd(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return *(double *)minval < pFilter->filterInfo.upperBndd;
}

//////////////////////////////////////////////////////////////////
bool larger_i8(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(int8_t *)maxval > pFilter->filterInfo.lowerBndi);
}

bool larger_i16(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(int16_t *)maxval > pFilter->filterInfo.lowerBndi);
}

bool larger_i32(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(int32_t *)maxval > pFilter->filterInfo.lowerBndi);
}

bool larger_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(int64_t *)maxval > pFilter->filterInfo.lowerBndi);
}

bool larger_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_GREATER(*(float*)maxval, pFilter->filterInfo.lowerBndd);
}

bool larger_dd(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return (*(double *)maxval > pFilter->filterInfo.lowerBndd);
}

/////////////////////////////////////////////////////////////////////
@@ -88,10 +94,14 @@ bool lessEqual_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
}

bool lessEqual_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_LESSEQUAL(*(float*)minval, pFilter->filterInfo.upperBndd);
}

bool lessEqual_dd(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  if ((fabs(*(double*)minval) - pFilter->filterInfo.upperBndd) <= 2 * DBL_EPSILON) {
    return true;
  }

  return (*(double *)minval <= pFilter->filterInfo.upperBndd);
}

@@ -113,11 +123,15 @@ bool largeEqual_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
}

bool largeEqual_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_GREATEREQUAL(*(float*)maxval, pFilter->filterInfo.lowerBndd);
}

bool largeEqual_dd(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  if (fabs(*(double *)maxval - pFilter->filterInfo.lowerBndd) <= 2 * DBL_EPSILON) {
    return true;
  }

  return (*(double *)maxval - pFilter->filterInfo.lowerBndd > (2 * DBL_EPSILON));
}

////////////////////////////////////////////////////////////////////////
@@ -162,10 +176,12 @@ bool equal_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  }
}

// user specified input filter value and the original saved float value may needs to
// increase the tolerance to obtain the correct result.
bool equal_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  if (*(float *)minval == *(float *)maxval) {
    return FLT_EQUAL(*(float*)minval, pFilter->filterInfo.lowerBndd);
  } else { // range filter
    assert(*(float *)minval < *(float *)maxval);
    return *(float *)minval <= pFilter->filterInfo.lowerBndd && *(float *)maxval >= pFilter->filterInfo.lowerBndd;
  }
@@ -173,10 +189,9 @@ bool equal_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
bool equal_dd(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  if (*(double *)minval == *(double *)maxval) {
    return (fabs(*(double *)minval - pFilter->filterInfo.lowerBndd) <= 2 * DBL_EPSILON);
  } else { // range filter
    assert(*(double *)minval < *(double *)maxval);
    return *(double *)minval <= pFilter->filterInfo.lowerBndi && *(double *)maxval >= pFilter->filterInfo.lowerBndi;
  }
}

@@ -255,7 +270,7 @@ bool nequal_i64(SColumnFilterElem *pFilter, char *minval, char *maxval) {
bool nequal_ds(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  if (*(float *)minval == *(float *)maxval) {
    return !FLT_EQUAL(*(float *)minval, pFilter->filterInfo.lowerBndd);
  }

  return true;
@@ -364,7 +379,8 @@ bool rangeFilter_i64_ei(SColumnFilterElem *pFilter, char *minval, char *maxval)
////////////////////////////////////////////////////////////////////////
bool rangeFilter_ds_ii(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_LESSEQUAL(*(float *)minval, pFilter->filterInfo.upperBndd) &&
         FLT_GREATEREQUAL(*(float *)maxval, pFilter->filterInfo.lowerBndd);
}

bool rangeFilter_ds_ee(SColumnFilterElem *pFilter, char *minval, char *maxval) {
@@ -376,7 +392,8 @@ bool rangeFilter_ds_ie(SColumnFilterElem *pFilter, char *minval, char *maxval) {
}

bool rangeFilter_ds_ei(SColumnFilterElem *pFilter, char *minval, char *maxval) {
  return FLT_GREATER(*(float *)maxval, pFilter->filterInfo.lowerBndd) &&
         FLT_LESSEQUAL(*(float *)minval, pFilter->filterInfo.upperBndd);
}

//////////////////////////////////////////////////////////////////////////
@@ -400,7 +417,7 @@ bool rangeFilter_dd_ei(SColumnFilterElem *pFilter, char *minval, char *maxval) {
bool (*filterFunc_i8[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_i8,
    larger_i8,
    equal_i8,
    lessEqual_i8,
    largeEqual_i8,
@@ -413,7 +430,7 @@ bool (*filterFunc_i8[])(SColumnFilterElem *pFilter, char *minval, char *maxval)
bool (*filterFunc_i16[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_i16,
    larger_i16,
    equal_i16,
    lessEqual_i16,
    largeEqual_i16,
@@ -426,7 +443,7 @@ bool (*filterFunc_i16[])(SColumnFilterElem *pFilter, char *minval, char *maxval)
bool (*filterFunc_i32[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_i32,
    larger_i32,
    equal_i32,
    lessEqual_i32,
    largeEqual_i32,
@@ -439,7 +456,7 @@ bool (*filterFunc_i32[])(SColumnFilterElem *pFilter, char *minval, char *maxval)
bool (*filterFunc_i64[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_i64,
    larger_i64,
    equal_i64,
    lessEqual_i64,
    largeEqual_i64,
@@ -452,7 +469,7 @@ bool (*filterFunc_i64[])(SColumnFilterElem *pFilter, char *minval, char *maxval)
bool (*filterFunc_ds[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_ds,
    larger_ds,
    equal_ds,
    lessEqual_ds,
    largeEqual_ds,
@@ -465,7 +482,7 @@ bool (*filterFunc_ds[])(SColumnFilterElem *pFilter, char *minval, char *maxval)
bool (*filterFunc_dd[])(SColumnFilterElem *pFilter, char *minval, char *maxval) = {
    NULL,
    less_dd,
    larger_dd,
    equal_dd,
    lessEqual_dd,
    largeEqual_dd,
@@ -551,7 +568,7 @@ bool (*rangeFilterFunc_dd[])(SColumnFilterElem *pFilter, char *minval, char *max
__filter_func_t* getRangeFilterFuncArray(int32_t type) {
  switch(type) {
    case TSDB_DATA_TYPE_BOOL:
    case TSDB_DATA_TYPE_TINYINT: return rangeFilterFunc_i8;
    case TSDB_DATA_TYPE_SMALLINT: return rangeFilterFunc_i16;
    case TSDB_DATA_TYPE_INT: return rangeFilterFunc_i32;
@@ -565,7 +582,7 @@ __filter_func_t* getRangeFilterFuncArray(int32_t type) {
__filter_func_t* getValueFilterFuncArray(int32_t type) {
  switch(type) {
    case TSDB_DATA_TYPE_BOOL:
    case TSDB_DATA_TYPE_TINYINT: return filterFunc_i8;
    case TSDB_DATA_TYPE_SMALLINT: return filterFunc_i16;
    case TSDB_DATA_TYPE_INT: return filterFunc_i32;
......
@@ -234,7 +234,13 @@ int32_t tBucketIntHash(tMemBucket *pBucket, const void *value) {
}

int32_t tBucketDoubleHash(tMemBucket *pBucket, const void *value) {
  double v = 0;
if (pBucket->type == TSDB_DATA_TYPE_FLOAT) {
v = GET_FLOAT_VAL(value);
} else {
v = GET_DOUBLE_VAL(value);
}
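  // Note: the branch above matters because value points at raw column bytes; reinterpreting a
  // 4-byte float slot as a double would read the wrong bits, so GET_FLOAT_VAL must be used for
  // TSDB_DATA_TYPE_FLOAT and GET_DOUBLE_VAL only for genuine double columns.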
  int32_t index = -1;

  if (pBucket->range.dMinVal == DBL_MAX) {
......
@@ -631,15 +631,19 @@ static void rpcReleaseConn(SRpcConn *pConn) {
  // if there is an outgoing message, free it
  if (pConn->outType && pConn->pReqMsg) {
    SRpcReqContext *pContext = pConn->pContext;
    if (pContext) {
      if (pContext->pRsp) {
        // for synchronous API, post semaphore to unblock app
        pContext->pRsp->code = TSDB_CODE_RPC_APP_ERROR;
        pContext->pRsp->pCont = NULL;
        pContext->pRsp->contLen = 0;
        tsem_post(pContext->pSem);
      }
      pContext->pConn = NULL;
      taosRemoveRef(tsRpcRefId, pContext->rid);
    } else {
      assert(0);
    }
  }
}
@@ -1083,7 +1087,11 @@ static void *rpcProcessMsgFromPeer(SRecvInfo *pRecv) {
      if (code == TSDB_CODE_RPC_INVALID_TIME_STAMP || code == TSDB_CODE_RPC_AUTH_FAILURE) {
        rpcCloseConn(pConn);
      }
      if (pHead->msgType + 1 > 1 && pHead->msgType+1 < TSDB_MSG_TYPE_MAX) {
        tDebug("%s %p %p, %s is sent with error code:0x%x", pRpc->label, pConn, (void *)pHead->ahandle, taosMsg[pHead->msgType+1], code);
      } else {
        tError("%s %p %p, %s is sent with error code:0x%x", pRpc->label, pConn, (void *)pHead->ahandle, taosMsg[pHead->msgType], code);
      }
    }
  } else { // msg is passed to app only parsing is ok
    rpcProcessIncomingMsg(pConn, pHead, pContext);
......
@@ -242,7 +242,14 @@ static void *taosAcceptTcpConnection(void *arg) {
    taosKeepTcpAlive(connFd);
    struct timeval to={1, 0};
    int32_t ret = taosSetSockOpt(connFd, SOL_SOCKET, SO_RCVTIMEO, &to, sizeof(to));
    if (ret != 0) {
      taosCloseSocket(connFd);
      tError("%s failed to set recv timeout fd(%s)for connection from:%s:%hu", pServerObj->label, strerror(errno),
             taosInetNtoa(caddr.sin_addr), htons(caddr.sin_port));
      continue;
    }

    // pick up the thread to handle this connection
    pThreadObj = pServerObj->pThreadObj[threadId];
......
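/* Standalone sketch of the pattern used above, written with plain POSIX calls rather than the
 * TDengine wrappers: set a receive timeout on an accepted socket and reject the connection if
 * the option cannot be applied, so a silent peer cannot stall the reader thread forever. */
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static int setRecvTimeoutOrClose(int connFd, int seconds) {
  struct timeval to = {seconds, 0};
  if (setsockopt(connFd, SOL_SOCKET, SO_RCVTIMEO, &to, sizeof(to)) != 0) {
    close(connFd);  // mirror the code above: drop the connection instead of keeping a bad fd
    return -1;
  }
  return 0;
}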
@@ -188,7 +188,8 @@ int main(int argc, char *argv[]) {
  tInfo("it takes %.3f mseconds to send %d requests to server", usedTime, numOfReqs*appThreads);
  tInfo("Performance: %.3f requests per second, msgSize:%d bytes", 1000.0*numOfReqs*appThreads/usedTime, msgSize);

  int ch = getchar();
  UNUSED(ch);

  taosCloseLog();
......
@@ -62,12 +62,15 @@ typedef struct {
typedef struct {
  SSyncHead syncHead;
  uint16_t  port;
  uint16_t  tranId;
  char      fqdn[TSDB_FQDN_LEN];
  int32_t   sourceId;  // only for arbitrator
} SFirstPkt;

typedef struct {
  int8_t   sync;
  int8_t   reserved;
  uint16_t tranId;
} SFirstPktRsp;

typedef struct {
@@ -187,6 +190,7 @@ void syncRestartConnection(SSyncPeer *pPeer);
void     syncBroadcastStatus(SSyncNode *pNode);
void     syncAddPeerRef(SSyncPeer *pPeer);
int32_t  syncDecPeerRef(SSyncPeer *pPeer);
uint16_t syncGenTranId();

#ifdef __cplusplus
}
......
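/* Sketch (assumed, not taken from this header): syncGenTranId() only needs to produce a value
 * that is very unlikely to repeat back-to-back, so the first packet of each connection attempt
 * can be told apart in the logs on both ends; a 16-bit pseudo-random number is enough. */
#include <stdint.h>
#include <stdlib.h>

static uint16_t sketchGenTranId(void) {
  return (uint16_t)(rand() & 0xFFFF);  // stamped into SFirstPkt.tranId before the packet is written
}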
@@ -396,9 +396,7 @@ void syncConfirmForward(int64_t rid, uint64_t version, int32_t code) {
    pFwdRsp->code = code;

    int32_t msgLen = sizeof(SSyncHead) + sizeof(SFwdRsp);
    if (taosWriteMsg(pPeer->peerFd, msg, msgLen) == msgLen) {
      sTrace("%s, forward-rsp is sent, code:%x hver:%" PRIu64, pPeer->id, code, version);
    } else {
      sDebug("%s, failed to send forward ack, restart", pPeer->id);
@@ -795,7 +793,7 @@ void syncRestartConnection(SSyncPeer *pPeer) {
static void syncProcessSyncRequest(char *msg, SSyncPeer *pPeer) {
  SSyncNode *pNode = pPeer->pSyncNode;
  sInfo("%s, sync-req is received", pPeer->id);

  if (pPeer->ip == 0) return;
@@ -873,6 +871,7 @@ static void syncRecoverFromMaster(SSyncPeer *pPeer) {
  firstPkt.syncHead.type = TAOS_SMSG_SYNC_REQ;
  firstPkt.syncHead.vgId = pNode->vgId;
  firstPkt.syncHead.len = sizeof(firstPkt) - sizeof(SSyncHead);
  firstPkt.tranId = syncGenTranId();
  tstrncpy(firstPkt.fqdn, tsNodeFqdn, sizeof(firstPkt.fqdn));
  firstPkt.port = tsSyncPort;
  taosTmrReset(syncNotStarted, tsSyncTimer * 1000, pPeer, tsSyncTmrCtrl, &pPeer->timer);
@@ -880,8 +879,7 @@ static void syncRecoverFromMaster(SSyncPeer *pPeer) {
  if (taosWriteMsg(pPeer->peerFd, &firstPkt, sizeof(firstPkt)) != sizeof(firstPkt)) {
    sError("%s, failed to send sync-req to peer", pPeer->id);
  } else {
    sInfo("%s, sync-req is sent to peer, tranId:%u, sstatus:%s", pPeer->id, firstPkt.tranId, syncStatus[nodeSStatus]);
  }
}
}
@@ -1018,8 +1016,7 @@ static void syncSendPeersStatusMsgToPeer(SSyncPeer *pPeer, char ack, int8_t type
    pPeersStatus->peersStatus[i].version = pNode->peerInfo[i]->version;
  }

  if (taosWriteMsg(pPeer->peerFd, msg, statusMsgLen) == statusMsgLen) {
    sDebug("%s, status is sent, self:%s:%s:%" PRIu64 ", peer:%s:%s:%" PRIu64 ", ack:%d tranId:%u type:%s pfd:%d",
           pPeer->id, syncRole[nodeRole], syncStatus[nodeSStatus], nodeVersion, syncRole[pPeer->role],
           syncStatus[pPeer->sstatus], pPeer->version, pPeersStatus->ack, pPeersStatus->tranId,
@@ -1053,10 +1050,11 @@ static void syncSetupPeerConnection(SSyncPeer *pPeer) {
  firstPkt.syncHead.type = TAOS_SMSG_STATUS;
  tstrncpy(firstPkt.fqdn, tsNodeFqdn, sizeof(firstPkt.fqdn));
  firstPkt.port = tsSyncPort;
  firstPkt.tranId = syncGenTranId();
  firstPkt.sourceId = pNode->vgId;  // tell arbitrator its vgId
  if (taosWriteMsg(connFd, &firstPkt, sizeof(firstPkt)) == sizeof(firstPkt)) {
    sDebug("%s, connection to peer server is setup, pfd:%d sfd:%d tranId:%u", pPeer->id, connFd, pPeer->syncFd, firstPkt.tranId);
    pPeer->peerFd = connFd;
    pPeer->role = TAOS_SYNC_ROLE_UNSYNCED;
    pPeer->pConn = taosAllocateTcpConn(tsTcpPool, pPeer, connFd);
@@ -1093,7 +1091,9 @@ static void syncCreateRestoreDataThread(SSyncPeer *pPeer) {
  pthread_attr_destroy(&thattr);

  if (ret < 0) {
    SSyncNode *pNode = pPeer->pSyncNode;
    nodeSStatus = TAOS_SYNC_STATUS_INIT;
    sError("%s, failed to create sync thread, set sstatus:%s", pPeer->id, syncStatus[nodeSStatus]);
    taosClose(pPeer->syncFd);
    syncDecPeerRef(pPeer);
  } else {
@@ -1123,6 +1123,8 @@ static void syncProcessIncommingConnection(int32_t connFd, uint32_t sourceIp) {
    return;
  }

  sDebug("vgId:%d, firstPkt is received, tranId:%u", vgId, firstPkt.tranId);

  SSyncNode *pNode = *ppNode;
  pthread_mutex_lock(&pNode->mutex);
@@ -1141,6 +1143,9 @@ static void syncProcessIncommingConnection(int32_t connFd, uint32_t sourceIp) {
  // first packet tells what kind of link
  if (firstPkt.syncHead.type == TAOS_SMSG_SYNC_DATA) {
    pPeer->syncFd = connFd;
    nodeSStatus = TAOS_SYNC_STATUS_START;
    sInfo("%s, sync-data pkt from master is received, tranId:%u, set sstatus:%s", pPeer->id, firstPkt.tranId,
          syncStatus[nodeSStatus]);
    syncCreateRestoreDataThread(pPeer);
  } else {
    sDebug("%s, TCP connection is up, pfd:%d sfd:%d, old pfd:%d", pPeer->id, connFd, pPeer->syncFd, pPeer->peerFd);
@@ -1312,7 +1317,7 @@ static int32_t syncForwardToPeerImpl(SSyncNode *pNode, void *data, void *mhandle
  }

  // always update version
  sTrace("vgId:%d, update version, replica:%d role:%s qtype:%s hver:%" PRIu64, pNode->vgId, pNode->replica,
         syncRole[nodeRole], qtypeStr[qtype], pWalHead->version);
  nodeVersion = pWalHead->version;
......
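/* Sketch of the contract the calls above rely on: a message counts as sent only when exactly
 * msgLen bytes went out; a short write or an error is treated as failure. A plain POSIX loop
 * with the same return convention (compare the result against the requested length) follows. */
#include <errno.h>
#include <unistd.h>

static int writeExact(int fd, const void *buf, int len) {
  const char *p = (const char *)buf;
  int written = 0;
  while (written < len) {
    ssize_t n = write(fd, p + written, (size_t)(len - written));
    if (n < 0) {
      if (errno == EINTR) continue;  // retry when interrupted by a signal
      return written;                // caller checks: return value != len means failure
    }
    written += (int)n;
  }
  return written;
}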
@@ -36,6 +36,8 @@ static void syncRemoveExtraFile(SSyncPeer *pPeer, int32_t sindex, int32_t eindex
  if (sindex < 0 || eindex < sindex) return;

  sDebug("%s, extra files will be removed between sindex:%d and eindex:%d", pPeer->id, sindex, eindex);

  while (1) {
    name[0] = 0;
    magic = (*pNode->getFileInfo)(pNode->vgId, name, &index, eindex, &size, &fversion);
@@ -43,7 +45,7 @@ static void syncRemoveExtraFile(SSyncPeer *pPeer, int32_t sindex, int32_t eindex
    snprintf(fname, sizeof(fname), "%s/%s", pNode->path, name);
    (void)remove(fname);
    sInfo("%s, %s is removed for its extra", pPeer->id, fname);

    index++;
    if (index > eindex) break;
@@ -61,11 +63,12 @@ static int32_t syncRestoreFile(SSyncPeer *pPeer, uint64_t *fversion) {
  bool fileChanged = false;

  *fversion = 0;
  sinfo.index = -1;
  while (1) {
    // read file info
    minfo.index = -1;
    int32_t ret = taosReadMsg(pPeer->syncFd, &minfo, sizeof(SFileInfo));
    if (ret != sizeof(SFileInfo) || minfo.index == -1) {
      sError("%s, failed to read file info while restore file since %s", pPeer->id, strerror(errno));
      break;
    }
@@ -75,7 +78,7 @@ static int32_t syncRestoreFile(SSyncPeer *pPeer, uint64_t *fversion) {
      sDebug("%s, no more files to restore", pPeer->id);

      // remove extra files after the current index
      if (sinfo.index != -1) syncRemoveExtraFile(pPeer, sinfo.index + 1, TAOS_SYNC_MAX_INDEX);
      code = 0;
      break;
    }
@@ -96,7 +99,7 @@ static int32_t syncRestoreFile(SSyncPeer *pPeer, uint64_t *fversion) {
    // send file ack
    ret = taosWriteMsg(pPeer->syncFd, &fileAck, sizeof(fileAck));
    if (ret != sizeof(fileAck)) {
      sError("%s, failed to write file:%s ack while restore file since %s", pPeer->id, minfo.name, strerror(errno));
      break;
    }
@@ -154,7 +157,7 @@ static int32_t syncRestoreWal(SSyncPeer *pPeer) {
  while (1) {
    ret = taosReadMsg(pPeer->syncFd, pHead, sizeof(SWalHead));
    if (ret != sizeof(SWalHead)) {
      sError("%s, failed to read walhead while restore wal since %s", pPeer->id, strerror(errno));
      break;
    }
@@ -166,7 +169,7 @@ static int32_t syncRestoreWal(SSyncPeer *pPeer) {
    }  // wal sync over

    ret = taosReadMsg(pPeer->syncFd, pHead->cont, pHead->len);
    if (ret != pHead->len) {
      sError("%s, failed to read walcont, len:%d while restore wal since %s", pPeer->id, pHead->len, strerror(errno));
      break;
    }
@@ -286,11 +289,12 @@ static int32_t syncRestoreDataStepByStep(SSyncPeer *pPeer) {
  uint64_t fversion = 0;

  sInfo("%s, start to restore, sstatus:%s", pPeer->id, syncStatus[pPeer->sstatus]);
  SFirstPktRsp firstPktRsp = {.sync = 1, .tranId = syncGenTranId()};
  if (taosWriteMsg(pPeer->syncFd, &firstPktRsp, sizeof(SFirstPktRsp)) != sizeof(SFirstPktRsp)) {
    sError("%s, failed to send sync firstPkt rsp since %s", pPeer->id, strerror(errno));
    return -1;
  }
  sDebug("%s, send firstPktRsp to peer, tranId:%u", pPeer->id, firstPktRsp.tranId);

  sInfo("%s, start to restore file, set sstatus:%s", pPeer->id, syncStatus[nodeSStatus]);
  int32_t code = syncRestoreFile(pPeer, &fversion);
......
@@ -58,7 +58,7 @@ static int32_t syncGetFileVersion(SSyncNode *pNode, SSyncPeer *pPeer) {
  uint64_t fver, wver;
  int32_t  code = (*pNode->getVersion)(pNode->vgId, &fver, &wver);
  if (code != 0) {
    sDebug("%s, vnode is commiting while get fver for retrieve, last fver:%" PRIu64, pPeer->id, pPeer->lastFileVer);
    return -1;
  }
@@ -92,7 +92,10 @@ static int32_t syncRetrieveFile(SSyncPeer *pPeer) {
  int32_t code = -1;
  char    name[TSDB_FILENAME_LEN * 2] = {0};

  if (syncGetFileVersion(pNode, pPeer) < 0) {
    pPeer->fileChanged = 1;
    return -1;
  }

  while (1) {
    // retrieve file info
@@ -100,12 +103,11 @@ static int32_t syncRetrieveFile(SSyncPeer *pPeer) {
    fileInfo.size = 0;
    fileInfo.magic = (*pNode->getFileInfo)(pNode->vgId, fileInfo.name, &fileInfo.index, TAOS_SYNC_MAX_INDEX,
                                           &fileInfo.size, &fileInfo.fversion);
    sDebug("%s, file:%s info is sent, size:%" PRId64, pPeer->id, fileInfo.name, fileInfo.size);

    // send the file info
    int32_t ret = taosWriteMsg(pPeer->syncFd, &(fileInfo), sizeof(fileInfo));
    if (ret != sizeof(fileInfo)) {
      code = -1;
      sError("%s, failed to write file:%s info while retrieve file since %s", pPeer->id, fileInfo.name, strerror(errno));
      break;
@@ -119,8 +121,8 @@ static int32_t syncRetrieveFile(SSyncPeer *pPeer) {
    }

    // wait for the ack from peer
    ret = taosReadMsg(pPeer->syncFd, &fileAck, sizeof(SFileAck));
    if (ret != sizeof(SFileAck)) {
      code = -1;
      sError("%s, failed to read file:%s ack while retrieve file since %s", pPeer->id, fileInfo.name, strerror(errno));
      break;
@@ -384,12 +386,15 @@ static int32_t syncRetrieveWal(SSyncPeer *pPeer) {
  }

  if (code == 0) {
    SWalHead walHead;
    memset(&walHead, 0, sizeof(walHead));
    if (taosWriteMsg(pPeer->syncFd, &walHead, sizeof(walHead)) == sizeof(walHead)) {
      pPeer->sstatus = TAOS_SYNC_STATUS_CACHE;
      sInfo("%s, wal retrieve is finished, set sstatus:%s", pPeer->id, syncStatus[pPeer->sstatus]);
    } else {
      sError("%s, failed to send last wal record since %s", pPeer->id, strerror(errno));
      code = -1;
    }
  } else {
    sError("%s, failed to send wal since %s, code:0x%x", pPeer->id, strerror(errno), code);
  }
@@ -404,20 +409,23 @@ static int32_t syncRetrieveFirstPkt(SSyncPeer *pPeer) {
  memset(&firstPkt, 0, sizeof(firstPkt));
  firstPkt.syncHead.type = TAOS_SMSG_SYNC_DATA;
  firstPkt.syncHead.vgId = pNode->vgId;
  firstPkt.tranId = syncGenTranId();
  tstrncpy(firstPkt.fqdn, tsNodeFqdn, sizeof(firstPkt.fqdn));
  firstPkt.port = tsSyncPort;

  if (taosWriteMsg(pPeer->syncFd, &firstPkt, sizeof(firstPkt)) != sizeof(firstPkt)) {
    sError("%s, failed to send sync firstPkt since %s, tranId:%u", pPeer->id, strerror(errno), firstPkt.tranId);
    return -1;
  }
  sDebug("%s, send sync-data pkt to peer, tranId:%u", pPeer->id, firstPkt.tranId);

  SFirstPktRsp firstPktRsp;
  if (taosReadMsg(pPeer->syncFd, &firstPktRsp, sizeof(SFirstPktRsp)) != sizeof(SFirstPktRsp)) {
    sError("%s, failed to read sync firstPkt rsp since %s, tranId:%u", pPeer->id, strerror(errno), firstPkt.tranId);
    return -1;
  }
  sDebug("%s, recv firstPktRsp from peer, tranId:%u", pPeer->id, firstPkt.tranId);

  return 0;
}
......
...@@ -78,6 +78,7 @@ extern char * tsCfgStatusStr[]; ...@@ -78,6 +78,7 @@ extern char * tsCfgStatusStr[];
void taosReadGlobalLogCfg(); void taosReadGlobalLogCfg();
bool taosReadGlobalCfg(); bool taosReadGlobalCfg();
void taosPrintGlobalCfg(); void taosPrintGlobalCfg();
void taosDumpGlobalCfg();
void taosInitConfigOption(SGlobalCfg cfg); void taosInitConfigOption(SGlobalCfg cfg);
SGlobalCfg * taosGetConfigOption(const char *option); SGlobalCfg * taosGetConfigOption(const char *option);
......
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TDENGINE_TWORKER_H
#define TDENGINE_TWORKER_H
#ifdef __cplusplus
extern "C" {
#endif
typedef void *(*FWorkerThread)(void *pWorker);
struct SWorkerPool;
typedef struct {
pthread_t thread; // thread
int32_t id; // worker ID
struct SWorkerPool *pPool; // the pool this worker belongs to
} SWorker;
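// a pool of up to "max" worker threads that all consume items from one shared queue set (qset)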
typedef struct SWorkerPool {
int32_t max; // max number of workers
int32_t min; // min number of workers
int32_t num; // current number of workers
void * qset;              // shared queue set that all workers read from
char * name;              // pool name used in logs
SWorker *worker;          // array of "max" workers
FWorkerThread workerFp;   // entry function run by each worker thread
pthread_mutex_t mutex;    // protects num and the lazy thread creation
} SWorkerPool;
int32_t tWorkerInit(SWorkerPool *pPool);        // allocate the qset and the worker slots; returns 0 on success
void tWorkerCleanup(SWorkerPool *pPool);        // wake up, join and release all workers
void * tWorkerAllocQueue(SWorkerPool *pPool, void *ahandle);  // create a queue in the qset, spawning worker threads on demand
void tWorkerFreeQueue(SWorkerPool *pPool, void *pQueue);      // close a queue allocated by tWorkerAllocQueue
#ifdef __cplusplus
}
#endif
#endif
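A minimal usage sketch of this worker-pool API, based only on the declarations above; the names demoWorkerLoop and demoStartPool, the pool sizes, and the queue-processing step mentioned in the comments are assumptions for illustration rather than code from the source tree.
#include "tworker.h"
/* sketch only: a hypothetical caller of the tworker API */
static void *demoWorkerLoop(void *param) {
  SWorker *pWorker = param;
  (void)pWorker;
  /* a real worker would block on pWorker->pPool->qset here and process queued items */
  return NULL;
}
static void demoStartPool(void) {
  SWorkerPool pool = {0};
  pool.name = "demo";
  pool.min = 2;
  pool.max = 4;
  pool.workerFp = demoWorkerLoop;
  if (tWorkerInit(&pool) != 0) return;
  void *pQueue = tWorkerAllocQueue(&pool, NULL);  /* the first allocation spawns the threads */
  if (pQueue != NULL) {
    /* enqueue work items here, then release the queue */
    tWorkerFreeQueue(&pool, pQueue);
  }
  tWorkerCleanup(&pool);
}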
...@@ -397,3 +397,57 @@ void taosPrintGlobalCfg() { ...@@ -397,3 +397,57 @@ void taosPrintGlobalCfg() {
taosPrintOsInfo(); taosPrintOsInfo();
} }
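// print one config option as "name: value unit", padding with blanks so values line up at TSDB_CFG_PRINT_LEN columns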
static void taosDumpCfg(SGlobalCfg *cfg) {
int optionLen = (int)strlen(cfg->option);
int blankLen = TSDB_CFG_PRINT_LEN - optionLen;
blankLen = blankLen < 0 ? 0 : blankLen;
char blank[TSDB_CFG_PRINT_LEN];
memset(blank, ' ', TSDB_CFG_PRINT_LEN);
blank[blankLen] = 0;
switch (cfg->valType) {
case TAOS_CFG_VTYPE_INT16:
printf(" %s:%s%d%s\n", cfg->option, blank, *((int16_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
break;
case TAOS_CFG_VTYPE_INT32:
printf(" %s:%s%d%s\n", cfg->option, blank, *((int32_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
break;
case TAOS_CFG_VTYPE_FLOAT:
printf(" %s:%s%f%s\n", cfg->option, blank, *((float *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
break;
case TAOS_CFG_VTYPE_STRING:
case TAOS_CFG_VTYPE_IPSTR:
case TAOS_CFG_VTYPE_DIRECTORY:
printf(" %s:%s%s%s\n", cfg->option, blank, (char *)cfg->ptr, tsGlobalUnit[cfg->unitType]);
break;
default:
break;
}
}
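// dump every visible option in two groups: options flagged TSDB_CFG_CTYPE_B_SHOW first ("taos global config"), then the remaining printable ones ("taos local config")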
void taosDumpGlobalCfg() {
printf("taos global config:\n");
printf("==================================\n");
for (int i = 0; i < tsGlobalConfigNum; ++i) {
SGlobalCfg *cfg = tsGlobalConfig + i;
if (tscEmbedded == 0 && !(cfg->cfgType & TSDB_CFG_CTYPE_B_CLIENT)) continue;
if (cfg->cfgType & TSDB_CFG_CTYPE_B_NOT_PRINT) continue;
if (!(cfg->cfgType & TSDB_CFG_CTYPE_B_SHOW)) continue;
taosDumpCfg(cfg);
}
printf("\ntaos local config:\n");
printf("==================================\n");
for (int i = 0; i < tsGlobalConfigNum; ++i) {
SGlobalCfg *cfg = tsGlobalConfig + i;
if (tscEmbedded == 0 && !(cfg->cfgType & TSDB_CFG_CTYPE_B_CLIENT)) continue;
if (cfg->cfgType & TSDB_CFG_CTYPE_B_NOT_PRINT) continue;
if (cfg->cfgType & TSDB_CFG_CTYPE_B_SHOW) continue;
taosDumpCfg(cfg);
}
}
...@@ -225,10 +225,11 @@ static void addToWheel(tmr_obj_t* timer, uint32_t delay) { ...@@ -225,10 +225,11 @@ static void addToWheel(tmr_obj_t* timer, uint32_t delay) {
} }
static bool removeFromWheel(tmr_obj_t* timer) { static bool removeFromWheel(tmr_obj_t* timer) {
if (timer->wheel >= tListLen(wheels)) { uint8_t wheelIdx = timer->wheel;
if (wheelIdx >= tListLen(wheels)) {
return false; return false;
} }
time_wheel_t* wheel = wheels + timer->wheel; time_wheel_t* wheel = wheels + wheelIdx;
bool removed = false; bool removed = false;
pthread_mutex_lock(&wheel->mutex); pthread_mutex_lock(&wheel->mutex);
......
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#define _DEFAULT_SOURCE
#include "os.h"
#include "tulog.h"
#include "tqueue.h"
#include "tworker.h"
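// allocate the shared queue set and the worker slots; the worker threads themselves are created lazily in tWorkerAllocQueue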
int32_t tWorkerInit(SWorkerPool *pPool) {
pPool->qset = taosOpenQset();
pPool->worker = calloc(sizeof(SWorker), pPool->max);
pthread_mutex_init(&pPool->mutex, NULL);
for (int i = 0; i < pPool->max; ++i) {
SWorker *pWorker = pPool->worker + i;
pWorker->id = i;
pWorker->pPool = pPool;
}
uInfo("worker:%s is initialized, min:%d max:%d", pPool->name, pPool->min, pPool->max);
return 0;
}
void tWorkerCleanup(SWorkerPool *pPool) {
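// first wake up any worker that is blocked on the shared queue set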
for (int i = 0; i < pPool->max; ++i) {
SWorker *pWorker = pPool->worker + i;
if(taosCheckPthreadValid(pWorker->thread)) {
taosQsetThreadResume(pPool->qset);
}
}
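// then join every worker thread that was actually created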
for (int i = 0; i < pPool->max; ++i) {
SWorker *pWorker = pPool->worker + i;
if (taosCheckPthreadValid(pWorker->thread)) {
pthread_join(pWorker->thread, NULL);
}
}
free(pPool->worker);
taosCloseQset(pPool->qset);
pthread_mutex_destroy(&pPool->mutex);
uInfo("worker:%s is closed", pPool->name);
}
void *tWorkerAllocQueue(SWorkerPool *pPool, void *ahandle) {
pthread_mutex_lock(&pPool->mutex);
taos_queue pQueue = taosOpenQueue();
if (pQueue == NULL) {
pthread_mutex_unlock(&pPool->mutex);
return NULL;
}
taosAddIntoQset(pPool->qset, pQueue, ahandle);
// spawn a thread to process queue
if (pPool->num < pPool->max) {
do {
SWorker *pWorker = pPool->worker + pPool->num;
pthread_attr_t thAttr;
pthread_attr_init(&thAttr);
pthread_attr_setdetachstate(&thAttr, PTHREAD_CREATE_JOINABLE);
if (pthread_create(&pWorker->thread, &thAttr, pPool->workerFp, pWorker) != 0) {
uError("worker:%s:%d failed to create thread to process since %s", pPool->name, pWorker->id, strerror(errno));
}
pthread_attr_destroy(&thAttr);
pPool->num++;
uDebug("worker:%s:%d is launched, total:%d", pPool->name, pWorker->id, pPool->num);
} while (pPool->num < pPool->min);
}
pthread_mutex_unlock(&pPool->mutex);
uDebug("worker:%s, queue:%p is allocated, ahandle:%p", pPool->name, pQueue, ahandle);
return pQueue;
}
void tWorkerFreeQueue(SWorkerPool *pPool, void *pQueue) {
taosCloseQueue(pQueue);
uDebug("worker:%s, queue:%p is freed", pPool->name, pQueue);
}
...@@ -46,9 +46,11 @@ typedef struct { ...@@ -46,9 +46,11 @@ typedef struct {
int8_t isFull; int8_t isFull;
int8_t isCommiting; int8_t isCommiting;
uint64_t version; // current version uint64_t version; // current version
uint64_t cversion; // version while commit start
uint64_t fversion; // version on saved data file uint64_t fversion; // version on saved data file
void * wqueue; void * wqueue; // write queue
void * rqueue; void * qqueue; // read query queue
void * fqueue; // read fetch/cancel queue
void * wal; void * wal;
void * tsdb; void * tsdb;
int64_t sync; int64_t sync;
......
...@@ -203,8 +203,8 @@ int32_t vnodeOpen(int32_t vgId) { ...@@ -203,8 +203,8 @@ int32_t vnodeOpen(int32_t vgId) {
code = vnodeReadVersion(pVnode); code = vnodeReadVersion(pVnode);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
vError("vgId:%d, failed to read version, generate it from data file", pVnode->vgId); vError("vgId:%d, failed to read file version, generate it from data file", pVnode->vgId);
// Allow vnode start even when read version fails, set version as walVersion or zero // Allow vnode start even when read file version fails, set file version as wal version or zero
// vnodeCleanUp(pVnode); // vnodeCleanUp(pVnode);
// return code; // return code;
} }
...@@ -212,8 +212,9 @@ int32_t vnodeOpen(int32_t vgId) { ...@@ -212,8 +212,9 @@ int32_t vnodeOpen(int32_t vgId) {
pVnode->fversion = pVnode->version; pVnode->fversion = pVnode->version;
pVnode->wqueue = dnodeAllocVWriteQueue(pVnode); pVnode->wqueue = dnodeAllocVWriteQueue(pVnode);
pVnode->rqueue = dnodeAllocVReadQueue(pVnode); pVnode->qqueue = dnodeAllocVQueryQueue(pVnode);
if (pVnode->wqueue == NULL || pVnode->rqueue == NULL) { pVnode->fqueue = dnodeAllocVFetchQueue(pVnode);
if (pVnode->wqueue == NULL || pVnode->qqueue == NULL || pVnode->fqueue == NULL) {
vnodeCleanUp(pVnode); vnodeCleanUp(pVnode);
return terrno; return terrno;
} }
...@@ -373,9 +374,14 @@ void vnodeDestroy(SVnodeObj *pVnode) { ...@@ -373,9 +374,14 @@ void vnodeDestroy(SVnodeObj *pVnode) {
pVnode->wqueue = NULL; pVnode->wqueue = NULL;
} }
if (pVnode->rqueue) { if (pVnode->qqueue) {
dnodeFreeVReadQueue(pVnode->rqueue); dnodeFreeVQueryQueue(pVnode->qqueue);
pVnode->rqueue = NULL; pVnode->qqueue = NULL;
}
if (pVnode->fqueue) {
dnodeFreeVFetchQueue(pVnode->fqueue);
pVnode->fqueue = NULL;
} }
tfree(pVnode->rootDir); tfree(pVnode->rootDir);
...@@ -441,6 +447,7 @@ static int32_t vnodeProcessTsdbStatus(void *arg, int32_t status, int32_t eno) { ...@@ -441,6 +447,7 @@ static int32_t vnodeProcessTsdbStatus(void *arg, int32_t status, int32_t eno) {
if (status == TSDB_STATUS_COMMIT_START) { if (status == TSDB_STATUS_COMMIT_START) {
pVnode->isCommiting = 1; pVnode->isCommiting = 1;
pVnode->cversion = pVnode->version;
vDebug("vgId:%d, start commit, fver:%" PRIu64 " vver:%" PRIu64, pVnode->vgId, pVnode->fversion, pVnode->version); vDebug("vgId:%d, start commit, fver:%" PRIu64 " vver:%" PRIu64, pVnode->vgId, pVnode->fversion, pVnode->version);
if (!vnodeInInitStatus(pVnode)) { if (!vnodeInInitStatus(pVnode)) {
return walRenew(pVnode->wal); return walRenew(pVnode->wal);
...@@ -451,7 +458,7 @@ static int32_t vnodeProcessTsdbStatus(void *arg, int32_t status, int32_t eno) { ...@@ -451,7 +458,7 @@ static int32_t vnodeProcessTsdbStatus(void *arg, int32_t status, int32_t eno) {
if (status == TSDB_STATUS_COMMIT_OVER) { if (status == TSDB_STATUS_COMMIT_OVER) {
pVnode->isCommiting = 0; pVnode->isCommiting = 0;
pVnode->isFull = 0; pVnode->isFull = 0;
pVnode->fversion = pVnode->version; pVnode->fversion = pVnode->cversion;
vDebug("vgId:%d, commit over, fver:%" PRIu64 " vver:%" PRIu64, pVnode->vgId, pVnode->fversion, pVnode->version); vDebug("vgId:%d, commit over, fver:%" PRIu64 " vver:%" PRIu64, pVnode->vgId, pVnode->fversion, pVnode->version);
if (!vnodeInInitStatus(pVnode)) { if (!vnodeInInitStatus(pVnode)) {
walRemoveOneOldFile(pVnode->wal); walRemoveOneOldFile(pVnode->wal);
......
...@@ -14,7 +14,6 @@ ...@@ -14,7 +14,6 @@
*/ */
#define _DEFAULT_SOURCE #define _DEFAULT_SOURCE
#define _NON_BLOCKING_RETRIEVE 0
#include "os.h" #include "os.h"
#include "taosmsg.h" #include "taosmsg.h"
#include "tqueue.h" #include "tqueue.h"
...@@ -25,12 +24,12 @@ ...@@ -25,12 +24,12 @@
static int32_t (*vnodeProcessReadMsgFp[TSDB_MSG_TYPE_MAX])(SVnodeObj *pVnode, SVReadMsg *pRead); static int32_t (*vnodeProcessReadMsgFp[TSDB_MSG_TYPE_MAX])(SVnodeObj *pVnode, SVReadMsg *pRead);
static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead); static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead);
static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead); static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead);
static int32_t vnodeNotifyCurrentQhandle(void* handle, void* qhandle, int32_t vgId); static int32_t vnodeNotifyCurrentQhandle(void* handle, void* qhandle, int32_t vgId);
int32_t vnodeInitRead(void) { int32_t vnodeInitRead(void) {
vnodeProcessReadMsgFp[TSDB_MSG_TYPE_QUERY] = vnodeProcessQueryMsg; vnodeProcessReadMsgFp[TSDB_MSG_TYPE_QUERY] = vnodeProcessQueryMsg;
vnodeProcessReadMsgFp[TSDB_MSG_TYPE_FETCH] = vnodeProcessFetchMsg; vnodeProcessReadMsgFp[TSDB_MSG_TYPE_FETCH] = vnodeProcessFetchMsg;
return 0; return 0;
} }
...@@ -115,13 +114,16 @@ int32_t vnodeWriteToRQueue(void *vparam, void *pCont, int32_t contLen, int8_t qt ...@@ -115,13 +114,16 @@ int32_t vnodeWriteToRQueue(void *vparam, void *pCont, int32_t contLen, int8_t qt
} }
pRead->qtype = qtype; pRead->qtype = qtype;
atomic_add_fetch_32(&pVnode->refCount, 1); atomic_add_fetch_32(&pVnode->refCount, 1);
atomic_add_fetch_32(&pVnode->queuedRMsg, 1); atomic_add_fetch_32(&pVnode->queuedRMsg, 1);
vTrace("vgId:%d, write into vrqueue, refCount:%d queued:%d", pVnode->vgId, pVnode->refCount, pVnode->queuedRMsg);
taosWriteQitem(pVnode->rqueue, qtype, pRead); if (pRead->code == TSDB_CODE_RPC_NETWORK_UNAVAIL || pRead->msgType == TSDB_MSG_TYPE_FETCH) {
return TSDB_CODE_SUCCESS; vTrace("vgId:%d, write into vfetch queue, refCount:%d queued:%d", pVnode->vgId, pVnode->refCount, pVnode->queuedRMsg);
return taosWriteQitem(pVnode->fqueue, qtype, pRead);
} else {
vTrace("vgId:%d, write into vquery queue, refCount:%d queued:%d", pVnode->vgId, pVnode->refCount, pVnode->queuedRMsg);
return taosWriteQitem(pVnode->qqueue, qtype, pRead);
}
} }
static int32_t vnodePutItemIntoReadQueue(SVnodeObj *pVnode, void **qhandle, void *ahandle) { static int32_t vnodePutItemIntoReadQueue(SVnodeObj *pVnode, void **qhandle, void *ahandle) {
...@@ -131,7 +133,7 @@ static int32_t vnodePutItemIntoReadQueue(SVnodeObj *pVnode, void **qhandle, void ...@@ -131,7 +133,7 @@ static int32_t vnodePutItemIntoReadQueue(SVnodeObj *pVnode, void **qhandle, void
int32_t code = vnodeWriteToRQueue(pVnode, qhandle, 0, TAOS_QTYPE_QUERY, &rpcMsg); int32_t code = vnodeWriteToRQueue(pVnode, qhandle, 0, TAOS_QTYPE_QUERY, &rpcMsg);
if (code == TSDB_CODE_SUCCESS) { if (code == TSDB_CODE_SUCCESS) {
vDebug("QInfo:%p add to vread queue for exec query", *qhandle); vTrace("QInfo:%p add to vread queue for exec query", *qhandle);
} }
return code; return code;
...@@ -162,7 +164,7 @@ static int32_t vnodeDumpQueryResult(SRspRet *pRet, void *pVnode, void **handle, ...@@ -162,7 +164,7 @@ static int32_t vnodeDumpQueryResult(SRspRet *pRet, void *pVnode, void **handle,
} }
} else { } else {
*freeHandle = true; *freeHandle = true;
vDebug("QInfo:%p exec completed, free handle:%d", *handle, *freeHandle); vTrace("QInfo:%p exec completed, free handle:%d", *handle, *freeHandle);
} }
} else { } else {
SRetrieveTableRsp *pRsp = (SRetrieveTableRsp *)rpcMallocCont(sizeof(SRetrieveTableRsp)); SRetrieveTableRsp *pRsp = (SRetrieveTableRsp *)rpcMallocCont(sizeof(SRetrieveTableRsp));
...@@ -197,26 +199,29 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -197,26 +199,29 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
// qHandle needs to be freed correctly // qHandle needs to be freed correctly
if (pRead->code == TSDB_CODE_RPC_NETWORK_UNAVAIL) { if (pRead->code == TSDB_CODE_RPC_NETWORK_UNAVAIL) {
SRetrieveTableMsg *killQueryMsg = (SRetrieveTableMsg *)pRead->pCont; vError("error rpc msg in query, %s", tstrerror(pRead->code));
killQueryMsg->free = htons(killQueryMsg->free);
killQueryMsg->qhandle = htobe64(killQueryMsg->qhandle);
vWarn("QInfo:%p connection %p broken, kill query", (void *)killQueryMsg->qhandle, pRead->rpcHandle);
assert(pRead->contLen > 0 && killQueryMsg->free == 1);
void **qhandle = qAcquireQInfo(pVnode->qMgmt, (uint64_t)killQueryMsg->qhandle);
if (qhandle == NULL || *qhandle == NULL) {
vWarn("QInfo:%p invalid qhandle, no matched query handle, conn:%p", (void *)killQueryMsg->qhandle,
pRead->rpcHandle);
} else {
assert(*qhandle == (void *)killQueryMsg->qhandle);
qKillQuery(*qhandle);
qReleaseQInfo(pVnode->qMgmt, (void **)&qhandle, true);
}
return TSDB_CODE_TSC_QUERY_CANCELLED;
} }
// assert(pRead->code != TSDB_CODE_RPC_NETWORK_UNAVAIL);
// if (pRead->code == TSDB_CODE_RPC_NETWORK_UNAVAIL) {
// SCancelQueryMsg *pMsg = (SCancelQueryMsg *)pRead->pCont;
//// pMsg->free = htons(killQueryMsg->free);
// pMsg->qhandle = htobe64(pMsg->qhandle);
//
// vWarn("QInfo:%p connection %p broken, kill query", (void *)pMsg->qhandle, pRead->rpcHandle);
//// assert(pRead->contLen > 0 && pMsg->free == 1);
//
// void **qhandle = qAcquireQInfo(pVnode->qMgmt, (uint64_t)pMsg->qhandle);
// if (qhandle == NULL || *qhandle == NULL) {
// vWarn("QInfo:%p invalid qhandle, no matched query handle, conn:%p", (void *)pMsg->qhandle, pRead->rpcHandle);
// } else {
// assert(*qhandle == (void *)pMsg->qhandle);
//
// qKillQuery(*qhandle);
// qReleaseQInfo(pVnode->qMgmt, (void **)&qhandle, true);
// }
//
// return TSDB_CODE_TSC_QUERY_CANCELLED;
// }
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
void ** handle = NULL; void ** handle = NULL;
...@@ -261,7 +266,7 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -261,7 +266,7 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
} }
if (handle != NULL) { if (handle != NULL) {
vDebug("vgId:%d, QInfo:%p, dnode query msg disposed, create qhandle and returns to app", vgId, *handle); vTrace("vgId:%d, QInfo:%p, dnode query msg disposed, create qhandle and returns to app", vgId, *handle);
code = vnodePutItemIntoReadQueue(pVnode, handle, pRead->rpcHandle); code = vnodePutItemIntoReadQueue(pVnode, handle, pRead->rpcHandle);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
pRsp->code = code; pRsp->code = code;
...@@ -273,10 +278,10 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -273,10 +278,10 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
assert(pCont != NULL); assert(pCont != NULL);
void **qhandle = (void **)pRead->qhandle; void **qhandle = (void **)pRead->qhandle;
vDebug("vgId:%d, QInfo:%p, dnode continues to exec query", pVnode->vgId, *qhandle); vTrace("vgId:%d, QInfo:%p, dnode continues to exec query", pVnode->vgId, *qhandle);
// In the retrieve blocking model, only 50% CPU will be used in query processing // In the retrieve blocking model, only 50% CPU will be used in query processing
if (tsHalfCoresForQuery) { if (tsRetrieveBlockingModel) {
qTableQuery(*qhandle); // do execute query qTableQuery(*qhandle); // do execute query
qReleaseQInfo(pVnode->qMgmt, (void **)&qhandle, false); qReleaseQInfo(pVnode->qMgmt, (void **)&qhandle, false);
} else { } else {
...@@ -289,7 +294,7 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -289,7 +294,7 @@ static int32_t vnodeProcessQueryMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
pRead->rpcHandle = qGetResultRetrieveMsg(*qhandle); pRead->rpcHandle = qGetResultRetrieveMsg(*qhandle);
assert(pRead->rpcHandle != NULL); assert(pRead->rpcHandle != NULL);
vDebug("vgId:%d, QInfo:%p, start to build retrieval rsp after query paused, %p", pVnode->vgId, *qhandle, vTrace("vgId:%d, QInfo:%p, start to build retrieval rsp after query paused, %p", pVnode->vgId, *qhandle,
pRead->rpcHandle); pRead->rpcHandle);
// set the real rsp error code // set the real rsp error code
...@@ -322,7 +327,7 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -322,7 +327,7 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
pRetrieve->free = htons(pRetrieve->free); pRetrieve->free = htons(pRetrieve->free);
pRetrieve->qhandle = htobe64(pRetrieve->qhandle); pRetrieve->qhandle = htobe64(pRetrieve->qhandle);
vDebug("vgId:%d, QInfo:%p, retrieve msg is disposed, free:%d, conn:%p", pVnode->vgId, (void *)pRetrieve->qhandle, vTrace("vgId:%d, QInfo:%p, retrieve msg is disposed, free:%d, conn:%p", pVnode->vgId, (void *)pRetrieve->qhandle,
pRetrieve->free, pRead->rpcHandle); pRetrieve->free, pRead->rpcHandle);
memset(pRet, 0, sizeof(SRspRet)); memset(pRet, 0, sizeof(SRspRet));
...@@ -338,11 +343,12 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -338,11 +343,12 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
} }
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
vError("vgId:%d, invalid handle in retrieving result, code:0x%08x, QInfo:%p", pVnode->vgId, code, (void *)pRetrieve->qhandle); vError("vgId:%d, invalid handle in retrieving result, code:%s, QInfo:%p", pVnode->vgId, tstrerror(code), (void *)pRetrieve->qhandle);
vnodeBuildNoResultQueryRsp(pRet); vnodeBuildNoResultQueryRsp(pRet);
return code; return code;
} }
// kill current query and free corresponding resources.
if (pRetrieve->free == 1) { if (pRetrieve->free == 1) {
vWarn("vgId:%d, QInfo:%p, retrieve msg received to kill query and free qhandle", pVnode->vgId, *handle); vWarn("vgId:%d, QInfo:%p, retrieve msg received to kill query and free qhandle", pVnode->vgId, *handle);
qKillQuery(*handle); qKillQuery(*handle);
...@@ -373,10 +379,8 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -373,10 +379,8 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
memset(pRet->rsp, 0, sizeof(SRetrieveTableRsp)); memset(pRet->rsp, 0, sizeof(SRetrieveTableRsp));
freeHandle = true; freeHandle = true;
} else { // result is not ready, return immediately } else { // result is not ready, return immediately
assert(buildRes == true);
// Only effects in the non-blocking model // Only effects in the non-blocking model
if (!tsHalfCoresForQuery) { if (!tsRetrieveBlockingModel) {
if (!buildRes) { if (!buildRes) {
assert(pRead->rpcHandle != NULL); assert(pRead->rpcHandle != NULL);
...@@ -401,12 +405,11 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) { ...@@ -401,12 +405,11 @@ static int32_t vnodeProcessFetchMsg(SVnodeObj *pVnode, SVReadMsg *pRead) {
// notify connection(handle) that current qhandle is created, if current connection from // notify connection(handle) that current qhandle is created, if current connection from
// client is broken, the query needs to be killed immediately. // client is broken, the query needs to be killed immediately.
int32_t vnodeNotifyCurrentQhandle(void *handle, void *qhandle, int32_t vgId) { int32_t vnodeNotifyCurrentQhandle(void *handle, void *qhandle, int32_t vgId) {
SRetrieveTableMsg *killQueryMsg = rpcMallocCont(sizeof(SRetrieveTableMsg)); SRetrieveTableMsg *pMsg = rpcMallocCont(sizeof(SRetrieveTableMsg));
killQueryMsg->qhandle = htobe64((uint64_t)qhandle); pMsg->qhandle = htobe64((uint64_t)qhandle);
killQueryMsg->free = htons(1); pMsg->header.vgId = htonl(vgId);
killQueryMsg->header.vgId = htonl(vgId); pMsg->header.contLen = htonl(sizeof(SRetrieveTableMsg));
killQueryMsg->header.contLen = htonl(sizeof(SRetrieveTableMsg));
vTrace("QInfo:%p register qhandle to connect:%p", qhandle, handle);
vDebug("QInfo:%p register qhandle to connect:%p", qhandle, handle); return rpcReportProgress(handle, (char *)pMsg, sizeof(SRetrieveTableMsg));
return rpcReportProgress(handle, (char *)killQueryMsg, sizeof(SRetrieveTableMsg));
} }
...@@ -243,8 +243,10 @@ int32_t vnodeWriteToWQueue(void *vparam, void *wparam, int32_t qtype, void *rpar ...@@ -243,8 +243,10 @@ int32_t vnodeWriteToWQueue(void *vparam, void *wparam, int32_t qtype, void *rpar
int32_t queued = atomic_add_fetch_32(&pVnode->queuedWMsg, 1); int32_t queued = atomic_add_fetch_32(&pVnode->queuedWMsg, 1);
if (queued > MAX_QUEUED_MSG_NUM) { if (queued > MAX_QUEUED_MSG_NUM) {
vDebug("vgId:%d, too many msg:%d in vwqueue, flow control", pVnode->vgId, queued); int32_t ms = (queued / MAX_QUEUED_MSG_NUM) * 10 + 3;
taosMsleep(1); if (ms > 100) ms = 100;
vDebug("vgId:%d, too many msg:%d in vwqueue, flow control %dms", pVnode->vgId, queued, ms);
taosMsleep(ms);
} }
code = vnodePerformFlowCtrl(pWrite); code = vnodePerformFlowCtrl(pWrite);
...@@ -271,6 +273,8 @@ static void vnodeFlowCtrlMsgToWQueue(void *param, void *tmrId) { ...@@ -271,6 +273,8 @@ static void vnodeFlowCtrlMsgToWQueue(void *param, void *tmrId) {
SVnodeObj * pVnode = pWrite->pVnode; SVnodeObj * pVnode = pWrite->pVnode;
int32_t code = TSDB_CODE_VND_SYNCING; int32_t code = TSDB_CODE_VND_SYNCING;
if (pVnode->flowctrlLevel <= 0) code = TSDB_CODE_VND_IS_FLOWCTRL;
pWrite->processedCount++; pWrite->processedCount++;
if (pWrite->processedCount > 100) { if (pWrite->processedCount > 100) {
vError("vgId:%d, msg:%p, failed to process since %s, retry:%d", pVnode->vgId, pWrite, tstrerror(code), vError("vgId:%d, msg:%p, failed to process since %s, retry:%d", pVnode->vgId, pWrite, tstrerror(code),
...@@ -290,8 +294,8 @@ static void vnodeFlowCtrlMsgToWQueue(void *param, void *tmrId) { ...@@ -290,8 +294,8 @@ static void vnodeFlowCtrlMsgToWQueue(void *param, void *tmrId) {
static int32_t vnodePerformFlowCtrl(SVWriteMsg *pWrite) { static int32_t vnodePerformFlowCtrl(SVWriteMsg *pWrite) {
SVnodeObj *pVnode = pWrite->pVnode; SVnodeObj *pVnode = pWrite->pVnode;
if (pVnode->flowctrlLevel <= 0) return 0;
if (pWrite->qtype != TAOS_QTYPE_RPC) return 0; if (pWrite->qtype != TAOS_QTYPE_RPC) return 0;
if (pVnode->queuedWMsg < MAX_QUEUED_MSG_NUM && pVnode->flowctrlLevel <= 0) return 0;
if (tsFlowCtrl == 0) { if (tsFlowCtrl == 0) {
int32_t ms = pow(2, pVnode->flowctrlLevel + 2); int32_t ms = pow(2, pVnode->flowctrlLevel + 2);
......
...@@ -38,7 +38,7 @@ extern int32_t wDebugFlag; ...@@ -38,7 +38,7 @@ extern int32_t wDebugFlag;
#define WAL_SIGNATURE ((uint32_t)(0xFAFBFDFE)) #define WAL_SIGNATURE ((uint32_t)(0xFAFBFDFE))
#define WAL_PATH_LEN (TSDB_FILENAME_LEN + 12) #define WAL_PATH_LEN (TSDB_FILENAME_LEN + 12)
#define WAL_FILE_LEN (WAL_PATH_LEN + 32) #define WAL_FILE_LEN (WAL_PATH_LEN + 32)
#define WAL_FILE_NUM 3 #define WAL_FILE_NUM 1 // 3
typedef struct { typedef struct {
uint64_t version; uint64_t version;
......
...@@ -173,7 +173,7 @@ int32_t walRestore(void *handle, void *pVnode, FWalWrite writeFp) { ...@@ -173,7 +173,7 @@ int32_t walRestore(void *handle, void *pVnode, FWalWrite writeFp) {
continue; continue;
} }
wInfo("vgId:%d, file:%s, restore success", pWal->vgId, walName); wInfo("vgId:%d, file:%s, restore success, wver:%" PRIu64, pWal->vgId, walName, pWal->version);
count++; count++;
} }
...@@ -267,8 +267,6 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch ...@@ -267,8 +267,6 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
return TAOS_SYSTEM_ERROR(errno); return TAOS_SYSTEM_ERROR(errno);
} }
wDebug("vgId:%d, file:%s, start to restore", pWal->vgId, name);
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
int64_t offset = 0; int64_t offset = 0;
SWalHead *pHead = buffer; SWalHead *pHead = buffer;
......
HELP.md
target/
!.mvn/wrapper/maven-wrapper.jar
!**/src/main/**/target/
!**/src/test/**/target/
### STS ###
.apt_generated
.classpath
.factorypath
.project
.settings
.springBeans
.sts4-cache
### IntelliJ IDEA ###
.idea
*.iws
*.iml
*.ipr
### NetBeans ###
/nbproject/private/
/nbbuild/
/dist/
/nbdist/
/.nb-gradle/
build/
!**/src/main/**/build/
!**/src/test/**/build/
### VS Code ###
.vscode/
/*
* Copyright 2007-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import java.net.*;
import java.io.*;
import java.nio.channels.*;
import java.util.Properties;
public class MavenWrapperDownloader {
private static final String WRAPPER_VERSION = "0.5.6";
/**
* Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided.
*/
private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/"
+ WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar";
/**
* Path to the maven-wrapper.properties file, which might contain a downloadUrl property to
* use instead of the default one.
*/
private static final String MAVEN_WRAPPER_PROPERTIES_PATH =
".mvn/wrapper/maven-wrapper.properties";
/**
* Path where the maven-wrapper.jar will be saved to.
*/
private static final String MAVEN_WRAPPER_JAR_PATH =
".mvn/wrapper/maven-wrapper.jar";
/**
* Name of the property which should be used to override the default download url for the wrapper.
*/
private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl";
public static void main(String args[]) {
System.out.println("- Downloader started");
File baseDirectory = new File(args[0]);
System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath());
// If the maven-wrapper.properties exists, read it and check if it contains a custom
// wrapperUrl parameter.
File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH);
String url = DEFAULT_DOWNLOAD_URL;
if (mavenWrapperPropertyFile.exists()) {
FileInputStream mavenWrapperPropertyFileInputStream = null;
try {
mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile);
Properties mavenWrapperProperties = new Properties();
mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream);
url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url);
} catch (IOException e) {
System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'");
} finally {
try {
if (mavenWrapperPropertyFileInputStream != null) {
mavenWrapperPropertyFileInputStream.close();
}
} catch (IOException e) {
// Ignore ...
}
}
}
System.out.println("- Downloading from: " + url);
File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH);
if (!outputFile.getParentFile().exists()) {
if (!outputFile.getParentFile().mkdirs()) {
System.out.println(
"- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'");
}
}
System.out.println("- Downloading to: " + outputFile.getAbsolutePath());
try {
downloadFileFromURL(url, outputFile);
System.out.println("Done");
System.exit(0);
} catch (Throwable e) {
System.out.println("- Error downloading");
e.printStackTrace();
System.exit(1);
}
}
private static void downloadFileFromURL(String urlString, File destination) throws Exception {
if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) {
String username = System.getenv("MVNW_USERNAME");
char[] password = System.getenv("MVNW_PASSWORD").toCharArray();
Authenticator.setDefault(new Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication(username, password);
}
});
}
URL website = new URL(urlString);
ReadableByteChannel rbc;
rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream(destination);
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
fos.close();
rbc.close();
}
}
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar
#!/bin/sh
# ----------------------------------------------------------------------------
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# Maven Start Up Batch script
#
# Required ENV vars:
# ------------------
# JAVA_HOME - location of a JDK home dir
#
# Optional ENV vars
# -----------------
# M2_HOME - location of maven2's installed home dir
# MAVEN_OPTS - parameters passed to the Java VM when running Maven
# e.g. to debug Maven itself, use
# set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
# MAVEN_SKIP_RC - flag to disable loading of mavenrc files
# ----------------------------------------------------------------------------
if [ -z "$MAVEN_SKIP_RC" ]; then
if [ -f /etc/mavenrc ]; then
. /etc/mavenrc
fi
if [ -f "$HOME/.mavenrc" ]; then
. "$HOME/.mavenrc"
fi
fi
# OS specific support. $var _must_ be set to either true or false.
cygwin=false
darwin=false
mingw=false
case "$(uname)" in
CYGWIN*) cygwin=true ;;
MINGW*) mingw=true ;;
Darwin*)
darwin=true
# Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
# See https://developer.apple.com/library/mac/qa/qa1170/_index.html
if [ -z "$JAVA_HOME" ]; then
if [ -x "/usr/libexec/java_home" ]; then
export JAVA_HOME="$(/usr/libexec/java_home)"
else
export JAVA_HOME="/Library/Java/Home"
fi
fi
;;
esac
if [ -z "$JAVA_HOME" ]; then
if [ -r /etc/gentoo-release ]; then
JAVA_HOME=$(java-config --jre-home)
fi
fi
if [ -z "$M2_HOME" ]; then
## resolve links - $0 may be a link to maven's home
PRG="$0"
# need this for relative symlinks
while [ -h "$PRG" ]; do
ls=$(ls -ld "$PRG")
link=$(expr "$ls" : '.*-> \(.*\)$')
if expr "$link" : '/.*' >/dev/null; then
PRG="$link"
else
PRG="$(dirname "$PRG")/$link"
fi
done
saveddir=$(pwd)
M2_HOME=$(dirname "$PRG")/..
# make it fully qualified
M2_HOME=$(cd "$M2_HOME" && pwd)
cd "$saveddir"
# echo Using m2 at $M2_HOME
fi
# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=$(cygpath --unix "$M2_HOME")
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=$(cygpath --unix "$JAVA_HOME")
[ -n "$CLASSPATH" ] &&
CLASSPATH=$(cygpath --path --unix "$CLASSPATH")
fi
# For Mingw, ensure paths are in UNIX format before anything is touched
if $mingw; then
[ -n "$M2_HOME" ] &&
M2_HOME="$( (
cd "$M2_HOME"
pwd
))"
[ -n "$JAVA_HOME" ] &&
JAVA_HOME="$( (
cd "$JAVA_HOME"
pwd
))"
fi
if [ -z "$JAVA_HOME" ]; then
javaExecutable="$(which javac)"
if [ -n "$javaExecutable" ] && ! [ "$(expr \"$javaExecutable\" : '\([^ ]*\)')" = "no" ]; then
# readlink(1) is not available as standard on Solaris 10.
readLink=$(which readlink)
if [ ! $(expr "$readLink" : '\([^ ]*\)') = "no" ]; then
if $darwin; then
javaHome="$(dirname \"$javaExecutable\")"
javaExecutable="$(cd \"$javaHome\" && pwd -P)/javac"
else
javaExecutable="$(readlink -f \"$javaExecutable\")"
fi
javaHome="$(dirname \"$javaExecutable\")"
javaHome=$(expr "$javaHome" : '\(.*\)/bin')
JAVA_HOME="$javaHome"
export JAVA_HOME
fi
fi
fi
if [ -z "$JAVACMD" ]; then
if [ -n "$JAVA_HOME" ]; then
if [ -x "$JAVA_HOME/jre/sh/java" ]; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
else
JAVACMD="$(which java)"
fi
fi
if [ ! -x "$JAVACMD" ]; then
echo "Error: JAVA_HOME is not defined correctly." >&2
echo " We cannot execute $JAVACMD" >&2
exit 1
fi
if [ -z "$JAVA_HOME" ]; then
echo "Warning: JAVA_HOME environment variable is not set."
fi
CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
# traverses directory structure from process work directory to filesystem root
# first directory with .mvn subdirectory is considered project base directory
find_maven_basedir() {
if [ -z "$1" ]; then
echo "Path not specified to find_maven_basedir"
return 1
fi
basedir="$1"
wdir="$1"
while [ "$wdir" != '/' ]; do
if [ -d "$wdir"/.mvn ]; then
basedir=$wdir
break
fi
# workaround for JBEAP-8937 (on Solaris 10/Sparc)
if [ -d "${wdir}" ]; then
wdir=$(
cd "$wdir/.."
pwd
)
fi
# end of workaround
done
echo "${basedir}"
}
# concatenates all lines of a file
concat_lines() {
if [ -f "$1" ]; then
echo "$(tr -s '\n' ' ' <"$1")"
fi
}
BASE_DIR=$(find_maven_basedir "$(pwd)")
if [ -z "$BASE_DIR" ]; then
exit 1
fi
##########################################################################################
# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
# This allows using the maven wrapper in projects that prohibit checking in binary data.
##########################################################################################
if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found .mvn/wrapper/maven-wrapper.jar"
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
fi
if [ -n "$MVNW_REPOURL" ]; then
jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
else
jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
fi
while IFS="=" read key value; do
case "$key" in wrapperUrl)
jarUrl="$value"
break
;;
esac
done <"$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
if [ "$MVNW_VERBOSE" = true ]; then
echo "Downloading from: $jarUrl"
fi
wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
if $cygwin; then
wrapperJarPath=$(cygpath --path --windows "$wrapperJarPath")
fi
if command -v wget >/dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found wget ... using wget"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
wget "$jarUrl" -O "$wrapperJarPath"
else
wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath"
fi
elif command -v curl >/dev/null; then
if [ "$MVNW_VERBOSE" = true ]; then
echo "Found curl ... using curl"
fi
if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
curl -o "$wrapperJarPath" "$jarUrl" -f
else
curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
fi
else
if [ "$MVNW_VERBOSE" = true ]; then
echo "Falling back to using Java to download"
fi
javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
# For Cygwin, switch paths to Windows format before running javac
if $cygwin; then
javaClass=$(cygpath --path --windows "$javaClass")
fi
if [ -e "$javaClass" ]; then
if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Compiling MavenWrapperDownloader.java ..."
fi
# Compiling the Java class
("$JAVA_HOME/bin/javac" "$javaClass")
fi
if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
# Running the downloader
if [ "$MVNW_VERBOSE" = true ]; then
echo " - Running MavenWrapperDownloader.java ..."
fi
("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
fi
fi
fi
fi
##########################################################################################
# End of extension
##########################################################################################
export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
if [ "$MVNW_VERBOSE" = true ]; then
echo $MAVEN_PROJECTBASEDIR
fi
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
[ -n "$M2_HOME" ] &&
M2_HOME=$(cygpath --path --windows "$M2_HOME")
[ -n "$JAVA_HOME" ] &&
JAVA_HOME=$(cygpath --path --windows "$JAVA_HOME")
[ -n "$CLASSPATH" ] &&
CLASSPATH=$(cygpath --path --windows "$CLASSPATH")
[ -n "$MAVEN_PROJECTBASEDIR" ] &&
MAVEN_PROJECTBASEDIR=$(cygpath --path --windows "$MAVEN_PROJECTBASEDIR")
fi
# Provide a "standardized" way to retrieve the CLI args that will
# work with both Windows and non-Windows executions.
MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
export MAVEN_CMD_LINE_ARGS
WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
exec "$JAVACMD" \
$MAVEN_OPTS \
-classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
"-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"
@REM ----------------------------------------------------------------------------
@REM Licensed to the Apache Software Foundation (ASF) under one
@REM or more contributor license agreements. See the NOTICE file
@REM distributed with this work for additional information
@REM regarding copyright ownership. The ASF licenses this file
@REM to you under the Apache License, Version 2.0 (the
@REM "License"); you may not use this file except in compliance
@REM with the License. You may obtain a copy of the License at
@REM
@REM https://www.apache.org/licenses/LICENSE-2.0
@REM
@REM Unless required by applicable law or agreed to in writing,
@REM software distributed under the License is distributed on an
@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@REM KIND, either express or implied. See the License for the
@REM specific language governing permissions and limitations
@REM under the License.
@REM ----------------------------------------------------------------------------
@REM ----------------------------------------------------------------------------
@REM Maven Start Up Batch script
@REM
@REM Required ENV vars:
@REM JAVA_HOME - location of a JDK home dir
@REM
@REM Optional ENV vars
@REM M2_HOME - location of maven2's installed home dir
@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
@REM e.g. to debug Maven itself, use
@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
@REM ----------------------------------------------------------------------------
@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
@echo off
@REM set title of command window
title %0
@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
@REM set %HOME% to equivalent of $HOME
if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
@REM Execute a user defined script before this one
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
@REM check for pre script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
:skipRcPre
@setlocal
set ERROR_CODE=0
@REM To isolate internal variables from possible post scripts, we use another setlocal
@setlocal
@REM ==== START VALIDATION ====
if not "%JAVA_HOME%" == "" goto OkJHome
echo.
echo Error: JAVA_HOME not found in your environment. >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
:OkJHome
if exist "%JAVA_HOME%\bin\java.exe" goto init
echo.
echo Error: JAVA_HOME is set to an invalid directory. >&2
echo JAVA_HOME = "%JAVA_HOME%" >&2
echo Please set the JAVA_HOME variable in your environment to match the >&2
echo location of your Java installation. >&2
echo.
goto error
@REM ==== END VALIDATION ====
:init
@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
@REM Fallback to current working directory if not found.
set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
set EXEC_DIR=%CD%
set WDIR=%EXEC_DIR%
:findBaseDir
IF EXIST "%WDIR%"\.mvn goto baseDirFound
cd ..
IF "%WDIR%"=="%CD%" goto baseDirNotFound
set WDIR=%CD%
goto findBaseDir
:baseDirFound
set MAVEN_PROJECTBASEDIR=%WDIR%
cd "%EXEC_DIR%"
goto endDetectBaseDir
:baseDirNotFound
set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
cd "%EXEC_DIR%"
:endDetectBaseDir
IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
@setlocal EnableExtensions EnableDelayedExpansion
for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
:endReadAdditionalConfig
SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
)
@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
if exist %WRAPPER_JAR% (
if "%MVNW_VERBOSE%" == "true" (
echo Found %WRAPPER_JAR%
)
) else (
if not "%MVNW_REPOURL%" == "" (
SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar"
)
if "%MVNW_VERBOSE%" == "true" (
echo Couldn't find %WRAPPER_JAR%, downloading it ...
echo Downloading from: %DOWNLOAD_URL%
)
powershell -Command "&{"^
"$webclient = new-object System.Net.WebClient;"^
"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
"}"^
"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
"}"
if "%MVNW_VERBOSE%" == "true" (
echo Finished downloading %WRAPPER_JAR%
)
)
@REM End of extension
@REM Provide a "standardized" way to retrieve the CLI args that will
@REM work with both Windows and non-Windows executions.
set MAVEN_CMD_LINE_ARGS=%*
%MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
if ERRORLEVEL 1 goto error
goto end
:error
set ERROR_CODE=1
:end
@endlocal & set ERROR_CODE=%ERROR_CODE%
if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
@REM check for post script, once with legacy .bat ending and once with .cmd ending
if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
:skipRcPost
@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
if "%MAVEN_BATCH_PAUSE%" == "on" pause
if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
exit /B %ERROR_CODE%
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.0</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.taosdata</groupId>
<artifactId>taosdemo</artifactId>
<version>2.0</version>
<name>taosdemo</name>
<description>Demo project for TDengine</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<!-- taos jdbc -->
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.14</version>
</dependency>
<!-- mysql -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
</dependency>
<!-- mybatis-plus -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>3.1.2</version>
</dependency>
<!-- log4j -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<!-- springboot -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.1.4</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<includes>
<include>**/*.properties</include>
<include>**/*.xml</include>
</includes>
<filtering>true</filtering>
</resource>
<resource>
<directory>src/main/java</directory>
<includes>
<include>**/*.properties</include>
<include>**/*.xml</include>
</includes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
package com.taosdata.taosdemo;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@MapperScan(basePackages = {"com.taosdata.taosdemo.mapper"})
@SpringBootApplication
public class TaosdemoApplication {
public static void main(String[] args) {
SpringApplication.run(TaosdemoApplication.class, args);
}
}
package com.taosdata.taosdemo.components;
import com.taosdata.taosdemo.domain.*;
import com.taosdata.taosdemo.service.DatabaseService;
import com.taosdata.taosdemo.service.SubTableService;
import com.taosdata.taosdemo.service.SuperTableService;
import com.taosdata.taosdemo.service.data.SubTableMetaGenerator;
import com.taosdata.taosdemo.service.data.SubTableValueGenerator;
import com.taosdata.taosdemo.service.data.SuperTableMetaGenerator;
import com.taosdata.taosdemo.utils.JdbcTaosdemoConfig;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import java.util.*;
import java.util.concurrent.TimeUnit;
@Component
public class TaosDemoCommandLineRunner implements CommandLineRunner {
private static Logger logger = Logger.getLogger(TaosDemoCommandLineRunner.class);
@Autowired
private DatabaseService databaseService;
@Autowired
private SuperTableService superTableService;
@Autowired
private SubTableService subTableService;
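// schema and data prepared once in prepareData() and reused by the create/insert tasks below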
private SuperTableMeta superTableMeta;
private List<SubTableMeta> subTableMetaList;
private List<SubTableValue> subTableValueList;
private List<List<SubTableValue>> dataList;
@Override
public void run(String... args) throws Exception {
// read the configuration parameters
JdbcTaosdemoConfig config = new JdbcTaosdemoConfig(args);
boolean isHelp = Arrays.asList(args).contains("--help");
if (isHelp) {
JdbcTaosdemoConfig.printHelp();
System.exit(0);
}
// prepare the data
prepareData(config);
// create the database
createDatabaseTask(config);
// create the tables
createTableTask(config);
// insert the data
insertTask(config);
// query: 1. generate the query statements, 2. execute the queries
// drop the tables
if (config.dropTable) {
superTableService.drop(config.database, config.superTable);
}
System.exit(0);
}
private void createDatabaseTask(JdbcTaosdemoConfig config) {
long start = System.currentTimeMillis();
Map<String, String> databaseParam = new HashMap<>();
databaseParam.put("database", config.database);
databaseParam.put("keep", Integer.toString(config.keep));
databaseParam.put("days", Integer.toString(config.days));
databaseParam.put("replica", Integer.toString(config.replica));
//TODO: other database parameters
databaseService.dropDatabase(config.database);
databaseService.createDatabase(databaseParam);
databaseService.useDatabase(config.database);
long end = System.currentTimeMillis();
logger.info(">>> create database time cost : " + (end - start) + " ms.");
}
// create the super table, in one of three ways: 1. from a user-specified SQL statement, 2. with a specified number of fields and tags, 3. with the default schema
private void createTableTask(JdbcTaosdemoConfig config) {
long start = System.currentTimeMillis();
if (config.doCreateTable) {
superTableService.create(superTableMeta);
// create the sub-tables in batches
subTableService.createSubTable(subTableMetaList, config.numOfThreadsForCreate);
}
long end = System.currentTimeMillis();
logger.info(">>> create table time cost : " + (end - start) + " ms.");
}
private void insertTask(JdbcTaosdemoConfig config) {
long start = System.currentTimeMillis();
int numOfThreadsForInsert = config.numOfThreadsForInsert;
int sleep = config.sleep;
if (config.autoCreateTable) {
// batch insert with automatic table creation
dataList.stream().forEach(subTableValues -> {
subTableService.insertAutoCreateTable(subTableValues, numOfThreadsForInsert);
sleep(sleep);
});
} else {
dataList.stream().forEach(subTableValues -> {
subTableService.insert(subTableValues, numOfThreadsForInsert);
sleep(sleep);
});
}
long end = System.currentTimeMillis();
logger.info(">>> insert time cost : " + (end - start) + " ms.");
}
private void prepareData(JdbcTaosdemoConfig config) {
long start = System.currentTimeMillis();
// meta of the super table
superTableMeta = createSupertable(config);
// meta of the sub-tables
subTableMetaList = SubTableMetaGenerator.generate(superTableMeta, config.numOfTables, config.tablePrefix);
// data of the sub-tables
subTableValueList = SubTableValueGenerator.generate(subTableMetaList, config.numOfRowsPerTable, config.startTime, config.timeGap);
// if out-of-order data is requested, disrupt the generated data
if (config.order != 0) {
SubTableValueGenerator.disrupt(subTableValueList, config.rate, config.range);
}
// split the data into batches
int numOfTables = config.numOfTables;
int numOfTablesPerSQL = config.numOfTablesPerSQL;
int numOfRowsPerTable = config.numOfRowsPerTable;
int numOfValuesPerSQL = config.numOfValuesPerSQL;
dataList = SubTableValueGenerator.split(subTableValueList, numOfTables, numOfTablesPerSQL, numOfRowsPerTable, numOfValuesPerSQL);
long end = System.currentTimeMillis();
logger.info(">>> prepare data time cost : " + (end - start) + " ms.");
}
private SuperTableMeta createSupertable(JdbcTaosdemoConfig config) {
SuperTableMeta tableMeta;
// create super table
logger.info(">>> create super table <<<");
if (config.superTableSQL != null) {
// use a sql to create super table
tableMeta = SuperTableMetaGenerator.generate(config.superTableSQL);
} else if (config.numOfFields == 0) {
// default sql = "create table test.weather (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
SuperTableMeta superTableMeta = new SuperTableMeta();
superTableMeta.setDatabase(config.database);
superTableMeta.setName(config.superTable);
List<FieldMeta> fields = new ArrayList<>();
fields.add(new FieldMeta("ts", "timestamp"));
fields.add(new FieldMeta("temperature", "float"));
fields.add(new FieldMeta("humidity", "int"));
superTableMeta.setFields(fields);
List<TagMeta> tags = new ArrayList<>();
tags.add(new TagMeta("location", "nchar(64)"));
tags.add(new TagMeta("groupId", "int"));
superTableMeta.setTags(tags);
return superTableMeta;
} else {
// create super table with specified field size and tag size
tableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields, config.prefixOfFields, config.numOfTags, config.prefixOfTags);
}
return tableMeta;
}
private static void sleep(int sleep) {
if (sleep <= 0)
return;
try {
TimeUnit.MILLISECONDS.sleep(sleep);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
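// Usage sketch: the pipeline above runs prepareData -> createDatabaseTask -> createTableTask -> insertTask,
// followed by an optional drop when -dropTable is true. An illustrative invocation (host and sizes are
// assumptions; the option names come from JdbcTaosdemoConfig):
//   java -jar jdbc-taosdemo-2.0.jar -host 127.0.0.1 -database test -numOfTables 100 \
//        -numOfRowsPerTable 100 -numOfThreadsForInsert 4 -order 1 -rate 10 -range 1000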
package com.taosdata.taosdemo.controller;
import com.taosdata.taosdemo.service.DatabaseService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.Map;
@RestController
@RequestMapping
public class DatabaseController {
@Autowired
private DatabaseService databaseService;
/**
* create database
***/
@PostMapping
public int create(@RequestBody Map<String, String> map) {
return databaseService.createDatabase(map);
}
/**
* drop database
**/
@DeleteMapping("/{dbname}")
public int delete(@PathVariable("dbname") String dbname) {
return databaseService.dropDatabase(dbname);
}
/**
* use database
**/
@GetMapping("/{dbname}")
public int use(@PathVariable("dbname") String dbname) {
return databaseService.useDatabase(dbname);
}
}
package com.taosdata.taosdemo.controller;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class InsertController {
//TODO: write to one table with multiple threads, thread = 10, table = 1
//TODO: write to multiple tables in one batch, insert into t1 using weather values() t2 using weather values()
//TODO: insert frequency
//TODO: specify the number of records per table
//TODO: in order or out of order
//TODO: proportion and range of the out-of-order data
//TODO: pre-created tables vs. automatic table creation
//TODO: write to multiple tables in one batch
}
package com.taosdata.taosdemo.controller;
import com.taosdata.taosdemo.domain.TableValue;
import com.taosdata.taosdemo.service.SuperTableService;
import com.taosdata.taosdemo.service.TableService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class SubTableController {
@Autowired
private TableService tableService;
@Autowired
private SuperTableService superTableService;
//TODO: create one sub table from the super table
//TODO: create multiple sub tables from the super table
//TODO: create sub tables from the super table with multiple threads
//TODO: create sub tables from the super table with multiple threads, specifying the name prefix, the number of sub tables and the number of threads
/**
* create a table: super table or regular table
**/
/**
* create a sub table of a super table
**/
@PostMapping("/{database}/{superTable}")
public int createTable(@PathVariable("database") String database,
@PathVariable("superTable") String superTable,
@RequestBody TableValue tableMetadata) {
tableMetadata.setDatabase(database);
return 0;
}
}
package com.taosdata.taosdemo.controller;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.service.SuperTableService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class SuperTableController {
@Autowired
private SuperTableService superTableService;
@PostMapping("/{database}")
public int createTable(@PathVariable("database") String database, @RequestBody SuperTableMeta tableMetadata) {
tableMetadata.setDatabase(database);
return superTableService.create(tableMetadata);
}
//TODO: drop the super table
//TODO: query the super table
//TODO: aggregate queries over the table
}
package com.taosdata.taosdemo.controller;
public class TableController {
//TODO: create a regular table, create table(ts timestamp, temperature float)
//TODO: create a regular table, specifying the number of columns, including the leading timestamp column
//TODO: create a regular table, specifying the name and type of every column
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
@Data
public class FieldMeta {
private String name;
private String type;
public FieldMeta() {
}
public FieldMeta(String name, String type) {
this.name = name;
this.type = type;
}
}
\ No newline at end of file
package com.taosdata.taosdemo.domain;
import lombok.Data;
@Data
public class FieldValue<T> {
private String name;
private T value;
public FieldValue() {
}
public FieldValue(String name, T value) {
this.name = name;
this.value = value;
}
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class RowValue {
private List<FieldValue> fields;
public RowValue(List<FieldValue> fields) {
this.fields = fields;
}
}
\ No newline at end of file
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class SubTableMeta {
private String database;
private String supertable;
private String name;
private List<TagValue> tags;
private List<FieldMeta> fields;
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class SubTableValue {
private String database;
private String supertable;
private String name;
private List<TagValue> tags;
private List<RowValue> values;
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class SuperTableMeta {
private String database;
private String name;
private List<FieldMeta> fields;
private List<TagMeta> tags;
}
\ No newline at end of file
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class TableMeta {
private String database;
private String name;
private List<FieldMeta> fields;
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
import java.util.List;
@Data
public class TableValue {
private String database;
private String name;
private List<FieldMeta> columns;
private List<RowValue> values;
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
@Data
public class TagMeta {
private String name;
private String type;
public TagMeta() {
}
public TagMeta(String name, String type) {
this.name = name;
this.type = type;
}
}
package com.taosdata.taosdemo.domain;
import lombok.Data;
@Data
public class TagValue<T> {
private String name;
private T value;
public TagValue() {
}
public TagValue(String name, T value) {
this.name = name;
this.value = value;
}
}
package com.taosdata.taosdemo.mapper;
import org.apache.ibatis.annotations.Param;
import org.springframework.stereotype.Repository;
import java.util.Map;
@Repository
public interface DatabaseMapper {
// create database if not exists XXX
int createDatabase(@Param("database") String dbname);
// drop database if exists XXX
int dropDatabase(@Param("database") String dbname);
// create database if not exists XXX keep XX days XX replica XX
int createDatabaseWithParameters(Map<String, String> map);
// use XXX
int useDatabase(@Param("database") String dbname);
//TODO: alter database
//TODO: show database
}
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.taosdata.taosdemo.mapper.DatabaseMapper">
<!-- create database XXX -->
<update id="createDatabase" parameterType="java.lang.String">
create database if not exists ${database}
</update>
<update id="dropDatabase" parameterType="java.lang.String">
DROP database if exists ${database}
</update>
<update id="createDatabaseWithParameters" parameterType="map">
CREATE database if not exists ${database}
<if test="keep != null">
KEEP ${keep}
</if>
<if test="days != null">
DAYS ${days}
</if>
<if test="replica != null">
REPLICA ${replica}
</if>
<if test="cache != null">
cache ${cache}
</if>
<if test="blocks != null">
blocks ${blocks}
</if>
<if test="minrows != null">
minrows ${minrows}
</if>
<if test="maxrows != null">
maxrows ${maxrows}
</if>
</update>
<update id="useDatabase" parameterType="java.lang.String">
use ${database}
</update>
<!-- TODO: alter database -->
<!-- TODO: show database -->
</mapper>
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.SubTableValue;
import org.apache.ibatis.annotations.Param;
import org.springframework.stereotype.Repository;
import java.util.List;
@Repository
public interface SubTableMapper {
// create: sub table
int createUsingSuperTable(SubTableMeta subTableMeta);
// insert: multiple values into one sub table
int insertOneTableMultiValues(SubTableValue subTableValue);
// insert: multiple values into one sub table, with automatic table creation
int insertOneTableMultiValuesUsingSuperTable(SubTableValue subTableValue);
// insert: multiple values into multiple tables
int insertMultiTableMultiValues(@Param("tables") List<SubTableValue> tables);
// insert: multiple values into multiple tables, with automatic table creation
int insertMultiTableMultiValuesUsingSuperTable(@Param("tables") List<SubTableValue> tables);
//<!-- TODO: update sub table tag values: alter table ${tablename} set tag tagName=newTagValue-->
}
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.taosdata.taosdemo.mapper.SubTableMapper">
<!-- create a sub table -->
<update id="createUsingSuperTable">
CREATE table IF NOT EXISTS ${database}.${name} USING ${supertable} TAGS
<foreach collection="tags" item="tag" index="index" open="(" close=")" separator=",">
#{tag.value}
</foreach>
</update>
<!-- insert: insert multiple rows into one table -->
<insert id="insertOneTableMultiValues">
INSERT INTO ${database}.${name}
VALUES
<foreach collection="values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
#{field.value}
</foreach>
</foreach>
</insert>
<!-- insert: insert multiple rows into one table, with automatic table creation -->
<insert id="insertOneTableMultiValuesUsingSuperTable">
INSERT INTO ${database}.${name} USING ${supertable} TAGS
<foreach collection="tags" item="tag" index="index" open="(" close=")" separator=",">
#{tag.value}
</foreach>
VALUES
<foreach collection="values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
#{field.value}
</foreach>
</foreach>
</insert>
<!-- TODO: insert: insert multiple rows into one table, with specified columns -->
<!-- TODO: insert: insert multiple rows into one table, automatic table creation, with specified columns -->
<!-- insert: insert multiple rows into multiple tables -->
<insert id="insertMultiTableMultiValues">
INSERT INTO
<foreach collection="tables" item="table">
${table.database}.${table.name}
VALUES
<foreach collection="table.values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
#{field.value}
</foreach>
</foreach>
</foreach>
</insert>
<!-- insert: insert multiple rows into multiple tables, with automatic table creation -->
<insert id="insertMultiTableMultiValuesUsingSuperTable">
INSERT INTO
<foreach collection="tables" item="table">
${table.database}.${table.name} USING ${table.supertable} TAGS
<foreach collection="table.tags" item="tag" index="index" open="(" close=")" separator=",">
#{tag.value}
</foreach>
VALUES
<foreach collection="table.values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
#{field.value}
</foreach>
</foreach>
</foreach>
</insert>
<!-- TODO: insert: insert multiple rows into multiple tables, with specified columns -->
<!-- TODO: insert: insert multiple rows into multiple tables, automatic table creation, with specified columns -->
<!-- TODO: update sub table tag values: alter table ${tablename} set tag tagName=newTagValue -->
<!-- TODO: -->
</mapper>
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import org.apache.ibatis.annotations.Param;
import org.springframework.stereotype.Repository;
@Repository
public interface SuperTableMapper {
// create a super table from a user-defined SQL statement
int createSuperTableUsingSQL(@Param("createSuperTableSQL") String sql);
// create a super table: create table if not exists xxx.xxx (f1 type1, f2 type2, ... ) tags( t1 type1, t2 type2 ...)
int createSuperTable(SuperTableMeta tableMetadata);
// drop a super table: drop table if exists xxx;
int dropSuperTable(@Param("database") String database, @Param("name") String name);
//<!-- TODO: list all super tables: show stables -->
//<!-- TODO: describe the table schema: describe stable -->
//<!-- TODO: add a column: alter table ${tablename} add column fieldName dataType -->
//<!-- TODO: drop a column: alter table ${tablename} drop column fieldName -->
//<!-- TODO: add a tag: alter table ${tablename} add tag new_tagName tag_type -->
//<!-- TODO: drop a tag: alter table ${tablename} drop tag_name -->
//<!-- TODO: rename a tag: alter table ${tablename} change tag old_tagName new_tagName -->
}
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.taosdata.taosdemo.mapper.SuperTableMapper">
<update id="createSuperTableUsingSQL">
${createSuperTableSQL}
</update>
<!-- create super table -->
<update id="createSuperTable">
create table if not exists ${database}.${name}
<foreach collection="fields" item="field" index="index" open="(" close=")" separator=",">
${field.name} ${field.type}
</foreach>
tags
<foreach collection="tags" item="tag" index="index" open="(" close=")" separator=",">
${tag.name} ${tag.type}
</foreach>
</update>
<!-- drop super table -->
<delete id="dropSuperTable">
drop table if exists ${database}.${name}
</delete>
<!-- TODO: list all super tables: show stables -->
<!-- TODO: describe the table schema: describe stable -->
<!-- TODO: add a column: alter table ${tablename} add column fieldName dataType -->
<!-- TODO: drop a column: alter table ${tablename} drop column fieldName -->
<!-- TODO: add a tag: alter table ${tablename} add tag new_tagName tag_type -->
<!-- TODO: drop a tag: alter table ${tablename} drop tag_name -->
<!-- TODO: rename a tag: alter table ${tablename} change tag old_tagName new_tagName -->
</mapper>
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.TableMeta;
import com.taosdata.taosdemo.domain.TableValue;
import org.apache.ibatis.annotations.Param;
import org.springframework.stereotype.Repository;
import java.util.List;
@Repository
public interface TableMapper {
// create: regular table
int create(TableMeta tableMeta);
// insert: multiple values into one table
int insertOneTableMultiValues(TableValue values);
// insert: multiple values into one table, with specified columns
int insertOneTableMultiValuesWithColumns(TableValue values);
// insert: multiple values into multiple tables
int insertMultiTableMultiValues(@Param("tables") List<TableValue> tables);
// insert: multiple values into multiple tables, with specified columns
int insertMultiTableMultiValuesWithColumns(@Param("tables") List<TableValue> tables);
}
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.taosdata.taosdemo.mapper.TableMapper">
<!-- create a regular table -->
<update id="create" parameterType="com.taosdata.taosdemo.domain.TableMeta">
create table if not exists ${database}.${name}
<foreach collection="fields" item="field" index="index" open="(" close=")" separator=",">
${field.name} ${field.type}
</foreach>
</update>
<!-- insert: insert multiple rows into one regular table -->
<insert id="insertOneTableMultiValues" parameterType="com.taosdata.taosdemo.domain.TableValue">
insert into ${database}.${name} values
<foreach collection="values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
${field.value}
</foreach>
</foreach>
</insert>
<!-- insert rows into specified columns of one table: insert into XXX.xx (f1,f2,f3...) values(v1,v2,v3...) -->
<insert id="insertOneTableMultiValuesWithColumns" parameterType="com.taosdata.taosdemo.domain.TableValue">
insert into ${database}.${name}
<foreach collection="columns" item="column" open="(" close=")" separator=",">
${column.name}
</foreach>
values
<foreach collection="values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
${field.value}
</foreach>
</foreach>
</insert>
<!-- insert multiple rows into multiple tables -->
<insert id="insertMultiTableMultiValues">
insert into
<foreach collection="tables" item="table">
${table.database}.${table.name} values
<foreach collection="table.values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
${field.value}
</foreach>
</foreach>
</foreach>
</insert>
<!-- insert multiple rows into the specified columns of multiple tables -->
<insert id="insertMultiTableMultiValuesWithColumns">
insert into
<foreach collection="tables" item="table">
${table.database}.${table.name}
<foreach collection="table.columns" item="column" open="(" close=")" separator=",">
${column.name}
</foreach>
values
<foreach collection="table.values" item="value">
<foreach collection="value.fields" item="field" open="(" close=")" separator=",">
${field.value}
</foreach>
</foreach>
</foreach>
</insert>
</mapper>
\ No newline at end of file
package com.taosdata.taosdemo.service;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
public class AbstractService {
protected int getAffectRows(List<Future<Integer>> futureList) {
int count = 0;
for (Future<Integer> future : futureList) {
try {
count += future.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
}
return count;
}
protected int getAffectRows(Future<Integer> future) {
int count = 0;
try {
count += future.get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
return count;
}
}
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.mapper.DatabaseMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.Map;
@Service
public class DatabaseService {
@Autowired
private DatabaseMapper databaseMapper;
// create a database with the given name
public int createDatabase(String database) {
return databaseMapper.createDatabase(database);
}
// create a database with parameters such as keep, days, replica, etc.
public int createDatabase(Map<String, String> map) {
if (map.isEmpty())
return 0;
if (map.containsKey("database") && map.size() == 1)
return databaseMapper.createDatabase(map.get("database"));
return databaseMapper.createDatabaseWithParameters(map);
}
// drop database
public int dropDatabase(String dbname) {
return databaseMapper.dropDatabase(dbname);
}
// use database
public int useDatabase(String dbname) {
return databaseMapper.useDatabase(dbname);
}
}
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.SubTableValue;
import com.taosdata.taosdemo.mapper.SubTableMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
@Service
public class SubTableService extends AbstractService {
@Autowired
private SubTableMapper mapper;
/**
* 1. choose a database and find all of its super tables
* 2. choose a super table to obtain its schema, including fields and tags
* 3. specify the prefix and the number of sub tables
* 4. specify the number of threads used to create the sub tables
*/
//TODO: specify database, super table, sub table prefix, sub table count and thread count
// create tables with multiple threads, specifying the thread count
public int createSubTable(List<SubTableMeta> subTables, int threadSize) {
ExecutorService executor = Executors.newFixedThreadPool(threadSize);
List<Future<Integer>> futureList = new ArrayList<>();
for (SubTableMeta subTableMeta : subTables) {
Future<Integer> future = executor.submit(() -> createSubTable(subTableMeta));
futureList.add(future);
}
executor.shutdown();
return getAffectRows(futureList);
}
// create one sub table, specifying database, super table, table name and tag values
public int createSubTable(SubTableMeta subTableMeta) {
return mapper.createUsingSuperTable(subTableMeta);
}
// create multiple sub tables in a single thread; each sub table can specify its own database, super table, table name and tag values
public int createSubTable(List<SubTableMeta> subTables) {
return createSubTable(subTables, 1);
}
/*************************************************************************************************************************/
// insert: multiple threads, multiple tables
public int insert(List<SubTableValue> subTableValues, int threadSize) {
ExecutorService executor = Executors.newFixedThreadPool(threadSize);
Future<Integer> future = executor.submit(() -> insert(subTableValues));
executor.shutdown();
return getAffectRows(future);
}
// insert: multiple threads, multiple tables, with automatic table creation
public int insertAutoCreateTable(List<SubTableValue> subTableValues, int threadSize) {
ExecutorService executor = Executors.newFixedThreadPool(threadSize);
Future<Integer> future = executor.submit(() -> insertAutoCreateTable(subTableValues));
executor.shutdown();
return getAffectRows(future);
}
// insert: single table, insert into xxx values(),()...
public int insert(SubTableValue subTableValue) {
return mapper.insertOneTableMultiValues(subTableValue);
}
// insert: multiple tables, insert into xxx values(),()... xxx values(),()...
public int insert(List<SubTableValue> subTableValues) {
return mapper.insertMultiTableMultiValues(subTableValues);
}
// insert: single table with automatic table creation, insert into xxx using xxx tags(...) values(),()...
public int insertAutoCreateTable(SubTableValue subTableValue) {
return mapper.insertOneTableMultiValuesUsingSuperTable(subTableValue);
}
// insert: multiple tables with automatic table creation, insert into xxx using XXX tags(...) values(),()... xxx using XXX tags(...) values(),()...
public int insertAutoCreateTable(List<SubTableValue> subTableValues) {
return mapper.insertMultiTableMultiValuesUsingSuperTable(subTableValues);
}
// ExecutorService executors = Executors.newFixedThreadPool(threadSize);
// int count = 0;
//
// //
// List<SubTableValue> subTableValues = new ArrayList<>();
// for (int tableIndex = 1; tableIndex <= numOfTablesPerSQL; tableIndex++) {
// // each table
// SubTableValue subTableValue = new SubTableValue();
// subTableValue.setDatabase();
// subTableValue.setName();
// subTableValue.setSupertable();
//
// List<RowValue> values = new ArrayList<>();
// for (int valueCnt = 0; valueCnt < numOfValuesPerSQL; valueCnt++) {
// List<FieldValue> fields = new ArrayList<>();
// for (int fieldInd = 0; fieldInd <; fieldInd++) {
// FieldValue<Object> field = new FieldValue<>("", "");
// fields.add(field);
// }
// RowValue row = new RowValue();
// row.setFields(fields);
// values.add(row);
// }
// subTableValue.setValues(values);
// subTableValues.add(subTableValue);
// }
}
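// Usage sketch (assumed wiring; in the application the service and its mapper are injected by Spring):
//   @Autowired SubTableService subTableService;
//   int created = subTableService.createSubTable(subTableMetaList, 8); // one task per sub table on an 8-thread pool
//   int rows = subTableService.insertAutoCreateTable(oneBatch);        // insert into ... using ... tags(...) values(...)
// getAffectRows(...) inherited from AbstractService waits on the futures and sums the affected-row counts.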
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.mapper.SuperTableMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class SuperTableService {
@Autowired
private SuperTableMapper superTableMapper;
// create a super table, specifying the name and type of every field and every tag
public int create(SuperTableMeta superTableMeta) {
return superTableMapper.createSuperTable(superTableMeta);
}
public void drop(String database, String name) {
superTableMapper.dropSuperTable(database, name);
}
}
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.TableMeta;
import com.taosdata.taosdemo.mapper.TableMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
@Service
public class TableService extends AbstractService {
@Autowired
private TableMapper tableMapper;
// create one table
public int create(TableMeta tableMeta) {
return tableMapper.create(tableMeta);
}
// create multiple tables
public int create(List<TableMeta> tables) {
return create(tables, 1);
}
// create multiple tables with multiple threads
public int create(List<TableMeta> tables, int threadSize) {
ExecutorService executors = Executors.newFixedThreadPool(threadSize);
List<Future<Integer>> futures = new ArrayList<>();
for (TableMeta table : tables) {
Future<Integer> future = executors.submit(() -> create(table));
futures.add(future);
}
// shut the pool down once all tasks are submitted; getAffectRows still waits on the futures
executors.shutdown();
return getAffectRows(futures);
}
}
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.FieldValue;
import com.taosdata.taosdemo.domain.RowValue;
import com.taosdata.taosdemo.utils.DataGenerator;
import java.util.*;
public class FieldValueGenerator {
public static Random random = new Random(System.currentTimeMillis());
// generate a time series from start to end: timestamps are strictly in order, field values are randomly generated
public static List<RowValue> generate(long start, long end, long timeGap, List<FieldMeta> fieldMetaList) {
List<RowValue> values = new ArrayList<>();
for (long ts = start; ts < end; ts += timeGap) {
List<FieldValue> fieldValues = new ArrayList<>();
// timestamp
fieldValues.add(new FieldValue(fieldMetaList.get(0).getName(), ts));
// other values
for (int fieldInd = 1; fieldInd < fieldMetaList.size(); fieldInd++) {
FieldMeta fieldMeta = fieldMetaList.get(fieldInd);
fieldValues.add(new FieldValue(fieldMeta.getName(), DataGenerator.randomValue(fieldMeta.getType())));
}
values.add(new RowValue(fieldValues));
}
return values;
}
// introduce out-of-order timestamps into the series: rate is the percentage of disturbed rows, range bounds how far a timestamp is moved backwards
public static List<RowValue> disrupt(List<RowValue> values, int rate, long range) {
long timeGap = (long) (values.get(1).getFields().get(0).getValue()) - (long) (values.get(0).getFields().get(0).getValue());
int bugSize = values.size() * rate / 100;
Set<Integer> bugIndSet = new HashSet<>();
while (bugIndSet.size() < bugSize) {
bugIndSet.add(random.nextInt(values.size()));
}
for (Integer bugInd : bugIndSet) {
Long timestamp = (Long) values.get(bugInd).getFields().get(0).getValue();
Long newTimestamp = timestamp - timeGap - random.nextInt((int) range);
values.get(bugInd).getFields().get(0).setValue(newTimestamp);
}
return values;
}
}
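// Behaviour sketch (illustrative numbers): generate(start, start + 10_000, 1_000, fieldMetaList) yields 10 rows,
// one per second, the first field holding the timestamp and the remaining fields filled by DataGenerator.randomValue.
// disrupt(values, 10, 1000) then rewinds the timestamps of roughly 10% of the rows by timeGap plus a random
// offset below 1000 ms, which is how the demo produces out-of-order data.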
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagValue;
import java.util.ArrayList;
import java.util.List;
public class SubTableMetaGenerator {
// create tableSize sub tables, using tablePrefix as the sub table name prefix and superTableMeta as the schema
// create table xxx using XXX tags(XXX)
public static List<SubTableMeta> generate(SuperTableMeta superTableMeta, int tableSize, String tablePrefix) {
List<SubTableMeta> subTableMetaList = new ArrayList<>();
for (int i = 1; i <= tableSize; i++) {
SubTableMeta subTableMeta = new SubTableMeta();
// create table xxx.xxx using xxx tags(...)
subTableMeta.setDatabase(superTableMeta.getDatabase());
subTableMeta.setName(tablePrefix + i);
subTableMeta.setSupertable(superTableMeta.getName());
subTableMeta.setFields(superTableMeta.getFields());
List<TagValue> tagValues = TagValueGenerator.generate(superTableMeta.getTags());
subTableMeta.setTags(tagValues);
subTableMetaList.add(subTableMeta);
}
return subTableMetaList;
}
}
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.RowValue;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.SubTableValue;
import com.taosdata.taosdemo.utils.TimeStampUtil;
import org.springframework.beans.BeanUtils;
import java.util.ArrayList;
import java.util.List;
public class SubTableValueGenerator {
public static List<SubTableValue> generate(List<SubTableMeta> subTableMetaList, int numOfRowsPerTable, long start, long timeGap) {
List<SubTableValue> subTableValueList = new ArrayList<>();
subTableMetaList.stream().forEach((subTableMeta) -> {
// insert into xxx.xxx using xxxx tags(...) values(),()...
SubTableValue subTableValue = new SubTableValue();
subTableValue.setDatabase(subTableMeta.getDatabase());
subTableValue.setName(subTableMeta.getName());
subTableValue.setSupertable(subTableMeta.getSupertable());
subTableValue.setTags(subTableMeta.getTags());
TimeStampUtil.TimeTuple tuple = TimeStampUtil.range(start, timeGap, numOfRowsPerTable);
List<RowValue> values = FieldValueGenerator.generate(tuple.start, tuple.end, tuple.timeGap, subTableMeta.getFields());
subTableValue.setValues(values);
subTableValueList.add(subTableValue);
});
return subTableValueList;
}
public static void disrupt(List<SubTableValue> subTableValueList, int rate, long range) {
subTableValueList.stream().forEach((tableValue) -> {
List<RowValue> values = tableValue.getValues();
FieldValueGenerator.disrupt(values, rate, range);
});
}
public static List<List<SubTableValue>> split(List<SubTableValue> subTableValueList, int numOfTables, int numOfTablesPerSQL, int numOfRowsPerTable, int numOfValuesPerSQL) {
List<List<SubTableValue>> dataList = new ArrayList<>();
if (numOfRowsPerTable < numOfValuesPerSQL)
numOfValuesPerSQL = numOfRowsPerTable;
if (numOfTables < numOfTablesPerSQL)
numOfTablesPerSQL = numOfTables;
//table
for (int tableCnt = 0; tableCnt < numOfTables; ) {
int tableSize = numOfTablesPerSQL;
if (tableCnt + tableSize > numOfTables) {
tableSize = numOfTables - tableCnt;
}
// row
for (int rowCnt = 0; rowCnt < numOfRowsPerTable; ) {
int rowSize = numOfValuesPerSQL;
if (rowCnt + rowSize > numOfRowsPerTable) {
rowSize = numOfRowsPerTable - rowCnt;
}
// System.out.println("rowCnt: " + rowCnt + ", rowSize: " + rowSize + ", tableCnt: " + tableCnt + ", tableSize: " + tableSize);
// split
List<SubTableValue> blocks = subTableValueList.subList(tableCnt, tableCnt + tableSize);
List<SubTableValue> newBlocks = new ArrayList<>();
for (int i = 0; i < blocks.size(); i++) {
SubTableValue subTableValue = blocks.get(i);
SubTableValue newSubTableValue = new SubTableValue();
BeanUtils.copyProperties(subTableValue, newSubTableValue);
List<RowValue> values = subTableValue.getValues().subList(rowCnt, rowCnt + rowSize);
newSubTableValue.setValues(values);
newBlocks.add(newSubTableValue);
}
dataList.add(newBlocks);
rowCnt += rowSize;
}
tableCnt += tableSize;
}
return dataList;
}
public static void main(String[] args) {
split(null, 99, 10, 99, 10);
}
}
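// Sketch of the split(...) contract (illustrative numbers): split(values, 99, 10, 99, 10) cuts the generated data
// into SQL-sized batches of at most 10 tables x 10 rows each, so 99 tables with 99 rows per table yield
// ceil(99/10) * ceil(99/10) = 100 batches; each batch is later sent as a single multi-table INSERT.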
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagMeta;
import com.taosdata.taosdemo.utils.TaosConstants;
import java.util.ArrayList;
import java.util.List;
public class SuperTableMetaGenerator {
// build super table metadata from the given SQL statement
public static SuperTableMeta generate(String superTableSQL) {
SuperTableMeta tableMeta = new SuperTableMeta();
// for example : create table superTable (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)
superTableSQL = superTableSQL.trim().toLowerCase();
if (!superTableSQL.startsWith("create"))
throw new RuntimeException("invalid create super table SQL");
if (superTableSQL.contains("tags")) {
String tagSQL = superTableSQL.substring(superTableSQL.indexOf("tags") + 4).trim();
tagSQL = tagSQL.substring(tagSQL.indexOf("(") + 1, tagSQL.lastIndexOf(")"));
String[] tagPairs = tagSQL.split(",");
List<TagMeta> tagMetaList = new ArrayList<>();
for (String tagPair : tagPairs) {
String name = tagPair.trim().split("\\s+")[0];
String type = tagPair.trim().split("\\s+")[1];
tagMetaList.add(new TagMeta(name, type));
}
tableMeta.setTags(tagMetaList);
superTableSQL = superTableSQL.substring(0, superTableSQL.indexOf("tags"));
}
if (superTableSQL.contains("(")) {
String fieldSQL = superTableSQL.substring(superTableSQL.indexOf("(") + 1, superTableSQL.indexOf(")"));
String[] fieldPairs = fieldSQL.split(",");
List<FieldMeta> fieldList = new ArrayList<>();
for (String fieldPair : fieldPairs) {
String name = fieldPair.trim().split("\\s+")[0];
String type = fieldPair.trim().split("\\s+")[1];
fieldList.add(new FieldMeta(name, type));
}
tableMeta.setFields(fieldList);
superTableSQL = superTableSQL.substring(0, superTableSQL.indexOf("("));
}
superTableSQL = superTableSQL.substring(superTableSQL.indexOf("table") + 5).trim();
if (superTableSQL.contains(".")) {
String database = superTableSQL.split("\\.")[0];
tableMeta.setDatabase(database);
superTableSQL = superTableSQL.substring(superTableSQL.indexOf(".") + 1);
}
tableMeta.setName(superTableSQL.trim());
return tableMeta;
}
// build super table metadata with the specified numbers of fields and tags
public static SuperTableMeta generate(String database, String name, int fieldSize, String fieldPrefix, int tagSize, String tagPrefix) {
if (fieldSize < 2 || tagSize < 1) {
throw new RuntimeException("create super table but fieldSize less than 2 or tagSize less than 1");
}
SuperTableMeta tableMetadata = new SuperTableMeta();
tableMetadata.setDatabase(database);
tableMetadata.setName(name);
// fields
List<FieldMeta> fields = new ArrayList<>();
fields.add(new FieldMeta("ts", "timestamp"));
for (int i = 1; i <= fieldSize; i++) {
fields.add(new FieldMeta(fieldPrefix + "" + i, TaosConstants.DATA_TYPES[i % TaosConstants.DATA_TYPES.length]));
}
tableMetadata.setFields(fields);
// tags
List<TagMeta> tags = new ArrayList<>();
for (int i = 1; i <= tagSize; i++) {
tags.add(new TagMeta(tagPrefix + "" + i, TaosConstants.DATA_TYPES[i % TaosConstants.DATA_TYPES.length]));
}
tableMetadata.setTags(tags);
return tableMetadata;
}
}
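// Usage sketch (inputs are illustrative):
//   SuperTableMeta m1 = SuperTableMetaGenerator.generate(
//       "create table test.weather(ts timestamp, temperature float) tags(location nchar(64))");
//   // -> database "test", name "weather", fields [ts, temperature], tags [location]
//   SuperTableMeta m2 = SuperTableMetaGenerator.generate("test", "meters", 4, "col", 2, "tag");
//   // -> fields: ts timestamp plus col1..col4, tags: tag1..tag2, with types cycled from TaosConstants.DATA_TYPES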
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.TagMeta;
import com.taosdata.taosdemo.domain.TagValue;
import com.taosdata.taosdemo.utils.DataGenerator;
import java.util.ArrayList;
import java.util.List;
public class TagValueGenerator {
// generate tag values from the given tagMetas
public static List<TagValue> generate(List<TagMeta> tagMetas) {
List<TagValue> tagValues = new ArrayList<>();
for (int i = 0; i < tagMetas.size(); i++) {
TagMeta tagMeta = tagMetas.get(i);
TagValue tagValue = new TagValue();
tagValue.setName(tagMeta.getName());
tagValue.setValue(DataGenerator.randomValue(tagMeta.getType()));
tagValues.add(tagValue);
}
return tagValues;
}
}
package com.taosdata.taosdemo.utils;
import java.util.Random;
public class DataGenerator {
private static Random random = new Random(System.currentTimeMillis());
private static final String alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
// "timestamp", "int", "bigint", "float", "double", "binary(64)", "smallint", "tinyint", "bool", "nchar(64)",
public static Object randomValue(String type) {
int length = 64;
if (type.contains("(")) {
length = Integer.parseInt(type.substring(type.indexOf("(") + 1, type.indexOf(")")));
type = type.substring(0, type.indexOf("("));
}
switch (type.trim().toLowerCase()) {
case "timestamp":
return randomTimestamp();
case "int":
return randomInt();
case "bigint":
return randomBigint();
case "float":
return randomFloat();
case "double":
return randomDouble();
case "binary":
return randomBinary(length);
case "smallint":
return randomSmallint();
case "tinyint":
return randomTinyint();
case "bool":
return randomBoolean();
case "nchar":
return randomNchar(length);
default:
throw new IllegalArgumentException("Unexpected value: " + type);
}
}
public static Long randomTimestamp() {
long start = System.currentTimeMillis();
return randomTimestamp(start, start + 60L * 60L * 1000L);
}
public static Long randomTimestamp(Long start, Long end) {
return start + (long) random.nextInt((int) (end - start));
}
public static String randomNchar(int length) {
return randomChinese(length);
}
public static Boolean randomBoolean() {
return random.nextBoolean();
}
public static Integer randomTinyint() {
return randomInt(-127, 127);
}
public static Integer randomSmallint() {
return randomInt(-32767, 32767);
}
public static String randomBinary(int length) {
return randomString(length);
}
public static String randomString(int length) {
String zh_en = "";
for (int i = 0; i < length; i++) {
zh_en += alphabet.charAt(random.nextInt(alphabet.length()));
}
return zh_en;
}
public static String randomChinese(int length) {
String zh_cn = "";
int bottom = Integer.parseInt("4e00", 16);
int top = Integer.parseInt("9fa5", 16);
for (int i = 0; i < length; i++) {
char c = (char) (random.nextInt(top - bottom + 1) + bottom);
zh_cn += new String(new char[]{c});
}
return zh_cn;
}
public static Double randomDouble() {
return randomDouble(0, 100);
}
public static Double randomDouble(double bottom, double top) {
return bottom + (top - bottom) * random.nextDouble();
}
public static Float randomFloat() {
return randomFloat(0, 100);
}
public static Float randomFloat(float bottom, float top) {
return bottom + (top - bottom) * random.nextFloat();
}
public static Long randomBigint() {
return random.nextLong();
}
public static Integer randomInt(int bottom, int top) {
return bottom + random.nextInt((top - bottom));
}
public static Integer randomInt() {
return randomInt(0, 100);
}
}
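// Usage sketch (values are illustrative; randomValue dispatches on the TDengine type name):
//   Object i  = DataGenerator.randomValue("int");         // Integer in [0, 100)
//   Object s  = DataGenerator.randomValue("binary(16)");  // 16-character ASCII string
//   Object nc = DataGenerator.randomValue("nchar(8)");    // 8 random Chinese characters
// Unsupported type names fall through to IllegalArgumentException.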
package com.taosdata.taosdemo.utils;
public final class JdbcTaosdemoConfig {
// instance
public String host; //host
public int port = 6030; //port
public String user = "root"; //user
public String password = "taosdata"; //password
// database
public String database = "test"; //database
public int keep = 3650; //keep
public int days = 30; //days
public int replica = 1; //replica
//super table
public boolean doCreateTable = true;
public String superTable = "weather"; //super table name
public String prefixOfFields = "col";
public int numOfFields;
public String prefixOfTags = "tag";
public int numOfTags;
public String superTableSQL;
//sub table
public String tablePrefix = "t";
public int numOfTables = 100;
public int numOfThreadsForCreate = 1;
// insert task
public boolean autoCreateTable;
public int numOfRowsPerTable = 100;
public int numOfThreadsForInsert = 1;
public int numOfTablesPerSQL = 10;
public int numOfValuesPerSQL = 10;
public long startTime;
public long timeGap;
public int sleep = 0;
public int order = 0;
public int rate = 10;
public long range = 1000l;
// select task
// drop task
public boolean dropTable = false;
public static void printHelp() {
System.out.println("Usage: java -jar jdbc-taosdemo-2.0.jar [OPTION...]");
// instance
System.out.println("-host The host to connect to TDengine which you must specify");
System.out.println("-port The TCP/IP port number to use for the connection. Default is 6030");
System.out.println("-user The TDengine user name to use when connecting to the server. Default is 'root'");
System.out.println("-password The password to use when connecting to the server. Default is 'taosdata'");
// database
System.out.println("-database Destination database. Default is 'test'");
System.out.println("-keep database keep parameter. Default is 3650");
System.out.println("-days database days parameter. Default is 30");
System.out.println("-replica database replica parameter. Default 1, min: 1, max: 3");
// super table
System.out.println("-doCreateTable do create super table and sub table, true or false, Default true");
System.out.println("-superTable super table name. Default 'weather'");
System.out.println("-prefixOfFields The prefix of field names in the super table. Default is 'col'");
System.out.println("-numOfFields The number of fields in the super table. Default schema is (ts timestamp, temperature float, humidity int).");
System.out.println("-prefixOfTags The prefix of tag names in the super table. Default is 'tag'");
System.out.println("-numOfTags The number of tags in the super table. Default schema is (location nchar(64), groupId int).");
System.out.println("-superTableSQL specify a SQL statement used to create the super table.\n" +
" Default is 'create table weather(ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)'.\n" +
" If this parameter is used, numOfFields and numOfTags are ignored.");
// sub table
System.out.println("-tablePrefix The prefix of sub tables. Default is 't'");
System.out.println("-numOfTables The number of sub tables. Default is 100");
System.out.println("-numOfThreadsForCreate The number of threads used to create sub tables. Default is 1");
// insert task
System.out.println("-autoCreateTable Use auto Create sub tables SQL. Default is false");
System.out.println("-numOfRowsPerTable The number of records per table. Default is 100");
System.out.println("-numOfThreadsForInsert The number of threads used for inserting rows. Default is 1");
System.out.println("-numOfTablesPerSQL The number of tables per SQL statement. Default is 10");
System.out.println("-numOfValuesPerSQL The number of values per SQL statement. Default is 10");
System.out.println("-startTime start time for insert task, The format is \"yyyy-MM-dd HH:mm:ss.SSS\".");
System.out.println("-timeGap the number of time gap. Default is 1000 ms");
System.out.println("-sleep The number of milliseconds for sleep after each insert. default is 0");
System.out.println("-order Insert mode--0: In order, 1: Out of order. Default is in order");
System.out.println("-rate The proportion of data out of order. effective only if order is 1. min 0, max 100, default is 10");
System.out.println("-range The range of data out of order. effective only if order is 1. default is 1000 ms");
// query task
// System.out.println("-sqlFile The select sql file");
// drop task
System.out.println("-dropTable Drop data before quit. Default is false");
System.out.println("--help Give this help list");
}
/**
* parse args from command line
*
* @param args command line args
* @return JdbcTaosdemoConfig
*/
public JdbcTaosdemoConfig(String[] args) {
for (int i = 0; i < args.length; i++) {
// instance
if ("-host".equals(args[i]) && i < args.length - 1) {
host = args[++i];
}
if ("-port".equals(args[i]) && i < args.length - 1) {
port = Integer.parseInt(args[++i]);
}
if ("-user".equals(args[i]) && i < args.length - 1) {
user = args[++i];
}
if ("-password".equals(args[i]) && i < args.length - 1) {
password = args[++i];
}
// database
if ("-database".equals(args[i]) && i < args.length - 1) {
database = args[++i];
}
if ("-keep".equals(args[i]) && i < args.length - 1) {
keep = Integer.parseInt(args[++i]);
}
if ("-days".equals(args[i]) && i < args.length - 1) {
days = Integer.parseInt(args[++i]);
}
if ("-replica".equals(args[i]) && i < args.length - 1) {
replica = Integer.parseInt(args[++i]);
}
// super table
if ("-doCreateTable".equals(args[i]) && i < args.length - 1) {
doCreateTable = Boolean.parseBoolean(args[++i]);
}
if ("-superTable".equals(args[i]) && i < args.length - 1) {
superTable = args[++i];
}
if ("-prefixOfFields".equals(args[i]) && i < args.length - 1) {
prefixOfFields = args[++i];
}
if ("-numOfFields".equals(args[i]) && i < args.length - 1) {
numOfFields = Integer.parseInt(args[++i]);
}
if ("-prefixOfTags".equals(args[i]) && i < args.length - 1) {
prefixOfTags = args[++i];
}
if ("-numOfTags".equals(args[i]) && i < args.length - 1) {
numOfTags = Integer.parseInt(args[++i]);
}
if ("-superTableSQL".equals(args[i]) && i < args.length - 1) {
superTableSQL = args[++i];
}
// sub table
if ("-tablePrefix".equals(args[i]) && i < args.length - 1) {
tablePrefix = args[++i];
}
if ("-numOfTables".equals(args[i]) && i < args.length - 1) {
numOfTables = Integer.parseInt(args[++i]);
}
if ("-autoCreateTable".equals(args[i]) && i < args.length - 1) {
autoCreateTable = Boolean.parseBoolean(args[++i]);
}
if ("-numOfThreadsForCreate".equals(args[i]) && i < args.length - 1) {
numOfThreadsForCreate = Integer.parseInt(args[++i]);
}
// insert task
if ("-numOfRowsPerTable".equals(args[i]) && i < args.length - 1) {
numOfRowsPerTable = Integer.parseInt(args[++i]);
}
if ("-numOfThreadsForInsert".equals(args[i]) && i < args.length - 1) {
numOfThreadsForInsert = Integer.parseInt(args[++i]);
}
if ("-numOfTablesPerSQL".equals(args[i]) && i < args.length - 1) {
numOfTablesPerSQL = Integer.parseInt(args[++i]);
}
if ("-numOfValuesPerSQL".equals(args[i]) && i < args.length - 1) {
numOfValuesPerSQL = Integer.parseInt(args[++i]);
}
if ("-startTime".equals(args[i]) && i < args.length - 1) {
startTime = TimeStampUtil.datetimeToLong(args[++i]);
}
if ("-timeGap".equals(args[i]) && i < args.length - 1) {
timeGap = Long.parseLong(args[++i]);
}
if ("-sleep".equals(args[i]) && i < args.length - 1) {
sleep = Integer.parseInt(args[++i]);
}
if ("-order".equals(args[i]) && i < args.length - 1) {
order = Integer.parseInt(args[++i]);
}
if ("-rate".equals(args[i]) && i < args.length - 1) {
rate = Integer.parseInt(args[++i]);
if (rate < 0 || rate > 100)
throw new IllegalArgumentException("rate must between 0 and 100");
}
if ("-range".equals(args[i]) && i < args.length - 1) {
range = Long.parseLong(args[++i]);
}
// select task
// drop task
if ("-dropTable".equals(args[i]) && i < args.length - 1) {
dropTable = Boolean.parseBoolean(args[++i]);
}
}
}
public static void main(String[] args) {
JdbcTaosdemoConfig config = new JdbcTaosdemoConfig(args);
}
}
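// Parsing sketch (values are illustrative): the constructor walks the args array and overrides the defaults above, e.g.
//   JdbcTaosdemoConfig cfg = new JdbcTaosdemoConfig(new String[]{"-host", "127.0.0.1", "-numOfTables", "10", "-order", "1"});
//   // cfg.host == "127.0.0.1", cfg.numOfTables == 10, cfg.order == 1; every other field keeps its default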
package com.taosdata.taosdemo.utils;
public class TaosConstants {
public static final String[] DATA_TYPES = {
"timestamp", "int", "bigint", "float", "double",
"binary(64)", "smallint", "tinyint", "bool", "nchar(64)",
};
}
package com.taosdata.taosdemo.utils;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
public class TimeStampUtil {
private static final String datetimeFormat = "yyyy-MM-dd HH:mm:ss.SSS";
public static long datetimeToLong(String dateTime) {
SimpleDateFormat sdf = new SimpleDateFormat(datetimeFormat);
try {
return sdf.parse(dateTime).getTime();
} catch (ParseException e) {
throw new IllegalArgumentException("invalid datetime string >>> " + dateTime);
}
}
public static String longToDatetime(long time) {
SimpleDateFormat sdf = new SimpleDateFormat(datetimeFormat);
return sdf.format(new Date(time));
}
public static class TimeTuple {
public Long start;
public Long end;
public Long timeGap;
TimeTuple(long start, long end, long timeGap) {
this.start = start;
this.end = end;
this.timeGap = timeGap;
}
}
public static TimeTuple range(long start, long timeGap, long size) {
long now = System.currentTimeMillis();
if (timeGap < 1)
timeGap = 1;
if (start == 0)
start = now - size * timeGap;
// throw if size is less than 1
if (size < 1)
throw new IllegalArgumentException("size less than 1.");
// if the window exceeds "now" even with a timeGap of 1, shift start backwards
if (start + size > now) {
start = now - size;
return new TimeTuple(start, now, 1);
}
long end = start + (long) (timeGap * size);
if (end > now) {
// compress timeGap
end = now;
double gap = (end - start) / (size * 1.0f);
if (gap < 1.0f) {
timeGap = 1;
start = end - size;
} else {
timeGap = (long) gap;
end = start + (long) (timeGap * size);
}
}
return new TimeTuple(start, end, timeGap);
}
}
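// Behaviour sketch (illustrative numbers): with start == 0 the window ends at "now" and begins size * timeGap
// milliseconds earlier, e.g.
//   TimeStampUtil.TimeTuple t = TimeStampUtil.range(0, 1000, 100);  // ~100 seconds ending now, timeGap 1000 ms
// If the requested window would run past "now", timeGap is compressed (down to a minimum of 1 ms) so the
// generated series never reaches into the future.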
#spring.datasource.url=jdbc:mysql://master:3306/?useSSL=false&useUnicode=true&characterEncoding=UTF-8
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.username=root
#spring.datasource.password=123456
spring.datasource.url=jdbc:TAOS://master:6030/?charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8
spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
spring.datasource.username=root
spring.datasource.password=taosdata
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.max-lifetime=600000
logging.level.com.taosdata.taosdemo.mapper=debug
\ No newline at end of file
### Settings ###
log4j.rootLogger=debug,stdout,DebugLog,ErrorLog
### Output to the console ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
### Output logs at DEBUG level and above to logs/debug.log
log4j.appender.DebugLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DebugLog.File=logs/debug.log
log4j.appender.DebugLog.Append=true
log4j.appender.DebugLog.Threshold=DEBUG
log4j.appender.DebugLog.layout=org.apache.log4j.PatternLayout
log4j.appender.DebugLog.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
### Output logs at ERROR level and above to logs/error.log
log4j.appender.ErrorLog=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ErrorLog.File=logs/error.log
log4j.appender.ErrorLog.Append=true
log4j.appender.ErrorLog.Threshold=ERROR
log4j.appender.ErrorLog.layout=org.apache.log4j.PatternLayout
log4j.appender.ErrorLog.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
\ No newline at end of file
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Index</title>
</head>
<body>
<h1>Hello~~~</h1>
</body>
</html>
\ No newline at end of file
package com.taosdata.taosdemo;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
@SpringBootTest
class TaosdemoApplicationTests {
@Test
void contextLoads() {
}
}
package com.taosdata.taosdemo.mapper;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.HashMap;
import java.util.Map;
@RunWith(SpringRunner.class)
@SpringBootTest
public class DatabaseMapperTest {
@Autowired
private DatabaseMapper databaseMapper;
@Test
public void createDatabase() {
databaseMapper.createDatabase("db_test");
}
@Test
public void dropDatabase() {
databaseMapper.dropDatabase("db_test");
}
@Test
public void createDatabaseWithParameters() {
Map<String, String> map = new HashMap<>();
map.put("database", "weather");
map.put("keep", "3650");
map.put("days", "30");
map.put("replica", "1");
databaseMapper.createDatabaseWithParameters(map);
}
@Test
public void useDatabase() {
databaseMapper.useDatabase("test");
}
}
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.*;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
@RunWith(SpringRunner.class)
@SpringBootTest
public class SubTableMapperTest {
@Autowired
private SubTableMapper subTableMapper;
private List<SubTableValue> tables;
@Test
public void createUsingSuperTable() {
SubTableMeta subTableMeta = new SubTableMeta();
subTableMeta.setDatabase("test");
subTableMeta.setSupertable("weather");
subTableMeta.setName("t1");
List<TagValue> tags = new ArrayList<>();
for (int i = 0; i < 3; i++) {
tags.add(new TagValue("tag" + (i + 1), "nchar(64)"));
}
subTableMeta.setTags(tags);
subTableMapper.createUsingSuperTable(subTableMeta);
}
@Test
public void insertOneTableMultiValues() {
subTableMapper.insertOneTableMultiValues(tables.get(0));
}
@Test
public void insertOneTableMultiValuesUsingSuperTable() {
subTableMapper.insertOneTableMultiValuesUsingSuperTable(tables.get(0));
}
@Test
public void insertMultiTableMultiValues() {
subTableMapper.insertMultiTableMultiValues(tables);
}
@Test
public void insertMultiTableMultiValuesUsingSuperTable() {
subTableMapper.insertMultiTableMultiValuesUsingSuperTable(tables);
}
@Before
public void before() {
tables = new ArrayList<>();
for (int ind = 0; ind < 3; ind++) {
SubTableValue table = new SubTableValue();
table.setDatabase("test");
// supertable
table.setSupertable("weather");
table.setName("t" + (ind + 1));
// tags
List<TagValue> tags = new ArrayList<>();
for (int i = 0; i < 3; i++) {
tags.add(new TagValue("tag" + (i + 1), "beijing"));
}
table.setTags(tags);
// values
List<RowValue> values = new ArrayList<>();
for (int i = 0; i < 2; i++) {
List<FieldValue> fields = new ArrayList<>();
for (int j = 0; j < 4; j++) {
fields.add(new FieldValue("f" + (j + 1), (j + 1) * 10));
}
values.add(new RowValue(fields));
}
table.setValues(values);
tables.add(table);
}
}
}
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagMeta;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
@RunWith(SpringRunner.class)
@SpringBootTest
public class SuperTableMapperTest {
@Autowired
private SuperTableMapper superTableMapper;
@Test
public void testCreateSuperTableUsingSQL() {
String sql = "create table test.weather (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
superTableMapper.createSuperTableUsingSQL(sql);
}
@Test
public void createSuperTable() {
SuperTableMeta superTableMeta = new SuperTableMeta();
superTableMeta.setDatabase("test");
superTableMeta.setName("weather");
List<FieldMeta> fields = new ArrayList<>();
for (int i = 0; i < 5; i++) {
fields.add(new FieldMeta("f" + (i + 1), "int"));
}
superTableMeta.setFields(fields);
List<TagMeta> tags = new ArrayList<>();
for (int i = 0; i < 3; i++) {
tags.add(new TagMeta("t" + (i + 1), "nchar(64)"));
}
superTableMeta.setTags(tags);
superTableMapper.createSuperTable(superTableMeta);
}
@Test
public void dropSuperTable() {
superTableMapper.dropSuperTable("test", "weather");
}
}
\ No newline at end of file
package com.taosdata.taosdemo.mapper;
import com.taosdata.taosdemo.domain.*;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
@SpringBootTest
@RunWith(SpringRunner.class)
public class TableMapperTest {
@Autowired
private TableMapper tableMapper;
private static Random random = new Random(System.currentTimeMillis());
@Test
public void create() {
TableMeta table = new TableMeta();
table.setDatabase("test");
table.setName("t1");
List<FieldMeta> fields = new ArrayList<>();
for (int i = 0; i < 3; i++) {
FieldMeta field = new FieldMeta();
field.setName("f" + (i + 1));
field.setType("nchar(64)");
fields.add(field);
}
table.setFields(fields);
tableMapper.create(table);
}
@Test
public void insertOneTableMultiValues() {
TableValue table = new TableValue();
table.setDatabase("test");
table.setName("t1");
List<RowValue> values = new ArrayList<>();
for (int j = 0; j < 5; j++) {
List<FieldValue> fields = new ArrayList<>();
for (int k = 0; k < 2; k++) {
FieldValue field = new FieldValue<>();
field.setValue((k + 1) * 100);
fields.add(field);
}
values.add(new RowValue(fields));
}
table.setValues(values);
tableMapper.insertOneTableMultiValues(table);
}
@Test
public void insertOneTableMultiValuesWithColumns() {
TableValue tableValue = new TableValue();
tableValue.setDatabase("test");
tableValue.setName("weather");
// columns
List<FieldMeta> columns = new ArrayList<>();
for (int i = 0; i < 3; i++) {
FieldMeta field = new FieldMeta();
field.setName("f" + (i + 1));
columns.add(field);
}
tableValue.setColumns(columns);
// values
List<RowValue> values = new ArrayList<>();
for (int i = 0; i < 3; i++) {
List<FieldValue> fields = new ArrayList<>();
for (int j = 0; j < 3; j++) {
FieldValue field = new FieldValue();
field.setValue(j);
fields.add(field);
}
values.add(new RowValue(fields));
}
tableValue.setValues(values);
tableMapper.insertOneTableMultiValuesWithColumns(tableValue);
}
@Test
public void insertMultiTableMultiValues() {
List<TableValue> tables = new ArrayList<>();
for (int i = 0; i < 3; i++) {
TableValue table = new TableValue();
table.setDatabase("test");
table.setName("t" + (i + 1));
List<RowValue> values = new ArrayList<>();
for (int j = 0; j < 5; j++) {
List<FieldValue> fields = new ArrayList<>();
for (int k = 0; k < 2; k++) {
FieldValue field = new FieldValue<>();
field.setValue((k + 1) * 10);
fields.add(field);
}
values.add(new RowValue(fields));
}
table.setValues(values);
tables.add(table);
}
tableMapper.insertMultiTableMultiValues(tables);
}
@Test
public void insertMultiTableMultiValuesWithColumns() {
List<TableValue> tables = new ArrayList<>();
for (int i = 0; i < 3; i++) {
TableValue table = new TableValue();
table.setDatabase("test");
table.setName("t" + (i + 1));
// columns
List<FieldMeta> columns = new ArrayList<>();
for (int j = 0; j < 3; j++) {
FieldMeta field = new FieldMeta();
field.setName("f" + (j + 1));
columns.add(field);
}
table.setColumns(columns);
// values
List<RowValue> values = new ArrayList<>();
for (int j = 0; j < 5; j++) {
List<FieldValue> fields = new ArrayList<>();
for (int k = 0; k < columns.size(); k++) {
FieldValue field = new FieldValue<>();
field.setValue((k + 1) * 10);
fields.add(field);
}
values.add(new RowValue(fields));
}
table.setValues(values);
tables.add(table);
}
tableMapper.insertMultiTableMultiValuesWithColumns(tables);
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
@RunWith(SpringRunner.class)
@SpringBootTest
public class DatabaseServiceTest {
@Autowired
private DatabaseService service;
@Test
public void testCreateDatabase1() {
service.createDatabase("testXXXX");
}
@Test
public void dropDatabase() {
service.dropDatabase("testXXXX");
}
@Test
public void useDatabase() {
service.useDatabase("test");
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.TagValue;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
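// Builds a single SubTableMeta under super table test.weather (tags: location, groupId) and exercises both createSubTable overloads.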
@RunWith(SpringRunner.class)
@SpringBootTest
public class SubTableServiceTest {
@Autowired
private SubTableService service;
private List<SubTableMeta> subTables;
@Before
public void before() {
subTables = new ArrayList<>();
for (int i = 1; i <= 1; i++) {
SubTableMeta subTableMeta = new SubTableMeta();
subTableMeta.setDatabase("test");
subTableMeta.setSupertable("weather");
subTableMeta.setName("t" + i);
List<TagValue> tags = new ArrayList<>();
tags.add(new TagValue("location", "beijing"));
tags.add(new TagValue("groupId", i));
subTableMeta.setTags(tags);
subTables.add(subTableMeta);
}
}
@Test
public void testCreateSubTable() {
int count = service.createSubTable(subTables);
System.out.println("count >>> " + count);
}
@Test
public void testCreateSubTableList() {
int count = service.createSubTable(subTables, 10);
System.out.println("count >>> " + count);
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagMeta;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
@RunWith(SpringRunner.class)
@SpringBootTest
public class SuperTableServiceTest {
@Autowired
private SuperTableService service;
@Test
public void testCreate() {
SuperTableMeta superTableMeta = new SuperTableMeta();
superTableMeta.setDatabase("test");
superTableMeta.setName("weather");
List<FieldMeta> fields = new ArrayList<>();
fields.add(new FieldMeta("ts", "timestamp"));
fields.add(new FieldMeta("temperature", "float"));
fields.add(new FieldMeta("humidity", "int"));
superTableMeta.setFields(fields);
List<TagMeta> tags = new ArrayList<>();
tags.add(new TagMeta("location", "nchar(64)"));
tags.add(new TagMeta("groupId", "int"));
superTableMeta.setTags(tags);
service.create(superTableMeta);
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.TableMeta;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.ArrayList;
import java.util.List;
@RunWith(SpringRunner.class)
@SpringBootTest
public class TableServiceTest {
@Autowired
private TableService tableService;
private List<TableMeta> tables;
@Before
public void before() {
tables = new ArrayList<>();
for (int i = 0; i < 1; i++) {
TableMeta tableMeta = new TableMeta();
tableMeta.setDatabase("test");
tableMeta.setName("weather" + (i + 1));
tables.add(tableMeta);
}
}
@Test
public void testCreate() {
int count = tableService.create(tables);
System.out.println(count);
}
@Test
public void testCreateMultiThreads() {
System.out.println(tableService.create(tables, 10));
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.RowValue;
import com.taosdata.taosdemo.utils.TimeStampUtil;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
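// Verifies FieldValueGenerator: generate() yields one row per timeGap step over the 10-hour span (10 rows), and disrupt() must leave the row count unchanged.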
public class FieldValueGeneratorTest {
private List<RowValue> rowValues;
@Test
public void generate() {
List<FieldMeta> fieldMetas = new ArrayList<>();
fieldMetas.add(new FieldMeta("ts", "timestamp"));
fieldMetas.add(new FieldMeta("temperature", "float"));
fieldMetas.add(new FieldMeta("humidity", "int"));
long start = TimeStampUtil.datetimeToLong("2020-01-01 00:00:00.000");
long end = TimeStampUtil.datetimeToLong("2020-01-01 10:00:00.000");
rowValues = FieldValueGenerator.generate(start, end, 1000L * 3600, fieldMetas);
Assert.assertEquals(10, rowValues.size());
}
@Test
public void disrupt() {
List<FieldMeta> fieldMetas = new ArrayList<>();
fieldMetas.add(new FieldMeta("ts", "timestamp"));
fieldMetas.add(new FieldMeta("temperature", "float"));
fieldMetas.add(new FieldMeta("humidity", "int"));
long start = TimeStampUtil.datetimeToLong("2020-01-01 00:00:00.000");
long end = TimeStampUtil.datetimeToLong("2020-01-01 10:00:00.000");
rowValues = FieldValueGenerator.generate(start, end, 1000L * 3600L, fieldMetas);
FieldValueGenerator.disrupt(rowValues, 20, 1000);
Assert.assertEquals(10, rowValues.size());
}
@After
public void after() {
for (RowValue row : rowValues) {
row.getFields().stream().forEach(field -> {
if (field.getName().equals("ts")) {
System.out.print(TimeStampUtil.longToDatetime((Long) field.getValue()));
} else
System.out.print(" ," + field.getValue());
});
System.out.println();
}
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.SubTableMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagMeta;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
public class SubTableMetaGeneratorTest {
List<SubTableMeta> subTableMetas;
@Test
public void generate() {
SuperTableMeta superTableMeta = new SuperTableMeta();
superTableMeta.setDatabase("test");
superTableMeta.setName("weather");
List<FieldMeta> fields = new ArrayList<>();
fields.add(new FieldMeta("ts", "timestamp"));
fields.add(new FieldMeta("temperature", "float"));
fields.add(new FieldMeta("humidity", "int"));
superTableMeta.setFields(fields);
List<TagMeta> tags = new ArrayList<>();
tags.add(new TagMeta("location", "nchar(64)"));
tags.add(new TagMeta("groupId", "int"));
superTableMeta.setTags(tags);
subTableMetas = SubTableMetaGenerator.generate(superTableMeta, 10, "t");
Assert.assertEquals(10, subTableMetas.size());
Assert.assertEquals("t1", subTableMetas.get(0).getName());
Assert.assertEquals("t2", subTableMetas.get(1).getName());
Assert.assertEquals("t3", subTableMetas.get(2).getName());
Assert.assertEquals("t4", subTableMetas.get(3).getName());
Assert.assertEquals("t5", subTableMetas.get(4).getName());
Assert.assertEquals("t6", subTableMetas.get(5).getName());
Assert.assertEquals("t7", subTableMetas.get(6).getName());
Assert.assertEquals("t8", subTableMetas.get(7).getName());
Assert.assertEquals("t9", subTableMetas.get(8).getName());
Assert.assertEquals("t10", subTableMetas.get(9).getName());
}
@After
public void after() {
for (SubTableMeta subTableMeta : subTableMetas) {
System.out.println(subTableMeta);
}
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.FieldMeta;
import com.taosdata.taosdemo.domain.SuperTableMeta;
import com.taosdata.taosdemo.domain.TagMeta;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
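// Builds SuperTableMeta two ways: by parsing a CREATE TABLE ... TAGS(...) statement and by generating a given number of columns/tags; the parser assertions expect the tag name "groupId" to come back lower-cased.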
public class SuperTableMetaGeneratorImplTest {
private SuperTableMeta meta;
@Test
public void generate() {
String sql = "create table test.weather (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
meta = SuperTableMetaGenerator.generate(sql);
Assert.assertEquals("test", meta.getDatabase());
Assert.assertEquals("weather", meta.getName());
Assert.assertEquals(3, meta.getFields().size());
Assert.assertEquals("ts", meta.getFields().get(0).getName());
Assert.assertEquals("timestamp", meta.getFields().get(0).getType());
Assert.assertEquals("temperature", meta.getFields().get(1).getName());
Assert.assertEquals("float", meta.getFields().get(1).getType());
Assert.assertEquals("humidity", meta.getFields().get(2).getName());
Assert.assertEquals("int", meta.getFields().get(2).getType());
Assert.assertEquals("location", meta.getTags().get(0).getName());
Assert.assertEquals("nchar(64)", meta.getTags().get(0).getType());
Assert.assertEquals("groupid", meta.getTags().get(1).getName());
Assert.assertEquals("int", meta.getTags().get(1).getType());
}
@Test
public void generate2() {
meta = SuperTableMetaGenerator.generate("test", "weather", 10, "col", 10, "tag");
Assert.assertEquals("test", meta.getDatabase());
Assert.assertEquals("weather", meta.getName());
Assert.assertEquals(11, meta.getFields().size());
for (FieldMeta fieldMeta : meta.getFields()) {
Assert.assertNotNull(fieldMeta.getName());
Assert.assertNotNull(fieldMeta.getType());
}
for (TagMeta tagMeta : meta.getTags()) {
Assert.assertNotNull(tagMeta.getName());
Assert.assertNotNull(tagMeta.getType());
}
}
@After
public void after() {
System.out.println(meta.getDatabase());
System.out.println(meta.getName());
for (FieldMeta fieldMeta : meta.getFields()) {
System.out.println(fieldMeta);
}
for (TagMeta tagMeta : meta.getTags()) {
System.out.println(tagMeta);
}
}
}
\ No newline at end of file
package com.taosdata.taosdemo.service.data;
import com.taosdata.taosdemo.domain.TagMeta;
import com.taosdata.taosdemo.domain.TagValue;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
public class TagValueGeneratorTest {
List<TagValue> tagvalues;
@Test
public void generate() {
List<TagMeta> tagMetaList = new ArrayList<>();
tagMetaList.add(new TagMeta("location", "nchar(10)"));
tagMetaList.add(new TagMeta("groupId", "int"));
tagMetaList.add(new TagMeta("ts", "timestamp"));
tagMetaList.add(new TagMeta("temperature", "float"));
tagMetaList.add(new TagMeta("humidity", "double"));
tagMetaList.add(new TagMeta("text", "binary(10)"));
tagvalues = TagValueGenerator.generate(tagMetaList);
Assert.assertEquals("location", tagvalues.get(0).getName());
Assert.assertEquals("groupId", tagvalues.get(1).getName());
Assert.assertEquals("ts", tagvalues.get(2).getName());
Assert.assertEquals("temperature", tagvalues.get(3).getName());
Assert.assertEquals("humidity", tagvalues.get(4).getName());
Assert.assertEquals("text", tagvalues.get(5).getName());
}
@After
public void after() {
tagvalues.stream().forEach(System.out::println);
}
}
\ No newline at end of file
package com.taosdata.taosdemo.utils;
import org.junit.Assert;
import org.junit.Test;
public class DataGeneratorTest {
@Test
public void randomValue() {
for (int i = 0; i < TaosConstants.DATA_TYPES.length; i++) {
System.out.println(TaosConstants.DATA_TYPES[i] + " >>> " + DataGenerator.randomValue(TaosConstants.DATA_TYPES[i]));
}
}
@Test
public void randomNchar() {
String s = DataGenerator.randomNchar(10);
Assert.assertEquals(10, s.length());
}
}
\ No newline at end of file
package com.taosdata.taosdemo.utils;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
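// Round-trips between datetime strings and epoch milliseconds, then prints the TimeTuple returned by range() for a 1s step and a large row count.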
public class TimeStampUtilTest {
@Test
public void datetimeToLong() {
final String startTime = "2005-01-01 00:00:00.000";
long start = TimeStampUtil.datetimeToLong(startTime);
assertEquals(1104508800000L, start);
String dateTimeStr = TimeStampUtil.longToDatetime(start);
assertEquals("2005-01-01 00:00:00.000", dateTimeStr);
}
@Test
public void longToDatetime() {
String datetime = TimeStampUtil.longToDatetime(1510000000000L);
assertEquals("2017-11-07 04:26:40.000", datetime);
long timestamp = TimeStampUtil.datetimeToLong(datetime);
assertEquals(1510000000000L, timestamp);
}
@Test
public void range() {
long start = TimeStampUtil.datetimeToLong("2020-10-01 00:00:00.000");
long timeGap = 1000;
long numOfRowsPerTable = 1000L * 3600L * 24L * 90L;
TimeStampUtil.TimeTuple timeTuple = TimeStampUtil.range(start, timeGap, numOfRowsPerTable);
System.out.println(TimeStampUtil.longToDatetime(timeTuple.start));
System.out.println(TimeStampUtil.longToDatetime(timeTuple.end));
System.out.println(timeTuple.timeGap);
}
}
\ No newline at end of file
...@@ -50,7 +50,7 @@ static void queryDB(TAOS *taos, char *command) { ...@@ -50,7 +50,7 @@ static void queryDB(TAOS *taos, char *command) {
taos_free_result(pSql); taos_free_result(pSql);
} }
void Test(char *qstr, const char *input, int i); void Test(TAOS *taos, char *qstr, int i);
int main(int argc, char *argv[]) { int main(int argc, char *argv[]) {
char qstr[1024]; char qstr[1024];
...@@ -63,21 +63,22 @@ int main(int argc, char *argv[]) { ...@@ -63,21 +63,22 @@ int main(int argc, char *argv[]) {
// init TAOS // init TAOS
taos_init(); taos_init();
TAOS *taos = taos_connect(argv[1], "root", "taosdata", NULL, 0);
if (taos == NULL) {
printf("failed to connect to server, reason:%s\n", "null taos"/*taos_errstr(taos)*/);
exit(1);
}
for (int i = 0; i < 4000000; i++) { for (int i = 0; i < 4000000; i++) {
Test(qstr, argv[1], i); Test(taos, qstr, i);
} }
taos_close(taos);
taos_cleanup(); taos_cleanup();
} }
void Test(char *qstr, const char *input, int index) { void Test(TAOS *taos, char *qstr, int index) {
TAOS *taos = taos_connect(input, "root", "taosdata", NULL, 0);
printf("==================test at %d\n================================", index); printf("==================test at %d\n================================", index);
queryDB(taos, "drop database if exists demo"); queryDB(taos, "drop database if exists demo");
queryDB(taos, "create database demo"); queryDB(taos, "create database demo");
TAOS_RES *result; TAOS_RES *result;
if (taos == NULL) {
printf("failed to connect to server, reason:%s\n", "null taos"/*taos_errstr(taos)*/);
exit(1);
}
queryDB(taos, "use demo"); queryDB(taos, "use demo");
queryDB(taos, "create table m1 (ts timestamp, ti tinyint, si smallint, i int, bi bigint, f float, d double, b binary(10))"); queryDB(taos, "create table m1 (ts timestamp, ti tinyint, si smallint, i int, bi bigint, f float, d double, b binary(10))");
...@@ -131,6 +132,5 @@ void Test(char *qstr, const char *input, int index) { ...@@ -131,6 +132,5 @@ void Test(char *qstr, const char *input, int index) {
taos_free_result(result); taos_free_result(result);
printf("====demo end====\n\n"); printf("====demo end====\n\n");
taos_close(taos);
} }
...@@ -32,10 +32,8 @@ python3 ./test.py -f table/create_sensitive.py ...@@ -32,10 +32,8 @@ python3 ./test.py -f table/create_sensitive.py
python3 ./test.py -f table/max_table_length.py python3 ./test.py -f table/max_table_length.py
python3 ./test.py -f table/alter_column.py python3 ./test.py -f table/alter_column.py
python3 ./test.py -f table/boundary.py python3 ./test.py -f table/boundary.py
python3 ./test.py -f table/create-a-lot.py
python3 ./test.py -f table/create.py python3 ./test.py -f table/create.py
python3 ./test.py -f table/del_stable.py python3 ./test.py -f table/del_stable.py
python3 ./test.py -f table/queryWithTaosdKilled.py
# tag # tag
...@@ -173,6 +171,9 @@ python3 ./test.py -f query/sliding.py ...@@ -173,6 +171,9 @@ python3 ./test.py -f query/sliding.py
python3 ./test.py -f query/unionAllTest.py python3 ./test.py -f query/unionAllTest.py
python3 ./test.py -f query/bug2281.py python3 ./test.py -f query/bug2281.py
python3 ./test.py -f query/bug2119.py python3 ./test.py -f query/bug2119.py
python3 ./test.py -f query/isNullTest.py
python3 ./test.py -f query/queryWithTaosdKilled.py
#stream #stream
python3 ./test.py -f stream/metric_1.py python3 ./test.py -f stream/metric_1.py
python3 ./test.py -f stream/new.py python3 ./test.py -f stream/new.py
...@@ -213,6 +214,7 @@ python3 ./test.py -f functions/function_stddev.py -r 1 ...@@ -213,6 +214,7 @@ python3 ./test.py -f functions/function_stddev.py -r 1
python3 ./test.py -f functions/function_sum.py -r 1 python3 ./test.py -f functions/function_sum.py -r 1
python3 ./test.py -f functions/function_top.py -r 1 python3 ./test.py -f functions/function_top.py -r 1
#python3 ./test.py -f functions/function_twa.py -r 1 #python3 ./test.py -f functions/function_twa.py -r 1
python3 ./test.py -f functions/function_twa_test2.py
python3 queryCount.py python3 queryCount.py
python3 ./test.py -f query/queryGroupbyWithInterval.py python3 ./test.py -f query/queryGroupbyWithInterval.py
python3 client/twoClients.py python3 client/twoClients.py
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.rowNum = 10
self.ts = 1537146000000
def run(self):
tdSql.prepare()
tdSql.execute("create table t1(ts timestamp, c int)")
for i in range(self.rowNum):
tdSql.execute("insert into t1 values(%d, %d)" % (self.ts + i * 10000, i + 1))
# twa verification
tdSql.query("select twa(c) from t1 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 5.5)
tdSql.query("select twa(c) from t1 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' interval(10s)")
tdSql.checkRows(10)
tdSql.checkData(0, 1, 1.49995)
tdSql.checkData(1, 1, 2.49995)
tdSql.checkData(2, 1, 3.49995)
tdSql.checkData(3, 1, 4.49995)
tdSql.checkData(4, 1, 5.49995)
tdSql.checkData(5, 1, 6.49995)
tdSql.checkData(6, 1, 7.49995)
tdSql.checkData(7, 1, 8.49995)
tdSql.checkData(8, 1, 9.49995)
tdSql.checkData(9, 1, 10)
tdSql.query("select twa(c) from t1 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' interval(10s) sliding(5s)")
tdSql.checkRows(20)
tdSql.checkData(0, 1, 1.24995)
tdSql.checkData(1, 1, 1.49995)
tdSql.checkData(2, 1, 1.99995)
tdSql.checkData(3, 1, 2.49995)
tdSql.checkData(4, 1, 2.99995)
tdSql.checkData(5, 1, 3.49995)
tdSql.checkData(6, 1, 3.99995)
tdSql.checkData(7, 1, 4.49995)
tdSql.checkData(8, 1, 4.99995)
tdSql.checkData(9, 1, 5.49995)
tdSql.checkData(10, 1, 5.99995)
tdSql.checkData(11, 1, 6.49995)
tdSql.checkData(12, 1, 6.99995)
tdSql.checkData(13, 1, 7.49995)
tdSql.checkData(14, 1, 7.99995)
tdSql.checkData(15, 1, 8.49995)
tdSql.checkData(16, 1, 8.99995)
tdSql.checkData(17, 1, 9.49995)
tdSql.checkData(18, 1, 9.75000)
tdSql.checkData(19, 1, 10)
tdSql.execute("create table t2(ts timestamp, c int)")
tdSql.execute("insert into t2 values(%d, 1)" % (self.ts + 3000))
tdSql.query("select twa(c) from t2 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
tdSql.query("select twa(c) from t2 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' interval(2s) ")
tdSql.checkRows(1)
tdSql.checkData(0, 1, 1)
tdSql.query("select twa(c) from t2 where ts >= '2018-09-17 09:00:00.000' and ts <= '2018-09-17 09:01:30.000' interval(2s) sliding(1s) ")
tdSql.checkRows(2)
tdSql.checkData(0, 1, 1)
tdSql.checkData(1, 1, 1)
tdSql.query("select twa(c) from t2 where ts >= '2018-09-17 09:00:04.000' and ts <= '2018-09-17 09:01:30.000' ")
tdSql.checkRows(0)
tdSql.query("select twa(c) from t2 where ts >= '2018-09-17 08:00:00.000' and ts <= '2018-09-17 09:00:00.000' ")
tdSql.checkRows(0)
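# t3 holds two points, 1 at ts and -2 at ts+3s; the expected time-weighted average over that span is -0.5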
tdSql.execute("create table t3(ts timestamp, c int)")
tdSql.execute("insert into t3 values(%d, 1)" % (self.ts))
tdSql.execute("insert into t3 values(%d, -2)" % (self.ts + 3000))
tdSql.query("select twa(c) from t3 where ts >= '2018-09-17 08:59:00.000' and ts <= '2018-09-17 09:01:30.000'")
tdSql.checkRows(1)
tdSql.checkData(0, 0, -0.5)
tdSql.query("select twa(c) from t3 where ts >= '2018-09-17 08:59:00.000' and ts <= '2018-09-17 09:01:30.000' interval(1s)")
tdSql.checkRows(2)
tdSql.checkData(0, 1, 0.5005)
tdSql.checkData(1, 1, -2)
tdSql.query("select twa(c) from t3 where ts >= '2018-09-17 08:59:00.000' and ts <= '2018-09-17 09:01:30.000' interval(2s) sliding(1s)")
tdSql.checkRows(4)
tdSql.checkData(0, 1, 0.5005)
tdSql.checkData(1, 1, 0.0005)
tdSql.checkData(2, 1, -1.5)
tdSql.checkData(3, 1, -2)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
...@@ -19,6 +19,7 @@ python3 ./test.py -f insert/randomNullCommit.py ...@@ -19,6 +19,7 @@ python3 ./test.py -f insert/randomNullCommit.py
python3 insert/retentionpolicy.py python3 insert/retentionpolicy.py
python3 ./test.py -f insert/alterTableAndInsert.py python3 ./test.py -f insert/alterTableAndInsert.py
python3 ./test.py -f insert/insertIntoTwoTables.py python3 ./test.py -f insert/insertIntoTwoTables.py
python3 ./test.py -f query/isNullTest.py
#table #table
python3 ./test.py -f table/alter_wal0.py python3 ./test.py -f table/alter_wal0.py
...@@ -30,7 +31,6 @@ python3 ./test.py -f table/create_sensitive.py ...@@ -30,7 +31,6 @@ python3 ./test.py -f table/create_sensitive.py
python3 ./test.py -f table/max_table_length.py python3 ./test.py -f table/max_table_length.py
python3 ./test.py -f table/alter_column.py python3 ./test.py -f table/alter_column.py
python3 ./test.py -f table/boundary.py python3 ./test.py -f table/boundary.py
python3 ./test.py -f table/create-a-lot.py
python3 ./test.py -f table/create.py python3 ./test.py -f table/create.py
python3 ./test.py -f table/del_stable.py python3 ./test.py -f table/del_stable.py
python3 ./test.py -f table/queryWithTaosdKilled.py python3 ./test.py -f table/queryWithTaosdKilled.py
...@@ -206,6 +206,7 @@ python3 ./test.py -f functions/function_stddev.py -r 1 ...@@ -206,6 +206,7 @@ python3 ./test.py -f functions/function_stddev.py -r 1
python3 ./test.py -f functions/function_sum.py -r 1 python3 ./test.py -f functions/function_sum.py -r 1
python3 ./test.py -f functions/function_top.py -r 1 python3 ./test.py -f functions/function_top.py -r 1
#python3 ./test.py -f functions/function_twa.py -r 1 #python3 ./test.py -f functions/function_twa.py -r 1
python3 ./test.py -f functions/function_twa_test2.py
python3 queryCount.py python3 queryCount.py
python3 ./test.py -f query/queryGroupbyWithInterval.py python3 ./test.py -f query/queryGroupbyWithInterval.py
python3 client/twoClients.py python3 client/twoClients.py
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.ts = 1537146000000
def run(self):
tdSql.prepare()
print("==============step1")
tdSql.execute("create table st(ts timestamp, c1 int, c2 binary(20), c3 nchar(20)) tags(t1 int, t2 binary(20), t3 nchar(20))")
tdSql.execute("create table t1 using st tags(1, 'binary1', 'nchar1')")
tdSql.execute("insert into t2(ts, c2) using st(t2) tags('') values(%d, '')" % (self.ts + 10))
tdSql.execute("insert into t3(ts, c2) using st(t3) tags('') values(%d, '')" % (self.ts + 10))
for i in range(10):
tdSql.execute("insert into t1 values(%d, %d, 'binary%d', 'nchar%d')" % (self.ts + i, i, i, i))
tdSql.execute("insert into t2 values(%d, %d, 'binary%d', 'nchar%d')" % (self.ts + i, i, i, i))
tdSql.execute("insert into t3 values(%d, %d, 'binary%d', 'nchar%d')" % (self.ts + i, i, i, i))
tdSql.execute("insert into t1(ts, c2) values(%d, '')" % (self.ts + 10))
tdSql.execute("insert into t1(ts, c3) values(%d, '')" % (self.ts + 11))
tdSql.execute("insert into t2(ts, c3) values(%d, '')" % (self.ts + 11))
tdSql.execute("insert into t3(ts, c3) values(%d, '')" % (self.ts + 11))
tdSql.query("select count(*) from st")
tdSql.checkData(0, 0, 36)
tdSql.query("select count(*) from st where t1 is null")
tdSql.checkData(0, 0, 24)
tdSql.query("select count(*) from st where t1 is not null")
tdSql.checkData(0, 0, 12)
tdSql.query("select count(*) from st where t2 is null")
tdSql.checkData(0, 0, 12)
tdSql.query("select count(*) from st where t2 is not null")
tdSql.checkData(0, 0, 24)
tdSql.error("select count(*) from st where t2 <> null")
tdSql.error("select count(*) from st where t2 = null")
tdSql.query("select count(*) from st where t2 = '' ")
tdSql.checkData(0, 0, 12)
tdSql.query("select count(*) from st where t2 <> '' ")
tdSql.checkData(0, 0, 24)
tdSql.query("select count(*) from st where t3 is null")
tdSql.checkData(0, 0, 12)
tdSql.query("select count(*) from st where t3 is not null")
tdSql.checkData(0, 0, 24)
tdSql.error("select count(*) from st where t3 <> null")
tdSql.error("select count(*) from st where t3 = null")
tdSql.query("select count(*) from st where t3 = '' ")
tdSql.checkData(0, 0, 12)
tdSql.query("select count(*) from st where t3 <> '' ")
tdSql.checkData(0, 0, 24)
tdSql.query("select count(*) from st where c1 is not null")
tdSql.checkData(0, 0, 30)
tdSql.query("select count(*) from st where c1 is null")
tdSql.checkData(0, 0, 6)
tdSql.query("select count(*) from st where c2 is not null")
tdSql.checkData(0, 0, 33)
tdSql.query("select count(*) from st where c2 is null")
tdSql.checkData(0, 0, 3)
tdSql.error("select count(*) from st where c2 <> null")
tdSql.error("select count(*) from st where c2 = null")
tdSql.query("select count(*) from st where c2 = '' ")
tdSql.checkData(0, 0, 3)
tdSql.query("select count(*) from st where c2 <> '' ")
tdSql.checkData(0, 0, 30)
tdSql.query("select count(*) from st where c3 is not null")
tdSql.checkData(0, 0, 33)
tdSql.query("select count(*) from st where c3 is null")
tdSql.checkData(0, 0, 3)
tdSql.error("select count(*) from st where c3 <> null")
tdSql.error("select count(*) from st where c3 = null")
tdSql.query("select count(*) from st where c3 = '' ")
tdSql.checkData(0, 0, 3)
tdSql.query("select count(*) from st where c3 <> '' ")
tdSql.checkData(0, 0, 30)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
...@@ -87,6 +87,10 @@ class TDTestCase: ...@@ -87,6 +87,10 @@ class TDTestCase:
tdSql.checkData(0, 3, rowNum) tdSql.checkData(0, 3, rowNum)
except Exception as e: except Exception as e:
tdLog.info(repr(e)) tdLog.info(repr(e))
tdSql.query("show streams")
tdSql.checkRows(1)
tdSql.checkData(0, 2, 's0')
tdLog.info("===== step8 =====") tdLog.info("===== step8 =====")
tdSql.query( tdSql.query(
...@@ -142,6 +146,12 @@ class TDTestCase: ...@@ -142,6 +146,12 @@ class TDTestCase:
except Exception as e: except Exception as e:
tdLog.info(repr(e)) tdLog.info(repr(e))
tdSql.query("show streams")
tdSql.checkRows(2)
tdSql.checkData(0, 2, 's1')
tdSql.checkData(1, 2, 's0')
def stop(self): def stop(self):
tdSql.close() tdSql.close()
tdLog.success("%s successfully executed" % __file__) tdLog.success("%s successfully executed" % __file__)
......
...@@ -22,7 +22,7 @@ $db = $dbPrefix . $i ...@@ -22,7 +22,7 @@ $db = $dbPrefix . $i
$mt = $mtPrefix . $i $mt = $mtPrefix . $i
sql drop database if exists $db sql drop database if exists $db
sql create database $db sql create database $db keep 36500
sql use $db sql use $db
print =====================================> test case for twa in single block print =====================================> test case for twa in single block
...@@ -111,7 +111,7 @@ if $rows != 2 then ...@@ -111,7 +111,7 @@ if $rows != 2 then
return -1 return -1
endi endi
if $data00 != @15-08-18 00:06:00.00@ then if $data00 != @15-08-18 00:06:00.000@ then
return -1 return -1
endi endi
...@@ -219,10 +219,28 @@ if $data02 != 6 then ...@@ -219,10 +219,28 @@ if $data02 != 6 then
return -1 return -1
endi endi
sql select twa(k) from t1 where ts>'2015-8-18 00:00:00' and ts<'2015-8-18 00:00:1'
if $rows != 0 then
return -1
endi
sql select twa(k),avg(k),count(1) from t1 where ts>='2015-8-18 00:00:00' and ts<='2015-8-18 00:30:00' interval(10m) order by ts asc sql select twa(k),avg(k),count(1) from t1 where ts>='2015-8-18 00:00:00' and ts<='2015-8-18 00:30:00' interval(10m) order by ts asc
sql select twa(k),avg(k),count(1) from t1 where ts>='2015-8-18 00:00:00' and ts<='2015-8-18 00:30:00' interval(10m) order by ts desc sql select twa(k),avg(k),count(1) from t1 where ts>='2015-8-18 00:00:00' and ts<='2015-8-18 00:30:00' interval(10m) order by ts desc
#todo add test case while column filte exists. #todo add test case while column filter exists.
#sql select count(*),TWA(k) from tm0 where ts>='1970-1-1 13:43:00' and ts<='1970-1-1 13:44:10' interval(9s)
sql create table tm0 (ts timestamp, k float);
sql insert into tm0 values(100000000, 5);
sql insert into tm0 values(100003000, -9);
sql select twa(k) from tm0 where ts<now
if $rows != 1 then
return -1
endi
select count(*),TWA(k) from tm0 where ts>='1970-1-1 13:43:00' and ts<='1970-1-1 13:44:10' interval(9s) if $data00 != -2.000000000 then
print expect -2.000000000, actual: $data00
return -1
endi
...@@ -149,4 +149,57 @@ if $rows != 2 then ...@@ -149,4 +149,57 @@ if $rows != 2 then
return -1 return -1
endi endi
print ==================>td-2424
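# TD-2424: a float column stored as 8.001 must match =, <=, >= and enclosing range filters, and must not match <, >, or <>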
sql create table t1(ts timestamp, k float)
sql insert into t1 values(now, 8.001)
sql select * from t1 where k=8.001
if $rows != 1 then
return -1
endi
sql select * from t1 where k<8.001
if $rows != 0 then
return -1
endi
sql select * from t1 where k<=8.001
if $rows != 1 then
return -1
endi
sql select * from t1 where k>8.001
if $rows != 0 then
return -1
endi
sql select * from t1 where k>=8.001
if $rows != 1 then
return -1
endi
sql select * from t1 where k<>8.001
if $rows != 0 then
return -1
endi
sql select * from t1 where k>=8.001 and k<=8.001
if $rows != 1 then
return -1
endi
sql select * from t1 where k>=8.0009999 and k<=8.001
if $rows != 1 then
return -1
endi
sql select * from t1 where k>8.001 and k<=8.001
if $rows != 0 then
return -1
endi
sql select * from t1 where k>=8.001 and k<8.001
if $rows != 0 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
...@@ -103,6 +103,8 @@ sleep 500 ...@@ -103,6 +103,8 @@ sleep 500
run general/parser/timestamp.sim run general/parser/timestamp.sim
sleep 500 sleep 500
run general/parser/sliding.sim run general/parser/sliding.sim
sleep 500
run general/parser/function.sim
#sleep 500 #sleep 500
#run general/parser/repeatStream.sim #run general/parser/repeatStream.sim
......
...@@ -91,8 +91,11 @@ while $i < $tblNum ...@@ -91,8 +91,11 @@ while $i < $tblNum
$i = $i + 1 $i = $i + 1
endw endw
sql show db.vgroups;
print d1: $data04 $data05 , d2: $data06 $data07
sql select count(*) from $stb sql select count(*) from $stb
print rows:$rows data00:$data00 print rtest1==> rows:$rows data00:$data00
if $rows != 1 then if $rows != 1 then
return -1 return -1
endi endi
...@@ -103,6 +106,15 @@ endi ...@@ -103,6 +106,15 @@ endi
$totalRows = $data00 $totalRows = $data00
sql select count(*) from $stb
print test2==> rows:$rows data00:$data00
sql select count(*) from $stb
print test3==> rows:$rows data00:$data00
sql select count(*) from $stb
print test4==> rows:$rows data00:$data00
sql select count(*) from $stb
print test5==> rows:$rows data00:$data00
print ============== step3: insert old data(now-15d) and new data(now+15d), control data rows in order to save in cache, not falling disc print ============== step3: insert old data(now-15d) and new data(now+15d), control data rows in order to save in cache, not falling disc
sql insert into $tb values ( now - 20d , -20 ) sql insert into $tb values ( now - 20d , -20 )
sql insert into $tb values ( now - 40d , -40 ) sql insert into $tb values ( now - 40d , -40 )
...@@ -153,12 +165,21 @@ if $data00 != $totalRows then ...@@ -153,12 +165,21 @@ if $data00 != $totalRows then
return -1 return -1
endi endi
sql select count(*) from $stb
print data00 $data00
sql select count(*) from $stb
print data00 $data00
sql select count(*) from $stb
print data00 $data00
sql select count(*) from $stb
print data00 $data00
print ============== step5: insert two data rows: now-16d, now+16d, print ============== step5: insert two data rows: now-16d, now+16d,
sql insert into $tb values ( now - 21d , -21 ) sql insert into $tb values ( now - 21d , -21 )
sql insert into $tb values ( now - 41d , -41 ) sql insert into $tb values ( now - 41d , -41 )
$totalRows = $totalRows + 2 $totalRows = $totalRows + 2
print ============== step5: restart dnode2, waiting sync end print ============== step6: restart dnode2, waiting sync end
system sh/exec.sh -n dnode2 -s start system sh/exec.sh -n dnode2 -s start
sleep 3000 sleep 3000
$loopCnt = 0 $loopCnt = 0
...@@ -192,9 +213,81 @@ endi ...@@ -192,9 +213,81 @@ endi
sleep $sleepTimer sleep $sleepTimer
# check using select # check using select
sleep 5000
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb sql select count(*) from $stb
print data00 $data00 print data00 $data00
if $data00 != $totalRows then if $data00 != $totalRows then
return -1 return -1
endi endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi
sql select count(*) from $stb
print data00 $data00
if $data00 != $totalRows then
return -1
endi