diff --git a/Jenkinsfile b/Jenkinsfile index 976812bd0a4bd48c6a53f9e8e7b9d01a1031e81f..8d2429c137414fdb63260f94c0c99a6d264d8c48 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -227,6 +227,8 @@ pipeline { ./test-all.sh p4 cd ${WKC}/tests ./test-all.sh full jdbc + cd ${WKC}/tests + ./test-all.sh full unit date''' } } diff --git a/cmake/install.inc b/cmake/install.inc index 0dda0f68211b4508646d7b877b1e72b9753734e3..0ea79589caef1bf3ec72f4234f99e87328759c33 100755 --- a/cmake/install.inc +++ b/cmake/install.inc @@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS) #INSTALL(TARGETS taos RUNTIME DESTINATION driver) #INSTALL(TARGETS shell RUNTIME DESTINATION .) IF (TD_MVN_INSTALLED) - INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.20-dist.jar DESTINATION connector/jdbc) + INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.21-dist.jar DESTINATION connector/jdbc) ENDIF () ELSEIF (TD_DARWIN) SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh") diff --git a/documentation20/cn/00.index/docs.md b/documentation20/cn/00.index/docs.md index b4dbc30cdaec12934d1084479db816602eaa8d2b..c16673cba59d42ab00aa2975cfa2341ede9e5fbb 100644 --- a/documentation20/cn/00.index/docs.md +++ b/documentation20/cn/00.index/docs.md @@ -120,6 +120,7 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专 * [TDengine性能对比测试工具](https://www.taosdata.com/blog/2020/01/18/1166.html) * [IDEA数据库管理工具可视化使用TDengine](https://www.taosdata.com/blog/2020/08/27/1767.html) * [基于eletron开发的跨平台TDengine图形化管理工具](https://github.com/skye0207/TDengineGUI) +* [DataX,支持TDengine的离线数据采集/同步工具](https://github.com/alibaba/DataX) ## TDengine与其他数据库的对比测试 diff --git a/documentation20/cn/03.architecture/docs.md b/documentation20/cn/03.architecture/docs.md index 34ffb5d9442be4f90a8ea15aa8450dbc68317681..dc710704e50c5f6b3504a8427ae7088a15b12e77 100644 --- a/documentation20/cn/03.architecture/docs.md +++ b/documentation20/cn/03.architecture/docs.md @@ -178,11 +178,11 @@ TDengine 分布式架构的逻辑结构图如下: **FQDN配置**:一个数据节点有一个或多个FQDN,可以在系统配置文件taos.cfg通过参数“fqdn"进行指定,如果没有指定,系统将自动获取计算机的hostname作为其FQDN。如果节点没有配置FQDN,可以直接将该节点的配置参数fqdn设置为它的IP地址。但不建议使用IP,因为IP地址可变,一旦变化,将让集群无法正常工作。一个数据节点的EP(End Point)由FQDN + Port组成。采用FQDN,需要保证DNS服务正常工作,或者在节点以及应用所在的节点配置好hosts文件。 -**端口配置:**一个数据节点对外的端口由TDengine的系统配置参数serverPort决定,对集群内部通讯的端口是serverPort+5。集群内数据节点之间的数据复制操作还占有一个TCP端口,是serverPort+10. 为支持多线程高效的处理UDP数据,每个对内和对外的UDP连接,都需要占用5个连续的端口。因此一个数据节点总的端口范围为serverPort到serverPort + 10,总共11个TCP/UDP端口。使用时,需要确保防火墙将这些端口打开。每个数据节点可以配置不同的serverPort。 +**端口配置:**一个数据节点对外的端口由TDengine的系统配置参数serverPort决定,对集群内部通讯的端口是serverPort+5。集群内数据节点之间的数据复制操作还占有一个TCP端口,是serverPort+10. 
为支持多线程高效的处理UDP数据,每个对内和对外的UDP连接,都需要占用5个连续的端口。因此一个数据节点总的端口范围为serverPort到serverPort + 10,总共11个TCP/UDP端口。(另外还可能有 RESTful、Arbitrator 所使用的端口,那样的话就一共是 13 个。)使用时,需要确保防火墙将这些端口打开,以备使用。每个数据节点可以配置不同的serverPort。 **集群对外连接:** TDengine集群可以容纳单个、多个甚至几千个数据节点。应用只需要向集群中任何一个数据节点发起连接即可,连接需要提供的网络参数是一数据节点的End Point(FQDN加配置的端口号)。通过命令行CLI启动应用taos时,可以通过选项-h来指定数据节点的FQDN, -P来指定其配置的端口号,如果端口不配置,将采用TDengine的系统配置参数serverPort。 -**集群内部通讯**: 各个数据节点之间通过TCP/UDP进行连接。一个数据节点启动时,将获取mnode所在的dnode的EP信息,然后与系统中的mnode建立起连接,交换信息。获取mnode的EP信息有三步,1:检查mnodeEpList文件是否存在,如果不存在或不能正常打开获得mnode EP信息,进入第二步;2:检查系统配置文件taos.cfg, 获取节点配置参数firstEp, secondEp,(这两个参数指定的节点可以是不带mnode的普通节点,这样的话,节点被连接时会尝试重定向到mnode节点)如果不存在或者taos.cfg里没有这两个配置参数,或无效,进入第三步;3:将自己的EP设为mnode EP, 并独立运行起来。获取mnode EP列表后,数据节点发起连接,如果连接成功,则成功加入进工作的集群,如果不成功,则尝试mnode EP列表中的下一个。如果都尝试了,但连接都仍然失败,则休眠几秒后,再进行尝试。 +**集群内部通讯**: 各个数据节点之间通过TCP/UDP进行连接。一个数据节点启动时,将获取mnode所在的dnode的EP信息,然后与系统中的mnode建立起连接,交换信息。获取mnode的EP信息有三步,1:检查mnodeEpSet文件是否存在,如果不存在或不能正常打开获得mnode EP信息,进入第二步;2:检查系统配置文件taos.cfg, 获取节点配置参数firstEp, secondEp,(这两个参数指定的节点可以是不带mnode的普通节点,这样的话,节点被连接时会尝试重定向到mnode节点)如果不存在或者taos.cfg里没有这两个配置参数,或无效,进入第三步;3:将自己的EP设为mnode EP, 并独立运行起来。获取mnode EP列表后,数据节点发起连接,如果连接成功,则成功加入进工作的集群,如果不成功,则尝试mnode EP列表中的下一个。如果都尝试了,但连接都仍然失败,则休眠几秒后,再进行尝试。 **MNODE的选择:** TDengine逻辑上有管理节点,但没有单独的执行代码,服务器侧只有一套执行代码taosd。那么哪个数据节点会是管理节点呢?这是系统自动决定的,无需任何人工干预。原则如下:一个数据节点启动时,会检查自己的End Point, 并与获取的mnode EP List进行比对,如果在其中,该数据节点认为自己应该启动mnode模块,成为mnode。如果自己的EP不在mnode EP List里,则不启动mnode模块。在系统的运行过程中,由于负载均衡、宕机等原因,mnode有可能迁移至新的dnode,但一切都是透明的,无需人工干预,配置参数的修改,是mnode自己根据资源做出的决定。 diff --git a/documentation20/cn/08.connector/docs.md b/documentation20/cn/08.connector/docs.md index 9331d6e9cff622f21a38baea595241f7a6b90277..fb50e20e513aaf9b3791989c76c36eaca8df8f2b 100644 --- a/documentation20/cn/08.connector/docs.md +++ b/documentation20/cn/08.connector/docs.md @@ -209,7 +209,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine - `TAOS_RES* taos_query(TAOS *taos, const char *sql)` - 该API用来执行SQL语句,可以是DQL、DML或DDL语句。 其中的`taos`参数是通过`taos_connect`获得的指针。返回值 NULL 表示失败。 + 该API用来执行SQL语句,可以是DQL、DML或DDL语句。 其中的`taos`参数是通过`taos_connect`获得的指针。不能通过返回值是否是 NULL 来判断执行结果是否失败,而是需要用`taos_errno`函数解析结果集中的错误代码来进行判断。 - `int taos_result_precision(TAOS_RES *res)` @@ -591,7 +591,8 @@ curl -u username:password -d '' :/rest/sql ```json { "status": "succ", - "head": ["Time Stamp","current", …], + "head": ["ts","current", …], + "column_meta": [["ts",9,8],["current",6,4], …], "data": [ ["2018-10-03 14:38:05.000", 10.3, …], ["2018-10-03 14:38:15.000", 12.6, …] @@ -602,10 +603,23 @@ curl -u username:password -d '' :/rest/sql 说明: -- status: 告知操作结果是成功还是失败 -- head: 表的定义,如果不返回结果集,仅有一列“affected_rows” -- data: 具体返回的数据,一排一排的呈现,如果不返回结果集,仅[[affected_rows]] -- rows: 表明总共多少行数据 +- status: 告知操作结果是成功还是失败。 +- head: 表的定义,如果不返回结果集,则仅有一列“affected_rows”。(从 2.0.17 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。) +- column_meta: 从 2.0.17 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。 +- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有[[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。 +- rows: 表明总共多少行数据。 + +column_meta 中的列类型说明: +* 1:BOOL +* 2:TINYINT +* 3:SMALLINT +* 4:INT +* 5:BIGINT +* 6:FLOAT +* 7:DOUBLE +* 8:BINARY +* 9:TIMESTAMP +* 10:NCHAR ### 自定义授权码 @@ -651,7 +665,8 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001 ```json { "status": "succ", - 
"head": ["Time Stamp","current","voltage","phase"], + "head": ["ts","current","voltage","phase"], + "column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]], "data": [ ["2018-10-03 14:38:05.000",10.3,219,0.31], ["2018-10-03 14:38:15.000",12.6,218,0.33] @@ -671,8 +686,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 19 { "status": "succ", "head": ["affected_rows"], + "column_meta": [["affected_rows",4,4]], "data": [[1]], - "rows": 1, + "rows": 1 } ``` @@ -691,7 +707,8 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001 ```json { "status": "succ", - "head": ["column1","column2","column3"], + "head": ["ts","current","voltage","phase"], + "column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]], "data": [ [1538548685000,10.3,219,0.31], [1538548695000,12.6,218,0.33] @@ -712,7 +729,8 @@ HTTP请求URL采用`sqlutc`时,返回结果集的时间戳将采用UTC时间 ```json { "status": "succ", - "head": ["column1","column2","column3"], + "head": ["ts","current","voltage","phase"], + "column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]], "data": [ ["2018-10-03T14:38:05.000+0800",10.3,219,0.31], ["2018-10-03T14:38:15.000+0800",12.6,218,0.33] diff --git a/documentation20/cn/09.connections/docs.md b/documentation20/cn/09.connections/docs.md index 9a1d7f9d17088373ea8a89b6e441cacbb2dadf28..79380f3bbd9680120f63f89a0bfbe6f31f5c7a74 100644 --- a/documentation20/cn/09.connections/docs.md +++ b/documentation20/cn/09.connections/docs.md @@ -155,11 +155,3 @@ TDengine客户端暂不支持如下函数: - dbExistsTable(conn, "test"):是否存在表test - dbListTables(conn):显示连接中的所有表 - -## DataX - -[DataX](https://github.com/alibaba/DataX) 是阿里巴巴集团开源的一款通用离线数据采集/同步工具,能够简单、高效地接入 TDengine 进行数据写入和读取。 - -* 数据读取集成的方法请参见 [TSDBReader 插件文档](https://github.com/alibaba/DataX/blob/master/tsdbreader/doc/tsdbreader.md) -* 数据写入集成的方法请参见 [TSDBWriter 插件文档](https://github.com/alibaba/DataX/blob/master/tsdbwriter/doc/tsdbhttpwriter.md) - diff --git a/documentation20/cn/10.cluster/docs.md b/documentation20/cn/10.cluster/docs.md index 15ac449c1aeb576ab9dd3401a717a4364fa4c0b6..7b4073a8836da7b14a50afee5c243d0916bc9270 100644 --- a/documentation20/cn/10.cluster/docs.md +++ b/documentation20/cn/10.cluster/docs.md @@ -13,7 +13,7 @@ TDengine的集群管理极其简单,除添加和删除节点需要人工干预 **第零步**:规划集群所有物理节点的FQDN,将规划好的FQDN分别添加到每个物理节点的/etc/hostname;修改每个物理节点的/etc/hosts,将所有集群物理节点的IP与FQDN的对应添加好。【如部署了DNS,请联系网络管理员在DNS上做好相关配置】 **第一步**:如果搭建集群的物理节点中,存有之前的测试数据、装过1.X的版本,或者装过其他版本的TDengine,请先将其删除,并清空所有数据,具体步骤请参考博客[《TDengine多种安装包的安装和卸载》](https://www.taosdata.com/blog/2019/08/09/566.html ) -**注意1:**因为FQDN的信息会写进文件,如果之前没有配置或者更改FQDN,且启动了TDengine。请一定在确保数据无用或者备份的前提下,清理一下之前的数据(rm -rf /var/lib/taos/); +**注意1:**因为FQDN的信息会写进文件,如果之前没有配置或者更改FQDN,且启动了TDengine。请一定在确保数据无用或者备份的前提下,清理一下之前的数据(`rm -rf /var/lib/taos/*`); **注意2:**客户端也需要配置,确保它可以正确解析每个节点的FQDN配置,不管是通过DNS服务,还是 Host 文件。 **第二步**:建议关闭所有物理节点的防火墙,至少保证端口:6030 - 6042的TCP和UDP端口都是开放的。**强烈建议**先关闭防火墙,集群搭建完毕之后,再来配置端口; @@ -225,7 +225,13 @@ SHOW MNODES; ## Arbitrator的使用 -如果副本数为偶数,当一个vnode group里一半vnode不工作时,是无法从中选出master的。同理,一半mnode不工作时,是无法选出mnode的master的,因为存在“split brain”问题。为解决这个问题,TDengine引入了Arbitrator的概念。Arbitrator模拟一个vnode或mnode在工作,但只简单的负责网络连接,不处理任何数据插入或访问。只要包含Arbitrator在内,超过半数的vnode或mnode工作,那么该vnode group或mnode组就可以正常的提供数据插入或查询服务。比如对于副本数为2的情形,如果一个节点A离线,但另外一个节点B正常,而且能连接到Arbitrator,那么节点B就能正常工作。 +如果副本数为偶数,当一个 vnode group 里一半 vnode 不工作时,是无法从中选出 master 的。同理,一半 mnode 不工作时,是无法选出 mnode 的 master 的,因为存在“split brain”问题。为解决这个问题,TDengine 引入了 Arbitrator 的概念。Arbitrator 模拟一个 vnode 或 mnode 
在工作,但只简单的负责网络连接,不处理任何数据插入或访问。只要包含 Arbitrator 在内,超过半数的 vnode 或 mnode 工作,那么该 vnode group 或 mnode 组就可以正常的提供数据插入或查询服务。比如对于副本数为 2 的情形,如果一个节点 A 离线,但另外一个节点 B 正常,而且能连接到 Arbitrator,那么节点 B 就能正常工作。 -TDengine提供一个执行程序,名为 tarbitrator,找任何一台Linux服务器运行它即可。请点击[安装包下载](https://www.taosdata.com/cn/all-downloads/),在TDengine Arbitrator Linux一节中,选择适合的版本下载并安装。该程序对系统资源几乎没有要求,只需要保证有网络连接即可。该应用的命令行参数`-p`可以指定其对外服务的端口号,缺省是6042。配置每个taosd实例时,可以在配置文件taos.cfg里将参数arbitrator设置为Arbitrator的End Point。如果该参数配置了,当副本数为偶数时,系统将自动连接配置的Arbitrator。如果副本数为奇数,即使配置了Arbitrator,系统也不会去建立连接。 +总之,在目前版本下,TDengine 建议在双副本环境要配置 Arbitrator,以提升系统的可用性。 + +Arbitrator 的执行程序名为 tarbitrator。该程序对系统资源几乎没有要求,只需要保证有网络连接,找任何一台 Linux 服务器运行它即可。以下简要描述安装配置的步骤: +1. 请点击 [安装包下载](https://www.taosdata.com/cn/all-downloads/),在 TDengine Arbitrator Linux 一节中,选择合适的版本下载并安装。 +2. 该应用的命令行参数 `-p` 可以指定其对外服务的端口号,缺省是 6042。 +3. 修改每个 taosd 实例的配置文件,在 taos.cfg 里将参数 arbitrator 设置为 tarbitrator 程序所对应的 End Point。(如果该参数配置了,当副本数为偶数时,系统将自动连接配置的 Arbitrator。如果副本数为奇数,即使配置了 Arbitrator,系统也不会去建立连接。) +4. 在配置文件中配置了的 Arbitrator,会出现在 `SHOW DNODES;` 指令的返回结果中,对应的 role 列的值会是“arb”。 diff --git a/documentation20/cn/11.administrator/docs.md b/documentation20/cn/11.administrator/docs.md index 86ad8e5bb91e8883b4be4a2bec7b5fb7dcb4c483..7c9cde2d7c39ae75d50ca8915ec3d559392a7900 100644 --- a/documentation20/cn/11.administrator/docs.md +++ b/documentation20/cn/11.administrator/docs.md @@ -432,60 +432,62 @@ TDengine的所有可执行文件默认存放在 _/usr/local/taos/bin_ 目录下 ## TDengine参数限制与保留关键字 -- 数据库名:不能包含“.”以及特殊字符,不能超过32个字符 -- 表名:不能包含“.”以及特殊字符,与所属数据库名一起,不能超过192个字符 -- 表的列名:不能包含特殊字符,不能超过64个字符 +- 数据库名:不能包含“.”以及特殊字符,不能超过 32 个字符 +- 表名:不能包含“.”以及特殊字符,与所属数据库名一起,不能超过 192 个字符 +- 表的列名:不能包含特殊字符,不能超过 64 个字符 - 数据库名、表名、列名,都不能以数字开头 -- 表的列数:不能超过1024列 -- 记录的最大长度:包括时间戳8 byte,不能超过16KB -- 单条SQL语句默认最大字符串长度:65480 byte -- 数据库副本数:不能超过3 -- 用户名:不能超过23个byte -- 用户密码:不能超过15个byte -- 标签(Tags)数量:不能超过128个 -- 标签的总长度:不能超过16Kbyte +- 表的列数:不能超过 1024 列 +- 记录的最大长度:包括时间戳 8 byte,不能超过 16KB(每个 BINARY/NCHAR 类型的列还会额外占用 2 个 byte 的存储位置) +- 单条 SQL 语句默认最大字符串长度:65480 byte +- 数据库副本数:不能超过 3 +- 用户名:不能超过 23 个 byte +- 用户密码:不能超过 15 个 byte +- 标签(Tags)数量:不能超过 128 个 +- 标签的总长度:不能超过 16K byte - 记录条数:仅受存储空间限制 - 表的个数:仅受节点个数限制 - 库的个数:仅受节点个数限制 -- 单个库上虚拟节点个数:不能超过64个 +- 单个库上虚拟节点个数:不能超过 64 个 -目前TDengine有将近200个内部保留关键字,这些关键字无论大小写均不可以用作库名、表名、STable名、数据列名及标签列名等。这些关键字列表如下: +目前 TDengine 有将近 200 个内部保留关键字,这些关键字无论大小写均不可以用作库名、表名、STable 名、数据列名及标签列名等。这些关键字列表如下: | 关键字列表 | | | | | | ---------- | ----------- | ------------ | ---------- | --------- | -| ABLOCKS | CONNECTION | GROUP | MINUS | SLASH | -| ABORT | CONNECTIONS | GT | MNODES | SLIDING | -| ACCOUNT | COPY | ID | MODULES | SMALLINT | -| ACCOUNTS | COUNT | IF | NCHAR | SPREAD | -| ADD | CREATE | IGNORE | NE | STABLE | -| AFTER | CTIME | IMMEDIATE | NONE | STABLES | -| ALL | DATABASE | IMPORT | NOT | STAR | -| ALTER | DATABASES | IN | NOTNULL | STATEMENT | -| AND | DAYS | INITIALLY | NOW | STDDEV | -| AS | DEFERRED | INSERT | OF | STREAM | -| ASC | DELIMITERS | INSTEAD | OFFSET | STREAMS | -| ATTACH | DESC | INTEGER | OR | STRING | -| AVG | DESCRIBE | INTERVAL | ORDER | SUM | -| BEFORE | DETACH | INTO | PASS | TABLE | -| BEGIN | DIFF | IP | PERCENTILE | TABLES | -| BETWEEN | DISTINCT | IS | PLUS | TAG | -| BIGINT | DIVIDE | ISNULL | PRAGMA | TAGS | -| BINARY | DNODE | JOIN | PREV | TBLOCKS | -| BITAND | DNODES | KEEP | PRIVILEGE | TBNAME | -| BITNOT | DOT | KEY | QUERIES | TIMES | -| BITOR | DOUBLE | KILL | QUERY | TIMESTAMP | -| BOOL | DROP | LAST | RAISE | TINYINT | -| BOTTOM | EACH | LE | REM | TOP | -| BY | END | LEASTSQUARES | REPLACE | TRIGGER | -| CACHE | 
EQ | LIKE | REPLICA | UMINUS | -| CASCADE | EXISTS | LIMIT | RESET | UPLUS | -| CHANGE | EXPLAIN | LINEAR | RESTRICT | USE | -| CLOG | FAIL | LOCAL | ROW | USER | -| CLUSTER | FILL | LP | ROWS | USERS | -| COLON | FIRST | LSHIFT | RP | USING | -| COLUMN | FLOAT | LT | RSHIFT | VALUES | -| COMMA | FOR | MATCH | SCORES | VARIABLE | -| COMP | FROM | MAX | SELECT | VGROUPS | -| CONCAT | GE | METRIC | SEMI | VIEW | -| CONFIGS | GLOB | METRICS | SET | WAVG | -| CONFLICT | GRANTS | MIN | SHOW | WHERE | +| ABLOCKS | CONNECTIONS | GT | MNODES | SLIDING | +| ABORT | COPY | ID | MODULES | SLIMIT | +| ACCOUNT | COUNT | IF | NCHAR | SMALLINT | +| ACCOUNTS | CREATE | IGNORE | NE | SPREAD | +| ADD | CTIME | IMMEDIATE | NONE | STABLE | +| AFTER | DATABASE | IMPORT | NOT | STABLES | +| ALL | DATABASES | IN | NOTNULL | STAR | +| ALTER | DAYS | INITIALLY | NOW | STATEMENT | +| AND | DEFERRED | INSERT | OF | STDDEV | +| AS | DELIMITERS | INSTEAD | OFFSET | STREAM | +| ASC | DESC | INTEGER | OR | STREAMS | +| ATTACH | DESCRIBE | INTERVAL | ORDER | STRING | +| AVG | DETACH | INTO | PASS | SUM | +| BEFORE | DIFF | IP | PERCENTILE | TABLE | +| BEGIN | DISTINCT | IS | PLUS | TABLES | +| BETWEEN | DIVIDE | ISNULL | PRAGMA | TAG | +| BIGINT | DNODE | JOIN | PREV | TAGS | +| BINARY | DNODES | KEEP | PRIVILEGE | TBLOCKS | +| BITAND | DOT | KEY | QUERIES | TBNAME | +| BITNOT | DOUBLE | KILL | QUERY | TIMES | +| BITOR | DROP | LAST | RAISE | TIMESTAMP | +| BOOL | EACH | LE | REM | TINYINT | +| BOTTOM | END | LEASTSQUARES | REPLACE | TOP | +| BY | EQ | LIKE | REPLICA | TRIGGER | +| CACHE | EXISTS | LIMIT | RESET | UMINUS | +| CASCADE | EXPLAIN | LINEAR | RESTRICT | UPLUS | +| CHANGE | FAIL | LOCAL | ROW | USE | +| CLOG | FILL | LP | ROWS | USER | +| CLUSTER | FIRST | LSHIFT | RP | USERS | +| COLON | FLOAT | LT | RSHIFT | USING | +| COLUMN | FOR | MATCH | SCORES | VALUES | +| COMMA | FROM | MAX | SELECT | VARIABLE | +| COMP | GE | METRIC | SEMI | VGROUPS | +| CONCAT | GLOB | METRICS | SET | VIEW | +| CONFIGS | GRANTS | MIN | SHOW | WAVG | +| CONFLICT | GROUP | MINUS | SLASH | WHERE | +| CONNECTION | | | | | + diff --git a/documentation20/cn/12.taos-sql/docs.md b/documentation20/cn/12.taos-sql/docs.md index ff6adad045ccb56675d8b20c9e956f87f278f9e2..1002ffa01c860b2cabd27ddeb4e0b0e9ca967c58 100644 --- a/documentation20/cn/12.taos-sql/docs.md +++ b/documentation20/cn/12.taos-sql/docs.md @@ -1,17 +1,19 @@ # TAOS SQL -本文档说明TAOS SQL支持的语法规则、主要查询功能、支持的SQL查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的SQL语言的基础。 +本文档说明 TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的 SQL 语言的基础。 -TAOS SQL是用户对TDengine进行数据写入和查询的主要工具。TAOS SQL为了便于用户快速上手,在一定程度上提供类似于标准SQL类似的风格和模式。严格意义上,TAOS SQL并不是也不试图提供SQL标准的语法。此外,由于TDengine针对的时序性结构化数据不提供删除功能,因此在TAO SQL中不提供数据删除的相关功能。 +TAOS SQL 是用户对 TDengine 进行数据写入和查询的主要工具。TAOS SQL 为了便于用户快速上手,在一定程度上提供类似于标准 SQL 类似的风格和模式。严格意义上,TAOS SQL 并不是也不试图提供 SQL 标准的语法。此外,由于 TDengine 针对的时序性结构化数据不提供删除功能,因此在 TAO SQL 中不提供数据删除的相关功能。 -本章节SQL语法遵循如下约定: +TAOS SQL 不支持关键字的缩写,例如 DESCRIBE 不能缩写为 DESC。 -- < > 里的内容是用户需要输入的,但不要输入<>本身 -- [ ]表示内容为可选项,但不能输入[]本身 -- | 表示多选一,选择其中一个即可,但不能输入|本身 +本章节 SQL 语法遵循如下约定: + +- < > 里的内容是用户需要输入的,但不要输入 <> 本身 +- [ ] 表示内容为可选项,但不能输入 [] 本身 +- | 表示多选一,选择其中一个即可,但不能输入 | 本身 - … 表示前面的项可重复多个 -为更好地说明SQL语法的规则及其特点,本文假设存在一个数据集。以智能电表(meters)为例,假设每个智能电表采集电流、电压、相位三个量。其建模如下: +为更好地说明 SQL 语法的规则及其特点,本文假设存在一个数据集。以智能电表(meters)为例,假设每个智能电表采集电流、电压、相位三个量。其建模如下: ```mysql taos> DESCRIBE meters; Field | Type | Length | Note | @@ -23,7 +25,7 @@ taos> DESCRIBE meters; location | BINARY | 64 | TAG | groupid | INT | 4 | TAG | ``` 
-数据集包含4个智能电表的数据,按照TDengine的建模规则,对应4个子表,其名称分别是 d1001, d1002, d1003, d1004。 +数据集包含 4 个智能电表的数据,按照 TDengine 的建模规则,对应 4 个子表,其名称分别是 d1001, d1002, d1003, d1004。 ## 支持的数据类型 @@ -142,15 +144,15 @@ TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMic ``` 说明: - 1) 表的第一个字段必须是TIMESTAMP,并且系统自动将其设为主键; + 1) 表的第一个字段必须是 TIMESTAMP,并且系统自动将其设为主键; - 2) 表名最大长度为192; + 2) 表名最大长度为 192; - 3) 表的每行长度不能超过16k个字符; + 3) 表的每行长度不能超过 16k 个字符;(注意:每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置) 4) 子表名只能由字母、数字和下划线组成,且不能以数字开头 - 5) 使用数据类型binary或nchar,需指定其最长的字节数,如binary(20),表示20字节; + 5) 使用数据类型 binary 或 nchar,需指定其最长的字节数,如 binary(20),表示 20 字节; - **以超级表为模板创建数据表** @@ -402,8 +404,8 @@ SELECT select_expr [, select_expr ...] FROM {tb_name_list} [WHERE where_condition] [INTERVAL (interval_val [, interval_offset])] + [SLIDING sliding_val] [FILL fill_val] - [SLIDING fill_val] [GROUP BY col_list] [ORDER BY col_list { DESC | ASC }] [SLIMIT limit_val [, SOFFSET offset_val]] @@ -619,27 +621,30 @@ taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2; Query OK, 1 row(s) in set (0.001091s) ``` -- 可以使用* 返回所有列,或指定列名。可以对数字列进行四则运算,可以给输出的列取列名 -- where语句可以使用各种逻辑判断来过滤数字值,或使用通配符来过滤字符串 -- 输出结果缺省按首列时间戳升序排序,但可以指定按降序排序(_c0指首列时间戳)。使用ORDER BY对其他字段进行排序为非法操作。 -- 参数LIMIT控制输出条数,OFFSET指定从第几条开始输出。LIMIT/OFFSET对结果集的执行顺序在ORDER BY之后。 +- 可以使用 * 返回所有列,或指定列名。可以对数字列进行四则运算,可以给输出的列取列名 +- WHERE 语句可以使用各种逻辑判断来过滤数字值,或使用通配符来过滤字符串 +- 输出结果缺省按首列时间戳升序排序,但可以指定按降序排序( _c0 指首列时间戳)。使用 ORDER BY 对其他字段进行排序为非法操作。 +- 参数 LIMIT 控制输出条数,OFFSET 指定从第几条开始输出。LIMIT/OFFSET 对结果集的执行顺序在 ORDER BY 之后。 +- 参数 SLIMIT 控制由 GROUP BY 指令划分的每个分组中的输出条数。 - 通过”>>"输出结果可以导出到指定文件 ### 支持的条件过滤操作 -| Operation | Note | Applicable Data Types | -| --------- | ----------------------------- | ------------------------------------- | -| > | larger than | **`timestamp`** and all numeric types | -| < | smaller than | **`timestamp`** and all numeric types | -| >= | larger than or equal to | **`timestamp`** and all numeric types | -| <= | smaller than or equal to | **`timestamp`** and all numeric types | -| = | equal to | all types | -| <> | not equal to | all types | -| % | match with any char sequences | **`binary`** **`nchar`** | -| _ | match with a single char | **`binary`** **`nchar`** | +| Operation | Note | Applicable Data Types | +| ----------- | ----------------------------- | ------------------------------------- | +| > | larger than | **`timestamp`** and all numeric types | +| < | smaller than | **`timestamp`** and all numeric types | +| >= | larger than or equal to | **`timestamp`** and all numeric types | +| <= | smaller than or equal to | **`timestamp`** and all numeric types | +| = | equal to | all types | +| <> | not equal to | all types | +| between and | within a certain range | **`timestamp`** and all numeric types | +| % | match with any char sequences | **`binary`** **`nchar`** | +| _ | match with a single char | **`binary`** **`nchar`** | 1. 同时进行多个字段的范围过滤,需要使用关键词 AND 来连接不同的查询条件,暂不支持 OR 连接的不同列之间的查询过滤条件。 -2. 针对单一字段的过滤,如果是时间过滤条件,则一条语句中只支持设定一个;但针对其他的(普通)列或标签列,则可以使用``` OR``` 关键字进行组合条件的查询过滤。例如:((value > 20 and value < 30) OR (value < 12)) 。 +2. 针对单一字段的过滤,如果是时间过滤条件,则一条语句中只支持设定一个;但针对其他的(普通)列或标签列,则可以使用 `OR` 关键字进行组合条件的查询过滤。例如:((value > 20 AND value < 30) OR (value < 12)) 。 +3. 
从 2.0.17 版本开始,条件过滤开始支持 BETWEEN AND 语法,例如 `WHERE col2 BETWEEN 1.5 AND 3.25` 表示查询条件为“1.5 ≤ col2 ≤ 3.25”。 ### SQL 示例 @@ -1160,17 +1165,20 @@ TDengine支持按时间段进行聚合,可以将表中数据按照时间段进 SELECT function_list FROM tb_name [WHERE where_condition] INTERVAL (interval [, offset]) + [SLIDING sliding] [FILL ({NONE | VALUE | PREV | NULL | LINEAR})] SELECT function_list FROM stb_name [WHERE where_condition] INTERVAL (interval [, offset]) + [SLIDING sliding] [FILL ({ VALUE | PREV | NULL | LINEAR})] [GROUP BY tags] ``` - 聚合时间段的长度由关键词INTERVAL指定,最短时间间隔10毫秒(10a),并且支持偏移(偏移必须小于间隔)。聚合查询中,能够同时执行的聚合和选择函数仅限于单个输出的函数:count、avg、sum 、stddev、leastsquares、percentile、min、max、first、last,不能使用具有多行输出结果的函数(例如:top、bottom、diff以及四则运算)。 - WHERE语句可以指定查询的起止时间和其他过滤条件 +- SLIDING语句用于指定聚合时间段的前向增量 - FILL语句指定某一时间区间数据缺失的情况下的填充模式。填充模式包括以下几种: * 不进行填充:NONE(默认填充模式)。 * VALUE填充:固定值填充,此时需要指定填充的数值。例如:fill(value, 1.23)。 @@ -1182,6 +1190,8 @@ SELECT function_list FROM stb_name 2. 在时间维度聚合中,返回的结果中时间序列严格单调递增。 3. 如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用group by语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了group by语句分组,则返回结果中每个group内不按照时间序列严格单调递增。 +时间聚合也常被用于连续查询场景,可以参考文档 [连续查询(Continuous Query)](https://www.taosdata.com/cn/documentation/advanced-features#continuous-query)。 + **示例:** 智能电表的建表语句如下: ```mysql @@ -1200,11 +1210,11 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P ## TAOS SQL 边界限制 -- 数据库名最大长度为32 -- 表名最大长度为192,每行数据最大长度16k个字符 -- 列名最大长度为64,最多允许1024列,最少需要2列,第一列必须是时间戳 -- 标签最多允许128个,可以1个,标签总长度不超过16k个字符 -- SQL语句最大长度65480个字符,但可通过系统配置参数maxSQLLength修改,最长可配置为1M +- 数据库名最大长度为 32 +- 表名最大长度为 192,每行数据最大长度 16k 个字符(注意:数据行内每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置) +- 列名最大长度为 64,最多允许 1024 列,最少需要 2 列,第一列必须是时间戳 +- 标签最多允许 128 个,可以 1 个,标签总长度不超过 16k 个字符 +- SQL 语句最大长度 65480 个字符,但可通过系统配置参数 maxSQLLength 修改,最长可配置为 1M - 库的数目,超级表的数目、表的数目,系统不做限制,仅受系统资源限制 ## TAOS SQL其他约定 @@ -1219,4 +1229,5 @@ TAOS SQL支持表之间按主键时间戳来join两张表的列,暂不支持 **is not null与不为空的表达式适用范围** -is not null支持所有类型的列。不为空的表达式为 <>"",仅对非数值类型的列适用。 \ No newline at end of file +is not null支持所有类型的列。不为空的表达式为 <>"",仅对非数值类型的列适用。 + diff --git a/documentation20/cn/13.faq/docs.md b/documentation20/cn/13.faq/docs.md index 727ccf653f50cb449c1f15e1c903706a2fab2068..d3169d507ac69d6d40eec698edf76a69a929bda2 100644 --- a/documentation20/cn/13.faq/docs.md +++ b/documentation20/cn/13.faq/docs.md @@ -92,6 +92,8 @@ TDengine 目前尚不支持删除功能,未来根据用户需求可能会支 从 2.0.8.0 开始,TDengine 支持更新已经写入数据的功能。使用更新功能需要在创建数据库时使用 UPDATE 1 参数,之后可以使用 INSERT INTO 命令更新已经写入的相同时间戳数据。UPDATE 参数不支持 ALTER DATABASE 命令修改。没有使用 UPDATE 1 参数创建的数据库,写入相同时间戳的数据不会修改之前的数据,也不会报错。 +另需注意,在 UPDATE 设置为 0 时,后发送的相同时间戳的数据会被直接丢弃,但并不会报错,而且仍然会被计入 affected rows (所以不能利用 INSERT 指令的返回信息进行时间戳查重)。这样设计的主要原因是,TDengine 把写入的数据看做一个数据流,无论时间戳是否出现冲突,TDengine 都认为产生数据的原始设备真实地产生了这样的数据。UPDATE 参数只是控制这样的流数据在进行持久化时要怎样处理——UPDATE 为 0 时,表示先写入的数据覆盖后写入的数据;而 UPDATE 为 1 时,表示后写入的数据覆盖先写入的数据。这种覆盖关系如何选择,取决于对数据的后续使用和统计中,希望以先还是后生成的数据为准。 + ## 10. 我怎么创建超过1024列的表? 
使用2.0及其以上版本,默认支持1024列;2.0之前的版本,TDengine最大允许创建250列的表。但是如果确实超过限值,建议按照数据特性,逻辑地将这个宽表分解成几个小表。 diff --git a/src/connector/jdbc/CMakeLists.txt b/src/connector/jdbc/CMakeLists.txt index cec8197849ec3bdd398814fdbe6bf0e3fa9002a3..3c50ac566b5fe49a619a50c9eac446285a2b431c 100644 --- a/src/connector/jdbc/CMakeLists.txt +++ b/src/connector/jdbc/CMakeLists.txt @@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED) ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME} POST_BUILD COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml - COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.20-dist.jar ${LIBRARY_OUTPUT_PATH} + COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.21-dist.jar ${LIBRARY_OUTPUT_PATH} COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml COMMENT "build jdbc driver") ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME}) diff --git a/src/connector/jdbc/deploy-pom.xml b/src/connector/jdbc/deploy-pom.xml index 999e11357d7e7c09c011309a4e0e215b1f48b699..1c24b621efe52e152ac8ea70ef3c2afdbc72d5b0 100755 --- a/src/connector/jdbc/deploy-pom.xml +++ b/src/connector/jdbc/deploy-pom.xml @@ -5,7 +5,7 @@ com.taosdata.jdbc taos-jdbcdriver - 2.0.20 + 2.0.21 jar JDBCDriver diff --git a/src/connector/jdbc/pom.xml b/src/connector/jdbc/pom.xml index 25ed3d22f2b566a8f5761c2d0a2eaa6a0d0d5421..7a421eff2296cc8b61ec6e07174747ef8e224ff0 100755 --- a/src/connector/jdbc/pom.xml +++ b/src/connector/jdbc/pom.xml @@ -3,7 +3,7 @@ 4.0.0 com.taosdata.jdbc taos-jdbcdriver - 2.0.20 + 2.0.21 jar JDBCDriver https://github.com/taosdata/TDengine/tree/master/src/connector/jdbc diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/DatabaseMetaDataResultSet.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/DatabaseMetaDataResultSet.java index 66dc07a63452bb1b4ec0eaffb4579d79846b8e49..f6a0fcca316cb07d28c71a4e1d51d9405de083ba 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/DatabaseMetaDataResultSet.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/DatabaseMetaDataResultSet.java @@ -308,7 +308,7 @@ public class DatabaseMetaDataResultSet implements ResultSet { return colMetaData.getColIndex() + 1; } } - throw new SQLException(TSDBConstants.INVALID_VARIABLES); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_VARIABLE); } @Override diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBConstants.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBConstants.java index 043db9bbd75ffeaa56de3fb5439231849a824662..6179b47da1041a3461f1be43625f8c7d4e284e64 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBConstants.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBConstants.java @@ -14,16 +14,13 @@ *****************************************************************************/ package com.taosdata.jdbc; +import java.sql.SQLException; +import java.sql.Types; import java.util.HashMap; import java.util.Map; public abstract class TSDBConstants { - public static final String STATEMENT_CLOSED = "statement is closed"; - public static final String UNSUPPORTED_METHOD_EXCEPTION_MSG = "this operation is NOT supported currently!"; - public static final String INVALID_VARIABLES = "invalid variables"; - public static final String RESULT_SET_IS_CLOSED = "resultSet is closed"; - public static final String DEFAULT_PORT = "6200"; public static Map DATATYPE_MAP = null; @@ -77,8 +74,65 @@ 
public abstract class TSDBConstants { return WrapErrMsg("unkown error!"); } + public static int taosType2JdbcType(int taosType) throws SQLException { + switch (taosType) { + case TSDBConstants.TSDB_DATA_TYPE_NULL: + return Types.NULL; + case TSDBConstants.TSDB_DATA_TYPE_BOOL: + return Types.BOOLEAN; + case TSDBConstants.TSDB_DATA_TYPE_TINYINT: + return Types.TINYINT; + case TSDBConstants.TSDB_DATA_TYPE_SMALLINT: + return Types.SMALLINT; + case TSDBConstants.TSDB_DATA_TYPE_INT: + return Types.INTEGER; + case TSDBConstants.TSDB_DATA_TYPE_BIGINT: + return Types.BIGINT; + case TSDBConstants.TSDB_DATA_TYPE_FLOAT: + return Types.FLOAT; + case TSDBConstants.TSDB_DATA_TYPE_DOUBLE: + return Types.DOUBLE; + case TSDBConstants.TSDB_DATA_TYPE_BINARY: + return Types.BINARY; + case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP: + return Types.TIMESTAMP; + case TSDBConstants.TSDB_DATA_TYPE_NCHAR: + return Types.NCHAR; + } + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE); + } + + public static int jdbcType2TaosType(int jdbcType) throws SQLException { + switch (jdbcType){ + case Types.NULL: + return TSDBConstants.TSDB_DATA_TYPE_NULL; + case Types.BOOLEAN: + return TSDBConstants.TSDB_DATA_TYPE_BOOL; + case Types.TINYINT: + return TSDBConstants.TSDB_DATA_TYPE_TINYINT; + case Types.SMALLINT: + return TSDBConstants.TSDB_DATA_TYPE_SMALLINT; + case Types.INTEGER: + return TSDBConstants.TSDB_DATA_TYPE_INT; + case Types.BIGINT: + return TSDBConstants.TSDB_DATA_TYPE_BIGINT; + case Types.FLOAT: + return TSDBConstants.TSDB_DATA_TYPE_FLOAT; + case Types.DOUBLE: + return TSDBConstants.TSDB_DATA_TYPE_DOUBLE; + case Types.BINARY: + return TSDBConstants.TSDB_DATA_TYPE_BINARY; + case Types.TIMESTAMP: + return TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP; + case Types.NCHAR: + return TSDBConstants.TSDB_DATA_TYPE_NCHAR; + } + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE); + } + static { DATATYPE_MAP = new HashMap<>(); + DATATYPE_MAP.put(0, "NULL"); DATATYPE_MAP.put(1, "BOOL"); DATATYPE_MAP.put(2, "TINYINT"); DATATYPE_MAP.put(3, "SMALLINT"); @@ -90,4 +144,8 @@ public abstract class TSDBConstants { DATATYPE_MAP.put(9, "TIMESTAMP"); DATATYPE_MAP.put(10, "NCHAR"); } + + public static String jdbcType2TaosTypeName(int type) throws SQLException { + return DATATYPE_MAP.get(jdbcType2TaosType(type)); + } } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java index 3ae8696f7d3931160ab55f31064839b33a9ceecd..78e7aec79c252a2e6b2eeafcba37ab1c66f8674d 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java @@ -18,6 +18,7 @@ public class TSDBErrorNumbers { public static final int ERROR_INVALID_FOR_EXECUTE = 0x230c; //not a valid sql for execute: (SQL) public static final int ERROR_PARAMETER_INDEX_OUT_RANGE = 0x230d; // parameter index out of range public static final int ERROR_SQLCLIENT_EXCEPTION_ON_CONNECTION_CLOSED = 0x230e; // connection already closed + public static final int ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE = 0x230f; //unknown sql type in tdengine public static final int ERROR_UNKNOWN = 0x2350; //unknown error @@ -49,6 +50,7 @@ public class TSDBErrorNumbers { errorNumbers.add(ERROR_INVALID_FOR_EXECUTE); errorNumbers.add(ERROR_PARAMETER_INDEX_OUT_RANGE); errorNumbers.add(ERROR_SQLCLIENT_EXCEPTION_ON_CONNECTION_CLOSED); + 
errorNumbers.add(ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE); /*****************************************************/ errorNumbers.add(ERROR_SUBSCRIBE_FAILED); diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetMetaData.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetMetaData.java index 0c0071a94902262d7f2497070c1034808608b329..e0b1c246cbea246f885ab4c56c7ece0be212ff66 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetMetaData.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetMetaData.java @@ -20,7 +20,7 @@ import java.sql.Timestamp; import java.sql.Types; import java.util.List; -public class TSDBResultSetMetaData implements ResultSetMetaData { +public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaData { List colMetaDataList = null; @@ -28,14 +28,6 @@ public class TSDBResultSetMetaData implements ResultSetMetaData { this.colMetaDataList = metaDataList; } - public T unwrap(Class iface) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); - } - - public boolean isWrapperFor(Class iface) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); - } - public int getColumnCount() throws SQLException { return colMetaDataList.size(); } @@ -94,7 +86,7 @@ public class TSDBResultSetMetaData implements ResultSetMetaData { } public String getSchemaName(int column) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } public int getPrecision(int column) throws SQLException { @@ -125,18 +117,18 @@ public class TSDBResultSetMetaData implements ResultSetMetaData { } public String getTableName(int column) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } public String getCatalogName(int column) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } public int getColumnType(int column) throws SQLException { ColumnMetaData meta = this.colMetaDataList.get(column - 1); switch (meta.getColType()) { case TSDBConstants.TSDB_DATA_TYPE_BOOL: - return java.sql.Types.BIT; + return Types.BOOLEAN; case TSDBConstants.TSDB_DATA_TYPE_TINYINT: return java.sql.Types.TINYINT; case TSDBConstants.TSDB_DATA_TYPE_SMALLINT: @@ -150,13 +142,13 @@ public class TSDBResultSetMetaData implements ResultSetMetaData { case TSDBConstants.TSDB_DATA_TYPE_DOUBLE: return java.sql.Types.DOUBLE; case TSDBConstants.TSDB_DATA_TYPE_BINARY: - return java.sql.Types.CHAR; + return Types.BINARY; case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP: - return java.sql.Types.BIGINT; + return java.sql.Types.TIMESTAMP; case TSDBConstants.TSDB_DATA_TYPE_NCHAR: - return java.sql.Types.CHAR; + return Types.NCHAR; } - throw new SQLException(TSDBConstants.INVALID_VARIABLES); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_VARIABLE); } public String getColumnTypeName(int column) throws SQLException { @@ -173,7 +165,7 @@ public class TSDBResultSetMetaData implements ResultSetMetaData { } public boolean isDefinitelyWritable(int column) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw 
TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } public String getColumnClassName(int column) throws SQLException { diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetWrapper.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetWrapper.java index 98b823a3c1d5cefb99e5c5824ce334d42cc40d9e..48854e773f89a45784de3cd709ec5bbe6185e09b 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetWrapper.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBResultSetWrapper.java @@ -1153,11 +1153,11 @@ public class TSDBResultSetWrapper implements ResultSet { } public T getObject(int columnIndex, Class type) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } public T getObject(String columnLabel, Class type) throws SQLException { - throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); } @Override diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBSubscribe.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBSubscribe.java index c21a058ba2bdc8de20a7ca2e9ceb9369396702be..dd0d0d5b7b2744e0e6d28a3d73ec2b29fab37743 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBSubscribe.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBSubscribe.java @@ -14,12 +14,11 @@ *****************************************************************************/ package com.taosdata.jdbc; -import javax.management.OperationsException; import java.sql.SQLException; public class TSDBSubscribe { - private TSDBJNIConnector connecter = null; - private long id = 0; + private final TSDBJNIConnector connecter; + private final long id; TSDBSubscribe(TSDBJNIConnector connecter, long id) throws SQLException { if (null != connecter) { diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSet.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSet.java index b7a0df7de75cfeee3baad3d271c6639d52d94ddf..9d394b8b038917d637ab43826d57e4d79b1746c7 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSet.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSet.java @@ -18,10 +18,10 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet { private final String database; private final Statement statement; // data - private ArrayList> resultSet = new ArrayList<>(); + private ArrayList> resultSet; // meta - private ArrayList columnNames = new ArrayList<>(); - private ArrayList columns = new ArrayList<>(); + private ArrayList columnNames; + private ArrayList columns; private RestfulResultSetMetaData metaData; /** @@ -29,11 +29,36 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet { * * @param resultJson: 包含data信息的结果集,有sql返回的结果集 ***/ - public RestfulResultSet(String database, Statement statement, JSONObject resultJson) { + public RestfulResultSet(String database, Statement statement, JSONObject resultJson) throws SQLException { this.database = database; this.statement = statement; + // column metadata + JSONArray columnMeta = resultJson.getJSONArray("column_meta"); + columnNames = new ArrayList<>(); + columns = new ArrayList<>(); + for (int colIndex = 0; colIndex < columnMeta.size(); colIndex++) { + JSONArray col = 
columnMeta.getJSONArray(colIndex); + String col_name = col.getString(0); + int col_type = TSDBConstants.taosType2JdbcType(col.getInteger(1)); + int col_length = col.getInteger(2); + columnNames.add(col_name); + columns.add(new Field(col_name, col_type, col_length, "")); + } + this.metaData = new RestfulResultSetMetaData(this.database, columns, this); + // row data JSONArray data = resultJson.getJSONArray("data"); + resultSet = new ArrayList<>(); + for (int rowIndex = 0; rowIndex < data.size(); rowIndex++) { + ArrayList row = new ArrayList(); + JSONArray jsonRow = data.getJSONArray(rowIndex); + for (int colIndex = 0; colIndex < jsonRow.size(); colIndex++) { + row.add(parseColumnData(jsonRow, colIndex, columns.get(colIndex).type)); + } + resultSet.add(row); + } + + /* int columnIndex = 0; for (; columnIndex < data.size(); columnIndex++) { ArrayList oneRow = new ArrayList<>(); @@ -52,50 +77,77 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet { columns.add(new Field(name, "", 0, "")); } this.metaData = new RestfulResultSetMetaData(this.database, columns, this); - } - - /** - * 由多个resultSet的JSON构造结果集 - * - * @param resultJson: 包含data信息的结果集,有sql返回的结果集 - * @param fieldJson: 包含多个(最多2个)meta信息的结果集,有describe xxx - **/ - public RestfulResultSet(String database, Statement statement, JSONObject resultJson, List fieldJson) { - this(database, statement, resultJson); - ArrayList newColumns = new ArrayList<>(); - - for (Field column : columns) { - Field field = findField(column.name, fieldJson); - if (field != null) { - newColumns.add(field); - } else { - newColumns.add(column); - } - } - this.columns = newColumns; - this.metaData = new RestfulResultSetMetaData(this.database, this.columns, this); - } - - public Field findField(String columnName, List fieldJsonList) { - for (JSONObject fieldJSON : fieldJsonList) { - JSONArray fieldDataJson = fieldJSON.getJSONArray("data"); - for (int i = 0; i < fieldDataJson.size(); i++) { - JSONArray field = fieldDataJson.getJSONArray(i); - if (columnName.equalsIgnoreCase(field.getString(0))) { - return new Field(field.getString(0), field.getString(1), field.getInteger(2), field.getString(3)); - } - } + */ + } + + private Object parseColumnData(JSONArray row, int colIndex, int sqlType) { + switch (sqlType) { + case Types.NULL: + return null; + case Types.BOOLEAN: + return row.getBoolean(colIndex); + case Types.TINYINT: + case Types.SMALLINT: + return row.getShort(colIndex); + case Types.INTEGER: + return row.getInteger(colIndex); + case Types.BIGINT: + return row.getBigInteger(colIndex); + case Types.FLOAT: + return row.getFloat(colIndex); + case Types.DOUBLE: + return row.getDouble(colIndex); + case Types.TIMESTAMP: + return new Timestamp(row.getDate(colIndex).getTime()); + case Types.BINARY: + case Types.NCHAR: + default: + return row.getString(colIndex); } - return null; } +// /** +// * 由多个resultSet的JSON构造结果集 +// * +// * @param resultJson: 包含data信息的结果集,有sql返回的结果集 +// * @param fieldJson: 包含多个(最多2个)meta信息的结果集,有describe xxx +// **/ +// public RestfulResultSet(String database, Statement statement, JSONObject resultJson, List fieldJson) throws SQLException { +// this(database, statement, resultJson); +// ArrayList newColumns = new ArrayList<>(); +// +// for (Field column : columns) { +// Field field = findField(column.name, fieldJson); +// if (field != null) { +// newColumns.add(field); +// } else { +// newColumns.add(column); +// } +// } +// this.columns = newColumns; +// this.metaData = new RestfulResultSetMetaData(this.database, this.columns, 
this); +// } + +// public Field findField(String columnName, List fieldJsonList) { +// for (JSONObject fieldJSON : fieldJsonList) { +// JSONArray fieldDataJson = fieldJSON.getJSONArray("data"); +// for (int i = 0; i < fieldDataJson.size(); i++) { +// JSONArray field = fieldDataJson.getJSONArray(i); +// if (columnName.equalsIgnoreCase(field.getString(0))) { +// return new Field(field.getString(0), field.getString(1), field.getInteger(2), field.getString(3)); +// } +// } +// } +// return null; +// } + public class Field { String name; - String type; + int type; int length; String note; - public Field(String name, String type, int length, String note) { + public Field(String name, int type, int length, String note) { this.name = name; this.type = type; this.length = length; diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSetMetaData.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSetMetaData.java index 44a02f486b22063cabb8950c21d3a9f55bfc70c8..29ba13bec107be4564a95d27e800fce53b114f96 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSetMetaData.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulResultSetMetaData.java @@ -5,6 +5,7 @@ import com.taosdata.jdbc.TSDBConstants; import java.sql.ResultSetMetaData; import java.sql.SQLException; import java.sql.Timestamp; +import java.sql.Types; import java.util.ArrayList; public class RestfulResultSetMetaData implements ResultSetMetaData { @@ -53,14 +54,14 @@ public class RestfulResultSetMetaData implements ResultSetMetaData { @Override public boolean isSigned(int column) throws SQLException { - String type = this.fields.get(column - 1).type.toUpperCase(); + int type = this.fields.get(column - 1).type; switch (type) { - case "TINYINT": - case "SMALLINT": - case "INT": - case "BIGINT": - case "FLOAT": - case "DOUBLE": + case Types.TINYINT: + case Types.SMALLINT: + case Types.INTEGER: + case Types.BIGINT: + case Types.FLOAT: + case Types.DOUBLE: return true; default: return false; @@ -89,14 +90,14 @@ public class RestfulResultSetMetaData implements ResultSetMetaData { @Override public int getPrecision(int column) throws SQLException { - String type = this.fields.get(column - 1).type.toUpperCase(); + int type = this.fields.get(column - 1).type; switch (type) { - case "FLOAT": + case Types.FLOAT: return 5; - case "DOUBLE": + case Types.DOUBLE: return 9; - case "BINARY": - case "NCHAR": + case Types.BINARY: + case Types.NCHAR: return this.fields.get(column - 1).length; default: return 0; @@ -105,11 +106,11 @@ public class RestfulResultSetMetaData implements ResultSetMetaData { @Override public int getScale(int column) throws SQLException { - String type = this.fields.get(column - 1).type.toUpperCase(); + int type = this.fields.get(column - 1).type; switch (type) { - case "FLOAT": + case Types.FLOAT: return 5; - case "DOUBLE": + case Types.DOUBLE: return 9; default: return 0; @@ -128,36 +129,13 @@ public class RestfulResultSetMetaData implements ResultSetMetaData { @Override public int getColumnType(int column) throws SQLException { - String type = this.fields.get(column - 1).type.toUpperCase(); - switch (type) { - case "BOOL": - return java.sql.Types.BOOLEAN; - case "TINYINT": - return java.sql.Types.TINYINT; - case "SMALLINT": - return java.sql.Types.SMALLINT; - case "INT": - return java.sql.Types.INTEGER; - case "BIGINT": - return java.sql.Types.BIGINT; - case "FLOAT": - return java.sql.Types.FLOAT; - case "DOUBLE": - return java.sql.Types.DOUBLE; 
- case "BINARY": - return java.sql.Types.BINARY; - case "TIMESTAMP": - return java.sql.Types.TIMESTAMP; - case "NCHAR": - return java.sql.Types.NCHAR; - } - throw new SQLException(TSDBConstants.INVALID_VARIABLES); + return this.fields.get(column - 1).type; } @Override public String getColumnTypeName(int column) throws SQLException { - String type = fields.get(column - 1).type; - return type.toUpperCase(); + int type = fields.get(column - 1).type; + return TSDBConstants.jdbcType2TaosTypeName(type); } @Override @@ -177,26 +155,26 @@ public class RestfulResultSetMetaData implements ResultSetMetaData { @Override public String getColumnClassName(int column) throws SQLException { - String type = this.fields.get(column - 1).type; + int type = this.fields.get(column - 1).type; String columnClassName = ""; switch (type) { - case "BOOL": + case Types.BOOLEAN: return Boolean.class.getName(); - case "TINYINT": - case "SMALLINT": + case Types.TINYINT: + case Types.SMALLINT: return Short.class.getName(); - case "INT": + case Types.INTEGER: return Integer.class.getName(); - case "BIGINT": + case Types.BIGINT: return Long.class.getName(); - case "FLOAT": + case Types.FLOAT: return Float.class.getName(); - case "DOUBLE": + case Types.DOUBLE: return Double.class.getName(); - case "TIMESTAMP": + case Types.TIMESTAMP: return Timestamp.class.getName(); - case "BINARY": - case "NCHAR": + case Types.BINARY: + case Types.NCHAR: return String.class.getName(); } return columnClassName; diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulStatement.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulStatement.java index 8d67586be257fab2bf516f8aecad825e1dfdc02e..d60940d87762340203e54fb7df23579c2f93d510 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulStatement.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulStatement.java @@ -151,22 +151,21 @@ public class RestfulStatement extends AbstractStatement { throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + resultJson.getString("desc") + "\n" + "error code: " + resultJson.getString("code"))); } // parse table name from sql - String[] tableIdentifiers = parseTableIdentifier(sql); - if (tableIdentifiers != null) { - List fieldJsonList = new ArrayList<>(); - for (String tableIdentifier : tableIdentifiers) { - // field meta - String fields = HttpClientPoolUtil.execute(url, "DESCRIBE " + tableIdentifier); - JSONObject fieldJson = JSON.parseObject(fields); - if (fieldJson.getString("status").equals("error")) { - throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + fieldJson.getString("desc") + "\n" + "error code: " + fieldJson.getString("code"))); - } - fieldJsonList.add(fieldJson); - } - this.resultSet = new RestfulResultSet(database, this, resultJson, fieldJsonList); - } else { - this.resultSet = new RestfulResultSet(database, this, resultJson); - } +// String[] tableIdentifiers = parseTableIdentifier(sql); +// if (tableIdentifiers != null) { +// List fieldJsonList = new ArrayList<>(); +// for (String tableIdentifier : tableIdentifiers) { +// String fields = HttpClientPoolUtil.execute(url, "DESCRIBE " + tableIdentifier); +// JSONObject fieldJson = JSON.parseObject(fields); +// if (fieldJson.getString("status").equals("error")) { +// throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + fieldJson.getString("desc") + "\n" + "error code: " + fieldJson.getString("code"))); +// } +// fieldJsonList.add(fieldJson); +// } +// 
this.resultSet = new RestfulResultSet(database, this, resultJson, fieldJsonList); +// } else { + this.resultSet = new RestfulResultSet(database, this, resultJson); +// } this.affectedRows = 0; return resultSet; } @@ -201,7 +200,7 @@ public class RestfulStatement extends AbstractStatement { @Override public ResultSet getResultSet() throws SQLException { if (isClosed()) - throw new SQLException(TSDBConstants.STATEMENT_CLOSED); + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED); return resultSet; } diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ResultSetTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ResultSetTest.java index fb0053cb4b1be8c6aa72ed9ae6b1d70a073b7cff..87348f90261163eff650ecb225a168921c162f58 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ResultSetTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ResultSetTest.java @@ -13,7 +13,6 @@ import java.util.HashMap; import java.util.Properties; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; public class ResultSetTest { static Connection connection; diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/SubscribeTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/SubscribeTest.java index 685957d60af694ffb0327fa9acd580fa45eda39d..11c3de305280a48b690ccdacd2df9751f4b718a9 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/SubscribeTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/SubscribeTest.java @@ -48,29 +48,28 @@ public class SubscribeTest { @Test public void subscribe() { try { - String rawSql = "select * from " + dbName + "." + tName + ";"; System.out.println(rawSql); - TSDBSubscribe subscribe = ((TSDBConnection) connection).subscribe(topic, rawSql, false); +// TSDBSubscribe subscribe = ((TSDBConnection) connection).subscribe(topic, rawSql, false); - int a = 0; - while (true) { - TimeUnit.MILLISECONDS.sleep(1000); - TSDBResultSet resSet = subscribe.consume(); - while (resSet.next()) { - for (int i = 1; i <= resSet.getMetaData().getColumnCount(); i++) { - System.out.printf(i + ": " + resSet.getString(i) + "\t"); - } - System.out.println("\n======" + a + "=========="); - } - a++; - if (a >= 2) { - break; - } +// int a = 0; +// while (true) { +// TimeUnit.MILLISECONDS.sleep(1000); +// TSDBResultSet resSet = subscribe.consume(); +// while (resSet.next()) { +// for (int i = 1; i <= resSet.getMetaData().getColumnCount(); i++) { +// System.out.printf(i + ": " + resSet.getString(i) + "\t"); +// } +// System.out.println("\n======" + a + "=========="); +// } +// a++; +// if (a >= 2) { +// break; +// } // resSet.close(); - } - - subscribe.close(true); +// } +// +// subscribe.close(true); } catch (Exception e) { e.printStackTrace(); } diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java index 185c0306f5259206c96072b7f306f18a7a8a8f7e..451f5d49160a69b4786e806c51480236083eacac 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java @@ -10,7 +10,7 @@ import java.util.Random; public class RestfulJDBCTest { private static final String host = "127.0.0.1"; - // private static final String host = "master"; +// private static final String host = "master"; private static Connection connection; private Random random = new 
Random(System.currentTimeMillis()); diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java index 313abfd27865ef37d8b522b38951b17902f5a486..fe4d04775da642bcf258e3680b0f5ba822e69684 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java @@ -12,7 +12,7 @@ import java.sql.*; @FixMethodOrder(MethodSorters.NAME_ASCENDING) public class SQLTest { private static final String host = "127.0.0.1"; - // private static final String host = "master"; +// private static final String host = "master"; private static Connection connection; @Test @@ -323,6 +323,18 @@ public class SQLTest { SQLExecutor.executeQuery(connection, sql); } + @Test + public void testCase052() { + String sql = "select server_status()"; + SQLExecutor.executeQuery(connection, sql); + } + + @Test + public void testCase053() { + String sql = "select avg(cpu_taosd), avg(cpu_system), max(cpu_cores), avg(mem_taosd), avg(mem_system), max(mem_total), avg(disk_used), max(disk_total), avg(band_speed), avg(io_read), avg(io_write), sum(req_http), sum(req_select), sum(req_insert) from log.dn1 where ts> now - 60m and ts<= now interval(1m) fill(value, 0)"; + SQLExecutor.executeQuery(connection, sql); + } + @BeforeClass public static void before() throws ClassNotFoundException, SQLException { Class.forName("com.taosdata.jdbc.rs.RestfulDriver"); diff --git a/src/dnode/src/dnodeVnodes.c b/src/dnode/src/dnodeVnodes.c index 9f32541612c48d9d68cbbbb0799a55ca2c2e5838..d00314fcbc806ab89b9f71a3a28880c7878f4579 100644 --- a/src/dnode/src/dnodeVnodes.c +++ b/src/dnode/src/dnodeVnodes.c @@ -198,6 +198,14 @@ void dnodeCleanupVnodes() { static void dnodeProcessStatusRsp(SRpcMsg *pMsg) { if (pMsg->code != TSDB_CODE_SUCCESS) { dError("status rsp is received, error:%s", tstrerror(pMsg->code)); + if (pMsg->code == TSDB_CODE_MND_DNODE_NOT_EXIST) { + char clusterId[TSDB_CLUSTER_ID_LEN]; + dnodeGetClusterId(clusterId); + if (clusterId[0] != '\0') { + dError("exit zombie dropped dnode"); + exit(EXIT_FAILURE); + } + } taosTmrReset(dnodeSendStatusMsg, tsStatusInterval * 1000, NULL, tsDnodeTmr, &tsStatusTimer); return; } diff --git a/src/kit/taosdemo/insert.json b/src/kit/taosdemo/insert.json index 9e252772ea42ea09225b8583696bda386ed7916e..5276b9fb61af9645578392a0035c8a95898c75e2 100644 --- a/src/kit/taosdemo/insert.json +++ b/src/kit/taosdemo/insert.json @@ -8,10 +8,12 @@ "thread_count": 4, "thread_count_create_tbl": 4, "result_file": "./insert_res.txt", - "confirm_parameter_prompt": "no", + "confirm_parameter_prompt": "no", + "insert_interval": 0, + "num_of_records_per_req": 100, "databases": [{ "dbinfo": { - "name": "dbx", + "name": "db", "drop": "yes", "replica": 1, "days": 10, @@ -30,14 +32,13 @@ }, "super_tables": [{ "name": "stb", - "child_table_exists":"no", + "child_table_exists":"no", "childtable_count": 100, "childtable_prefix": "stb_", "auto_create_table": "no", "data_source": "rand", "insert_mode": "taosc", - "insert_rate": 0, - "insert_rows": 1000, + "insert_rows": 100000, "multi_thread_write_one_tbl": "no", "number_of_tbl_in_one_sql": 0, "rows_per_tbl": 100, diff --git a/src/kit/taosdemo/taosdemo.c b/src/kit/taosdemo/taosdemo.c index 86b84534f6639102f75f8f70cbd07efe254dc164..045f459f8ecf276276cb4067067b255d6a0f8703 100644 --- a/src/kit/taosdemo/taosdemo.c +++ b/src/kit/taosdemo/taosdemo.c @@ -56,61 +56,12 @@ #include "cJSON.h" #include "taos.h" +#include "taoserror.h" 
#include "tutil.h" #define REQ_EXTRA_BUF_LEN 1024 #define RESP_BUF_LEN 4096 -#ifdef WINDOWS -#include -// Some old MinGW/CYGWIN distributions don't define this: -#ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING -#define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004 -#endif - -static HANDLE g_stdoutHandle; -static DWORD g_consoleMode; - -void setupForAnsiEscape(void) { - DWORD mode = 0; - g_stdoutHandle = GetStdHandle(STD_OUTPUT_HANDLE); - - if(g_stdoutHandle == INVALID_HANDLE_VALUE) { - exit(GetLastError()); - } - - if(!GetConsoleMode(g_stdoutHandle, &mode)) { - exit(GetLastError()); - } - - g_consoleMode = mode; - - // Enable ANSI escape codes - mode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING; - - if(!SetConsoleMode(g_stdoutHandle, mode)) { - exit(GetLastError()); - } -} - -void resetAfterAnsiEscape(void) { - // Reset colors - printf("\x1b[0m"); - - // Reset console mode - if(!SetConsoleMode(g_stdoutHandle, g_consoleMode)) { - exit(GetLastError()); - } -} -#else -void setupForAnsiEscape(void) {} - -void resetAfterAnsiEscape(void) { - // Reset colors - printf("\x1b[0m"); -} -#endif - extern char configDir[]; #define INSERT_JSON_NAME "insert.json" @@ -130,7 +81,6 @@ extern char configDir[]; #define OPT_ABORT 1 /* –abort */ #define STRING_LEN 60000 #define MAX_PREPARED_RAND 1000000 -//#define MAX_SQL_SIZE 65536 #define MAX_FILE_NAME_LEN 256 #define MAX_SAMPLES_ONCE_FROM_FILE 10000 @@ -163,7 +113,7 @@ enum MODE { ASYNC, MODE_BUT }; - + enum QUERY_TYPE { NO_INSERT_TYPE, INSERT_TYPE, @@ -222,6 +172,7 @@ typedef struct { /* Used by main to communicate with parse_opt. */ typedef struct SArguments_S { char * metaFile; + int test_mode; char * host; uint16_t port; char * user; @@ -233,12 +184,15 @@ typedef struct SArguments_S { bool use_metric; bool insert_only; bool answer_yes; + bool debug_print; + bool verbose_print; char * output_file; int mode; char * datatype[MAX_NUM_DATATYPE + 1]; int len_of_binary; int num_of_CPR; int num_of_threads; + int insert_interval; int num_of_RPR; int num_of_tables; int num_of_DPT; @@ -266,7 +220,6 @@ typedef struct SSuperTable_S { char childTblPrefix[MAX_TB_NAME_SIZE]; char dataSource[MAX_TB_NAME_SIZE+1]; // rand_gen or sample char insertMode[MAX_TB_NAME_SIZE]; // taosc, restful - int insertRate; // 0: unlimit > 0 rows/s int multiThreadWriteOneTbl; // 0: no, 1: yes int numberOfTblInOneSql; // 0/1: one table, > 1: number of tbl @@ -288,7 +241,7 @@ typedef struct SSuperTable_S { StrColumn tags[MAX_TAG_COUNT]; char* childTblName; - char* colsOfCreatChildTable; + char* colsOfCreateChildTable; int lenOfOneRow; int lenOfTagOfOneRow; @@ -431,7 +384,7 @@ typedef struct SThreadInfo_S { int start_table_id; int end_table_id; int data_of_rate; - int64_t start_time; + uint64_t start_time; char* cols; bool use_metric; SSuperTable* superTblInfo; @@ -439,10 +392,9 @@ typedef struct SThreadInfo_S { // for async insert tsem_t lock_sem; int64_t counter; - int64_t st; - int64_t et; + uint64_t st; + uint64_t et; int64_t lastTs; - int nrecords_per_request; // statistics int64_t totalRowsInserted; @@ -457,224 +409,56 @@ typedef struct SThreadInfo_S { } threadInfo; +#ifdef WINDOWS +#include +// Some old MinGW/CYGWIN distributions don't define this: +#ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING +#define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004 +#endif -#ifdef LINUX - /* The options we understand. */ - static struct argp_option options[] = { - {0, 'f', "meta file", 0, "The meta data to the execution procedure, if use -f, all others options invalid. 
Default is NULL.", 0}, - #ifdef _TD_POWER_ - {0, 'c', "config_directory", 0, "Configuration directory. Default is '/etc/power/'.", 1}, - {0, 'P', "password", 0, "The password to use when connecting to the server. Default is 'powerdb'.", 2}, - #else - {0, 'c', "config_directory", 0, "Configuration directory. Default is '/etc/taos/'.", 1}, - {0, 'P', "password", 0, "The password to use when connecting to the server. Default is 'taosdata'.", 2}, - #endif - {0, 'h', "host", 0, "The host to connect to TDengine. Default is localhost.", 2}, - {0, 'p', "port", 0, "The TCP/IP port number to use for the connection. Default is 0.", 2}, - {0, 'u', "user", 0, "The TDengine user name to use when connecting to the server. Default is 'root'.", 2}, - {0, 'd', "database", 0, "Destination database. Default is 'test'.", 3}, - {0, 'a', "replica", 0, "Set the replica parameters of the database, Default 1, min: 1, max: 3.", 4}, - {0, 'm', "table_prefix", 0, "Table prefix name. Default is 't'.", 4}, - {0, 's', "sql file", 0, "The select sql file.", 6}, - {0, 'M', 0, 0, "Use metric flag.", 4}, - {0, 'o', "outputfile", 0, "Direct output to the named file. Default is './output.txt'.", 6}, - {0, 'q', "query_mode", 0, "Query mode--0: SYNC, 1: ASYNC. Default is SYNC.", 4}, - {0, 'b', "type_of_cols", 0, "The data_type of columns, default: TINYINT,SMALLINT,INT,BIGINT,FLOAT,DOUBLE,BINARY,NCHAR,BOOL,TIMESTAMP.", 4}, - {0, 'w', "length_of_chartype", 0, "The length of data_type 'BINARY' or 'NCHAR'. Default is 16", 4}, - {0, 'l', "num_of_cols_per_record", 0, "The number of columns per record. Default is 10.", 4}, - {0, 'T', "num_of_threads", 0, "The number of threads. Default is 10.", 4}, - // {0, 'r', "num_of_records_per_req", 0, "The number of records per request. Default is 100.", 4}, - {0, 't', "num_of_tables", 0, "The number of tables. Default is 10000.", 4}, - {0, 'n', "num_of_records_per_table", 0, "The number of records per table. Default is 10000.", 4}, - {0, 'x', 0, 0, "Not insert only flag.", 4}, - {0, 'y', 0, 0, "Default input yes for prompt.", 4}, - {0, 'O', "disorderRatio", 0, "Insert mode--0: In order, > 0: disorder ratio. Default is in order.", 4}, - {0, 'R', "disorderRang", 0, "Out of order data's range, ms, default is 1000.", 4}, - //{0, 'D', "delete database", 0, "if elete database if exists. 0: no, 1: yes, default is 1", 5}, - {0}}; - -/* Parse a single option. */ -static error_t parse_opt(int key, char *arg, struct argp_state *state) { - // Get the input argument from argp_parse, which we know is a pointer to our arguments structure. 
- SArguments *arguments = state->input; - wordexp_t full_path; - char **sptr; - switch (key) { - case 'f': - arguments->metaFile = arg; - break; - case 'h': - arguments->host = arg; - break; - case 'p': - arguments->port = atoi(arg); - break; - case 'u': - arguments->user = arg; - break; - case 'P': - arguments->password = arg; - break; - case 'o': - arguments->output_file = arg; - break; - case 's': - arguments->sqlFile = arg; - break; - case 'q': - arguments->mode = atoi(arg); - break; - case 'T': - arguments->num_of_threads = atoi(arg); - break; - //case 'r': - // arguments->num_of_RPR = atoi(arg); - // break; - case 't': - arguments->num_of_tables = atoi(arg); - break; - case 'n': - arguments->num_of_DPT = atoi(arg); - break; - case 'd': - arguments->database = arg; - break; - case 'l': - arguments->num_of_CPR = atoi(arg); - break; - case 'b': - sptr = arguments->datatype; - if (strstr(arg, ",") == NULL) { - if (strcasecmp(arg, "INT") != 0 && strcasecmp(arg, "FLOAT") != 0 && - strcasecmp(arg, "TINYINT") != 0 && strcasecmp(arg, "BOOL") != 0 && - strcasecmp(arg, "SMALLINT") != 0 && strcasecmp(arg, "TIMESTAMP") != 0 && - strcasecmp(arg, "BIGINT") != 0 && strcasecmp(arg, "DOUBLE") != 0 && - strcasecmp(arg, "BINARY") != 0 && strcasecmp(arg, "NCHAR") != 0) { - argp_error(state, "Invalid data_type!"); - } - sptr[0] = arg; - } else { - int index = 0; - char *dupstr = strdup(arg); - char *running = dupstr; - char *token = strsep(&running, ","); - while (token != NULL) { - if (strcasecmp(token, "INT") != 0 && strcasecmp(token, "FLOAT") != 0 && - strcasecmp(token, "TINYINT") != 0 && strcasecmp(token, "BOOL") != 0 && - strcasecmp(token, "SMALLINT") != 0 && strcasecmp(token, "TIMESTAMP") != 0 && - strcasecmp(token, "BIGINT") != 0 && strcasecmp(token, "DOUBLE") != 0 && - strcasecmp(token, "BINARY") != 0 && strcasecmp(token, "NCHAR") != 0) { - argp_error(state, "Invalid data_type!"); - } - sptr[index++] = token; - token = strsep(&running, ","); - if (index >= MAX_NUM_DATATYPE) break; - } - } - break; - case 'w': - arguments->len_of_binary = atoi(arg); - break; - case 'm': - arguments->tb_prefix = arg; - break; - case 'M': - arguments->use_metric = true; - break; - case 'x': - arguments->insert_only = true; - break; +static HANDLE g_stdoutHandle; +static DWORD g_consoleMode; - case 'y': - arguments->answer_yes = true; - break; - case 'c': - if (wordexp(arg, &full_path, 0) != 0) { - fprintf(stderr, "Invalid path %s\n", arg); - return -1; - } - taos_options(TSDB_OPTION_CONFIGDIR, full_path.we_wordv[0]); - wordfree(&full_path); - break; - case 'O': - arguments->disorderRatio = atoi(arg); - if (arguments->disorderRatio < 0 || arguments->disorderRatio > 100) - { - argp_error(state, "Invalid disorder ratio, should 1 ~ 100!"); - } - break; - case 'R': - arguments->disorderRange = atoi(arg); - break; - case 'a': - arguments->replica = atoi(arg); - if (arguments->replica > 3 || arguments->replica < 1) - { - arguments->replica = 1; - } - break; - //case 'D': - // arguments->method_of_delete = atoi(arg); - // break; - case OPT_ABORT: - arguments->abort = 1; - break; - case ARGP_KEY_ARG: - /*arguments->arg_list = &state->argv[state->next-1]; - state->next = state->argc;*/ - argp_usage(state); - break; +void setupForAnsiEscape(void) { + DWORD mode = 0; + g_stdoutHandle = GetStdHandle(STD_OUTPUT_HANDLE); - default: - return ARGP_ERR_UNKNOWN; + if(g_stdoutHandle == INVALID_HANDLE_VALUE) { + exit(GetLastError()); } - return 0; + + if(!GetConsoleMode(g_stdoutHandle, &mode)) { + exit(GetLastError()); + } + + 
g_consoleMode = mode; + + // Enable ANSI escape codes + mode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING; + + if(!SetConsoleMode(g_stdoutHandle, mode)) { + exit(GetLastError()); + } } -static struct argp argp = {options, parse_opt, 0, 0}; +void resetAfterAnsiEscape(void) { + // Reset colors + printf("\x1b[0m"); -void parse_args(int argc, char *argv[], SArguments *arguments) { - argp_parse(&argp, argc, argv, 0, 0, arguments); - if (arguments->abort) { - #ifndef _ALPINE - error(10, 0, "ABORTED"); - #else - abort(); - #endif + // Reset console mode + if(!SetConsoleMode(g_stdoutHandle, g_consoleMode)) { + exit(GetLastError()); } } - #else - void printHelp() { - char indent[10] = " "; - printf("%s%s\n", indent, "-f"); - printf("%s%s%s\n", indent, indent, "The meta file to the execution procedure. Default is './meta.json'."); - printf("%s%s\n", indent, "-c"); - printf("%s%s%s\n", indent, indent, "config_directory, Configuration directory. Default is '/etc/taos/'."); - } - - void parse_args(int argc, char *argv[], SArguments *arguments) { - for (int i = 1; i < argc; i++) { - if (strcmp(argv[i], "-f") == 0) { - arguments->metaFile = argv[++i]; - } else if (strcmp(argv[i], "-c") == 0) { - strcpy(configDir, argv[++i]); - } else if (strcmp(argv[i], "--help") == 0) { - printHelp(); - exit(EXIT_FAILURE); - } else { - fprintf(stderr, "wrong options\n"); - printHelp(); - exit(EXIT_FAILURE); - } - } - } +void setupForAnsiEscape(void) {} + +void resetAfterAnsiEscape(void) { + // Reset colors + printf("\x1b[0m"); +} #endif -static bool getInfoFromJsonFile(char* file); -//static int generateOneRowDataForStb(SSuperTable* stbInfo); -//static int getDataIntoMemForStb(SSuperTable* stbInfo); -static void init_rand_data(); static int createDatabases(); static void createChildTables(); static int queryDbExec(TAOS *taos, char *command, int type); @@ -685,9 +469,12 @@ int32_t randint[MAX_PREPARED_RAND]; int64_t randbigint[MAX_PREPARED_RAND]; float randfloat[MAX_PREPARED_RAND]; double randdouble[MAX_PREPARED_RAND]; -char *aggreFunc[] = {"*", "count(*)", "avg(col0)", "sum(col0)", "max(col0)", "min(col0)", "first(col0)", "last(col0)"}; +char *aggreFunc[] = {"*", "count(*)", "avg(col0)", "sum(col0)", + "max(col0)", "min(col0)", "first(col0)", "last(col0)"}; -SArguments g_args = {NULL, +SArguments g_args = { + NULL, // metaFile + 0, // test_mode "127.0.0.1", // host 6030, // port "root", // user @@ -701,7 +488,9 @@ SArguments g_args = {NULL, "t", // tb_prefix NULL, // sqlFile false, // use_metric - false, // insert_only + false, // insert_only + false, // debug_print + false, // verbose_print false, // answer_yes; "./output.txt", // output_file 0, // mode : sync or async @@ -720,6 +509,7 @@ SArguments g_args = {NULL, 16, // len_of_binary 10, // num_of_CPR 10, // num_of_connections/thread + 0, // insert_interval 100, // num_of_RPR 10000, // num_of_tables 10000, // num_of_DPT @@ -731,13 +521,267 @@ SArguments g_args = {NULL, }; -static int g_jsonType = 0; + static SDbs g_Dbs; static int g_totalChildTables = 0; static SQueryMetaInfo g_queryInfo; static FILE * g_fpOfInsertResult = NULL; +#define debugPrint(fmt, ...) \ + do { if (g_args.debug_print || g_args.verbose_print) \ + fprintf(stderr, "DEBG: "fmt, __VA_ARGS__); } while(0) +#define verbosePrint(fmt, ...) 
\ + do { if (g_args.verbose_print) fprintf(stderr, "VERB: "fmt, __VA_ARGS__); } while(0) + +/////////////////////////////////////////////////// + +void printHelp() { + char indent[10] = " "; + printf("%s%s%s%s\n", indent, "-f", indent, + "The meta file to the execution procedure. Default is './meta.json'."); + printf("%s%s%s%s\n", indent, "-u", indent, + "The TDengine user name to use when connecting to the server. Default is 'root'."); +#ifdef _TD_POWER_ + printf("%s%s%s%s\n", indent, "-P", indent, + "The password to use when connecting to the server. Default is 'powerdb'."); + printf("%s%s%s%s\n", indent, "-c", indent, + "Configuration directory. Default is '/etc/power/'."); +#else + printf("%s%s%s%s\n", indent, "-P", indent, + "The password to use when connecting to the server. Default is 'taosdata'."); + printf("%s%s%s%s\n", indent, "-c", indent, + "Configuration directory. Default is '/etc/taos/'."); +#endif + printf("%s%s%s%s\n", indent, "-h", indent, + "The host to connect to TDengine. Default is localhost."); + printf("%s%s%s%s\n", indent, "-p", indent, + "The TCP/IP port number to use for the connection. Default is 0."); + printf("%s%s%s%s\n", indent, "-d", indent, + "Destination database. Default is 'test'."); + printf("%s%s%s%s\n", indent, "-a", indent, + "Set the replica parameters of the database, Default 1, min: 1, max: 3."); + printf("%s%s%s%s\n", indent, "-m", indent, + "Table prefix name. Default is 't'."); + printf("%s%s%s%s\n", indent, "-s", indent, "The select sql file."); + printf("%s%s%s%s\n", indent, "-M", indent, "Use metric flag."); + printf("%s%s%s%s\n", indent, "-o", indent, + "Direct output to the named file. Default is './output.txt'."); + printf("%s%s%s%s\n", indent, "-q", indent, + "Query mode--0: SYNC, 1: ASYNC. Default is SYNC."); + printf("%s%s%s%s\n", indent, "-b", indent, + "The data_type of columns, default: TINYINT,SMALLINT,INT,BIGINT,FLOAT,DOUBLE,BINARY,NCHAR,BOOL,TIMESTAMP."); + printf("%s%s%s%s\n", indent, "-w", indent, + "The length of data_type 'BINARY' or 'NCHAR'. Default is 16"); + printf("%s%s%s%s\n", indent, "-l", indent, + "The number of columns per record. Default is 10."); + printf("%s%s%s%s\n", indent, "-T", indent, + "The number of threads. Default is 10."); + printf("%s%s%s%s\n", indent, "-i", indent, + "The sleep time (ms) between insertion. Default is 0."); + printf("%s%s%s%s\n", indent, "-r", indent, + "The number of records per request. Default is 100."); + printf("%s%s%s%s\n", indent, "-t", indent, + "The number of tables. Default is 10000."); + printf("%s%s%s%s\n", indent, "-n", indent, + "The number of records per table. Default is 10000."); + printf("%s%s%s%s\n", indent, "-x", indent, "Not insert only flag."); + printf("%s%s%s%s\n", indent, "-y", indent, "Default input yes for prompt."); + printf("%s%s%s%s\n", indent, "-O", indent, + "Insert mode--0: In order, > 0: disorder ratio. Default is in order."); + printf("%s%s%s%s\n", indent, "-R", indent, + "Out of order data's range, ms, default is 1000."); + printf("%s%s%s%s\n", indent, "-g", indent, + "Print debug info."); +/* printf("%s%s%s%s\n", indent, "-D", indent, + "if elete database if exists. 
0: no, 1: yes, default is 1"); + */ +} + +void parse_args(int argc, char *argv[], SArguments *arguments) { + char **sptr; + wordexp_t full_path; + + for (int i = 1; i < argc; i++) { + if (strcmp(argv[i], "-f") == 0) { + arguments->metaFile = argv[++i]; + } else if (strcmp(argv[i], "-c") == 0) { + char *configPath = argv[++i]; + if (wordexp(configPath, &full_path, 0) != 0) { + fprintf(stderr, "Invalid path %s\n", configPath); + return; + } + taos_options(TSDB_OPTION_CONFIGDIR, full_path.we_wordv[0]); + wordfree(&full_path); + } else if (strcmp(argv[i], "-h") == 0) { + arguments->host = argv[++i]; + } else if (strcmp(argv[i], "-p") == 0) { + arguments->port = atoi(argv[++i]); + } else if (strcmp(argv[i], "-u") == 0) { + arguments->user = argv[++i]; + } else if (strcmp(argv[i], "-P") == 0) { + arguments->password = argv[++i]; + } else if (strcmp(argv[i], "-o") == 0) { + arguments->output_file = argv[++i]; + } else if (strcmp(argv[i], "-s") == 0) { + arguments->sqlFile = argv[++i]; + } else if (strcmp(argv[i], "-q") == 0) { + arguments->mode = atoi(argv[++i]); + } else if (strcmp(argv[i], "-T") == 0) { + arguments->num_of_threads = atoi(argv[++i]); + } else if (strcmp(argv[i], "-i") == 0) { + arguments->insert_interval = atoi(argv[++i]); + } else if (strcmp(argv[i], "-r") == 0) { + arguments->num_of_RPR = atoi(argv[++i]); + } else if (strcmp(argv[i], "-t") == 0) { + arguments->num_of_tables = atoi(argv[++i]); + } else if (strcmp(argv[i], "-n") == 0) { + arguments->num_of_DPT = atoi(argv[++i]); + } else if (strcmp(argv[i], "-d") == 0) { + arguments->database = argv[++i]; + } else if (strcmp(argv[i], "-l") == 0) { + arguments->num_of_CPR = atoi(argv[++i]); + } else if (strcmp(argv[i], "-b") == 0) { + sptr = arguments->datatype; + ++i; + if (strstr(argv[i], ",") == NULL) { + // only one col + if (strcasecmp(argv[i], "INT") + && strcasecmp(argv[i], "FLOAT") + && strcasecmp(argv[i], "TINYINT") + && strcasecmp(argv[i], "BOOL") + && strcasecmp(argv[i], "SMALLINT") + && strcasecmp(argv[i], "BIGINT") + && strcasecmp(argv[i], "DOUBLE") + && strcasecmp(argv[i], "BINARY") + && strcasecmp(argv[i], "NCHAR")) { + fprintf(stderr, "Invalid data_type!\n"); + printHelp(); + exit(EXIT_FAILURE); + } + sptr[0] = argv[i]; + } else { + // more than one col + int index = 0; + char *dupstr = strdup(argv[i]); + char *running = dupstr; + char *token = strsep(&running, ","); + while (token != NULL) { + if (strcasecmp(token, "INT") + && strcasecmp(token, "FLOAT") + && strcasecmp(token, "TINYINT") + && strcasecmp(token, "BOOL") + && strcasecmp(token, "SMALLINT") + && strcasecmp(token, "BIGINT") + && strcasecmp(token, "DOUBLE") + && strcasecmp(token, "BINARY") + && strcasecmp(token, "NCHAR")) { + fprintf(stderr, "Invalid data_type!\n"); + printHelp(); + exit(EXIT_FAILURE); + } + sptr[index++] = token; + token = strsep(&running, ","); + if (index >= MAX_NUM_DATATYPE) break; + } + sptr[index] = NULL; + } + } else if (strcmp(argv[i], "-w") == 0) { + arguments->len_of_binary = atoi(argv[++i]); + } else if (strcmp(argv[i], "-m") == 0) { + arguments->tb_prefix = argv[++i]; + } else if (strcmp(argv[i], "-M") == 0) { + arguments->use_metric = true; + } else if (strcmp(argv[i], "-x") == 0) { + arguments->insert_only = true; + } else if (strcmp(argv[i], "-y") == 0) { + arguments->answer_yes = true; + } else if (strcmp(argv[i], "-g") == 0) { + arguments->debug_print = true; + } else if (strcmp(argv[i], "-gg") == 0) { + arguments->verbose_print = true; + } else if (strcmp(argv[i], "-c") == 0) { + strcpy(configDir, argv[++i]); + } else 
if (strcmp(argv[i], "-O") == 0) { + arguments->disorderRatio = atoi(argv[++i]); + if (arguments->disorderRatio > 1 + || arguments->disorderRatio < 0) { + arguments->disorderRatio = 0; + } else if (arguments->disorderRatio == 1) { + arguments->disorderRange = 10; + } + } else if (strcmp(argv[i], "-R") == 0) { + arguments->disorderRange = atoi(argv[++i]); + if (arguments->disorderRange == 1 + && (arguments->disorderRange > 50 + || arguments->disorderRange <= 0)) { + arguments->disorderRange = 10; + } + } else if (strcmp(argv[i], "-a") == 0) { + arguments->replica = atoi(argv[++i]); + if (arguments->replica > 3 || arguments->replica < 1) { + arguments->replica = 1; + } + } else if (strcmp(argv[i], "-D") == 0) { + arguments->method_of_delete = atoi(argv[++i]); + if (arguments->method_of_delete < 0 + || arguments->method_of_delete > 3) { + arguments->method_of_delete = 0; + } + } else if (strcmp(argv[i], "--help") == 0) { + printHelp(); + exit(0); + } else { + fprintf(stderr, "wrong options\n"); + printHelp(); + exit(EXIT_FAILURE); + } + } + + if (arguments->debug_print) { + printf("###################################################################\n"); + printf("# meta file: %s\n", arguments->metaFile); + printf("# Server IP: %s:%hu\n", + arguments->host == NULL ? "localhost" : arguments->host, + arguments->port ); + printf("# User: %s\n", arguments->user); + printf("# Password: %s\n", arguments->password); + printf("# Use metric: %s\n", arguments->use_metric ? "true" : "false"); + if (*(arguments->datatype)) { + printf("# Specified data type: "); + for (int i = 0; i < MAX_NUM_DATATYPE; i++) + if (arguments->datatype[i]) + printf("%s,", arguments->datatype[i]); + else + break; + printf("\n"); + } + printf("# Insertion interval: %d\n", arguments->insert_interval); + printf("# Number of records per req: %d\n", arguments->num_of_RPR); + printf("# Number of Threads: %d\n", arguments->num_of_threads); + printf("# Number of Tables: %d\n", arguments->num_of_tables); + printf("# Number of Data per Table: %d\n", arguments->num_of_DPT); + printf("# Database name: %s\n", arguments->database); + printf("# Table prefix: %s\n", arguments->tb_prefix); + if (arguments->disorderRatio) { + printf("# Data order: %d\n", arguments->disorderRatio); + printf("# Data out of order rate: %d\n", arguments->disorderRange); + + } + printf("# Delete method: %d\n", arguments->method_of_delete); + printf("# Answer yes when prompt: %d\n", arguments->answer_yes); + printf("# Print debug info: %d\n", arguments->debug_print); + printf("###################################################################\n"); + if (!arguments->answer_yes) { + printf("Press enter key to continue\n\n"); + (void) getchar(); + } + } +} +static bool getInfoFromJsonFile(char* file); +//static int generateOneRowDataForStb(SSuperTable* stbInfo); +//static int getDataIntoMemForStb(SSuperTable* stbInfo); +static void init_rand_data(); void tmfclose(FILE *fp) { if (NULL != fp) { fclose(fp); @@ -760,7 +804,7 @@ static int queryDbExec(TAOS *taos, char *command, int type) { taos_free_result(res); res = NULL; } - + res = taos_query(taos, command); code = taos_errno(res); if (0 == code) { @@ -769,6 +813,7 @@ static int queryDbExec(TAOS *taos, char *command, int type) { } if (code != 0) { + debugPrint("%s() LN%d - command: %s\n", __func__, __LINE__, command); fprintf(stderr, "Failed to run %s, reason: %s\n", command, taos_errstr(res)); taos_free_result(res); //taos_close(taos); @@ -802,7 +847,8 @@ static void getResult(TAOS_RES *res, char* resultFileName) { char* 
databuf = (char*) calloc(1, 100*1024*1024); if (databuf == NULL) { fprintf(stderr, "failed to malloc, warning: save result to file slowly!\n"); - fclose(fp); + if (fp) + fclose(fp); return ; } @@ -925,19 +971,42 @@ static void init_rand_data() { } } +#define SHOW_PARSE_RESULT_START() \ + do { if (g_args.metaFile) \ + printf("\033[1m\033[40;32m================ %s parse result START ================\033[0m\n", \ + g_args.metaFile); } while(0) + +#define SHOW_PARSE_RESULT_END() \ + do { if (g_args.metaFile) \ + printf("\033[1m\033[40;32m================ %s parse result END================\033[0m\n", \ + g_args.metaFile); } while(0) + +#define SHOW_PARSE_RESULT_START_TO_FILE(fp) \ + do { if (g_args.metaFile) \ + fprintf(fp, "\033[1m\033[40;32m================ %s parse result START ================\033[0m\n", \ + g_args.metaFile); } while(0) + +#define SHOW_PARSE_RESULT_END_TO_FILE(fp) \ + do { if (g_args.metaFile) \ + fprintf(fp, "\033[1m\033[40;32m================ %s parse result END================\033[0m\n", \ + g_args.metaFile); } while(0) + static int printfInsertMeta() { - printf("\033[1m\033[40;32m================ insert.json parse result START ================\033[0m\n"); + SHOW_PARSE_RESULT_START(); + printf("host: \033[33m%s:%u\033[0m\n", g_Dbs.host, g_Dbs.port); printf("user: \033[33m%s\033[0m\n", g_Dbs.user); printf("password: \033[33m%s\033[0m\n", g_Dbs.password); printf("resultFile: \033[33m%s\033[0m\n", g_Dbs.resultFile); printf("thread num of insert data: \033[33m%d\033[0m\n", g_Dbs.threadCount); printf("thread num of create table: \033[33m%d\033[0m\n", g_Dbs.threadCountByCreateTbl); + printf("insert interval: \033[33m%d\033[0m\n", g_args.insert_interval); + printf("number of records per req: \033[33m%d\033[0m\n", g_args.num_of_RPR); printf("database count: \033[33m%d\033[0m\n", g_Dbs.dbCount); for (int i = 0; i < g_Dbs.dbCount; i++) { printf("database[\033[33m%d\033[0m]:\n", i); - printf(" database name: \033[33m%s\033[0m\n", g_Dbs.db[i].dbName); + printf(" database[%d] name: \033[33m%s\033[0m\n", i, g_Dbs.db[i].dbName); if (0 == g_Dbs.db[i].drop) { printf(" drop: \033[33mno\033[0m\n"); }else { @@ -981,11 +1050,13 @@ static int printfInsertMeta() { printf(" quorum: \033[33m%d\033[0m\n", g_Dbs.db[i].dbCfg.quorum); } if (g_Dbs.db[i].dbCfg.precision[0] != 0) { - if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { + if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) + || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { printf(" precision: \033[33m%s\033[0m\n", g_Dbs.db[i].dbCfg.precision); } else { - printf(" precision error: \033[33m%s\033[0m\n", g_Dbs.db[i].dbCfg.precision); - return -1; + printf("\033[1m\033[40;31m precision error: %s\033[0m\n", + g_Dbs.db[i].dbCfg.precision); + return -1; } } @@ -1015,7 +1086,6 @@ static int printfInsertMeta() { printf(" childTblPrefix: \033[33m%s\033[0m\n", g_Dbs.db[i].superTbls[j].childTblPrefix); printf(" dataSource: \033[33m%s\033[0m\n", g_Dbs.db[i].superTbls[j].dataSource); printf(" insertMode: \033[33m%s\033[0m\n", g_Dbs.db[i].superTbls[j].insertMode); - printf(" insertRate: \033[33m%d\033[0m\n", g_Dbs.db[i].superTbls[j].insertRate); printf(" insertRows: \033[33m%"PRId64"\033[0m\n", g_Dbs.db[i].superTbls[j].insertRows); if (0 == g_Dbs.db[i].superTbls[j].multiThreadWriteOneTbl) { @@ -1038,34 +1108,43 @@ static int printfInsertMeta() { printf(" columnCount: \033[33m%d\033[0m\n ", g_Dbs.db[i].superTbls[j].columnCount); for (int k = 0; k < 
g_Dbs.db[i].superTbls[j].columnCount; k++) { //printf("dataType:%s, dataLen:%d\t", g_Dbs.db[i].superTbls[j].columns[k].dataType, g_Dbs.db[i].superTbls[j].columns[k].dataLen); - if ((0 == strncasecmp(g_Dbs.db[i].superTbls[j].columns[k].dataType, "binary", 6)) || (0 == strncasecmp(g_Dbs.db[i].superTbls[j].columns[k].dataType, "nchar", 5))) { - printf("column[\033[33m%d\033[0m]:\033[33m%s(%d)\033[0m ", k, g_Dbs.db[i].superTbls[j].columns[k].dataType, g_Dbs.db[i].superTbls[j].columns[k].dataLen); + if ((0 == strncasecmp(g_Dbs.db[i].superTbls[j].columns[k].dataType, "binary", 6)) + || (0 == strncasecmp(g_Dbs.db[i].superTbls[j].columns[k].dataType, "nchar", 5))) { + printf("column[\033[33m%d\033[0m]:\033[33m%s(%d)\033[0m ", k, + g_Dbs.db[i].superTbls[j].columns[k].dataType, g_Dbs.db[i].superTbls[j].columns[k].dataLen); } else { - printf("column[%d]:\033[33m%s\033[0m ", k, g_Dbs.db[i].superTbls[j].columns[k].dataType); + printf("column[%d]:\033[33m%s\033[0m ", k, + g_Dbs.db[i].superTbls[j].columns[k].dataType); } } printf("\n"); - - printf(" tagCount: \033[33m%d\033[0m\n ", g_Dbs.db[i].superTbls[j].tagCount); + + printf(" tagCount: \033[33m%d\033[0m\n ", + g_Dbs.db[i].superTbls[j].tagCount); for (int k = 0; k < g_Dbs.db[i].superTbls[j].tagCount; k++) { //printf("dataType:%s, dataLen:%d\t", g_Dbs.db[i].superTbls[j].tags[k].dataType, g_Dbs.db[i].superTbls[j].tags[k].dataLen); - if ((0 == strncasecmp(g_Dbs.db[i].superTbls[j].tags[k].dataType, "binary", 6)) || (0 == strncasecmp(g_Dbs.db[i].superTbls[j].tags[k].dataType, "nchar", 5))) { - printf("tag[%d]:\033[33m%s(%d)\033[0m ", k, g_Dbs.db[i].superTbls[j].tags[k].dataType, g_Dbs.db[i].superTbls[j].tags[k].dataLen); + if ((0 == strncasecmp(g_Dbs.db[i].superTbls[j].tags[k].dataType, "binary", 6)) + || (0 == strncasecmp(g_Dbs.db[i].superTbls[j].tags[k].dataType, "nchar", 5))) { + printf("tag[%d]:\033[33m%s(%d)\033[0m ", k, + g_Dbs.db[i].superTbls[j].tags[k].dataType, g_Dbs.db[i].superTbls[j].tags[k].dataLen); } else { - printf("tag[%d]:\033[33m%s\033[0m ", k, g_Dbs.db[i].superTbls[j].tags[k].dataType); + printf("tag[%d]:\033[33m%s\033[0m ", k, + g_Dbs.db[i].superTbls[j].tags[k].dataType); } } printf("\n"); } printf("\n"); } - printf("\033[1m\033[40;32m================ insert.json parse result END================\033[0m\n"); + + SHOW_PARSE_RESULT_END(); return 0; } static void printfInsertMetaToFile(FILE* fp) { - fprintf(fp, "================ insert.json parse result START================\n"); + SHOW_PARSE_RESULT_START_TO_FILE(fp); + fprintf(fp, "host: %s:%u\n", g_Dbs.host, g_Dbs.port); fprintf(fp, "user: %s\n", g_Dbs.user); fprintf(fp, "password: %s\n", g_Dbs.password); @@ -1076,7 +1155,7 @@ static void printfInsertMetaToFile(FILE* fp) { fprintf(fp, "database count: %d\n", g_Dbs.dbCount); for (int i = 0; i < g_Dbs.dbCount; i++) { fprintf(fp, "database[%d]:\n", i); - fprintf(fp, " database name: %s\n", g_Dbs.db[i].dbName); + fprintf(fp, " database[%d] name: %s\n", i, g_Dbs.db[i].dbName); if (0 == g_Dbs.db[i].drop) { fprintf(fp, " drop: no\n"); }else { @@ -1120,7 +1199,8 @@ static void printfInsertMetaToFile(FILE* fp) { fprintf(fp, " quorum: %d\n", g_Dbs.db[i].dbCfg.quorum); } if (g_Dbs.db[i].dbCfg.precision[0] != 0) { - if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { + if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) + || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { fprintf(fp, " precision: %s\n", g_Dbs.db[i].dbCfg.precision); } else { fprintf(fp, " 
precision error: %s\n", g_Dbs.db[i].dbCfg.precision); @@ -1153,7 +1233,6 @@ static void printfInsertMetaToFile(FILE* fp) { fprintf(fp, " childTblPrefix: %s\n", g_Dbs.db[i].superTbls[j].childTblPrefix); fprintf(fp, " dataSource: %s\n", g_Dbs.db[i].superTbls[j].dataSource); fprintf(fp, " insertMode: %s\n", g_Dbs.db[i].superTbls[j].insertMode); - fprintf(fp, " insertRate: %d\n", g_Dbs.db[i].superTbls[j].insertRate); fprintf(fp, " insertRows: %"PRId64"\n", g_Dbs.db[i].superTbls[j].insertRows); if (0 == g_Dbs.db[i].superTbls[j].multiThreadWriteOneTbl) { @@ -1197,11 +1276,11 @@ static void printfInsertMetaToFile(FILE* fp) { } fprintf(fp, "\n"); } - fprintf(fp, "================ insert.json parse result END ================\n\n"); + SHOW_PARSE_RESULT_END_TO_FILE(fp); } static void printfQueryMeta() { - printf("\033[1m\033[40;32m================ query.json parse result ================\033[0m\n"); + SHOW_PARSE_RESULT_START(); printf("host: \033[33m%s:%u\033[0m\n", g_queryInfo.host, g_queryInfo.port); printf("user: \033[33m%s\033[0m\n", g_queryInfo.user); printf("password: \033[33m%s\033[0m\n", g_queryInfo.password); @@ -1213,14 +1292,13 @@ static void printfQueryMeta() { printf("concurrent: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.concurrent); printf("sqlCount: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.sqlCount); - if (SUBSCRIBE_MODE == g_jsonType) { + if (SUBSCRIBE_MODE == g_args.test_mode) { printf("mod: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.subscribeMode); printf("interval: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.subscribeInterval); printf("restart: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.subscribeRestart); printf("keepProgress: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.subscribeKeepProgress); } - for (int i = 0; i < g_queryInfo.superQueryInfo.sqlCount; i++) { printf(" sql[%d]: \033[33m%s\033[0m\n", i, g_queryInfo.superQueryInfo.sql[i]); } @@ -1231,7 +1309,7 @@ static void printfQueryMeta() { printf("childTblCount: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.childTblCount); printf("stable name: \033[33m%s\033[0m\n", g_queryInfo.subQueryInfo.sTblName); - if (SUBSCRIBE_MODE == g_jsonType) { + if (SUBSCRIBE_MODE == g_args.test_mode) { printf("mod: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.subscribeMode); printf("interval: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.subscribeInterval); printf("restart: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.subscribeRestart); @@ -1243,7 +1321,8 @@ static void printfQueryMeta() { printf(" sql[%d]: \033[33m%s\033[0m\n", i, g_queryInfo.subQueryInfo.sql[i]); } printf("\n"); - printf("\033[1m\033[40;32m================ query.json parse result ================\033[0m\n"); + + SHOW_PARSE_RESULT_END(); } @@ -1410,7 +1489,9 @@ static int getDbFromServer(TAOS * taos, SDbInfo** dbInfos) { dbInfos[count]->comp = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_COMP_INDEX])); dbInfos[count]->cachelast = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_CACHELAST_INDEX])); - tstrncpy(dbInfos[count]->precision, (char *)row[TSDB_SHOW_DB_PRECISION_INDEX], fields[TSDB_SHOW_DB_PRECISION_INDEX].bytes); + tstrncpy(dbInfos[count]->precision, + (char *)row[TSDB_SHOW_DB_PRECISION_INDEX], + fields[TSDB_SHOW_DB_PRECISION_INDEX].bytes); dbInfos[count]->update = *((int8_t *)row[TSDB_SHOW_DB_UPDATE_INDEX]); tstrncpy(dbInfos[count]->status, (char *)row[TSDB_SHOW_DB_STATUS_INDEX], fields[TSDB_SHOW_DB_STATUS_INDEX].bytes); @@ -1598,6 +1679,7 @@ int postProceSql(char* host, uint16_t port, char* sqlstr) ERROR_EXIT("ERROR storing complete response from 
socket"); } + response_buf[RESP_BUF_LEN - 1] = '\0'; printf("Response:\n%s\n", response_buf); free(request_buf); @@ -1861,10 +1943,14 @@ static int createSuperTable(TAOS * taos, char* dbName, SSuperTable* superTbls, char* dataType = superTbls->columns[colIndex].dataType; if (strcasecmp(dataType, "BINARY") == 0) { - len += snprintf(cols + len, STRING_LEN - len, ", col%d %s(%d)", colIndex, "BINARY", superTbls->columns[colIndex].dataLen); + len += snprintf(cols + len, STRING_LEN - len, + ", col%d %s(%d)", colIndex, "BINARY", + superTbls->columns[colIndex].dataLen); lenOfOneRow += superTbls->columns[colIndex].dataLen + 3; } else if (strcasecmp(dataType, "NCHAR") == 0) { - len += snprintf(cols + len, STRING_LEN - len, ", col%d %s(%d)", colIndex, "NCHAR", superTbls->columns[colIndex].dataLen); + len += snprintf(cols + len, STRING_LEN - len, + ", col%d %s(%d)", colIndex, "NCHAR", + superTbls->columns[colIndex].dataLen); lenOfOneRow += superTbls->columns[colIndex].dataLen + 3; } else if (strcasecmp(dataType, "INT") == 0) { len += snprintf(cols + len, STRING_LEN - len, ", col%d %s", colIndex, "INT"); @@ -1901,13 +1987,14 @@ static int createSuperTable(TAOS * taos, char* dbName, SSuperTable* superTbls, //printf("%s.%s column count:%d, column length:%d\n\n", g_Dbs.db[i].dbName, g_Dbs.db[i].superTbls[j].sTblName, g_Dbs.db[i].superTbls[j].columnCount, lenOfOneRow); // save for creating child table - superTbls->colsOfCreatChildTable = (char*)calloc(len+20, 1); - if (NULL == superTbls->colsOfCreatChildTable) { + superTbls->colsOfCreateChildTable = (char*)calloc(len+20, 1); + if (NULL == superTbls->colsOfCreateChildTable) { printf("Failed when calloc, size:%d", len+1); taos_close(taos); exit(-1); } - snprintf(superTbls->colsOfCreatChildTable, len+20, "(ts timestamp%s)", cols); + snprintf(superTbls->colsOfCreateChildTable, len+20, "(ts timestamp%s)", cols); + verbosePrint("%s() LN%d: %s\n", __func__, __LINE__, superTbls->colsOfCreateChildTable); if (use_metric) { char tags[STRING_LEN] = "\0"; @@ -1956,12 +2043,17 @@ static int createSuperTable(TAOS * taos, char* dbName, SSuperTable* superTbls, len += snprintf(tags + len, STRING_LEN - len, ")"); superTbls->lenOfTagOfOneRow = lenOfTagOfOneRow; - - snprintf(command, BUFFER_SIZE, "create table if not exists %s.%s (ts timestamp%s) tags %s", dbName, superTbls->sTblName, cols, tags); + + snprintf(command, BUFFER_SIZE, + "create table if not exists %s.%s (ts timestamp%s) tags %s", + dbName, superTbls->sTblName, cols, tags); + verbosePrint("%s() LN%d: %s\n", __func__, __LINE__, command); + if (0 != queryDbExec(taos, command, NO_INSERT_TYPE)) { - return -1; + fprintf(stderr, "create supertable %s failed!\n\n", superTbls->sTblName); + return -1; } - printf("\ncreate supertable %s success!\n\n", superTbls->sTblName); + debugPrint("create supertable %s success!\n\n", superTbls->sTblName); } return 0; } @@ -1973,75 +2065,94 @@ static int createDatabases() { taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, NULL, g_Dbs.port); if (taos == NULL) { fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL)); - exit(-1); + return -1; } char command[BUFFER_SIZE] = "\0"; - for (int i = 0; i < g_Dbs.dbCount; i++) { if (g_Dbs.db[i].drop) { sprintf(command, "drop database if exists %s;", g_Dbs.db[i].dbName); + verbosePrint("%s() %d command: %s\n", __func__, __LINE__, command); if (0 != queryDbExec(taos, command, NO_INSERT_TYPE)) { taos_close(taos); return -1; } } - + int dataLen = 0; - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, 
"create database if not exists %s ", g_Dbs.db[i].dbName); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "create database if not exists %s ", g_Dbs.db[i].dbName); if (g_Dbs.db[i].dbCfg.blocks > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "blocks %d ", g_Dbs.db[i].dbCfg.blocks); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "blocks %d ", g_Dbs.db[i].dbCfg.blocks); } if (g_Dbs.db[i].dbCfg.cache > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "cache %d ", g_Dbs.db[i].dbCfg.cache); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "cache %d ", g_Dbs.db[i].dbCfg.cache); } if (g_Dbs.db[i].dbCfg.days > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "days %d ", g_Dbs.db[i].dbCfg.days); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "days %d ", g_Dbs.db[i].dbCfg.days); } if (g_Dbs.db[i].dbCfg.keep > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "keep %d ", g_Dbs.db[i].dbCfg.keep); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "keep %d ", g_Dbs.db[i].dbCfg.keep); } if (g_Dbs.db[i].dbCfg.replica > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "replica %d ", g_Dbs.db[i].dbCfg.replica); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "replica %d ", g_Dbs.db[i].dbCfg.replica); } if (g_Dbs.db[i].dbCfg.update > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "update %d ", g_Dbs.db[i].dbCfg.update); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "update %d ", g_Dbs.db[i].dbCfg.update); } //if (g_Dbs.db[i].dbCfg.maxtablesPerVnode > 0) { - // dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "tables %d ", g_Dbs.db[i].dbCfg.maxtablesPerVnode); + // dataLen += snprintf(command + dataLen, + // BUFFER_SIZE - dataLen, "tables %d ", g_Dbs.db[i].dbCfg.maxtablesPerVnode); //} if (g_Dbs.db[i].dbCfg.minRows > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "minrows %d ", g_Dbs.db[i].dbCfg.minRows); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "minrows %d ", g_Dbs.db[i].dbCfg.minRows); } if (g_Dbs.db[i].dbCfg.maxRows > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "maxrows %d ", g_Dbs.db[i].dbCfg.maxRows); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "maxrows %d ", g_Dbs.db[i].dbCfg.maxRows); } if (g_Dbs.db[i].dbCfg.comp > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "comp %d ", g_Dbs.db[i].dbCfg.comp); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "comp %d ", g_Dbs.db[i].dbCfg.comp); } if (g_Dbs.db[i].dbCfg.walLevel > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "wal %d ", g_Dbs.db[i].dbCfg.walLevel); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "wal %d ", g_Dbs.db[i].dbCfg.walLevel); } if (g_Dbs.db[i].dbCfg.cacheLast > 0) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "cachelast %d ", g_Dbs.db[i].dbCfg.cacheLast); + dataLen += snprintf(command + dataLen, + BUFFER_SIZE - dataLen, "cachelast %d ", g_Dbs.db[i].dbCfg.cacheLast); } if (g_Dbs.db[i].dbCfg.fsync > 0) { dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, "fsync %d ", g_Dbs.db[i].dbCfg.fsync); } - if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { - dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, 
"precision \'%s\';", g_Dbs.db[i].dbCfg.precision); + if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2)) + || (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) { + dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen, + "precision \'%s\';", g_Dbs.db[i].dbCfg.precision); } - + + debugPrint("%s() %d command: %s\n", __func__, __LINE__, command); if (0 != queryDbExec(taos, command, NO_INSERT_TYPE)) { taos_close(taos); + printf("\ncreate database %s failed!\n\n", g_Dbs.db[i].dbName); return -1; } printf("\ncreate database %s success!\n\n", g_Dbs.db[i].dbName); + debugPrint("%s() %d supertbl count:%d\n", __func__, __LINE__, g_Dbs.db[i].superTblCount); for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { // describe super table, if exists sprintf(command, "describe %s.%s;", g_Dbs.db[i].dbName, g_Dbs.db[i].superTbls[j].sTblName); + verbosePrint("%s() %d command: %s\n", __func__, __LINE__, command); if (0 != queryDbExec(taos, command, NO_INSERT_TYPE)) { g_Dbs.db[i].superTbls[j].superTblExists = TBL_NO_EXISTS; ret = createSuperTable(taos, g_Dbs.db[i].dbName, &g_Dbs.db[i].superTbls[j], g_Dbs.use_metric); @@ -2051,6 +2162,7 @@ static int createDatabases() { } if (0 != ret) { + printf("\ncreate super table %d failed!\n\n", j); taos_close(taos); return -1; } @@ -2061,50 +2173,74 @@ static int createDatabases() { return 0; } - -void * createTable(void *sarg) +static void* createTable(void *sarg) { threadInfo *winfo = (threadInfo *)sarg; SSuperTable* superTblInfo = winfo->superTblInfo; int64_t lastPrintTime = taosGetTimestampMs(); - char* buffer = calloc(superTblInfo->maxSqlLen, 1); + int buff_len; + if (superTblInfo) + buff_len = superTblInfo->maxSqlLen; + else + buff_len = BUFFER_SIZE; + + char *buffer = calloc(buff_len, 1); + if (buffer == NULL) { + fprintf(stderr, "Memory allocated failed!"); + exit(-1); + } int len = 0; int batchNum = 0; //printf("Creating table from %d to %d\n", winfo->start_table_id, winfo->end_table_id); for (int i = winfo->start_table_id; i <= winfo->end_table_id; i++) { if (0 == g_Dbs.use_metric) { - snprintf(buffer, BUFFER_SIZE, "create table if not exists %s.%s%d %s;", winfo->db_name, superTblInfo->childTblPrefix, i, superTblInfo->colsOfCreatChildTable); + snprintf(buffer, buff_len, + "create table if not exists %s.%s%d %s;", + winfo->db_name, + g_args.tb_prefix, i, + winfo->cols); } else { if (0 == len) { batchNum = 0; - memset(buffer, 0, superTblInfo->maxSqlLen); - len += snprintf(buffer + len, superTblInfo->maxSqlLen - len, "create table "); + memset(buffer, 0, buff_len); + len += snprintf(buffer + len, + buff_len - len, "create table "); } - + char* tagsValBuf = NULL; if (0 == superTblInfo->tagSource) { tagsValBuf = generateTagVaulesForStb(superTblInfo); } else { - tagsValBuf = getTagValueFromTagSample(superTblInfo, i % superTblInfo->tagSampleCount); + tagsValBuf = getTagValueFromTagSample( + superTblInfo, + i % superTblInfo->tagSampleCount); } if (NULL == tagsValBuf) { free(buffer); return NULL; } - len += snprintf(buffer + len, superTblInfo->maxSqlLen - len, "if not exists %s.%s%d using %s.%s tags %s ", winfo->db_name, superTblInfo->childTblPrefix, i, winfo->db_name, superTblInfo->sTblName, tagsValBuf); + len += snprintf(buffer + len, + superTblInfo->maxSqlLen - len, + "if not exists %s.%s%d using %s.%s tags %s ", + winfo->db_name, superTblInfo->childTblPrefix, + i, winfo->db_name, + superTblInfo->sTblName, tagsValBuf); free(tagsValBuf); batchNum++; - if ((batchNum < superTblInfo->batchCreateTableNum) && ((superTblInfo->maxSqlLen - len) >= 
(superTblInfo->lenOfTagOfOneRow + 256))) { + if ((batchNum < superTblInfo->batchCreateTableNum) + && ((superTblInfo->maxSqlLen - len) + >= (superTblInfo->lenOfTagOfOneRow + 256))) { continue; } } len = 0; + verbosePrint("%s() LN%d %s\n", __func__, __LINE__, buffer); if (0 != queryDbExec(winfo->taos, buffer, NO_INSERT_TYPE)){ free(buffer); return NULL; @@ -2112,20 +2248,24 @@ void * createTable(void *sarg) int64_t currentPrintTime = taosGetTimestampMs(); if (currentPrintTime - lastPrintTime > 30*1000) { - printf("thread[%d] already create %d - %d tables\n", winfo->threadID, winfo->start_table_id, i); + printf("thread[%d] already create %d - %d tables\n", + winfo->threadID, winfo->start_table_id, i); lastPrintTime = currentPrintTime; } } if (0 != len) { + verbosePrint("%s() %d buffer: %s\n", __func__, __LINE__, buffer); (void)queryDbExec(winfo->taos, buffer, NO_INSERT_TYPE); } - + free(buffer); return NULL; } -void startMultiThreadCreateChildTable(char* cols, int threads, int ntables, char* db_name, SSuperTable* superTblInfo) { +int startMultiThreadCreateChildTable( + char* cols, int threads, int ntables, + char* db_name, SSuperTable* superTblInfo) { pthread_t *pids = malloc(threads * sizeof(pthread_t)); threadInfo *infos = malloc(threads * sizeof(threadInfo)); @@ -2146,14 +2286,26 @@ void startMultiThreadCreateChildTable(char* cols, int threads, int ntables, char int b = 0; b = ntables % threads; - + int last = 0; for (int i = 0; i < threads; i++) { threadInfo *t_info = infos + i; t_info->threadID = i; tstrncpy(t_info->db_name, db_name, MAX_DB_NAME_SIZE); t_info->superTblInfo = superTblInfo; - t_info->taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, db_name, g_Dbs.port); + verbosePrint("%s() %d db_name: %s\n", __func__, __LINE__, db_name); + t_info->taos = taos_connect( + g_Dbs.host, + g_Dbs.user, + g_Dbs.password, + db_name, + g_Dbs.port); + if (t_info->taos == NULL) { + fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL)); + free(pids); + free(infos); + return -1; + } t_info->start_table_id = last; t_info->end_table_id = i < b ? 
last + a : last + a - 1; last = t_info->end_table_id + 1; @@ -2174,18 +2326,59 @@ void startMultiThreadCreateChildTable(char* cols, int threads, int ntables, char free(pids); free(infos); + + return 0; } static void createChildTables() { + char tblColsBuf[MAX_SQL_SIZE]; + int len; + for (int i = 0; i < g_Dbs.dbCount; i++) { - for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { - if ((AUTO_CREATE_SUBTBL == g_Dbs.db[i].superTbls[j].autoCreateTable) || (TBL_ALREADY_EXISTS == g_Dbs.db[i].superTbls[j].childTblExists)) { - continue; + if (g_Dbs.db[i].superTblCount > 0) { + // with super table + for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { + if ((AUTO_CREATE_SUBTBL == g_Dbs.db[i].superTbls[j].autoCreateTable) + || (TBL_ALREADY_EXISTS == g_Dbs.db[i].superTbls[j].childTblExists)) { + continue; + } + + verbosePrint("%s() LN%d: %s\n", __func__, __LINE__, + g_Dbs.db[i].superTbls[j].colsOfCreateChildTable); + startMultiThreadCreateChildTable( + g_Dbs.db[i].superTbls[j].colsOfCreateChildTable, + g_Dbs.threadCountByCreateTbl, + g_Dbs.db[i].superTbls[j].childTblCount, + g_Dbs.db[i].dbName, &(g_Dbs.db[i].superTbls[j])); + g_totalChildTables += g_Dbs.db[i].superTbls[j].childTblCount; } - startMultiThreadCreateChildTable(g_Dbs.db[i].superTbls[j].colsOfCreatChildTable, g_Dbs.threadCountByCreateTbl, g_Dbs.db[i].superTbls[j].childTblCount, g_Dbs.db[i].dbName, &(g_Dbs.db[i].superTbls[j])); - g_totalChildTables += g_Dbs.db[i].superTbls[j].childTblCount; - } + } else { + // normal table + len = snprintf(tblColsBuf, MAX_SQL_SIZE, "(TS TIMESTAMP"); + int j = 0; + while (g_args.datatype[j]) { + if ((strncasecmp(g_args.datatype[j], "BINARY", strlen("BINARY")) == 0) + || (strncasecmp(g_args.datatype[j], "NCHAR", strlen("NCHAR")) == 0)) { + len = snprintf(tblColsBuf + len, MAX_SQL_SIZE, ", COL%d %s(60)", j, g_args.datatype[j]); + } else { + len = snprintf(tblColsBuf + len, MAX_SQL_SIZE, ", COL%d %s", j, g_args.datatype[j]); + } + len = strlen(tblColsBuf); + j++; + } + + len = snprintf(tblColsBuf + len, MAX_SQL_SIZE - len, ")"); + + verbosePrint("%s() LN%d: dbName: %s num of tb: %d schema: %s\n", __func__, __LINE__, + g_Dbs.db[i].dbName, g_args.num_of_tables, tblColsBuf); + startMultiThreadCreateChildTable( + tblColsBuf, + g_Dbs.threadCountByCreateTbl, + g_args.num_of_tables, + g_Dbs.db[i].dbName, + NULL); + } } } @@ -2220,10 +2413,11 @@ int readTagFromCsvFileToMem(SSuperTable * superTblInfo) { size_t n = 0; ssize_t readLen = 0; char * line = NULL; - + FILE *fp = fopen(superTblInfo->tagsFile, "r"); if (fp == NULL) { - printf("Failed to open tags file: %s, reason:%s\n", superTblInfo->tagsFile, strerror(errno)); + printf("Failed to open tags file: %s, reason:%s\n", + superTblInfo->tagsFile, strerror(errno)); return -1; } @@ -2231,7 +2425,7 @@ int readTagFromCsvFileToMem(SSuperTable * superTblInfo) { free(superTblInfo->tagDataBuf); superTblInfo->tagDataBuf = NULL; } - + int tagCount = 10000; int count = 0; char* tagDataBuf = calloc(1, superTblInfo->lenOfTagOfOneRow * tagCount); @@ -2254,11 +2448,13 @@ int readTagFromCsvFileToMem(SSuperTable * superTblInfo) { count++; if (count >= tagCount - 1) { - char *tmp = realloc(tagDataBuf, (size_t)tagCount*1.5*superTblInfo->lenOfTagOfOneRow); + char *tmp = realloc(tagDataBuf, + (size_t)tagCount*1.5*superTblInfo->lenOfTagOfOneRow); if (tmp != NULL) { tagDataBuf = tmp; tagCount = (int)(tagCount*1.5); - memset(tagDataBuf + count*superTblInfo->lenOfTagOfOneRow, 0, (size_t)((tagCount-count)*superTblInfo->lenOfTagOfOneRow)); + memset(tagDataBuf + 
count*superTblInfo->lenOfTagOfOneRow, + 0, (size_t)((tagCount-count)*superTblInfo->lenOfTagOfOneRow)); } else { // exit, if allocate more memory failed printf("realloc fail for save tag val from %s\n", superTblInfo->tagsFile); @@ -2298,7 +2494,8 @@ int readSampleFromCsvFileToMem(FILE *fp, SSuperTable* superTblInfo, char* sample readLen = tgetline(&line, &n, fp); if (-1 == readLen) { if(0 != fseek(fp, 0, SEEK_SET)) { - printf("Failed to fseek file: %s, reason:%s\n", superTblInfo->sampleFile, strerror(errno)); + printf("Failed to fseek file: %s, reason:%s\n", + superTblInfo->sampleFile, strerror(errno)); return -1; } continue; @@ -2313,7 +2510,8 @@ int readSampleFromCsvFileToMem(FILE *fp, SSuperTable* superTblInfo, char* sample } if (readLen > superTblInfo->lenOfOneRow) { - printf("sample row len[%d] overflow define schema len[%d], so discard this row\n", (int32_t)readLen, superTblInfo->lenOfOneRow); + printf("sample row len[%d] overflow define schema len[%d], so discard this row\n", + (int32_t)readLen, superTblInfo->lenOfOneRow); continue; } @@ -2359,14 +2557,15 @@ static bool getColumnAndTagTypeFromInsertJsonFile(cJSON* stbInfo, SSuperTable* s int columnSize = cJSON_GetArraySize(columns); if (columnSize > MAX_COLUMN_COUNT) { - printf("failed to read json, column size overflow, max column size is %d\n", MAX_COLUMN_COUNT); + printf("failed to read json, column size overflow, max column size is %d\n", + MAX_COLUMN_COUNT); goto PARSE_OVER; } int count = 1; int index = 0; StrColumn columnCase; - + //superTbls->columnCount = columnSize; for (int k = 0; k < columnSize; ++k) { cJSON* column = cJSON_GetArrayItem(columns, k); @@ -2544,8 +2743,30 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { goto PARSE_OVER; } + cJSON* insertInterval = cJSON_GetObjectItem(root, "insert_interval"); + if (insertInterval && insertInterval->type == cJSON_Number) { + g_args.insert_interval = insertInterval->valueint; + } else if (!insertInterval) { + g_args.insert_interval = 0; + } else { + printf("failed to read json, insert_interval not found"); + goto PARSE_OVER; + } + + cJSON* numRecPerReq = cJSON_GetObjectItem(root, "num_of_records_per_req"); + if (numRecPerReq && numRecPerReq->type == cJSON_Number) { + g_args.num_of_RPR = numRecPerReq->valueint; + } else if (!numRecPerReq) { + g_args.num_of_RPR = 100; + } else { + printf("failed to read json, num_of_records_per_req not found"); + goto PARSE_OVER; + } + cJSON *answerPrompt = cJSON_GetObjectItem(root, "confirm_parameter_prompt"); // yes, no, - if (answerPrompt && answerPrompt->type == cJSON_String && answerPrompt->valuestring != NULL) { + if (answerPrompt + && answerPrompt->type == cJSON_String + && answerPrompt->valuestring != NULL) { if (0 == strncasecmp(answerPrompt->valuestring, "yes", 3)) { g_args.answer_yes = false; } else if (0 == strncasecmp(answerPrompt->valuestring, "no", 2)) { @@ -2790,7 +3011,9 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { tstrncpy(g_Dbs.db[i].superTbls[j].childTblPrefix, prefix->valuestring, MAX_DB_NAME_SIZE); cJSON *autoCreateTbl = cJSON_GetObjectItem(stbInfo, "auto_create_table"); // yes, no, null - if (autoCreateTbl && autoCreateTbl->type == cJSON_String && autoCreateTbl->valuestring != NULL) { + if (autoCreateTbl + && autoCreateTbl->type == cJSON_String + && autoCreateTbl->valuestring != NULL) { if (0 == strncasecmp(autoCreateTbl->valuestring, "yes", 3)) { g_Dbs.db[i].superTbls[j].autoCreateTable = AUTO_CREATE_SUBTBL; } else if (0 == strncasecmp(autoCreateTbl->valuestring, "no", 2)) { @@ -2816,7 +3039,9 @@ static 
bool getMetaFromInsertJsonFile(cJSON* root) { } cJSON *childTblExists = cJSON_GetObjectItem(stbInfo, "child_table_exists"); // yes, no - if (childTblExists && childTblExists->type == cJSON_String && childTblExists->valuestring != NULL) { + if (childTblExists + && childTblExists->type == cJSON_String + && childTblExists->valuestring != NULL) { if (0 == strncasecmp(childTblExists->valuestring, "yes", 3)) { g_Dbs.db[i].superTbls[j].childTblExists = TBL_ALREADY_EXISTS; } else if (0 == strncasecmp(childTblExists->valuestring, "no", 2)) { @@ -2839,8 +3064,10 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { g_Dbs.db[i].superTbls[j].childTblCount = count->valueint; cJSON *dataSource = cJSON_GetObjectItem(stbInfo, "data_source"); - if (dataSource && dataSource->type == cJSON_String && dataSource->valuestring != NULL) { - tstrncpy(g_Dbs.db[i].superTbls[j].dataSource, dataSource->valuestring, MAX_DB_NAME_SIZE); + if (dataSource && dataSource->type == cJSON_String + && dataSource->valuestring != NULL) { + tstrncpy(g_Dbs.db[i].superTbls[j].dataSource, + dataSource->valuestring, MAX_DB_NAME_SIZE); } else if (!dataSource) { tstrncpy(g_Dbs.db[i].superTbls[j].dataSource, "rand", MAX_DB_NAME_SIZE); } else { @@ -2849,8 +3076,10 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { } cJSON *insertMode = cJSON_GetObjectItem(stbInfo, "insert_mode"); // taosc , restful - if (insertMode && insertMode->type == cJSON_String && insertMode->valuestring != NULL) { - tstrncpy(g_Dbs.db[i].superTbls[j].insertMode, insertMode->valuestring, MAX_DB_NAME_SIZE); + if (insertMode && insertMode->type == cJSON_String + && insertMode->valuestring != NULL) { + tstrncpy(g_Dbs.db[i].superTbls[j].insertMode, + insertMode->valuestring, MAX_DB_NAME_SIZE); } else if (!insertMode) { tstrncpy(g_Dbs.db[i].superTbls[j].insertMode, "taosc", MAX_DB_NAME_SIZE); } else { @@ -2893,7 +3122,8 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { cJSON *sampleFormat = cJSON_GetObjectItem(stbInfo, "sample_format"); if (sampleFormat && sampleFormat->type == cJSON_String && sampleFormat->valuestring != NULL) { - tstrncpy(g_Dbs.db[i].superTbls[j].sampleFormat, sampleFormat->valuestring, MAX_DB_NAME_SIZE); + tstrncpy(g_Dbs.db[i].superTbls[j].sampleFormat, + sampleFormat->valuestring, MAX_DB_NAME_SIZE); } else if (!sampleFormat) { tstrncpy(g_Dbs.db[i].superTbls[j].sampleFormat, "csv", MAX_DB_NAME_SIZE); } else { @@ -2903,7 +3133,8 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { cJSON *sampleFile = cJSON_GetObjectItem(stbInfo, "sample_file"); if (sampleFile && sampleFile->type == cJSON_String && sampleFile->valuestring != NULL) { - tstrncpy(g_Dbs.db[i].superTbls[j].sampleFile, sampleFile->valuestring, MAX_FILE_NAME_LEN); + tstrncpy(g_Dbs.db[i].superTbls[j].sampleFile, + sampleFile->valuestring, MAX_FILE_NAME_LEN); } else if (!sampleFile) { memset(g_Dbs.db[i].superTbls[j].sampleFile, 0, MAX_FILE_NAME_LEN); } else { @@ -2913,7 +3144,8 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { cJSON *tagsFile = cJSON_GetObjectItem(stbInfo, "tags_file"); if (tagsFile && tagsFile->type == cJSON_String && tagsFile->valuestring != NULL) { - tstrncpy(g_Dbs.db[i].superTbls[j].tagsFile, tagsFile->valuestring, MAX_FILE_NAME_LEN); + tstrncpy(g_Dbs.db[i].superTbls[j].tagsFile, + tagsFile->valuestring, MAX_FILE_NAME_LEN); if (0 == g_Dbs.db[i].superTbls[j].tagsFile[0]) { g_Dbs.db[i].superTbls[j].tagSource = 0; } else { @@ -2943,8 +3175,11 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { goto PARSE_OVER; } - cJSON *multiThreadWriteOneTbl = 
cJSON_GetObjectItem(stbInfo, "multi_thread_write_one_tbl"); // no , yes - if (multiThreadWriteOneTbl && multiThreadWriteOneTbl->type == cJSON_String && multiThreadWriteOneTbl->valuestring != NULL) { + cJSON *multiThreadWriteOneTbl = + cJSON_GetObjectItem(stbInfo, "multi_thread_write_one_tbl"); // no , yes + if (multiThreadWriteOneTbl + && multiThreadWriteOneTbl->type == cJSON_String + && multiThreadWriteOneTbl->valuestring != NULL) { if (0 == strncasecmp(multiThreadWriteOneTbl->valuestring, "yes", 3)) { g_Dbs.db[i].superTbls[j].multiThreadWriteOneTbl = 1; } else { @@ -2996,17 +3231,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { printf("failed to read json, disorderRange not found"); goto PARSE_OVER; } - - cJSON* insertRate = cJSON_GetObjectItem(stbInfo, "insert_rate"); - if (insertRate && insertRate->type == cJSON_Number) { - g_Dbs.db[i].superTbls[j].insertRate = insertRate->valueint; - } else if (!insertRate) { - g_Dbs.db[i].superTbls[j].insertRate = 0; - } else { - printf("failed to read json, insert_rate not found"); - goto PARSE_OVER; - } - + cJSON* insertRows = cJSON_GetObjectItem(stbInfo, "insert_rows"); if (insertRows && insertRows->type == cJSON_Number) { g_Dbs.db[i].superTbls[j].insertRows = insertRows->valueint; @@ -3020,7 +3245,8 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { goto PARSE_OVER; } - if (NO_CREATE_SUBTBL == g_Dbs.db[i].superTbls[j].autoCreateTable || (TBL_ALREADY_EXISTS == g_Dbs.db[i].superTbls[j].childTblExists)) { + if (NO_CREATE_SUBTBL == g_Dbs.db[i].superTbls[j].autoCreateTable + || (TBL_ALREADY_EXISTS == g_Dbs.db[i].superTbls[j].childTblExists)) { continue; } @@ -3080,7 +3306,8 @@ static bool getMetaFromQueryJsonFile(cJSON* root) { } cJSON *answerPrompt = cJSON_GetObjectItem(root, "confirm_parameter_prompt"); // yes, no, - if (answerPrompt && answerPrompt->type == cJSON_String && answerPrompt->valuestring != NULL) { + if (answerPrompt && answerPrompt->type == cJSON_String + && answerPrompt->valuestring != NULL) { if (0 == strncasecmp(answerPrompt->valuestring, "yes", 3)) { g_args.answer_yes = false; } else if (0 == strncasecmp(answerPrompt->valuestring, "no", 2)) { @@ -3174,7 +3401,9 @@ static bool getMetaFromQueryJsonFile(cJSON* root) { } cJSON* keepProgress = cJSON_GetObjectItem(superQuery, "keepProgress"); - if (keepProgress && keepProgress->type == cJSON_String && keepProgress->valuestring != NULL) { + if (keepProgress + && keepProgress->type == cJSON_String + && keepProgress->valuestring != NULL) { if (0 == strcmp("yes", keepProgress->valuestring)) { g_queryInfo.superQueryInfo.subscribeKeepProgress = 1; } else if (0 == strcmp("no", keepProgress->valuestring)) { @@ -3365,6 +3594,8 @@ PARSE_OVER: } static bool getInfoFromJsonFile(char* file) { + debugPrint("%s %d %s\n", __func__, __LINE__, file); + FILE *fp = fopen(file, "r"); if (!fp) { printf("failed to read %s, reason:%s\n", file, strerror(errno)); @@ -3392,32 +3623,32 @@ static bool getInfoFromJsonFile(char* file) { cJSON* filetype = cJSON_GetObjectItem(root, "filetype"); if (filetype && filetype->type == cJSON_String && filetype->valuestring != NULL) { if (0 == strcasecmp("insert", filetype->valuestring)) { - g_jsonType = INSERT_MODE; + g_args.test_mode = INSERT_MODE; } else if (0 == strcasecmp("query", filetype->valuestring)) { - g_jsonType = QUERY_MODE; + g_args.test_mode = QUERY_MODE; } else if (0 == strcasecmp("subscribe", filetype->valuestring)) { - g_jsonType = SUBSCRIBE_MODE; + g_args.test_mode = SUBSCRIBE_MODE; } else { printf("failed to read json, filetype not support\n"); 
goto PARSE_OVER; } } else if (!filetype) { - g_jsonType = INSERT_MODE; + g_args.test_mode = INSERT_MODE; } else { printf("failed to read json, filetype not found\n"); goto PARSE_OVER; } - if (INSERT_MODE == g_jsonType) { + if (INSERT_MODE == g_args.test_mode) { ret = getMetaFromInsertJsonFile(root); - } else if (QUERY_MODE == g_jsonType) { + } else if (QUERY_MODE == g_args.test_mode) { ret = getMetaFromQueryJsonFile(root); - } else if (SUBSCRIBE_MODE == g_jsonType) { + } else if (SUBSCRIBE_MODE == g_args.test_mode) { ret = getMetaFromQueryJsonFile(root); } else { printf("input json file type error! please input correct file type: insert or query or subscribe\n"); goto PARSE_OVER; - } + } PARSE_OVER: free(content); @@ -3426,14 +3657,13 @@ PARSE_OVER: return ret; } - void prePareSampleData() { for (int i = 0; i < g_Dbs.dbCount; i++) { for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { //if (0 == strncasecmp(g_Dbs.db[i].superTbls[j].dataSource, "sample", 6)) { // readSampleFromFileToMem(&g_Dbs.db[i].superTbls[j]); //} - + if (g_Dbs.db[i].superTbls[j].tagsFile[0] != 0) { (void)readTagFromCsvFileToMem(&g_Dbs.db[i].superTbls[j]); } @@ -3445,9 +3675,9 @@ void postFreeResource() { tmfclose(g_fpOfInsertResult); for (int i = 0; i < g_Dbs.dbCount; i++) { for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { - if (0 != g_Dbs.db[i].superTbls[j].colsOfCreatChildTable) { - free(g_Dbs.db[i].superTbls[j].colsOfCreatChildTable); - g_Dbs.db[i].superTbls[j].colsOfCreatChildTable = NULL; + if (0 != g_Dbs.db[i].superTbls[j].colsOfCreateChildTable) { + free(g_Dbs.db[i].superTbls[j].colsOfCreateChildTable); + g_Dbs.db[i].superTbls[j].colsOfCreateChildTable = NULL; } if (0 != g_Dbs.db[i].superTbls[j].sampleDataBuf) { free(g_Dbs.db[i].superTbls[j].sampleDataBuf); @@ -3523,13 +3753,15 @@ int generateRowData(char* dataBuf, int maxLen, int64_t timestamp, SSuperTable* return (-1); } } + dataLen -= 2; dataLen += snprintf(dataBuf + dataLen, maxLen - dataLen, ")"); - + return dataLen; } -void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDataBuf) { +static void syncWriteForNumberOfTblInOneSql( + threadInfo *winfo, FILE *fp, char* sampleDataBuf) { SSuperTable* superTblInfo = winfo->superTblInfo; int samplePos = 0; @@ -3551,30 +3783,18 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa numberOfTblInOneSql = tbls; } - int64_t time_counter = winfo->start_time; - int64_t tmp_time; + uint64_t time_counter = winfo->start_time; int sampleUsePos; int64_t st = 0; int64_t et = 0; for (int i = 0; i < superTblInfo->insertRows;) { - if (superTblInfo->insertRate && (et - st) < 1000) { - taosMsleep(1000 - (et - st)); // ms - //printf("========sleep duration:%"PRId64 "========inserted rows:%d, table range:%d - %d\n", (1000 - (et - st)), i, winfo->start_table_id, winfo->end_table_id); - } - - if (superTblInfo->insertRate) { - st = taosGetTimestampMs(); - } - int32_t tbl_id = 0; for (int tID = winfo->start_table_id; tID <= winfo->end_table_id; ) { + int64_t tmp_time = 0; int inserted = i; - int k = 0; - int batchRowsSql = 0; - while (1) - { + for (int k = 0; k < g_args.num_of_RPR;) { int len = 0; memset(buffer, 0, superTblInfo->maxSqlLen); char *pstr = buffer; @@ -3583,6 +3803,7 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa if (end_tbl_id > winfo->end_table_id) { end_tbl_id = winfo->end_table_id+1; } + for (tbl_id = tID; tbl_id < end_tbl_id; tbl_id++) { sampleUsePos = samplePos; if (AUTO_CREATE_SUBTBL == superTblInfo->autoCreateTable) { @@ 
-3590,47 +3811,97 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa if (0 == superTblInfo->tagSource) { tagsValBuf = generateTagVaulesForStb(superTblInfo); } else { - tagsValBuf = getTagValueFromTagSample(superTblInfo, tbl_id % superTblInfo->tagSampleCount); + tagsValBuf = getTagValueFromTagSample( + superTblInfo, tbl_id % superTblInfo->tagSampleCount); } if (NULL == tagsValBuf) { goto free_and_statistics; } if (0 == len) { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s%d using %s.%s tags %s values ", winfo->db_name, superTblInfo->childTblPrefix, tbl_id, winfo->db_name, superTblInfo->sTblName, tagsValBuf); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s%d using %s.%s tags %s values ", + winfo->db_name, + superTblInfo->childTblPrefix, + tbl_id, + winfo->db_name, + superTblInfo->sTblName, + tagsValBuf); } else { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, " %s.%s%d using %s.%s tags %s values ", winfo->db_name, superTblInfo->childTblPrefix, tbl_id, winfo->db_name, superTblInfo->sTblName, tagsValBuf); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + " %s.%s%d using %s.%s tags %s values ", + winfo->db_name, + superTblInfo->childTblPrefix, + tbl_id, + winfo->db_name, + superTblInfo->sTblName, + tagsValBuf); } tmfree(tagsValBuf); } else if (TBL_ALREADY_EXISTS == superTblInfo->childTblExists) { if (0 == len) { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s values ", winfo->db_name, superTblInfo->childTblName + tbl_id * TSDB_TABLE_NAME_LEN); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s values ", + winfo->db_name, + superTblInfo->childTblName + tbl_id * TSDB_TABLE_NAME_LEN); } else { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, " %s.%s values ", winfo->db_name, superTblInfo->childTblName + tbl_id * TSDB_TABLE_NAME_LEN); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + " %s.%s values ", + winfo->db_name, + superTblInfo->childTblName + tbl_id * TSDB_TABLE_NAME_LEN); } } else { // pre-create child table if (0 == len) { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s%d values ", winfo->db_name, superTblInfo->childTblPrefix, tbl_id); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s%d values ", + winfo->db_name, + superTblInfo->childTblPrefix, + tbl_id); } else { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, " %s.%s%d values ", winfo->db_name, superTblInfo->childTblPrefix, tbl_id); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + " %s.%s%d values ", + winfo->db_name, + superTblInfo->childTblPrefix, + tbl_id); } } - + tmp_time = time_counter; for (k = 0; k < superTblInfo->rowsPerTbl;) { int retLen = 0; - if (0 == strncasecmp(superTblInfo->dataSource, "sample", 6)) { - retLen = getRowDataFromSample(pstr + len, superTblInfo->maxSqlLen - len, tmp_time += superTblInfo->timeStampStep, superTblInfo, &sampleUsePos, fp, sampleDataBuf); + if (0 == strncasecmp(superTblInfo->dataSource, + "sample", strlen("sample"))) { + retLen = getRowDataFromSample(pstr + len, + superTblInfo->maxSqlLen - len, + tmp_time += superTblInfo->timeStampStep, + superTblInfo, + &sampleUsePos, + fp, + sampleDataBuf); if (retLen < 0) { goto free_and_statistics; } - } else if (0 == strncasecmp(superTblInfo->dataSource, "rand", 8)) { + } else if (0 == strncasecmp( + superTblInfo->dataSource, "rand", 
strlen("rand"))) { int rand_num = rand_tinyint() % 100; - if (0 != superTblInfo->disorderRatio && rand_num < superTblInfo->disorderRatio) { + if (0 != superTblInfo->disorderRatio + && rand_num < superTblInfo->disorderRatio) { int64_t d = tmp_time - rand() % superTblInfo->disorderRange; - retLen = generateRowData(pstr + len, superTblInfo->maxSqlLen - len, d, superTblInfo); + retLen = generateRowData(pstr + len, + superTblInfo->maxSqlLen - len, + d, + superTblInfo); } else { - retLen = generateRowData(pstr + len, superTblInfo->maxSqlLen - len, tmp_time += superTblInfo->timeStampStep, superTblInfo); + retLen = generateRowData(pstr + len, + superTblInfo->maxSqlLen - len, + tmp_time += superTblInfo->timeStampStep, + superTblInfo); } if (retLen < 0) { goto free_and_statistics; @@ -3640,30 +3911,44 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa //inserted++; k++; totalRowsInserted++; - batchRowsSql++; - - if (inserted >= superTblInfo->insertRows || (superTblInfo->maxSqlLen - len) < (superTblInfo->lenOfOneRow + 128) || batchRowsSql >= INT16_MAX - 1) { + + if (inserted >= superTblInfo->insertRows || + (superTblInfo->maxSqlLen - len) < (superTblInfo->lenOfOneRow + 128)) { tID = tbl_id + 1; - printf("config rowsPerTbl and numberOfTblInOneSql not match with max_sql_lenth, please reconfig![lenOfOneRow:%d]\n", superTblInfo->lenOfOneRow); + printf("config rowsPerTbl and numberOfTblInOneSql not match with max_sql_lenth, please reconfig![lenOfOneRow:%d]\n", + superTblInfo->lenOfOneRow); goto send_to_server; } } - } tID = tbl_id; inserted += superTblInfo->rowsPerTbl; - send_to_server: - batchRowsSql = 0; - if (0 == strncasecmp(superTblInfo->insertMode, "taosc", 5)) { +send_to_server: + if (g_args.insert_interval && (g_args.insert_interval > (et - st))) { + int sleep_time = g_args.insert_interval - (et -st); + printf("sleep: %d ms specified by insert_interval\n", sleep_time); + taosMsleep(sleep_time); // ms + } + + if (g_args.insert_interval) { + st = taosGetTimestampMs(); + } + + if (0 == strncasecmp(superTblInfo->insertMode, + "taosc", + strlen("taosc"))) { //printf("multi table===== sql: %s \n\n", buffer); //int64_t t1 = taosGetTimestampMs(); int64_t startTs; int64_t endTs; startTs = taosGetTimestampUs(); - int affectedRows = queryDbExec(winfo->taos, buffer, INSERT_TYPE); + debugPrint("%s() LN%d buff: %s\n", __func__, __LINE__, buffer); + int affectedRows = queryDbExec( + winfo->taos, buffer, INSERT_TYPE); + if (0 > affectedRows) { goto free_and_statistics; } else { @@ -3673,35 +3958,40 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa if (delay < winfo->minDelay) winfo->minDelay = delay; winfo->cntDelay++; winfo->totalDelay += delay; - //winfo->avgDelay = (double)winfo->totalDelay / winfo->cntDelay; + winfo->avgDelay = (double)winfo->totalDelay / winfo->cntDelay; + winfo->totalAffectedRows += affectedRows; } - totalAffectedRows += affectedRows; int64_t currentPrintTime = taosGetTimestampMs(); if (currentPrintTime - lastPrintTime > 30*1000) { - printf("thread[%d] has currently inserted rows: %"PRId64 ", affected rows: %"PRId64 "\n", winfo->threadID, totalRowsInserted, totalAffectedRows); + printf("thread[%d] has currently inserted rows: %"PRId64 ", affected rows: %"PRId64 "\n", + winfo->threadID, + winfo->totalRowsInserted, + winfo->totalAffectedRows); lastPrintTime = currentPrintTime; } //int64_t t2 = taosGetTimestampMs(); - //printf("taosc insert sql return, Spent %.4f seconds \n", (double)(t2 - t1)/1000.0); + //printf("taosc insert 
sql return, Spent %.4f seconds \n", (double)(t2 - t1)/1000.0); } else { //int64_t t1 = taosGetTimestampMs(); int retCode = postProceSql(g_Dbs.host, g_Dbs.port, buffer); //int64_t t2 = taosGetTimestampMs(); //printf("http insert sql return, Spent %ld ms \n", t2 - t1); - + if (0 != retCode) { printf("========restful return fail, threadID[%d]\n", winfo->threadID); goto free_and_statistics; } } - - //printf("========tID:%d, k:%d, loop_cnt:%d\n", tID, k, loop_cnt); + if (g_args.insert_interval) { + et = taosGetTimestampMs(); + } + break; } if (tID > winfo->end_table_id) { - if (0 == strncasecmp(superTblInfo->dataSource, "sample", 6)) { + if (0 == strncasecmp(superTblInfo->dataSource, "sample", strlen("sample"))) { samplePos = sampleUsePos; } i = inserted; @@ -3709,13 +3999,10 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa } } - if (superTblInfo->insertRate) { - et = taosGetTimestampMs(); - } //printf("========loop %d childTables duration:%"PRId64 "========inserted rows:%d\n", winfo->end_table_id - winfo->start_table_id, et - st, i); } - free_and_statistics: +free_and_statistics: tmfree(buffer); winfo->totalRowsInserted = totalRowsInserted; winfo->totalAffectedRows = totalAffectedRows; @@ -3723,6 +4010,64 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa return; } +int32_t generateData(char *res, char **data_type, + int num_of_cols, int64_t timestamp, int len_of_binary) { + memset(res, 0, MAX_DATA_SIZE); + char *pstr = res; + pstr += sprintf(pstr, "(%" PRId64, timestamp); + int c = 0; + + for (; c < MAX_NUM_DATATYPE; c++) { + if (data_type[c] == NULL) { + break; + } + } + + if (0 == c) { + perror("data type error!"); + exit(-1); + } + + for (int i = 0; i < num_of_cols; i++) { + if (strcasecmp(data_type[i % c], "tinyint") == 0) { + pstr += sprintf(pstr, ", %d", rand_tinyint() ); + } else if (strcasecmp(data_type[i % c], "smallint") == 0) { + pstr += sprintf(pstr, ", %d", rand_smallint()); + } else if (strcasecmp(data_type[i % c], "int") == 0) { + pstr += sprintf(pstr, ", %d", rand_int()); + } else if (strcasecmp(data_type[i % c], "bigint") == 0) { + pstr += sprintf(pstr, ", %" PRId64, rand_bigint()); + } else if (strcasecmp(data_type[i % c], "float") == 0) { + pstr += sprintf(pstr, ", %10.4f", rand_float()); + } else if (strcasecmp(data_type[i % c], "double") == 0) { + double t = rand_double(); + pstr += sprintf(pstr, ", %20.8f", t); + } else if (strcasecmp(data_type[i % c], "bool") == 0) { + bool b = rand() & 1; + pstr += sprintf(pstr, ", %s", b ? 
"true" : "false"); + } else if (strcasecmp(data_type[i % c], "binary") == 0) { + char *s = malloc(len_of_binary); + rand_string(s, len_of_binary); + pstr += sprintf(pstr, ", \"%s\"", s); + free(s); + }else if (strcasecmp(data_type[i % c], "nchar") == 0) { + char *s = malloc(len_of_binary); + rand_string(s, len_of_binary); + pstr += sprintf(pstr, ", \"%s\"", s); + free(s); + } + + if (pstr - res > MAX_DATA_SIZE) { + perror("column length too long, abort"); + exit(-1); + } + } + + pstr += sprintf(pstr, ")"); + + return (int32_t)(pstr - res); +} + // sync insertion /* 1 thread: 100 tables * 2000 rows/s @@ -3731,11 +4076,127 @@ void syncWriteForNumberOfTblInOneSql(threadInfo *winfo, FILE *fp, char* sampleDa 2 taosinsertdata , 1 thread: 10 tables * 20000 rows/s */ -void *syncWrite(void *sarg) { - int64_t totalRowsInserted = 0; - int64_t totalAffectedRows = 0; - int64_t lastPrintTime = taosGetTimestampMs(); - +static void* syncWrite(void *sarg) { + + threadInfo *winfo = (threadInfo *)sarg; + + char buffer[BUFFER_SIZE] = "\0"; + char data[MAX_DATA_SIZE]; + char **data_type = g_args.datatype; + int len_of_binary = g_args.len_of_binary; + + int ncols_per_record = 1; // count first col ts + int i = 0; + while(g_args.datatype[i]) { + i ++; + ncols_per_record ++; + } + + srand((uint32_t)time(NULL)); + int64_t time_counter = winfo->start_time; + + uint64_t st = 0; + uint64_t et = 0; + + winfo->totalRowsInserted = 0; + winfo->totalAffectedRows = 0; + + for (int tID = winfo->start_table_id; tID <= winfo->end_table_id; tID++) { + for (int i = 0; i < g_args.num_of_DPT;) { + + int tblInserted = i; + int64_t tmp_time = time_counter; + + char *pstr = buffer; + pstr += sprintf(pstr, + "insert into %s.%s%d values ", + winfo->db_name, g_args.tb_prefix, tID); + int k; + for (k = 0; k < g_args.num_of_RPR;) { + int rand_num = rand() % 100; + int len = -1; + + if ((g_args.disorderRatio != 0) + && (rand_num < g_args.disorderRange)) { + + int64_t d = tmp_time - rand() % 1000000 + rand_num; + len = generateData(data, data_type, + ncols_per_record, d, len_of_binary); + } else { + len = generateData(data, data_type, + ncols_per_record, tmp_time += 1000, len_of_binary); + } + + //assert(len + pstr - buffer < BUFFER_SIZE); + if (len + pstr - buffer >= BUFFER_SIZE) { // too long + break; + } + + pstr += sprintf(pstr, " %s", data); + tblInserted++; + k++; + i++; + + if (tblInserted >= g_args.num_of_DPT) + break; + } + + winfo->totalRowsInserted += k; + /* puts(buffer); */ + int64_t startTs; + int64_t endTs; + startTs = taosGetTimestampUs(); + //queryDB(winfo->taos, buffer); + if (i > 0 && g_args.insert_interval + && (g_args.insert_interval > (et - st) )) { + int sleep_time = g_args.insert_interval - (et -st); + printf("sleep: %d ms specified by insert_interval\n", sleep_time); + taosMsleep(sleep_time); // ms + } + + if (g_args.insert_interval) { + st = taosGetTimestampMs(); + } + verbosePrint("%s() LN%d %s\n", __func__, __LINE__, buffer); + int affectedRows = queryDbExec(winfo->taos, buffer, 1); + + verbosePrint("%s() LN%d: affectedRows:%d\n", __func__, __LINE__, affectedRows); + if (0 <= affectedRows){ + endTs = taosGetTimestampUs(); + int64_t delay = endTs - startTs; + if (delay > winfo->maxDelay) + winfo->maxDelay = delay; + if (delay < winfo->minDelay) + winfo->minDelay = delay; + winfo->cntDelay++; + winfo->totalDelay += delay; + winfo->totalAffectedRows += affectedRows; + winfo->avgDelay = (double)winfo->totalDelay / winfo->cntDelay; + } + + verbosePrint("%s() LN%d: totalaffectedRows:%"PRId64"\n", __func__, __LINE__, 
winfo->totalAffectedRows); + if (g_args.insert_interval) { + et = taosGetTimestampMs(); + } + + if (tblInserted >= g_args.num_of_DPT) { + break; + } + } // num_of_DPT + } // tId + + printf("====thread[%d] completed total inserted rows: %"PRId64 ", total affected rows: %"PRId64 "====\n", + winfo->threadID, + winfo->totalRowsInserted, + winfo->totalAffectedRows); + + return NULL; +} + + +static void* syncWriteWithStb(void *sarg) { + uint64_t lastPrintTime = taosGetTimestampMs(); + threadInfo *winfo = (threadInfo *)sarg; SSuperTable* superTblInfo = winfo->superTblInfo; @@ -3744,20 +4205,27 @@ void *syncWrite(void *sarg) { int samplePos = 0; // each thread read sample data from csv file - if (0 == strncasecmp(superTblInfo->dataSource, "sample", 6)) { - sampleDataBuf = calloc(superTblInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE, 1); + if (0 == strncasecmp(superTblInfo->dataSource, + "sample", + strlen("sample"))) { + sampleDataBuf = calloc( + superTblInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE, 1); if (sampleDataBuf == NULL) { - printf("Failed to calloc %d Bytes, reason:%s\n", superTblInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE, strerror(errno)); + printf("Failed to calloc %d Bytes, reason:%s\n", + superTblInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE, + strerror(errno)); return NULL; } - + fp = fopen(superTblInfo->sampleFile, "r"); if (fp == NULL) { - printf("Failed to open sample file: %s, reason:%s\n", superTblInfo->sampleFile, strerror(errno)); + printf("Failed to open sample file: %s, reason:%s\n", + superTblInfo->sampleFile, strerror(errno)); tmfree(sampleDataBuf); return NULL; } - int ret = readSampleFromCsvFileToMem(fp, superTblInfo, sampleDataBuf); + int ret = readSampleFromCsvFileToMem(fp, + superTblInfo, sampleDataBuf); if (0 != ret) { tmfree(sampleDataBuf); tmfclose(fp); @@ -3772,127 +4240,150 @@ void *syncWrite(void *sarg) { return NULL; } - //printf("========threadID[%d], table rang: %d - %d \n", winfo->threadID, winfo->start_table_id, winfo->end_table_id); - char* buffer = calloc(superTblInfo->maxSqlLen, 1); + if (NULL == buffer) { + printf("Failed to calloc %d Bytes, reason:%s\n", + superTblInfo->maxSqlLen, + strerror(errno)); + tmfree(sampleDataBuf); + tmfclose(fp); + return NULL; + } - int nrecords_per_request = 0; - if (AUTO_CREATE_SUBTBL == superTblInfo->autoCreateTable) { - nrecords_per_request = (superTblInfo->maxSqlLen - 1280 - superTblInfo->lenOfTagOfOneRow) / superTblInfo->lenOfOneRow; - } else { - nrecords_per_request = (superTblInfo->maxSqlLen - 1280) / superTblInfo->lenOfOneRow; - } + uint64_t st = 0; + uint64_t et = 0; - int nrecords_no_last_req = nrecords_per_request; - int nrecords_last_req = 0; - int loop_cnt = 0; - if (0 != superTblInfo->insertRate) { - if (nrecords_no_last_req >= superTblInfo->insertRate) { - nrecords_no_last_req = superTblInfo->insertRate; - } else { - nrecords_last_req = superTblInfo->insertRate % nrecords_per_request; - loop_cnt = (superTblInfo->insertRate / nrecords_per_request) + (superTblInfo->insertRate % nrecords_per_request ? 
1 : 0) ; - } - } - - if (nrecords_no_last_req <= 0) { - nrecords_no_last_req = 1; - } + winfo->totalRowsInserted = 0; + winfo->totalAffectedRows = 0; - if (nrecords_no_last_req >= INT16_MAX) { - nrecords_no_last_req = INT16_MAX - 1; - } + int sampleUsePos; - if (nrecords_last_req >= INT16_MAX) { - nrecords_last_req = INT16_MAX - 1; - } + debugPrint("%s() LN%d insertRows=%"PRId64"\n", __func__, __LINE__, superTblInfo->insertRows); - int nrecords_cur_req = nrecords_no_last_req; - int loop_cnt_orig = loop_cnt; + for (uint32_t tID = winfo->start_table_id; tID <= winfo->end_table_id; + tID++) { + int64_t start_time = winfo->start_time; - //printf("========nrecords_per_request:%d, nrecords_no_last_req:%d, nrecords_last_req:%d, loop_cnt:%d\n", nrecords_per_request, nrecords_no_last_req, nrecords_last_req, loop_cnt); + for (int i = 0; i < superTblInfo->insertRows;) { - int64_t time_counter = winfo->start_time; + int64_t tblInserted = i; - int64_t st = 0; - int64_t et = 0; - for (int i = 0; i < superTblInfo->insertRows;) { - if (superTblInfo->insertRate && (et - st) < 1000) { - taosMsleep(1000 - (et - st)); // ms - //printf("========sleep duration:%"PRId64 "========inserted rows:%d, table range:%d - %d\n", (1000 - (et - st)), i, winfo->start_table_id, winfo->end_table_id); - } + if (i > 0 && g_args.insert_interval + && (g_args.insert_interval > (et - st) )) { + int sleep_time = g_args.insert_interval - (et -st); + printf("sleep: %d ms specified by insert_interval\n", sleep_time); + taosMsleep(sleep_time); // ms + } - if (superTblInfo->insertRate) { - st = taosGetTimestampMs(); - } - - for (int tID = winfo->start_table_id; tID <= winfo->end_table_id; tID++) { - int inserted = i; - int64_t tmp_time = time_counter; + if (g_args.insert_interval) { + st = taosGetTimestampMs(); + } - int sampleUsePos = samplePos; - int k = 0; - while (1) - { - int len = 0; - memset(buffer, 0, superTblInfo->maxSqlLen); - char *pstr = buffer; + sampleUsePos = samplePos; + verbosePrint("%s() LN%d num_of_RPR=%d\n", __func__, __LINE__, g_args.num_of_RPR); + + memset(buffer, 0, superTblInfo->maxSqlLen); + int len = 0; + + char *pstr = buffer; - if (AUTO_CREATE_SUBTBL == superTblInfo->autoCreateTable) { + if (AUTO_CREATE_SUBTBL == superTblInfo->autoCreateTable) { char* tagsValBuf = NULL; if (0 == superTblInfo->tagSource) { tagsValBuf = generateTagVaulesForStb(superTblInfo); } else { - tagsValBuf = getTagValueFromTagSample(superTblInfo, tID % superTblInfo->tagSampleCount); + tagsValBuf = getTagValueFromTagSample( + superTblInfo, + tID % superTblInfo->tagSampleCount); } if (NULL == tagsValBuf) { goto free_and_statistics_2; } - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s%d using %s.%s tags %s values", winfo->db_name, superTblInfo->childTblPrefix, tID, winfo->db_name, superTblInfo->sTblName, tagsValBuf); + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s%d using %s.%s tags %s values", + winfo->db_name, + superTblInfo->childTblPrefix, + tID, + winfo->db_name, + superTblInfo->sTblName, + tagsValBuf); tmfree(tagsValBuf); - } else if (TBL_ALREADY_EXISTS == superTblInfo->childTblExists) { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s values", winfo->db_name, superTblInfo->childTblName + tID * TSDB_TABLE_NAME_LEN); - } else { - len += snprintf(pstr + len, superTblInfo->maxSqlLen - len, "insert into %s.%s%d values", winfo->db_name, superTblInfo->childTblPrefix, tID); - } - - for (k = 0; k < nrecords_cur_req;) { - int retLen = 0; - if (0 == 
strncasecmp(superTblInfo->dataSource, "sample", 6)) { - retLen = getRowDataFromSample(pstr + len, superTblInfo->maxSqlLen - len, tmp_time += superTblInfo->timeStampStep, superTblInfo, &sampleUsePos, fp, sampleDataBuf); + } else if (TBL_ALREADY_EXISTS == superTblInfo->childTblExists) { + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s values", + winfo->db_name, + superTblInfo->childTblName + tID * TSDB_TABLE_NAME_LEN); + } else { + len += snprintf(pstr + len, + superTblInfo->maxSqlLen - len, + "insert into %s.%s%d values", + winfo->db_name, + superTblInfo->childTblPrefix, + tID); + } + + int k; + for (k = 0; k < g_args.num_of_RPR;) { + int retLen = 0; + if (0 == strncasecmp(superTblInfo->dataSource, "sample", strlen("sample"))) { + retLen = getRowDataFromSample( + pstr + len, + superTblInfo->maxSqlLen - len, + start_time + superTblInfo->timeStampStep * i, + superTblInfo, + &sampleUsePos, + fp, + sampleDataBuf); if (retLen < 0) { goto free_and_statistics_2; } - } else if (0 == strncasecmp(superTblInfo->dataSource, "rand", 8)) { + } else if (0 == strncasecmp(superTblInfo->dataSource, "rand", strlen("rand"))) { int rand_num = rand_tinyint() % 100; - if (0 != superTblInfo->disorderRatio && rand_num < superTblInfo->disorderRatio) { - int64_t d = tmp_time - rand() % superTblInfo->disorderRange; - retLen = generateRowData(pstr + len, superTblInfo->maxSqlLen - len, d, superTblInfo); + if (0 != superTblInfo->disorderRatio + && rand_num < superTblInfo->disorderRatio) { + int64_t d = start_time - rand() % superTblInfo->disorderRange; + retLen = generateRowData( + pstr + len, + superTblInfo->maxSqlLen - len, + d, + superTblInfo); //printf("disorder rows, rand_num:%d, last ts:%"PRId64" current ts:%"PRId64"\n", rand_num, tmp_time, d); - } else { - retLen = generateRowData(pstr + len, superTblInfo->maxSqlLen - len, tmp_time += superTblInfo->timeStampStep, superTblInfo); + } else { + retLen = generateRowData( + pstr + len, + superTblInfo->maxSqlLen - len, + start_time + superTblInfo->timeStampStep * i, + superTblInfo); } if (retLen < 0) { goto free_and_statistics_2; } - } - len += retLen; - inserted++; - k++; - totalRowsInserted++; - - if (inserted >= superTblInfo->insertRows || (superTblInfo->maxSqlLen - len) < (superTblInfo->lenOfOneRow + 128)) break; } + + len += retLen; + verbosePrint("%s() LN%d retLen=%d len=%d k=%d buffer=%s\n", __func__, __LINE__, retLen, len, k, buffer); + + tblInserted++; + k++; + i++; + + if (tblInserted >= superTblInfo->insertRows) + break; + } - if (0 == strncasecmp(superTblInfo->insertMode, "taosc", 5)) { - //printf("===== sql: %s \n\n", buffer); - //int64_t t1 = taosGetTimestampMs(); + winfo->totalRowsInserted += k; + + if (0 == strncasecmp(superTblInfo->insertMode, "taosc", strlen("taosc"))) { int64_t startTs; int64_t endTs; startTs = taosGetTimestampUs(); + verbosePrint("%s() LN%d %s\n", __func__, __LINE__, buffer); int affectedRows = queryDbExec(winfo->taos, buffer, INSERT_TYPE); + if (0 > affectedRows){ goto free_and_statistics_2; } else { @@ -3902,76 +4393,59 @@ void *syncWrite(void *sarg) { if (delay < winfo->minDelay) winfo->minDelay = delay; winfo->cntDelay++; winfo->totalDelay += delay; - //winfo->avgDelay = (double)winfo->totalDelay / winfo->cntDelay; } - totalAffectedRows += affectedRows; + winfo->totalAffectedRows += affectedRows; int64_t currentPrintTime = taosGetTimestampMs(); if (currentPrintTime - lastPrintTime > 30*1000) { - printf("thread[%d] has currently inserted rows: %"PRId64 ", affected rows: %"PRId64 "\n", 
winfo->threadID, totalRowsInserted, totalAffectedRows); + printf("thread[%d] has currently inserted rows: %"PRId64 ", affected rows: %"PRId64 "\n", + winfo->threadID, + winfo->totalRowsInserted, + winfo->totalAffectedRows); lastPrintTime = currentPrintTime; } - //int64_t t2 = taosGetTimestampMs(); - //printf("taosc insert sql return, Spent %.4f seconds \n", (double)(t2 - t1)/1000.0); - } else { - //int64_t t1 = taosGetTimestampMs(); + } else { int retCode = postProceSql(g_Dbs.host, g_Dbs.port, buffer); - //int64_t t2 = taosGetTimestampMs(); - //printf("http insert sql return, Spent %ld ms \n", t2 - t1); if (0 != retCode) { printf("========restful return fail, threadID[%d]\n", winfo->threadID); goto free_and_statistics_2; } - } - - //printf("========tID:%d, k:%d, loop_cnt:%d\n", tID, k, loop_cnt); - - if (loop_cnt) { - loop_cnt--; - if ((1 == loop_cnt) && (0 != nrecords_last_req)) { - nrecords_cur_req = nrecords_last_req; - } else if (0 == loop_cnt){ - nrecords_cur_req = nrecords_no_last_req; - loop_cnt = loop_cnt_orig; - break; - } - } else { - break; - } } + if (g_args.insert_interval) { + et = taosGetTimestampMs(); + } + + if (tblInserted >= superTblInfo->insertRows) + break; + } // num_of_DPT - if (tID == winfo->end_table_id) { - if (0 == strncasecmp(superTblInfo->dataSource, "sample", 6)) { + if (tID == winfo->end_table_id) { + if (0 == strncasecmp( + superTblInfo->dataSource, "sample", strlen("sample"))) { samplePos = sampleUsePos; } - i = inserted; - time_counter = tmp_time; - } - } - - if (superTblInfo->insertRate) { - et = taosGetTimestampMs(); + } //printf("========loop %d childTables duration:%"PRId64 "========inserted rows:%d\n", winfo->end_table_id - winfo->start_table_id, et - st, i); - } + } // tID - free_and_statistics_2: +free_and_statistics_2: tmfree(buffer); tmfree(sampleDataBuf); tmfclose(fp); - winfo->totalRowsInserted = totalRowsInserted; - winfo->totalAffectedRows = totalAffectedRows; - - printf("====thread[%d] completed total inserted rows: %"PRId64 ", total affected rows: %"PRId64 "====\n", winfo->threadID, totalRowsInserted, totalAffectedRows); + printf("====thread[%d] completed total inserted rows: %"PRId64 ", total affected rows: %"PRId64 "====\n", + winfo->threadID, + winfo->totalRowsInserted, + winfo->totalAffectedRows); return NULL; } void callBack(void *param, TAOS_RES *res, int code) { threadInfo* winfo = (threadInfo*)param; - if (winfo->superTblInfo->insertRate) { + if (g_args.insert_interval) { winfo->et = taosGetTimestampMs(); if (winfo->et - winfo->st < 1000) { taosMsleep(1000 - (winfo->et - winfo->st)); // ms @@ -3982,7 +4456,8 @@ void callBack(void *param, TAOS_RES *res, int code) { char *data = calloc(1, MAX_DATA_SIZE); char *pstr = buffer; pstr += sprintf(pstr, "insert into %s.%s%d values", winfo->db_name, winfo->tb_prefix, winfo->start_table_id); - if (winfo->counter >= winfo->superTblInfo->insertRows) { +// if (winfo->counter >= winfo->superTblInfo->insertRows) { + if (winfo->counter >= g_args.num_of_RPR) { winfo->start_table_id++; winfo->counter = 0; } @@ -3994,7 +4469,7 @@ void callBack(void *param, TAOS_RES *res, int code) { return; } - for (int i = 0; i < winfo->nrecords_per_request; i++) { + for (int i = 0; i < g_args.num_of_RPR; i++) { int rand_num = rand() % 100; if (0 != winfo->superTblInfo->disorderRatio && rand_num < winfo->superTblInfo->disorderRatio) { @@ -4013,7 +4488,7 @@ void callBack(void *param, TAOS_RES *res, int code) { } } - if (winfo->superTblInfo->insertRate) { + if (g_args.insert_interval) { winfo->st = taosGetTimestampMs(); 
} taos_query_a(winfo->taos, buffer, callBack, winfo); @@ -4026,36 +4501,11 @@ void callBack(void *param, TAOS_RES *res, int code) { void *asyncWrite(void *sarg) { threadInfo *winfo = (threadInfo *)sarg; - winfo->nrecords_per_request = 0; - //if (AUTO_CREATE_SUBTBL == winfo->superTblInfo->autoCreateTable) { - winfo->nrecords_per_request = (winfo->superTblInfo->maxSqlLen - 1280 - winfo->superTblInfo->lenOfTagOfOneRow) / winfo->superTblInfo->lenOfOneRow; - //} else { - // winfo->nrecords_per_request = (winfo->superTblInfo->maxSqlLen - 1280) / winfo->superTblInfo->lenOfOneRow; - //} - - if (0 != winfo->superTblInfo->insertRate) { - if (winfo->nrecords_per_request >= winfo->superTblInfo->insertRate) { - winfo->nrecords_per_request = winfo->superTblInfo->insertRate; - } - } - - if (winfo->nrecords_per_request <= 0) { - winfo->nrecords_per_request = 1; - } - - if (winfo->nrecords_per_request >= INT16_MAX) { - winfo->nrecords_per_request = INT16_MAX - 1; - } - - if (winfo->nrecords_per_request >= INT16_MAX) { - winfo->nrecords_per_request = INT16_MAX - 1; - } - winfo->st = 0; winfo->et = 0; winfo->lastTs = winfo->start_time; - if (winfo->superTblInfo->insertRate) { + if (g_args.insert_interval) { winfo->st = taosGetTimestampMs(); } taos_query_a(winfo->taos, "show databases", callBack, winfo); @@ -4065,23 +4515,30 @@ void *asyncWrite(void *sarg) { return NULL; } -void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSuperTable* superTblInfo) { - pthread_t *pids = malloc(threads * sizeof(pthread_t)); - threadInfo *infos = malloc(threads * sizeof(threadInfo)); - memset(pids, 0, threads * sizeof(pthread_t)); - memset(infos, 0, threads * sizeof(threadInfo)); - int ntables = superTblInfo->childTblCount; +void startMultiThreadInsertData(int threads, char* db_name, char* precision, + SSuperTable* superTblInfo) { - int a = ntables / threads; - if (a < 1) { - threads = ntables; - a = 1; - } + pthread_t *pids = malloc(threads * sizeof(pthread_t)); + threadInfo *infos = malloc(threads * sizeof(threadInfo)); + memset(pids, 0, threads * sizeof(pthread_t)); + memset(infos, 0, threads * sizeof(threadInfo)); - int b = 0; - if (threads != 0) { - b = ntables % threads; - } + int ntables = 0; + if (superTblInfo) + ntables = superTblInfo->childTblCount; + else + ntables = g_args.num_of_tables; + + int a = ntables / threads; + if (a < 1) { + threads = ntables; + a = 1; + } + + int b = 0; + if (threads != 0) { + b = ntables % threads; + } //TAOS* taos; //if (0 == strncasecmp(superTblInfo->insertMode, "taosc", 5)) { @@ -4104,11 +4561,22 @@ void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSu } } - int64_t start_time; - if (0 == strncasecmp(superTblInfo->startTimestamp, "now", 3)) { - start_time = taosGetTimestamp(timePrec); - } else { - (void)taosParseTime(superTblInfo->startTimestamp, &start_time, strlen(superTblInfo->startTimestamp), timePrec, 0); + int64_t start_time; + if (superTblInfo) { + if (0 == strncasecmp(superTblInfo->startTimestamp, "now", 3)) { + start_time = taosGetTimestamp(timePrec); + } else { + if (TSDB_CODE_SUCCESS != taosParseTime( + superTblInfo->startTimestamp, + &start_time, + strlen(superTblInfo->startTimestamp), + timePrec, 0)) { + printf("ERROR to parse time!\n"); + exit(-1); + } + } + } else { + start_time = 1500000000000; } double start = getCurrentTime(); @@ -4123,18 +4591,23 @@ void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSu t_info->start_time = start_time; t_info->minDelay = INT16_MAX; - if (0 == 
strncasecmp(superTblInfo->insertMode, "taosc", 5)) { + if ((NULL == superTblInfo) || + (0 == strncasecmp(superTblInfo->insertMode, "taosc", 5))) { //t_info->taos = taos; - t_info->taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, db_name, g_Dbs.port); + t_info->taos = taos_connect( + g_Dbs.host, g_Dbs.user, + g_Dbs.password, db_name, g_Dbs.port); if (NULL == t_info->taos) { - printf("connect to server fail from insert sub thread, reason: %s\n", taos_errstr(NULL)); + printf("connect to server fail from insert sub thread, reason: %s\n", + taos_errstr(NULL)); exit(-1); } } else { t_info->taos = NULL; } - if (0 == superTblInfo->multiThreadWriteOneTbl) { + if ((NULL == superTblInfo) + || (0 == superTblInfo->multiThreadWriteOneTbl)) { t_info->start_table_id = last; t_info->end_table_id = i < b ? last + a : last + a - 1; last = t_info->end_table_id + 1; @@ -4145,9 +4618,12 @@ void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSu } tsem_init(&(t_info->lock_sem), 0, 0); - if (SYNC == g_Dbs.queryMode) { - pthread_create(pids + i, NULL, syncWrite, t_info); + if (superTblInfo) { + pthread_create(pids + i, NULL, syncWriteWithStb, t_info); + } else { + pthread_create(pids + i, NULL, syncWrite, t_info); + } } else { pthread_create(pids + i, NULL, asyncWrite, t_info); } @@ -4169,10 +4645,12 @@ void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSu tsem_destroy(&(t_info->lock_sem)); taos_close(t_info->taos); - superTblInfo->totalAffectedRows += t_info->totalAffectedRows; - superTblInfo->totalRowsInserted += t_info->totalRowsInserted; + if (superTblInfo) { + superTblInfo->totalAffectedRows += t_info->totalAffectedRows; + superTblInfo->totalRowsInserted += t_info->totalRowsInserted; + } - totalDelay += t_info->totalDelay; + totalDelay += t_info->totalDelay; cntDelay += t_info->cntDelay; if (t_info->maxDelay > maxDelay) maxDelay = t_info->maxDelay; if (t_info->minDelay < minDelay) minDelay = t_info->minDelay; @@ -4184,23 +4662,29 @@ void startMultiThreadInsertData(int threads, char* db_name, char* precision, SSu double end = getCurrentTime(); double t = end - start; - printf("Spent %.4f seconds to insert rows: %"PRId64", affected rows: %"PRId64" with %d thread(s) into %s.%s. %2.f records/second\n\n", - t, superTblInfo->totalRowsInserted, superTblInfo->totalAffectedRows, threads, db_name, superTblInfo->sTblName, superTblInfo->totalRowsInserted / t); - fprintf(g_fpOfInsertResult, "Spent %.4f seconds to insert rows: %"PRId64", affected rows: %"PRId64" with %d thread(s) into %s.%s. %2.f records/second\n\n", - t, superTblInfo->totalRowsInserted, superTblInfo->totalAffectedRows, threads, db_name, superTblInfo->sTblName, superTblInfo->totalRowsInserted / t); + if (superTblInfo) { + printf("Spent %.4f seconds to insert rows: %"PRId64", affected rows: %"PRId64" with %d thread(s) into %s.%s. %2.f records/second\n\n", + t, superTblInfo->totalRowsInserted, + superTblInfo->totalAffectedRows, + threads, db_name, superTblInfo->sTblName, + superTblInfo->totalRowsInserted / t); + fprintf(g_fpOfInsertResult, "Spent %.4f seconds to insert rows: %"PRId64", affected rows: %"PRId64" with %d thread(s) into %s.%s. 
%2.f records/second\n\n", + t, superTblInfo->totalRowsInserted, + superTblInfo->totalAffectedRows, + threads, db_name, superTblInfo->sTblName, + superTblInfo->totalRowsInserted / t); + } printf("insert delay, avg: %10.6fms, max: %10.6fms, min: %10.6fms\n\n", avgDelay/1000.0, (double)maxDelay/1000.0, (double)minDelay/1000.0); fprintf(g_fpOfInsertResult, "insert delay, avg:%10.6fms, max: %10.6fms, min: %10.6fms\n\n", avgDelay/1000.0, (double)maxDelay/1000.0, (double)minDelay/1000.0); - //taos_close(taos); free(pids); free(infos); - } @@ -4209,7 +4693,7 @@ void *readTable(void *sarg) { threadInfo *rinfo = (threadInfo *)sarg; TAOS *taos = rinfo->taos; char command[BUFFER_SIZE] = "\0"; - int64_t sTime = rinfo->start_time; + uint64_t sTime = rinfo->start_time; char *tb_prefix = rinfo->tb_prefix; FILE *fp = fopen(rinfo->fp, "a"); if (NULL == fp) { @@ -4217,7 +4701,14 @@ void *readTable(void *sarg) { return NULL; } - int num_of_DPT = rinfo->superTblInfo->insertRows; // nrecords_per_table; + int num_of_DPT; +/* if (rinfo->superTblInfo) { + num_of_DPT = rinfo->superTblInfo->insertRows; // nrecords_per_table; + } else { + */ + num_of_DPT = g_args.num_of_DPT; +// } + int num_of_tables = rinfo->end_table_id - rinfo->start_table_id + 1; int totalData = num_of_DPT * num_of_tables; bool do_aggreFunc = g_Dbs.do_aggreFunc; @@ -4343,19 +4834,21 @@ void *readMetric(void *sarg) { int insertTestProcess() { - g_fpOfInsertResult = fopen(g_Dbs.resultFile, "a"); - if (NULL == g_fpOfInsertResult) { - fprintf(stderr, "Failed to open %s for save result\n", g_Dbs.resultFile); - return 1; - }; - setupForAnsiEscape(); int ret = printfInsertMeta(); resetAfterAnsiEscape(); + if (ret == -1) exit(EXIT_FAILURE); - printfInsertMetaToFile(g_fpOfInsertResult); + debugPrint("%d result file: %s\n", __LINE__, g_Dbs.resultFile); + g_fpOfInsertResult = fopen(g_Dbs.resultFile, "a"); + if (NULL == g_fpOfInsertResult) { + fprintf(stderr, "Failed to open %s for save result\n", g_Dbs.resultFile); + return -1; + } { + printfInsertMetaToFile(g_fpOfInsertResult); + } if (!g_args.answer_yes) { printf("Press enter key to continue\n\n"); @@ -4365,11 +4858,14 @@ int insertTestProcess() { init_rand_data(); // create database and super tables - (void)createDatabases(); + if(createDatabases() != 0) { + fclose(g_fpOfInsertResult); + return -1; + } // pretreatement prePareSampleData(); - + double start; double end; @@ -4377,26 +4873,41 @@ int insertTestProcess() { start = getCurrentTime(); createChildTables(); end = getCurrentTime(); + if (g_totalChildTables > 0) { - printf("Spent %.4f seconds to create %d tables with %d thread(s)\n\n", end - start, g_totalChildTables, g_Dbs.threadCount); - fprintf(g_fpOfInsertResult, "Spent %.4f seconds to create %d tables with %d thread(s)\n\n", end - start, g_totalChildTables, g_Dbs.threadCount); + printf("Spent %.4f seconds to create %d tables with %d thread(s)\n\n", + end - start, g_totalChildTables, g_Dbs.threadCount); + fprintf(g_fpOfInsertResult, + "Spent %.4f seconds to create %d tables with %d thread(s)\n\n", + end - start, g_totalChildTables, g_Dbs.threadCount); } - - taosMsleep(1000); + taosMsleep(1000); // create sub threads for inserting data //start = getCurrentTime(); - for (int i = 0; i < g_Dbs.dbCount; i++) { - for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { - SSuperTable* superTblInfo = &g_Dbs.db[i].superTbls[j]; - if (0 == g_Dbs.db[i].superTbls[j].insertRows) { - continue; - } - startMultiThreadInsertData(g_Dbs.threadCount, g_Dbs.db[i].dbName, g_Dbs.db[i].dbCfg.precision, superTblInfo); + 
for (int i = 0; i < g_Dbs.dbCount; i++) { + if (g_Dbs.db[i].superTblCount > 0) { + for (int j = 0; j < g_Dbs.db[i].superTblCount; j++) { + SSuperTable* superTblInfo = &g_Dbs.db[i].superTbls[j]; + if (0 == g_Dbs.db[i].superTbls[j].insertRows) { + continue; + } + startMultiThreadInsertData( + g_Dbs.threadCount, + g_Dbs.db[i].dbName, + g_Dbs.db[i].dbCfg.precision, + superTblInfo); + } + } else { + startMultiThreadInsertData( + g_Dbs.threadCount, + g_Dbs.db[i].dbName, + g_Dbs.db[i].dbCfg.precision, + NULL); + } } - } //end = getCurrentTime(); - + //int64_t totalRowsInserted = 0; //int64_t totalAffectedRows = 0; //for (int i = 0; i < g_Dbs.dbCount; i++) { @@ -4405,31 +4916,8 @@ int insertTestProcess() { // totalAffectedRows += g_Dbs.db[i].superTbls[j].totalAffectedRows; //} //printf("Spent %.4f seconds to insert rows: %"PRId64", affected rows: %"PRId64" with %d thread(s)\n\n", end - start, totalRowsInserted, totalAffectedRows, g_Dbs.threadCount); - if (NULL == g_args.metaFile && false == g_Dbs.insert_only) { - // query data - pthread_t read_id; - threadInfo *rInfo = malloc(sizeof(threadInfo)); - rInfo->start_time = 1500000000000; // 2017-07-14 10:40:00.000 - rInfo->start_table_id = 0; - rInfo->end_table_id = g_Dbs.db[0].superTbls[0].childTblCount - 1; - //rInfo->do_aggreFunc = g_Dbs.do_aggreFunc; - //rInfo->nrecords_per_table = g_Dbs.db[0].superTbls[0].insertRows; - rInfo->superTblInfo = &g_Dbs.db[0].superTbls[0]; - rInfo->taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, g_Dbs.db[0].dbName, g_Dbs.port); - strcpy(rInfo->tb_prefix, g_Dbs.db[0].superTbls[0].childTblPrefix); - strcpy(rInfo->fp, g_Dbs.resultFile); - - if (!g_Dbs.use_metric) { - pthread_create(&read_id, NULL, readTable, rInfo); - } else { - pthread_create(&read_id, NULL, readMetric, rInfo); - } - pthread_join(read_id, NULL); - taos_close(rInfo->taos); - } - postFreeResource(); - + return 0; } @@ -4458,12 +4946,15 @@ void *superQueryProcess(void *sarg) { } selectAndGetResult(winfo->taos, g_queryInfo.superQueryInfo.sql[i], tmpFile); int64_t t2 = taosGetTimestampUs(); - printf("=[taosc] thread[%"PRId64"] complete one sql, Spent %f s\n", taosGetSelfPthreadId(), (t2 - t1)/1000000.0); + printf("=[taosc] thread[%"PRId64"] complete one sql, Spent %f s\n", + taosGetSelfPthreadId(), (t2 - t1)/1000000.0); } else { int64_t t1 = taosGetTimestampUs(); - int retCode = postProceSql(g_queryInfo.host, g_queryInfo.port, g_queryInfo.superQueryInfo.sql[i]); + int retCode = postProceSql(g_queryInfo.host, + g_queryInfo.port, g_queryInfo.superQueryInfo.sql[i]); int64_t t2 = taosGetTimestampUs(); - printf("=[restful] thread[%"PRId64"] complete one sql, Spent %f s\n", taosGetSelfPthreadId(), (t2 - t1)/1000000.0); + printf("=[restful] thread[%"PRId64"] complete one sql, Spent %f s\n", + taosGetSelfPthreadId(), (t2 - t1)/1000000.0); if (0 != retCode) { printf("====restful return fail, threadID[%d]\n", winfo->threadID); @@ -4472,7 +4963,8 @@ void *superQueryProcess(void *sarg) { } } et = taosGetTimestampMs(); - printf("==thread[%"PRId64"] complete all sqls to specify tables once queries duration:%.6fs\n\n", taosGetSelfPthreadId(), (double)(et - st)/1000.0); + printf("==thread[%"PRId64"] complete all sqls to specify tables once queries duration:%.6fs\n\n", + taosGetSelfPthreadId(), (double)(et - st)/1000.0); } return NULL; } @@ -4480,7 +4972,9 @@ void *superQueryProcess(void *sarg) { void replaceSubTblName(char* inSql, char* outSql, int tblIndex) { char sourceString[32] = "xxxx"; char subTblName[MAX_TB_NAME_SIZE*3]; - sprintf(subTblName, "%s.%s", 
g_queryInfo.dbName, g_queryInfo.subQueryInfo.childTblName + tblIndex*TSDB_TABLE_NAME_LEN); + sprintf(subTblName, "%s.%s", + g_queryInfo.dbName, + g_queryInfo.subQueryInfo.childTblName + tblIndex*TSDB_TABLE_NAME_LEN); //printf("inSql: %s\n", inSql); @@ -4503,7 +4997,7 @@ void *subQueryProcess(void *sarg) { int64_t st = 0; int64_t et = (int64_t)g_queryInfo.subQueryInfo.rate*1000; while (1) { - if (g_queryInfo.subQueryInfo.rate && (et - st) < g_queryInfo.subQueryInfo.rate*1000) { + if (g_queryInfo.subQueryInfo.rate && (et - st) < (int64_t)g_queryInfo.subQueryInfo.rate*1000) { taosMsleep(g_queryInfo.subQueryInfo.rate*1000 - (et - st)); // ms //printf("========sleep duration:%"PRId64 "========inserted rows:%d, table range:%d - %d\n", (1000 - (et - st)), i, winfo->start_table_id, winfo->end_table_id); } @@ -4514,28 +5008,42 @@ void *subQueryProcess(void *sarg) { memset(sqlstr,0,sizeof(sqlstr)); replaceSubTblName(g_queryInfo.subQueryInfo.sql[j], sqlstr, i); char tmpFile[MAX_FILE_NAME_LEN*2] = {0}; - if (g_queryInfo.subQueryInfo.result[j][0] != 0) { - sprintf(tmpFile, "%s-%d", g_queryInfo.subQueryInfo.result[j], winfo->threadID); + if (g_queryInfo.subQueryInfo.result[i][0] != 0) { + sprintf(tmpFile, "%s-%d", + g_queryInfo.subQueryInfo.result[i], + winfo->threadID); } selectAndGetResult(winfo->taos, sqlstr, tmpFile); } } et = taosGetTimestampMs(); - printf("####thread[%"PRId64"] complete all sqls to allocate all sub-tables[%d - %d] once queries duration:%.4fs\n\n", taosGetSelfPthreadId(), winfo->start_table_id, winfo->end_table_id, (double)(et - st)/1000.0); + printf("####thread[%"PRId64"] complete all sqls to allocate all sub-tables[%d - %d] once queries duration:%.4fs\n\n", + taosGetSelfPthreadId(), + winfo->start_table_id, + winfo->end_table_id, + (double)(et - st)/1000.0); } return NULL; } -int queryTestProcess() { +static int queryTestProcess() { TAOS * taos = NULL; - taos = taos_connect(g_queryInfo.host, g_queryInfo.user, g_queryInfo.password, NULL, g_queryInfo.port); + taos = taos_connect(g_queryInfo.host, + g_queryInfo.user, + g_queryInfo.password, + NULL, + g_queryInfo.port); if (taos == NULL) { fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL)); exit(-1); } if (0 != g_queryInfo.subQueryInfo.sqlCount) { - (void)getAllChildNameOfSuperTable(taos, g_queryInfo.dbName, g_queryInfo.subQueryInfo.sTblName, &g_queryInfo.subQueryInfo.childTblName, &g_queryInfo.subQueryInfo.childTblCount); + (void)getAllChildNameOfSuperTable(taos, + g_queryInfo.dbName, + g_queryInfo.subQueryInfo.sTblName, + &g_queryInfo.subQueryInfo.childTblName, + &g_queryInfo.subQueryInfo.childTblCount); } printfQueryMeta(); @@ -4569,6 +5077,7 @@ int queryTestProcess() { char sqlStr[MAX_TB_NAME_SIZE*2]; sprintf(sqlStr, "use %s", g_queryInfo.dbName); + verbosePrint("%s() %d sqlStr: %s\n", __func__, __LINE__, sqlStr); (void)queryDbExec(t_info->taos, sqlStr, NO_INSERT_TYPE); } else { t_info->taos = NULL; @@ -4655,9 +5164,14 @@ static TAOS_SUB* subscribeImpl(TAOS *taos, char *sql, char* topic, char* resultF TAOS_SUB* tsub = NULL; if (g_queryInfo.superQueryInfo.subscribeMode) { - tsub = taos_subscribe(taos, g_queryInfo.superQueryInfo.subscribeRestart, topic, sql, subscribe_callback, (void*)resultFileName, g_queryInfo.superQueryInfo.subscribeInterval); + tsub = taos_subscribe(taos, + g_queryInfo.superQueryInfo.subscribeRestart, + topic, sql, subscribe_callback, (void*)resultFileName, + g_queryInfo.superQueryInfo.subscribeInterval); } else { - tsub = taos_subscribe(taos, 
g_queryInfo.superQueryInfo.subscribeRestart, topic, sql, NULL, NULL, 0); + tsub = taos_subscribe(taos, + g_queryInfo.superQueryInfo.subscribeRestart, + topic, sql, NULL, NULL, 0); } if (tsub == NULL) { @@ -4674,6 +5188,7 @@ void *subSubscribeProcess(void *sarg) { char sqlStr[MAX_TB_NAME_SIZE*2]; sprintf(sqlStr, "use %s", g_queryInfo.dbName); + debugPrint("%s() %d sqlStr: %s\n", __func__, __LINE__, sqlStr); if (0 != queryDbExec(winfo->taos, sqlStr, NO_INSERT_TYPE)){ return NULL; } @@ -4717,7 +5232,9 @@ void *subSubscribeProcess(void *sarg) { if (res) { char tmpFile[MAX_FILE_NAME_LEN*2] = {0}; if (g_queryInfo.subQueryInfo.result[i][0] != 0) { - sprintf(tmpFile, "%s-%d", g_queryInfo.subQueryInfo.result[i], winfo->threadID); + sprintf(tmpFile, "%s-%d", + g_queryInfo.subQueryInfo.result[i], + winfo->threadID); } getResult(res, tmpFile); } @@ -4726,7 +5243,8 @@ void *subSubscribeProcess(void *sarg) { taos_free_result(res); for (int i = 0; i < g_queryInfo.subQueryInfo.sqlCount; i++) { - taos_unsubscribe(g_queryInfo.subQueryInfo.tsub[i], g_queryInfo.subQueryInfo.subscribeKeepProgress); + taos_unsubscribe(g_queryInfo.subQueryInfo.tsub[i], + g_queryInfo.subQueryInfo.subscribeKeepProgress); } return NULL; } @@ -4736,6 +5254,7 @@ void *superSubscribeProcess(void *sarg) { char sqlStr[MAX_TB_NAME_SIZE*2]; sprintf(sqlStr, "use %s", g_queryInfo.dbName); + debugPrint("%s() %d sqlStr: %s\n", __func__, __LINE__, sqlStr); if (0 != queryDbExec(winfo->taos, sqlStr, NO_INSERT_TYPE)) { return NULL; } @@ -4754,9 +5273,13 @@ void *superSubscribeProcess(void *sarg) { sprintf(topic, "taosdemo-subscribe-%d", i); char tmpFile[MAX_FILE_NAME_LEN*2] = {0}; if (g_queryInfo.subQueryInfo.result[i][0] != 0) { - sprintf(tmpFile, "%s-%d", g_queryInfo.superQueryInfo.result[i], winfo->threadID); + sprintf(tmpFile, "%s-%d", + g_queryInfo.superQueryInfo.result[i], winfo->threadID); } - g_queryInfo.superQueryInfo.tsub[i] = subscribeImpl(winfo->taos, g_queryInfo.superQueryInfo.sql[i], topic, tmpFile); + g_queryInfo.superQueryInfo.tsub[i] = + subscribeImpl(winfo->taos, + g_queryInfo.superQueryInfo.sql[i], + topic, tmpFile); if (NULL == g_queryInfo.superQueryInfo.tsub[i]) { return NULL; } @@ -4777,7 +5300,8 @@ void *superSubscribeProcess(void *sarg) { if (res) { char tmpFile[MAX_FILE_NAME_LEN*2] = {0}; if (g_queryInfo.superQueryInfo.result[i][0] != 0) { - sprintf(tmpFile, "%s-%d", g_queryInfo.superQueryInfo.result[i], winfo->threadID); + sprintf(tmpFile, "%s-%d", + g_queryInfo.superQueryInfo.result[i], winfo->threadID); } getResult(res, tmpFile); } @@ -4786,12 +5310,13 @@ void *superSubscribeProcess(void *sarg) { taos_free_result(res); for (int i = 0; i < g_queryInfo.superQueryInfo.sqlCount; i++) { - taos_unsubscribe(g_queryInfo.superQueryInfo.tsub[i], g_queryInfo.superQueryInfo.subscribeKeepProgress); + taos_unsubscribe(g_queryInfo.superQueryInfo.tsub[i], + g_queryInfo.superQueryInfo.subscribeKeepProgress); } return NULL; } -int subscribeTestProcess() { +static int subscribeTestProcess() { printfQueryMeta(); if (!g_args.answer_yes) { @@ -4800,21 +5325,30 @@ int subscribeTestProcess() { } TAOS * taos = NULL; - taos = taos_connect(g_queryInfo.host, g_queryInfo.user, g_queryInfo.password, g_queryInfo.dbName, g_queryInfo.port); + taos = taos_connect(g_queryInfo.host, + g_queryInfo.user, + g_queryInfo.password, + g_queryInfo.dbName, + g_queryInfo.port); if (taos == NULL) { fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL)); exit(-1); } if (0 != g_queryInfo.subQueryInfo.sqlCount) { - 
(void)getAllChildNameOfSuperTable(taos, g_queryInfo.dbName, g_queryInfo.subQueryInfo.sTblName, &g_queryInfo.subQueryInfo.childTblName, &g_queryInfo.subQueryInfo.childTblCount); + (void)getAllChildNameOfSuperTable(taos, + g_queryInfo.dbName, + g_queryInfo.subQueryInfo.sTblName, + &g_queryInfo.subQueryInfo.childTblName, + &g_queryInfo.subQueryInfo.childTblCount); } pthread_t *pids = NULL; threadInfo *infos = NULL; //==== create sub threads for query from super table - if (g_queryInfo.superQueryInfo.sqlCount > 0 && g_queryInfo.superQueryInfo.concurrent > 0) { + if (g_queryInfo.superQueryInfo.sqlCount > 0 + && g_queryInfo.superQueryInfo.concurrent > 0) { pids = malloc(g_queryInfo.superQueryInfo.concurrent * sizeof(pthread_t)); infos = malloc(g_queryInfo.superQueryInfo.concurrent * sizeof(threadInfo)); if ((NULL == pids) || (NULL == infos)) { @@ -4834,9 +5368,12 @@ int subscribeTestProcess() { //==== create sub threads for query from sub table pthread_t *pidsOfSub = NULL; threadInfo *infosOfSub = NULL; - if ((g_queryInfo.subQueryInfo.sqlCount > 0) && (g_queryInfo.subQueryInfo.threadCnt > 0)) { - pidsOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(pthread_t)); - infosOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(threadInfo)); + if ((g_queryInfo.subQueryInfo.sqlCount > 0) + && (g_queryInfo.subQueryInfo.threadCnt > 0)) { + pidsOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * + sizeof(pthread_t)); + infosOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * + sizeof(threadInfo)); if ((NULL == pidsOfSub) || (NULL == infosOfSub)) { printf("malloc failed for create threads\n"); taos_close(taos); @@ -4890,23 +5427,23 @@ int subscribeTestProcess() { void initOfInsertMeta() { memset(&g_Dbs, 0, sizeof(SDbs)); - // set default values - tstrncpy(g_Dbs.host, "127.0.0.1", MAX_DB_NAME_SIZE); - g_Dbs.port = 6030; - tstrncpy(g_Dbs.user, TSDB_DEFAULT_USER, MAX_DB_NAME_SIZE); - tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, MAX_DB_NAME_SIZE); - g_Dbs.threadCount = 2; - g_Dbs.use_metric = true; + // set default values + tstrncpy(g_Dbs.host, "127.0.0.1", MAX_DB_NAME_SIZE); + g_Dbs.port = 6030; + tstrncpy(g_Dbs.user, TSDB_DEFAULT_USER, MAX_DB_NAME_SIZE); + tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, MAX_DB_NAME_SIZE); + g_Dbs.threadCount = 2; + g_Dbs.use_metric = true; } void initOfQueryMeta() { memset(&g_queryInfo, 0, sizeof(SQueryMetaInfo)); - // set default values - tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_DB_NAME_SIZE); - g_queryInfo.port = 6030; - tstrncpy(g_queryInfo.user, TSDB_DEFAULT_USER, MAX_DB_NAME_SIZE); - tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, MAX_DB_NAME_SIZE); + // set default values + tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_DB_NAME_SIZE); + g_queryInfo.port = 6030; + tstrncpy(g_queryInfo.user, TSDB_DEFAULT_USER, MAX_DB_NAME_SIZE); + tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, MAX_DB_NAME_SIZE); } void setParaFromArg(){ @@ -4928,42 +5465,21 @@ void setParaFromArg(){ g_Dbs.port = g_args.port; } + g_Dbs.threadCount = g_args.num_of_threads; + g_Dbs.threadCountByCreateTbl = g_args.num_of_threads; + g_Dbs.dbCount = 1; g_Dbs.db[0].drop = 1; - + tstrncpy(g_Dbs.db[0].dbName, g_args.database, MAX_DB_NAME_SIZE); g_Dbs.db[0].dbCfg.replica = g_args.replica; tstrncpy(g_Dbs.db[0].dbCfg.precision, "ms", MAX_DB_NAME_SIZE); - tstrncpy(g_Dbs.resultFile, g_args.output_file, MAX_FILE_NAME_LEN); g_Dbs.use_metric = g_args.use_metric; g_Dbs.insert_only = g_args.insert_only; - g_Dbs.db[0].superTblCount = 1; - tstrncpy(g_Dbs.db[0].superTbls[0].sTblName, "meters", MAX_TB_NAME_SIZE); 
- g_Dbs.db[0].superTbls[0].childTblCount = g_args.num_of_tables; - g_Dbs.threadCount = g_args.num_of_threads; - g_Dbs.threadCountByCreateTbl = 1; - g_Dbs.queryMode = g_args.mode; - - g_Dbs.db[0].superTbls[0].autoCreateTable = PRE_CREATE_SUBTBL; - g_Dbs.db[0].superTbls[0].superTblExists = TBL_NO_EXISTS; - g_Dbs.db[0].superTbls[0].childTblExists = TBL_NO_EXISTS; - g_Dbs.db[0].superTbls[0].insertRate = 0; - g_Dbs.db[0].superTbls[0].disorderRange = g_args.disorderRange; - g_Dbs.db[0].superTbls[0].disorderRatio = g_args.disorderRatio; - tstrncpy(g_Dbs.db[0].superTbls[0].childTblPrefix, g_args.tb_prefix, MAX_TB_NAME_SIZE); - tstrncpy(g_Dbs.db[0].superTbls[0].dataSource, "rand", MAX_TB_NAME_SIZE); - tstrncpy(g_Dbs.db[0].superTbls[0].insertMode, "taosc", MAX_TB_NAME_SIZE); - tstrncpy(g_Dbs.db[0].superTbls[0].startTimestamp, "2017-07-14 10:40:00.000", MAX_TB_NAME_SIZE); - g_Dbs.db[0].superTbls[0].timeStampStep = 10; - - // g_args.num_of_RPR; - g_Dbs.db[0].superTbls[0].insertRows = g_args.num_of_DPT; - g_Dbs.db[0].superTbls[0].maxSqlLen = TSDB_PAYLOAD_SIZE; - g_Dbs.do_aggreFunc = true; char dataString[STRING_LEN]; @@ -4971,32 +5487,58 @@ void setParaFromArg(){ memset(dataString, 0, STRING_LEN); - if (strcasecmp(data_type[0], "BINARY") == 0 || strcasecmp(data_type[0], "BOOL") == 0 || strcasecmp(data_type[0], "NCHAR") == 0 ) { + if (strcasecmp(data_type[0], "BINARY") == 0 + || strcasecmp(data_type[0], "BOOL") == 0 + || strcasecmp(data_type[0], "NCHAR") == 0 ) { g_Dbs.do_aggreFunc = false; } - g_Dbs.db[0].superTbls[0].columnCount = 0; - for (int i = 0; i < MAX_NUM_DATATYPE; i++) { - if (data_type[i] == NULL) { - break; - } - - tstrncpy(g_Dbs.db[0].superTbls[0].columns[i].dataType, data_type[i], MAX_TB_NAME_SIZE); - g_Dbs.db[0].superTbls[0].columns[i].dataLen = g_args.len_of_binary; - g_Dbs.db[0].superTbls[0].columnCount++; - } + if (g_args.use_metric) { + g_Dbs.db[0].superTblCount = 1; + tstrncpy(g_Dbs.db[0].superTbls[0].sTblName, "meters", MAX_TB_NAME_SIZE); + g_Dbs.db[0].superTbls[0].childTblCount = g_args.num_of_tables; + g_Dbs.threadCount = g_args.num_of_threads; + g_Dbs.threadCountByCreateTbl = 1; + g_Dbs.queryMode = g_args.mode; + + g_Dbs.db[0].superTbls[0].autoCreateTable = PRE_CREATE_SUBTBL; + g_Dbs.db[0].superTbls[0].superTblExists = TBL_NO_EXISTS; + g_Dbs.db[0].superTbls[0].childTblExists = TBL_NO_EXISTS; + g_Dbs.db[0].superTbls[0].disorderRange = g_args.disorderRange; + g_Dbs.db[0].superTbls[0].disorderRatio = g_args.disorderRatio; + tstrncpy(g_Dbs.db[0].superTbls[0].childTblPrefix, + g_args.tb_prefix, MAX_TB_NAME_SIZE); + tstrncpy(g_Dbs.db[0].superTbls[0].dataSource, "rand", MAX_TB_NAME_SIZE); + tstrncpy(g_Dbs.db[0].superTbls[0].insertMode, "taosc", MAX_TB_NAME_SIZE); + tstrncpy(g_Dbs.db[0].superTbls[0].startTimestamp, + "2017-07-14 10:40:00.000", MAX_TB_NAME_SIZE); + g_Dbs.db[0].superTbls[0].timeStampStep = 10; + + g_Dbs.db[0].superTbls[0].insertRows = g_args.num_of_DPT; + g_Dbs.db[0].superTbls[0].maxSqlLen = TSDB_PAYLOAD_SIZE; - if (g_Dbs.db[0].superTbls[0].columnCount > g_args.num_of_CPR) { - g_Dbs.db[0].superTbls[0].columnCount = g_args.num_of_CPR; - } else { - for (int i = g_Dbs.db[0].superTbls[0].columnCount; i < g_args.num_of_CPR; i++) { - tstrncpy(g_Dbs.db[0].superTbls[0].columns[i].dataType, "INT", MAX_TB_NAME_SIZE); - g_Dbs.db[0].superTbls[0].columns[i].dataLen = 0; + g_Dbs.db[0].superTbls[0].columnCount = 0; + for (int i = 0; i < MAX_NUM_DATATYPE; i++) { + if (data_type[i] == NULL) { + break; + } + + tstrncpy(g_Dbs.db[0].superTbls[0].columns[i].dataType, + data_type[i], 
MAX_TB_NAME_SIZE); + g_Dbs.db[0].superTbls[0].columns[i].dataLen = g_args.len_of_binary; g_Dbs.db[0].superTbls[0].columnCount++; } - } + + if (g_Dbs.db[0].superTbls[0].columnCount > g_args.num_of_CPR) { + g_Dbs.db[0].superTbls[0].columnCount = g_args.num_of_CPR; + } else { + for (int i = g_Dbs.db[0].superTbls[0].columnCount; i < g_args.num_of_CPR; i++) { + tstrncpy(g_Dbs.db[0].superTbls[0].columns[i].dataType, "INT", MAX_TB_NAME_SIZE); + g_Dbs.db[0].superTbls[0].columns[i].dataLen = 0; + g_Dbs.db[0].superTbls[0].columnCount++; + } + } - if (g_Dbs.use_metric) { tstrncpy(g_Dbs.db[0].superTbls[0].tags[0].dataType, "INT", MAX_TB_NAME_SIZE); g_Dbs.db[0].superTbls[0].tags[0].dataLen = 0; @@ -5004,8 +5546,10 @@ void setParaFromArg(){ g_Dbs.db[0].superTbls[0].tags[1].dataLen = g_args.len_of_binary; g_Dbs.db[0].superTbls[0].tagCount = 2; } else { + g_Dbs.threadCountByCreateTbl = 1; g_Dbs.db[0].superTbls[0].tagCount = 0; } + } /* Function to do regular expression check */ @@ -5050,7 +5594,7 @@ void querySqlFile(TAOS* taos, char* sqlFile) printf("failed to open file %s, reason:%s\n", sqlFile, strerror(errno)); return; } - + int read_len = 0; char * cmd = calloc(1, MAX_SQL_SIZE); size_t cmd_len = 0; @@ -5058,7 +5602,7 @@ void querySqlFile(TAOS* taos, char* sqlFile) size_t line_len = 0; double t = getCurrentTime(); - + while ((read_len = tgetline(&line, &line_len, fp)) != -1) { if (read_len >= MAX_SQL_SIZE) continue; line[--read_len] = '\0'; @@ -5075,7 +5619,15 @@ void querySqlFile(TAOS* taos, char* sqlFile) } memcpy(cmd + cmd_len, line, read_len); + verbosePrint("%s() LN%d cmd: %s\n", __func__, __LINE__, cmd); queryDbExec(taos, cmd, NO_INSERT_TYPE); + if (0 != queryDbExec(taos, cmd, NO_INSERT_TYPE)) { + printf("queryDbExec %s failed!\n", cmd); + tmfree(cmd); + tmfree(line); + tmfclose(fp); + return; + } memset(cmd, 0, MAX_SQL_SIZE); cmd_len = 0; } @@ -5089,59 +5641,63 @@ void querySqlFile(TAOS* taos, char* sqlFile) return; } -int main(int argc, char *argv[]) { - parse_args(argc, argv, &g_args); - - if (g_args.metaFile) { - initOfInsertMeta(); - initOfQueryMeta(); - if (false == getInfoFromJsonFile(g_args.metaFile)) { - printf("Failed to read %s\n", g_args.metaFile); - return 1; - } - if (INSERT_MODE == g_jsonType) { +static void testMetaFile() { + if (INSERT_MODE == g_args.test_mode) { if (g_Dbs.cfgDir[0]) taos_options(TSDB_OPTION_CONFIGDIR, g_Dbs.cfgDir); - (void)insertTestProcess(); - } else if (QUERY_MODE == g_jsonType) { - if (g_queryInfo.cfgDir[0]) taos_options(TSDB_OPTION_CONFIGDIR, g_queryInfo.cfgDir); - (void)queryTestProcess(); - } else if (SUBSCRIBE_MODE == g_jsonType) { - if (g_queryInfo.cfgDir[0]) taos_options(TSDB_OPTION_CONFIGDIR, g_queryInfo.cfgDir); - (void)subscribeTestProcess(); + insertTestProcess(); + } else if (QUERY_MODE == g_args.test_mode) { + if (g_queryInfo.cfgDir[0]) + taos_options(TSDB_OPTION_CONFIGDIR, g_queryInfo.cfgDir); + queryTestProcess(); + } else if (SUBSCRIBE_MODE == g_args.test_mode) { + if (g_queryInfo.cfgDir[0]) + taos_options(TSDB_OPTION_CONFIGDIR, g_queryInfo.cfgDir); + subscribeTestProcess(); } else { ; } - } else { - - memset(&g_Dbs, 0, sizeof(SDbs)); - g_jsonType = INSERT_MODE; - setParaFromArg(); +} - if (NULL != g_args.sqlFile) { - TAOS* qtaos = taos_connect( - g_Dbs.host, g_Dbs.user, g_Dbs.password, g_Dbs.db[0].dbName, g_Dbs.port); - querySqlFile(qtaos, g_args.sqlFile); - taos_close(qtaos); - return 0; - } - - (void)insertTestProcess(); - if (g_Dbs.insert_only) return 0; +static void testCmdLine() { + + g_args.test_mode = INSERT_MODE; + 
insertTestProcess(); + + if (g_Dbs.insert_only) + return; // select if (false == g_Dbs.insert_only) { // query data - + pthread_t read_id; threadInfo *rInfo = malloc(sizeof(threadInfo)); rInfo->start_time = 1500000000000; // 2017-07-14 10:40:00.000 rInfo->start_table_id = 0; - rInfo->end_table_id = g_Dbs.db[0].superTbls[0].childTblCount - 1; + //rInfo->do_aggreFunc = g_Dbs.do_aggreFunc; - //rInfo->nrecords_per_table = g_Dbs.db[0].superTbls[0].insertRows; - rInfo->superTblInfo = &g_Dbs.db[0].superTbls[0]; - rInfo->taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, g_Dbs.db[0].dbName, g_Dbs.port); - strcpy(rInfo->tb_prefix, g_Dbs.db[0].superTbls[0].childTblPrefix); + if (g_args.use_metric) { + rInfo->end_table_id = g_Dbs.db[0].superTbls[0].childTblCount - 1; + rInfo->superTblInfo = &g_Dbs.db[0].superTbls[0]; + strcpy(rInfo->tb_prefix, + g_Dbs.db[0].superTbls[0].childTblPrefix); + } else { + rInfo->end_table_id = g_args.num_of_tables -1; + strcpy(rInfo->tb_prefix, g_args.tb_prefix); + } + + rInfo->taos = taos_connect( + g_Dbs.host, + g_Dbs.user, + g_Dbs.password, + g_Dbs.db[0].dbName, + g_Dbs.port); + if (rInfo->taos == NULL) { + fprintf(stderr, "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL)); + free(rInfo); + exit(-1); + } + strcpy(rInfo->fp, g_Dbs.resultFile); if (!g_Dbs.use_metric) { @@ -5153,9 +5709,42 @@ int main(int argc, char *argv[]) { taos_close(rInfo->taos); free(rInfo); } +} + +int main(int argc, char *argv[]) { + parse_args(argc, argv, &g_args); + + debugPrint("meta file: %s\n", g_args.metaFile); + + if (g_args.metaFile) { + initOfInsertMeta(); + initOfQueryMeta(); + + if (false == getInfoFromJsonFile(g_args.metaFile)) { + printf("Failed to read %s\n", g_args.metaFile); + return 1; + } + + testMetaFile(); + } else { + memset(&g_Dbs, 0, sizeof(SDbs)); + setParaFromArg(); + + if (NULL != g_args.sqlFile) { + TAOS* qtaos = taos_connect( + g_Dbs.host, + g_Dbs.user, + g_Dbs.password, + g_Dbs.db[0].dbName, + g_Dbs.port); + querySqlFile(qtaos, g_args.sqlFile); + taos_close(qtaos); + + } else { + testCmdLine(); + } } - taos_cleanup(); return 0; } diff --git a/src/kit/taosdump/taosdump.c b/src/kit/taosdump/taosdump.c index e2105ad18b1df1c430e42f5fb63d8cab06f12663..3cee5f1b1dc02e0218c97e66c4af9acb6c77c69f 100644 --- a/src/kit/taosdump/taosdump.c +++ b/src/kit/taosdump/taosdump.c @@ -769,6 +769,7 @@ int32_t taosSaveTableOfMetricToTempFile(TAOS *taosCon, char* metric, struct argu } sprintf(tmpBuf, ".select-tbname.tmp"); (void)remove(tmpBuf); + free(tblBuf); close(fd); return -1; } @@ -1523,6 +1524,7 @@ int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *tao } sprintf(tmpBuf, ".show-tables.tmp"); (void)remove(tmpBuf); + free(tblBuf); close(fd); return -1; } diff --git a/src/rpc/src/rpcMain.c b/src/rpc/src/rpcMain.c index 6d34c9fb1531c692604612d937cf8aa0fc9d848f..cae227cbdb7363bc0b26c80f1cc9edcbf1574d71 100644 --- a/src/rpc/src/rpcMain.c +++ b/src/rpc/src/rpcMain.c @@ -1281,7 +1281,7 @@ static void rpcSendReqToServer(SRpcInfo *pRpc, SRpcReqContext *pContext) { SRpcConn *pConn = rpcSetupConnToServer(pContext); if (pConn == NULL) { pContext->code = terrno; - taosTmrStart(rpcProcessConnError, 0, pContext, pRpc->tmrCtrl); + taosTmrStart(rpcProcessConnError, 1, pContext, pRpc->tmrCtrl); return; } diff --git a/tests/Jenkinsfile b/tests/Jenkinsfile index 2f8b0de09d928404ae5e2f2925a452b4e1bfa150..7cdcfb2e24f92a96e2d3cc15324ca4f3ac4bc7a8 100644 --- a/tests/Jenkinsfile +++ b/tests/Jenkinsfile @@ -55,9 +55,15 @@ pipeline { sh ''' cd ${WKC}/tests 
./test-all.sh b1 + date''' + sh ''' cd ${WKC}/tests ./test-all.sh full jdbc date''' + sh ''' + cd ${WKC}/tests + ./test-all.sh full unit + date''' } } diff --git a/tests/examples/JDBC/springbootdemo/pom.xml b/tests/examples/JDBC/springbootdemo/pom.xml index 52fb8caa90b1ee8ef0566ee7e87aae5199b6ea73..bd5f7efbc0321962a3f3f41a17bc9bde5dee9b27 100644 --- a/tests/examples/JDBC/springbootdemo/pom.xml +++ b/tests/examples/JDBC/springbootdemo/pom.xml @@ -63,7 +63,9 @@ com.taosdata.jdbc taos-jdbcdriver - 2.0.18 + 2.0.20 + + diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java index 8066126d62b195d3a5c16f3c580d6ff07fe32648..8f30c299466ca1f83bc689928b08e8001a87908d 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java +++ b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java @@ -10,4 +10,4 @@ public class SpringbootdemoApplication { public static void main(String[] args) { SpringApplication.run(SpringbootdemoApplication.class, args); } -} +} \ No newline at end of file diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java index 4a4109dcf326bd82067e3ab7153547f926e9f5ae..c153e27701403b672709a3585603b8630796d178 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java +++ b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java @@ -6,6 +6,7 @@ import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.*; import java.util.List; +import java.util.Map; @RequestMapping("/weather") @RestController @@ -20,7 +21,7 @@ public class WeatherController { * @return */ @GetMapping("/init") - public boolean init() { + public int init() { return weatherService.init(); } @@ -44,19 +45,23 @@ public class WeatherController { * @return */ @PostMapping("/{temperature}/{humidity}") - public int saveWeather(@PathVariable int temperature, @PathVariable float humidity) { + public int saveWeather(@PathVariable float temperature, @PathVariable int humidity) { return weatherService.save(temperature, humidity); } - /** - * upload multi weather info - * - * @param weatherList - * @return - */ - @PostMapping("/batch") - public int batchSaveWeather(@RequestBody List weatherList) { - return weatherService.save(weatherList); + @GetMapping("/count") + public int count() { + return weatherService.count(); + } + + @GetMapping("/subTables") + public List getSubTables() { + return weatherService.getSubTables(); + } + + @GetMapping("/avg") + public List avg() { + return weatherService.avg(); } } diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java index cae1a1aec03297d79bd8b7deb7ef1c387f81d740..ad6733558a9d548be196cf8c9c0c63dc96227b39 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java +++ 
b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java @@ -4,16 +4,26 @@ import com.taosdata.example.springbootdemo.domain.Weather; import org.apache.ibatis.annotations.Param; import java.util.List; +import java.util.Map; public interface WeatherMapper { + void dropDB(); + + void createDB(); + + void createSuperTable(); + + void createTable(Weather weather); + + List select(@Param("limit") Long limit, @Param("offset") Long offset); + int insert(Weather weather); - int batchInsert(List weatherList); + int count(); - List select(@Param("limit") Long limit, @Param("offset")Long offset); + List getSubTables(); - void createDB(); + List avg(); - void createTable(); } diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml index a9bcda0b00ca73f133b2f31622e5d6e0b034e5bf..2d3e0540650f35c1018992795ac33fb6cb7c4837 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml +++ b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml @@ -4,28 +4,29 @@ - - - + + + - - create database if not exists test; + + drop database if exists test + + + + create database if not exists test - - create table if not exists test.weather(ts timestamp, temperature int, humidity float); + + create table if not exists test.weather(ts timestamp, temperature float, humidity float) tags(location nchar(64), groupId int) - - ts, temperature, humidity - + + create table if not exists test.t#{groupId} using test.weather tags(#{location}, #{groupId}) + - - insert into test.weather (ts, temperature, humidity) values (now, #{temperature,jdbcType=INTEGER}, #{humidity,jdbcType=FLOAT}) + + insert into test.t#{groupId} (ts, temperature, humidity) values (#{ts}, ${temperature}, ${humidity}) - - insert into test.weather (ts, temperature, humidity) values - - (now + #{index}a, #{weather.temperature}, #{weather.humidity}) - - + + + + + + + + + + \ No newline at end of file diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java index 60565448ad7d66bb713e46ca5f62b41bbe905893..255b2cdca920a5568338e823d0744f2e9c4db7d3 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java +++ b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java @@ -6,12 +6,21 @@ import java.sql.Timestamp; public class Weather { - @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS",timezone = "GMT+8") + @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS", timezone = "GMT+8") private Timestamp ts; + private float temperature; + private float humidity; + private String location; + private int groupId; - private int temperature; + public Weather() { + } - private float humidity; + public Weather(Timestamp ts, float temperature, float humidity) { + this.ts = ts; + this.temperature = temperature; + this.humidity = humidity; + } public Timestamp getTs() { return ts; @@ -21,11 +30,11 @@ public class Weather { this.ts = ts; } - public int getTemperature() { + public float getTemperature() { return temperature; } - public void setTemperature(int temperature) { + public void 
setTemperature(float temperature) { this.temperature = temperature; } @@ -36,4 +45,20 @@ public class Weather { public void setHumidity(float humidity) { this.humidity = humidity; } + + public String getLocation() { + return location; + } + + public void setLocation(String location) { + this.location = location; + } + + public int getGroupId() { + return groupId; + } + + public void setGroupId(int groupId) { + this.groupId = groupId; + } } diff --git a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java index 31ce8f1dd96b7f4d006b0b10acf722f262afda33..0aef828e1cf19f9c612f9c8d0433ce1a361c7441 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java +++ b/tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java @@ -5,25 +5,41 @@ import com.taosdata.example.springbootdemo.domain.Weather; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; +import java.sql.Timestamp; import java.util.List; +import java.util.Map; +import java.util.Random; @Service public class WeatherService { @Autowired private WeatherMapper weatherMapper; + private Random random = new Random(System.currentTimeMillis()); + private String[] locations = {"北京", "上海", "广州", "深圳", "天津"}; - public boolean init() { + public int init() { + weatherMapper.dropDB(); weatherMapper.createDB(); - weatherMapper.createTable(); - return true; + weatherMapper.createSuperTable(); + long ts = System.currentTimeMillis(); + long thirtySec = 1000 * 30; + int count = 0; + for (int i = 0; i < 20; i++) { + Weather weather = new Weather(new Timestamp(ts + (thirtySec * i)), 30 * random.nextFloat(), random.nextInt(100)); + weather.setLocation(locations[random.nextInt(locations.length)]); + weather.setGroupId(i % locations.length); + weatherMapper.createTable(weather); + count += weatherMapper.insert(weather); + } + return count; } public List query(Long limit, Long offset) { return weatherMapper.select(limit, offset); } - public int save(int temperature, float humidity) { + public int save(float temperature, int humidity) { Weather weather = new Weather(); weather.setTemperature(temperature); weather.setHumidity(humidity); @@ -31,8 +47,15 @@ public class WeatherService { return weatherMapper.insert(weather); } - public int save(List weatherList) { - return weatherMapper.batchInsert(weatherList); + public int count() { + return weatherMapper.count(); } + public List getSubTables() { + return weatherMapper.getSubTables(); + } + + public List avg() { + return weatherMapper.avg(); + } } diff --git a/tests/examples/JDBC/springbootdemo/src/main/resources/application.properties b/tests/examples/JDBC/springbootdemo/src/main/resources/application.properties index 4fb68758c454b923c0a19e2e723a86c8b56ed88d..4d7e64d10576388827502a459df9e68da2721dbb 100644 --- a/tests/examples/JDBC/springbootdemo/src/main/resources/application.properties +++ b/tests/examples/JDBC/springbootdemo/src/main/resources/application.properties @@ -1,12 +1,14 @@ # datasource config - JDBC-JNI -spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver -spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8 -spring.datasource.username=root -spring.datasource.password=taosdata 
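This hunk switches the Spring Boot demo's datasource from the JDBC-JNI driver to the JDBC-RESTful driver, which sends every statement to TDengine's HTTP service instead of going through the native client library. A minimal Python sketch of that kind of round trip is shown below; it assumes the default REST port 6041, the root/taosdata account from these properties, and a reachable host named "master" as in the datasource URL, and it is only an illustration of the transport, not of what the driver does internally.

# Minimal sketch of an HTTP round trip against TDengine's REST service.
import requests

def rest_sql(sql, host="master", port=6041, user="root", password="taosdata"):
    url = "http://{}:{}/rest/sql".format(host, port)
    resp = requests.post(url, data=sql, auth=(user, password))   # SQL goes in the request body
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "succ":                             # "succ" marks a successful call
        raise RuntimeError("statement failed: {}".format(body))
    return body.get("data", [])

if __name__ == "__main__":
    for row in rest_sql("select * from test.weather limit 5"):
        print(row)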
+#spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver +#spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8 +#spring.datasource.username=root +#spring.datasource.password=taosdata # datasource config - JDBC-RESTful -#spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver -#spring.datasource.url=jdbc:TAOS-RS://master:6041/test?user=root&password=taosdata +spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver +spring.datasource.url=jdbc:TAOS-RS://master:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8 +spring.datasource.username=root +spring.datasource.password=taosdata spring.datasource.druid.initial-size=5 spring.datasource.druid.min-idle=5 diff --git a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java index 19f47e13d6112b8e638fc24e5d8f7f257602c0ae..c361df82b0aebb0d804b1a0982a0c1cf44ef5953 100644 --- a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java +++ b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java @@ -4,7 +4,7 @@ import com.taosdata.taosdemo.components.DataSourceFactory; import com.taosdata.taosdemo.components.JdbcTaosdemoConfig; import com.taosdata.taosdemo.domain.SuperTableMeta; import com.taosdata.taosdemo.service.DatabaseService; -import com.taosdata.taosdemo.service.QueryService; +import com.taosdata.taosdemo.service.SqlExecuteTask; import com.taosdata.taosdemo.service.SubTableService; import com.taosdata.taosdemo.service.SuperTableService; import com.taosdata.taosdemo.service.data.SuperTableMetaGenerator; @@ -32,6 +32,17 @@ public class TaosDemoApplication { } // 初始化 final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user, config.password); + if (config.executeSql != null && !config.executeSql.isEmpty() && !config.executeSql.replaceAll("\\s", "").isEmpty()) { + Thread task = new Thread(new SqlExecuteTask(dataSource, config.executeSql)); + task.start(); + try { + task.join(); + } catch (InterruptedException e) { + e.printStackTrace(); + } + return; + } + final DatabaseService databaseService = new DatabaseService(dataSource); final SuperTableService superTableService = new SuperTableService(dataSource); final SubTableService subTableService = new SubTableService(dataSource); @@ -96,7 +107,6 @@ public class TaosDemoApplication { // 查询 - /**********************************************************************************/ // 删除表 if (config.dropTable) { diff --git a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/components/JdbcTaosdemoConfig.java b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/components/JdbcTaosdemoConfig.java index 971c10dee2889543e95a70b244ea3cda462df3a6..974a2755a5a029d3a5fc681bc8c59b0aca1a7ca4 100644 --- a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/components/JdbcTaosdemoConfig.java +++ b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/components/JdbcTaosdemoConfig.java @@ -42,7 +42,7 @@ public final class JdbcTaosdemoConfig { public int rate = 10; public long range = 1000l; // select task - + public String executeSql; // drop task public boolean dropTable = false; @@ -89,7 +89,7 @@ public final class JdbcTaosdemoConfig { System.out.println("-rate The proportion of data out of order. effective only if order is 1. 
min 0, max 100, default is 10"); System.out.println("-range The range of data out of order. effective only if order is 1. default is 1000 ms"); // query task -// System.out.println("-sqlFile The select sql file"); + System.out.println("-executeSql execute a specific sql."); // drop task System.out.println("-dropTable Drop data before quit. Default is false"); System.out.println("--help Give this help list"); @@ -207,6 +207,9 @@ public final class JdbcTaosdemoConfig { range = Integer.parseInt(args[++i]); } // select task + if ("-executeSql".equals(args[i]) && i < args.length - 1) { + executeSql = args[++i]; + } // drop task if ("-dropTable".equals(args[i]) && i < args.length - 1) { diff --git a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/SqlExecuteTask.java b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/SqlExecuteTask.java new file mode 100644 index 0000000000000000000000000000000000000000..ff2e4d0af068a10e62933837817d2d2df0712a4c --- /dev/null +++ b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/SqlExecuteTask.java @@ -0,0 +1,36 @@ +package com.taosdata.taosdemo.service; + +import com.taosdata.taosdemo.utils.Printer; + +import javax.sql.DataSource; +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Statement; + +public class SqlExecuteTask implements Runnable { + private final DataSource dataSource; + private final String sql; + + public SqlExecuteTask(DataSource dataSource, String sql) { + this.dataSource = dataSource; + this.sql = sql; + } + + @Override + public void run() { + try (Connection conn = dataSource.getConnection(); Statement stmt = conn.createStatement()) { + long start = System.currentTimeMillis(); + boolean execute = stmt.execute(sql); + long end = System.currentTimeMillis(); + if (execute) { + ResultSet rs = stmt.getResultSet(); + Printer.printResult(rs); + } else { + Printer.printSql(sql, true, (end - start)); + } + } catch (SQLException e) { + e.printStackTrace(); + } + } +} diff --git a/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/Printer.java b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/Printer.java new file mode 100644 index 0000000000000000000000000000000000000000..a4627463ec5cd1bbccdb64b67506ba38f712de8f --- /dev/null +++ b/tests/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/Printer.java @@ -0,0 +1,27 @@ +package com.taosdata.taosdemo.utils; + +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; + +public class Printer { + + public static void printResult(ResultSet resultSet) throws SQLException { + ResultSetMetaData metaData = resultSet.getMetaData(); + while (resultSet.next()) { + for (int i = 1; i <= metaData.getColumnCount(); i++) { + String columnLabel = metaData.getColumnLabel(i); + String value = resultSet.getString(i); + System.out.printf("%s: %s\t", columnLabel, value); + } + System.out.println(); + } + } + + public static void printSql(String sql, boolean succeed, long cost) { + System.out.println("[ " + (succeed ? 
"OK" : "ERROR!") + " ] time cost: " + cost + " ms, execute statement ====> " + sql); + } + + private Printer() { + } +} diff --git a/tests/pytest/cluster/clusterSetup.py b/tests/pytest/cluster/clusterSetup.py index dbda5657b6528402cc898762cccc78d74a196cd8..8a264270219232447a598768b45a37779edba698 100644 --- a/tests/pytest/cluster/clusterSetup.py +++ b/tests/pytest/cluster/clusterSetup.py @@ -11,15 +11,9 @@ # -*- coding: utf-8 -*- -import os -import sys -sys.path.insert(0, os.getcwd()) from fabric import Connection -from util.sql import * -from util.log import * -import taos import random -import threading +import time import logging class Node: @@ -76,6 +70,19 @@ class Node: print("remove taosd error for node %d " % self.index) logging.exception(e) + def forceStopOneTaosd(self): + try: + self.conn.run("kill -9 $(ps -ax|grep taosd|awk '{print $1}')") + except Exception as e: + print("kill taosd error on node%d " % self.index) + + def startOneTaosd(self): + try: + self.conn.run("nohup taosd -c /etc/taos/ > /dev/null 2>&1 &") + except Exception as e: + print("start taosd error on node%d " % self.index) + logging.exception(e) + def installTaosd(self, packagePath): self.conn.put(packagePath, self.homeDir) self.conn.cd(self.homeDir) @@ -122,100 +129,51 @@ class Node: class Nodes: def __init__(self): - self.node1 = Node(1, 'root', '52.151.60.239', 'node1', 'r', '/root/') - self.node2 = Node(2, 'root', '52.183.32.246', 'node1', 'r', '/root/') - self.node3 = Node(3, 'root', '51.143.46.79', 'node1', 'r', '/root/') - self.node4 = Node(4, 'root', '52.183.2.76', 'node1', 'r', '/root/') - self.node5 = Node(5, 'root', '13.66.225.87', 'node1', 'r', '/root/') + self.tdnodes = [] + self.tdnodes.append(Node(0, 'root', '52.143.103.7', 'node1', 'a', '/root/')) + self.tdnodes.append(Node(1, 'root', '52.250.48.222', 'node2', 'a', '/root/')) + self.tdnodes.append(Node(2, 'root', '51.141.167.23', 'node3', 'a', '/root/')) + self.tdnodes.append(Node(3, 'root', '52.247.207.173', 'node4', 'a', '/root/')) + self.tdnodes.append(Node(4, 'root', '51.141.166.100', 'node5', 'a', '/root/')) + + def stopOneNode(self, index): + self.tdnodes[index].forceStopOneTaosd() + + def startOneNode(self, index): + self.tdnodes[index].startOneTaosd() def stopAllTaosd(self): - self.node1.stopTaosd() - self.node2.stopTaosd() - self.node3.stopTaosd() - + for i in range(len(self.tdnodes)): + self.tdnodes[i].stopTaosd() + def startAllTaosd(self): - self.node1.startTaosd() - self.node2.startTaosd() - self.node3.startTaosd() + for i in range(len(self.tdnodes)): + self.tdnodes[i].startTaosd() def restartAllTaosd(self): - self.node1.restartTaosd() - self.node2.restartTaosd() - self.node3.restartTaosd() + for i in range(len(self.tdnodes)): + self.tdnodes[i].restartTaosd() def addConfigs(self, configKey, configValue): - self.node1.configTaosd(configKey, configValue) - self.node2.configTaosd(configKey, configValue) - self.node3.configTaosd(configKey, configValue) + for i in range(len(self.tdnodes)): + self.tdnodes[i].configTaosd(configKey, configValue) - def removeConfigs(self, configKey, configValue): - self.node1.removeTaosConfig(configKey, configValue) - self.node2.removeTaosConfig(configKey, configValue) - self.node3.removeTaosConfig(configKey, configValue) + def removeConfigs(self, configKey, configValue): + for i in range(len(self.tdnodes)): + self.tdnodes[i].removeTaosConfig(configKey, configValue) def removeAllDataFiles(self): - self.node1.removeData() - self.node2.removeData() - self.node3.removeData() - -class ClusterTest: - def 
__init__(self, hostName): - self.host = hostName - self.user = "root" - self.password = "taosdata" - self.config = "/etc/taos" - self.dbName = "mytest" - self.stbName = "meters" - self.numberOfThreads = 20 - self.numberOfTables = 10000 - self.numberOfRecords = 1000 - self.tbPrefix = "t" - self.ts = 1538548685000 - self.repeat = 1 - - def connectDB(self): - self.conn = taos.connect( - host=self.host, - user=self.user, - password=self.password, - config=self.config) - - def createSTable(self, replica): - cursor = self.conn.cursor() - tdLog.info("drop database if exists %s" % self.dbName) - cursor.execute("drop database if exists %s" % self.dbName) - tdLog.info("create database %s replica %d" % (self.dbName, replica)) - cursor.execute("create database %s replica %d" % (self.dbName, replica)) - tdLog.info("use %s" % self.dbName) - cursor.execute("use %s" % self.dbName) - tdLog.info("drop table if exists %s" % self.stbName) - cursor.execute("drop table if exists %s" % self.stbName) - tdLog.info("create table %s(ts timestamp, current float, voltage int, phase int) tags(id int)" % self.stbName) - cursor.execute("create table %s(ts timestamp, current float, voltage int, phase int) tags(id int)" % self.stbName) - cursor.close() - - def insertData(self, threadID): - print("Thread %d: starting" % threadID) - cursor = self.conn.cursor() - tablesPerThread = int(self.numberOfTables / self.numberOfThreads) - baseTableID = tablesPerThread * threadID - for i in range (tablesPerThread): - cursor.execute("create table %s%d using %s tags(%d)" % (self.tbPrefix, baseTableID + i, self.stbName, baseTableID + i)) - query = "insert into %s%d values" % (self.tbPrefix, baseTableID + i) - base = self.numberOfRecords * i - for j in range(self.numberOfRecords): - query += "(%d, %f, %d, %d)" % (self.ts + base + j, random.random(), random.randint(210, 230), random.randint(0, 10)) - cursor.execute(query) - cursor.close() - print("Thread %d: finishing" % threadID) - - def run(self): - threads = [] - tdLog.info("Inserting data") - for i in range(self.numberOfThreads): - thread = threading.Thread(target=self.insertData, args=(i,)) - threads.append(thread) - thread.start() - - for i in range(self.numberOfThreads): - threads[i].join() \ No newline at end of file + for i in range(len(self.tdnodes)): + self.tdnodes[i].removeData() + +# kill taosd randomly every 10 mins +nodes = Nodes() +loop = 0 +while True: + loop = loop + 1 + index = random.randint(0, 4) + print("loop: %d, kill taosd on node%d" %(loop, index)) + nodes.stopOneNode(index) + time.sleep(60) + nodes.startOneNode(index) + time.sleep(600) \ No newline at end of file diff --git a/tests/pytest/crash_gen/crash_gen_main.py b/tests/pytest/crash_gen/crash_gen_main.py index 309c0df9108f340bf96d73529ccf8bb49c1c9692..43506c68d51b134347df48080ef7f3223b379775 100755 --- a/tests/pytest/crash_gen/crash_gen_main.py +++ b/tests/pytest/crash_gen/crash_gen_main.py @@ -35,16 +35,19 @@ import os import signal import traceback import resource -from guppy import hpy +# from guppy import hpy import gc from crash_gen.service_manager import ServiceManager, TdeInstance from crash_gen.misc import Logging, Status, CrashGenError, Dice, Helper, Progress from crash_gen.db import DbConn, MyTDSql, DbConnNative, DbManager +import crash_gen.settings import taos import requests +crash_gen.settings.init() + # Require Python 3 if sys.version_info[0] < 3: raise Exception("Must be using Python 3") @@ -259,6 +262,7 @@ class ThreadCoordinator: self._execStats = ExecutionStats() self._runStatus = 
Status.STATUS_RUNNING self._initDbs() + self._stepStartTime = None # Track how long it takes to execute each step def getTaskExecutor(self): return self._te @@ -394,6 +398,10 @@ class ThreadCoordinator: try: self._syncAtBarrier() # For now just cross the barrier Progress.emit(Progress.END_THREAD_STEP) + if self._stepStartTime : + stepExecTime = time.time() - self._stepStartTime + Progress.emitStr('{:.3f}s/{}'.format(stepExecTime, DbConnNative.totalRequests)) + DbConnNative.resetTotalRequests() # reset to zero except threading.BrokenBarrierError as err: self._execStats.registerFailure("Aborted due to worker thread timeout") Logging.error("\n") @@ -433,6 +441,7 @@ class ThreadCoordinator: # Then we move on to the next step Progress.emit(Progress.BEGIN_THREAD_STEP) + self._stepStartTime = time.time() self._releaseAllWorkerThreads(transitionFailed) if hasAbortedTask or transitionFailed : # abnormal ending, workers waiting at "gate" @@ -691,7 +700,7 @@ class AnyState: def canDropDb(self): # If user requests to run up to a number of DBs, # we'd then not do drop_db operations any more - if gConfig.max_dbs > 0 : + if gConfig.max_dbs > 0 or gConfig.use_shadow_db : return False return self._info[self.CAN_DROP_DB] @@ -699,6 +708,8 @@ class AnyState: return self._info[self.CAN_CREATE_FIXED_SUPER_TABLE] def canDropFixedSuperTable(self): + if gConfig.use_shadow_db: # duplicate writes to shaddow DB, in which case let's disable dropping s-table + return False return self._info[self.CAN_DROP_FIXED_SUPER_TABLE] def canAddData(self): @@ -1037,7 +1048,7 @@ class Database: _clsLock = threading.Lock() # class wide lock _lastInt = 101 # next one is initial integer _lastTick = 0 - _lastLaggingTick = 0 # lagging tick, for unsequenced insersions + _lastLaggingTick = 0 # lagging tick, for out-of-sequence (oos) data insertions def __init__(self, dbNum: int, dbc: DbConn): # TODO: remove dbc self._dbNum = dbNum # we assign a number to databases, for our testing purpose @@ -1093,21 +1104,24 @@ class Database: t3 = datetime.datetime(2012, 1, 1) # default "keep" is 10 years t4 = datetime.datetime.fromtimestamp( t3.timestamp() + elSec2) # see explanation above - Logging.debug("Setting up TICKS to start from: {}".format(t4)) + Logging.info("Setting up TICKS to start from: {}".format(t4)) return t4 @classmethod - def getNextTick(cls): + def getNextTick(cls): + ''' + Fetch a timestamp tick, with some random factor, may not be unique. 
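The tick logic being reworked here keeps two timestamp streams: a regular one that moves forward by one second per call, and a lagging one, held a couple of minutes behind, from which a value is occasionally handed out so that some rows arrive out of sequence. A compact standalone sketch of that scheme, with illustrative class and parameter names:

# Simplified two-stream tick source: mostly monotonic, occasionally lagging.
import datetime
import random
import threading

class TickSource:
    def __init__(self, start, mix_oos=True, lag_seconds=120, oos_one_in=20):
        self._lock = threading.Lock()
        self._tick = start
        self._lagging = start - datetime.timedelta(seconds=lag_seconds)
        self._mix_oos = mix_oos
        self._oos_one_in = oos_one_in

    def next_tick(self):
        with self._lock:                                  # keep both streams consistent across threads
            if self._mix_oos and random.randrange(self._oos_one_in) == 0:
                self._lagging += datetime.timedelta(seconds=1)
                return self._lagging                      # out-of-sequence (older) timestamp
            self._tick += datetime.timedelta(seconds=1)
            return self._tick                             # regular, strictly increasing timestamp

if __name__ == "__main__":
    src = TickSource(datetime.datetime(2012, 1, 1))
    print([t.isoformat() for t in (src.next_tick() for _ in range(5))])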
+ ''' with cls._clsLock: # prevent duplicate tick if cls._lastLaggingTick==0 or cls._lastTick==0 : # not initialized # 10k at 1/20 chance, should be enough to avoid overlaps tick = cls.setupLastTick() cls._lastTick = tick - cls._lastLaggingTick = tick + datetime.timedelta(0, -10000) + cls._lastLaggingTick = tick + datetime.timedelta(0, -60*2) # lagging behind 2 minutes, should catch up fast # if : # should be quite a bit into the future - if Dice.throw(20) == 0: # 1 in 20 chance, return lagging tick - cls._lastLaggingTick += datetime.timedelta(0, 1) # Go back in time 100 seconds + if gConfig.mix_oos_data and Dice.throw(20) == 0: # if asked to do so, and 1 in 20 chance, return lagging tick + cls._lastLaggingTick += datetime.timedelta(0, 1) # pick the next sequence from the lagging tick sequence return cls._lastLaggingTick else: # regular # add one second to it @@ -1334,7 +1348,8 @@ class Task(): elif self._isErrAcceptable(errno2, err.__str__()): self.logDebug("[=] Acceptable Taos library exception: errno=0x{:X}, msg: {}, SQL: {}".format( errno2, err, wt.getDbConn().getLastSql())) - print("_", end="", flush=True) + # print("_", end="", flush=True) + Progress.emit(Progress.ACCEPTABLE_ERROR) self._err = err else: # not an acceptable error errMsg = "[=] Unexpected Taos library exception ({}): errno=0x{:X}, msg: {}, SQL: {}".format( @@ -1563,8 +1578,11 @@ class TaskCreateDb(StateTransitionTask): # numReplica = Dice.throw(gConfig.max_replicas) + 1 # 1,2 ... N numReplica = gConfig.max_replicas # fixed, always repStr = "replica {}".format(numReplica) - self.execWtSql(wt, "create database {} {}" - .format(self._db.getName(), repStr) ) + updatePostfix = "update 1" if gConfig.verify_data else "" # allow update only when "verify data" is active + dbName = self._db.getName() + self.execWtSql(wt, "create database {} {} {} ".format(dbName, repStr, updatePostfix ) ) + if dbName == "db_0" and gConfig.use_shadow_db: + self.execWtSql(wt, "create database {} {} {} ".format("db_s", repStr, updatePostfix ) ) class TaskDropDb(StateTransitionTask): @classmethod @@ -1774,13 +1792,13 @@ class TdSuperTable: ]) # TODO: add more from 'top' - if aggExpr not in ['stddev(speed)']: #TODO: STDDEV not valid for super tables?! - sql = "select {} from {}.{}".format(aggExpr, self._dbName, self.getName()) - if Dice.throw(3) == 0: # 1 in X chance - sql = sql + ' GROUP BY color' - Progress.emit(Progress.QUERY_GROUP_BY) - # Logging.info("Executing GROUP-BY query: " + sql) - ret.append(SqlQuery(sql)) + # if aggExpr not in ['stddev(speed)']: # STDDEV not valid for super tables?! (Done in TD-1049) + sql = "select {} from {}.{}".format(aggExpr, self._dbName, self.getName()) + if Dice.throw(3) == 0: # 1 in X chance + sql = sql + ' GROUP BY color' + Progress.emit(Progress.QUERY_GROUP_BY) + # Logging.info("Executing GROUP-BY query: " + sql) + ret.append(SqlQuery(sql)) return ret @@ -1988,7 +2006,7 @@ class TaskAddData(StateTransitionTask): numRecords = self.LARGE_NUMBER_OF_RECORDS if gConfig.larger_data else self.SMALL_NUMBER_OF_RECORDS fullTableName = db.getName() + '.' 
+ regTableName - sql = "insert into {} values ".format(fullTableName) + sql = "INSERT INTO {} VALUES ".format(fullTableName) for j in range(numRecords): # number of records per table nextInt = db.getNextInt() nextTick = db.getNextTick() @@ -2016,12 +2034,24 @@ class TaskAddData(StateTransitionTask): # print("_w" + str(nextInt % 100), end="", flush=True) # Trace what was written try: - sql = "insert into {} values ('{}', {}, '{}');".format( # removed: tags ('{}', {}) + sql = "INSERT INTO {} VALUES ('{}', {}, '{}');".format( # removed: tags ('{}', {}) fullTableName, # ds.getFixedSuperTableName(), # ds.getNextBinary(), ds.getNextFloat(), nextTick, nextInt, nextColor) dbc.execute(sql) + + # Quick hack, attach an update statement here. TODO: create an "update" task + if (not gConfig.use_shadow_db) and Dice.throw(5) == 0: # 1 in N chance, plus not using shaddow DB + nextInt = db.getNextInt() + nextColor = db.getNextColor() + sql = "INSERt INTO {} VALUES ('{}', {}, '{}');".format( # "INSERt" means "update" here + fullTableName, + nextTick, nextInt, nextColor) + # sql = "UPDATE {} set speed={}, color='{}' WHERE ts='{}'".format( + # fullTableName, db.getNextInt(), db.getNextColor(), nextTick) + dbc.execute(sql) + except: # Any exception at all if gConfig.verify_data: self.unlockTable(fullTableName) @@ -2070,7 +2100,8 @@ class TaskAddData(StateTransitionTask): random.shuffle(tblSeq) # now we have random sequence for i in tblSeq: if (i in self.activeTable): # wow already active - print("x", end="", flush=True) # concurrent insertion + # print("x", end="", flush=True) # concurrent insertion + Progress.emit(Progress.CONCURRENT_INSERTION) else: self.activeTable.add(i) # marking it active @@ -2373,6 +2404,11 @@ class MainExec: '--larger-data', action='store_true', help='Write larger amount of data during write operations (default: false)') + parser.add_argument( + '-m', + '--mix-oos-data', + action='store_false', + help='Mix out-of-sequence data into the test data stream (default: true)') parser.add_argument( '-n', '--dynamic-db-table-names', @@ -2414,6 +2450,11 @@ class MainExec: '--verify-data', action='store_true', help='Verify data written in a number of places by reading back (default: false)') + parser.add_argument( + '-w', + '--use-shadow-db', + action='store_true', + help='Use a shaddow database to verify data integrity (default: false)') parser.add_argument( '-x', '--continue-on-exception', @@ -2422,6 +2463,11 @@ class MainExec: global gConfig gConfig = parser.parse_args() + crash_gen.settings.gConfig = gConfig # TODO: fix this hack, consolidate this global var + + # Sanity check for arguments + if gConfig.use_shadow_db and gConfig.max_dbs>1 : + raise CrashGenError("Cannot combine use-shadow-db with max-dbs of more than 1") Logging.clsInit(gConfig) diff --git a/tests/pytest/crash_gen/db.py b/tests/pytest/crash_gen/db.py index e38692dbe1e5c33ffe162015e3e60630fd51fa38..62a369c41a7ed0d73ab847232a206c2cabb53d53 100644 --- a/tests/pytest/crash_gen/db.py +++ b/tests/pytest/crash_gen/db.py @@ -18,6 +18,8 @@ import datetime import traceback # from .service_manager import TdeInstance +import crash_gen.settings + class DbConn: TYPE_NATIVE = "native-c" TYPE_REST = "rest-api" @@ -244,7 +246,7 @@ class MyTDSql: self._conn.close() # TODO: very important, cursor close does NOT close DB connection! 
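The db.py hunk that follows adds shadow-database mirroring to MyTDSql: when the new use-shadow-db option is active, each INSERT INTO db_0 ... and CREATE TABLE db_0 ... statement is replayed against db_s so the two copies can be compared later. A standalone sketch of that rewrite rule, with illustrative function and error names:

# Sketch of the db_0 -> db_s statement mirroring; 'execute' is any callable
# that runs one SQL statement.
SHADOW_PREFIXES = ("INSERT INTO", "CREATE TABLE")

def mirror_to_shadow(sql, execute, primary="db_0", shadow="db_s"):
    result = execute(sql)                                 # always run the original statement
    for prefix in SHADOW_PREFIXES:
        if sql.startswith(prefix):
            marker = "{} {}".format(prefix, primary)
            if not sql.startswith(marker):
                raise ValueError("expected {} statement to target {}: {}".format(prefix, primary, sql))
            execute(sql.replace(primary, shadow, 1))      # replay the same statement on the shadow DB
            break
    return result

if __name__ == "__main__":
    log = []
    mirror_to_shadow("INSERT INTO db_0.t1 VALUES (now, 1)", log.append)
    print(log)                                            # original statement plus its db_s twin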
self._cursor.close() - def _execInternal(self, sql): + def _execInternal(self, sql): startTime = time.time() # Logging.debug("Executing SQL: " + sql) ret = self._cursor.execute(sql) @@ -257,6 +259,27 @@ class MyTDSql: cls.longestQuery = sql cls.longestQueryTime = queryTime cls.lqStartTime = startTime + + # Now write to the shadow database + if crash_gen.settings.gConfig.use_shadow_db: + if sql[:11] == "INSERT INTO": + if sql[:16] == "INSERT INTO db_0": + sql2 = "INSERT INTO db_s" + sql[16:] + self._cursor.execute(sql2) + else: + raise CrashGenError("Did not find db_0 in INSERT statement: {}".format(sql)) + else: # not an insert statement + pass + + if sql[:12] == "CREATE TABLE": + if sql[:17] == "CREATE TABLE db_0": + sql2 = sql.replace('db_0', 'db_s') + self._cursor.execute(sql2) + else: + raise CrashGenError("Did not find db_0 in CREATE TABLE statement: {}".format(sql)) + else: # not an insert statement + pass + return ret def query(self, sql): @@ -302,12 +325,18 @@ class DbConnNative(DbConn): _lock = threading.Lock() # _connInfoDisplayed = False # TODO: find another way to display this totalConnections = 0 # Not private + totalRequests = 0 def __init__(self, dbTarget): super().__init__(dbTarget) self._type = self.TYPE_NATIVE self._conn = None - # self._cursor = None + # self._cursor = None + + @classmethod + def resetTotalRequests(cls): + with cls._lock: # force single threading for opening DB connections. # TODO: whaaat??!!! + cls.totalRequests = 0 def openByType(self): # Open connection # global gContainer @@ -356,6 +385,8 @@ class DbConnNative(DbConn): Logging.debug("[SQL] Executing SQL: {}".format(sql)) self._lastSql = sql nRows = self._tdSql.execute(sql) + cls = self.__class__ + cls.totalRequests += 1 Logging.debug( "[SQL] Execution Result, nRows = {}, SQL = {}".format( nRows, sql)) @@ -369,6 +400,8 @@ class DbConnNative(DbConn): Logging.debug("[SQL] Executing SQL: {}".format(sql)) self._lastSql = sql nRows = self._tdSql.query(sql) + cls = self.__class__ + cls.totalRequests += 1 Logging.debug( "[SQL] Query Result, nRows = {}, SQL = {}".format( nRows, sql)) diff --git a/tests/pytest/crash_gen/misc.py b/tests/pytest/crash_gen/misc.py index 6ea5691ce223eb1c14214d4b11c47cf85e29c795..9774ec5455961392d82ea2b4b59c0657b5704f9a 100644 --- a/tests/pytest/crash_gen/misc.py +++ b/tests/pytest/crash_gen/misc.py @@ -176,11 +176,13 @@ class Progress: SERVICE_START_NAP = 7 CREATE_TABLE_ATTEMPT = 8 QUERY_GROUP_BY = 9 + CONCURRENT_INSERTION = 10 + ACCEPTABLE_ERROR = 11 tokens = { STEP_BOUNDARY: '.', - BEGIN_THREAD_STEP: '[', - END_THREAD_STEP: '] ', + BEGIN_THREAD_STEP: ' [', + END_THREAD_STEP: ']', SERVICE_HEART_BEAT: '.Y.', SERVICE_RECONNECT_START: '', @@ -188,8 +190,14 @@ class Progress: SERVICE_START_NAP: '_zz', CREATE_TABLE_ATTEMPT: 'c', QUERY_GROUP_BY: 'g', + CONCURRENT_INSERTION: 'x', + ACCEPTABLE_ERROR: '_', } @classmethod def emit(cls, token): print(cls.tokens[token], end="", flush=True) + + @classmethod + def emitStr(cls, str): + print('({})'.format(str), end="", flush=True) diff --git a/tests/pytest/crash_gen/settings.py b/tests/pytest/crash_gen/settings.py new file mode 100644 index 0000000000000000000000000000000000000000..3c4c91e6e077c325c53d15918624db783957fc20 --- /dev/null +++ b/tests/pytest/crash_gen/settings.py @@ -0,0 +1,8 @@ +from __future__ import annotations +import argparse + +gConfig: argparse.Namespace + +def init(): + global gConfig + gConfig = [] \ No newline at end of file diff --git a/tests/pytest/functions/function_operations.py 
b/tests/pytest/functions/function_operations.py index 8bbe6dc9a34b54ef1ddecff4e3f186312e5235b1..e703147b6722f848263f170b89ce9f972f7a2435 100644 --- a/tests/pytest/functions/function_operations.py +++ b/tests/pytest/functions/function_operations.py @@ -69,15 +69,37 @@ class TDTestCase: tdSql.checkData(10, 0, None) # test for tarithoperator.c coverage - col_list = [ 'col1' , 'col2' , 'col3' , 'col4' , 'col5' , 'col6' , 'col11' , 'col12' , 'col13' , 'col14' , '1' ] + tdSql.execute("insert into test1 values(1537146000010,1,NULL,9,8,1.2,1.3,0,1,1,5,4,3,2)") + tdSql.execute("insert into test1 values(1537146000011,2,1,NULL,9,1.2,1.3,1,2,2,6,5,4,3)") + tdSql.execute("insert into test1 values(1537146000012,3,2,1,NULL,1.2,1.3,0,3,3,7,6,5,4)") + tdSql.execute("insert into test1 values(1537146000013,4,3,2,1,1.2,1.3,1,4,4,8,7,6,5)") + tdSql.execute("insert into test1 values(1537146000014,5,4,3,2,1.2,1.3,0,5,5,9,8,7,6)") + tdSql.execute("insert into test1 values(1537146000015,6,5,4,3,1.2,1.3,1,6,6,NULL,9,8,7)") + tdSql.execute("insert into test1 values(1537146000016,7,6,5,4,1.2,1.3,0,7,7,1,NULL,9,8)") + tdSql.execute("insert into test1 values(1537146000017,8,7,6,5,1.2,1.3,1,8,8,2,1,NULL,9)") + tdSql.execute("insert into test1 values(1537146000018,9,8,7,6,1.2,1.3,0,9,9,3,2,1,NULL)") + tdSql.execute("insert into test1 values(1537146000019,NULL,9,8,7,1.2,1.3,1,10,10,4,3,2,1)") + + self.ts = self.ts + self.rowNum + 10 + + tdSql.execute("insert into test1 values(%d, 1, 1, 1, 1, 1.1, 1.1, 1, NULL, '涛思数据3', 1, 1, 1, 1)" % ( self.ts + self.rowNum + 1 )) + tdSql.execute("insert into test1 values(%d, 1, 1, 1, 1, 1.1, 1.1, 1, 'taosdata', NULL, 1, 1, 1, 1)" % ( self.ts + self.rowNum + 2 )) + tdSql.execute("insert into test1 values(%d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)" % ( self.ts + self.rowNum + 3 )) + tdSql.execute("insert into test1 values(%d, 1, 1, 1, 1, NULL, 1.1, 1, NULL, '涛思数据3', 1, 1, 1, 1)" % ( self.ts + self.rowNum + 4 )) + tdSql.execute("insert into test1 values(%d, 1, 1, 1, 1, 1.1, NULL, 1, 'taosdata', NULL, 1, 1, 1, 1)" % ( self.ts + self.rowNum + 5 )) + self.rowNum = self.rowNum + 5 + + col_list = [ 'col1' , 'col2' , 'col3' , 'col4' , 'col5' , 'col6' , 'col7' , 'col8' , 'col9' , 'col11' , 'col12' , 'col13' , 'col14' , '1' , '1.1' , 'NULL' ] op_list = [ '+' , '-' , '*' , '/' , '%' ] + err_list = [ 'col7' , 'col8' , 'col9' , 'NULL' ] for i in col_list : for j in col_list : for k in op_list : sql = " select %s %s %s from test1 " % ( i , k , j ) - print(sql) - tdSql.query(sql) - + if i in err_list or j in err_list: + tdSql.error(sql) + else: + tdSql.query(sql) def stop(self): tdSql.close() tdLog.success("%s successfully executed" % __file__) diff --git a/tests/pytest/functions/function_operations_restart.py b/tests/pytest/functions/function_operations_restart.py index e0e9d36e466cc57722dba29afd413892a8fc2f37..3ba787a4a0f3f8af74f461f4e668078d726a428d 100644 --- a/tests/pytest/functions/function_operations_restart.py +++ b/tests/pytest/functions/function_operations_restart.py @@ -41,24 +41,24 @@ class TDTestCase: tdSql.error("select col1 + col9 from test1") tdSql.query("select col1 + col2 from test1") - tdSql.checkRows(11) + tdSql.checkRows(25) tdSql.checkData(0, 0, 2.0) tdSql.query("select col1 + col2 * col3 + col3 / col4 + col5 + col6 + col11 + col12 + col13 + col14 from test1") - tdSql.checkRows(11) + tdSql.checkRows(25) tdSql.checkData(0, 0, 7.2) #tdSql.execute("insert into test1(ts, col1) values(%d, 11)" % (self.ts + 11)) tdSql.query("select col1 + col2 from 
test1") - tdSql.checkRows(11) + tdSql.checkRows(25) tdSql.checkData(10, 0, None) tdSql.query("select col1 + col2 * col3 from test1") - tdSql.checkRows(11) + tdSql.checkRows(25) tdSql.checkData(10, 0, None) tdSql.query("select col1 + col2 * col3 + col3 / col4 + col5 + col6 + col11 + col12 + col13 + col14 from test1") - tdSql.checkRows(11) + tdSql.checkRows(25) tdSql.checkData(10, 0, None) diff --git a/tests/pytest/insert/metadataUpdate.py b/tests/pytest/insert/metadataUpdate.py index c3ea5c6a7e7b5dbff9ded75394793e7fc4a7fd6c..76b9a3ae8f818e3bf3784eb99974ce56b1071af5 100644 --- a/tests/pytest/insert/metadataUpdate.py +++ b/tests/pytest/insert/metadataUpdate.py @@ -52,8 +52,8 @@ class TDTestCase: p.start() p.join() p.terminate() - - tdSql.execute("insert into tb values(%d, 1, 2)" % (self.ts + 1)) + + tdSql.execute("insert into tb(ts, col1, col2) values(%d, 1, 2)" % (self.ts + 2)) print("==============step2") tdSql.query("select * from tb") diff --git a/tests/pytest/query/queryGroupbySort.py b/tests/pytest/query/queryGroupbySort.py index c2649a86db8dcd399d7cddf2d752d1ac0b898abb..063db936087c125e45051da0094b57a9fd184b9b 100644 --- a/tests/pytest/query/queryGroupbySort.py +++ b/tests/pytest/query/queryGroupbySort.py @@ -28,12 +28,13 @@ class TDTestCase: def run(self): tdSql.prepare() - tdSql.execute("CREATE TABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int)") - tdSql.execute("CREATE TABLE D1001 USING meters TAGS ('Beijing.Chaoyang', 2)") - tdSql.execute("CREATE TABLE D1002 USING meters TAGS ('Beijing.Chaoyang', 3)") + tdSql.execute("CREATE TABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int, t3 float, t4 double)") + tdSql.execute("CREATE TABLE D1001 USING meters TAGS ('Beijing.Chaoyang', 2 , NULL, NULL)") + tdSql.execute("CREATE TABLE D1002 USING meters TAGS ('Beijing.Chaoyang', 3 , NULL , 1.7)") + tdSql.execute("CREATE TABLE D1003 USING meters TAGS ('Beijing.Chaoyang', 3 , 1.1 , 1.7)") tdSql.execute("INSERT INTO D1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) (1538548696800, 12.3, 221, 0.31)") tdSql.execute("INSERT INTO D1002 VALUES (1538548685001, 10.5, 220, 0.28) (1538548696800, 12.3, 221, 0.31)") - + tdSql.execute("INSERT INTO D1003 VALUES (1538548685001, 10.5, 220, 0.28) (1538548696800, 12.3, 221, 0.31)") tdSql.query("SELECT SUM(current), AVG(voltage) FROM meters WHERE groupId > 1 INTERVAL(1s) GROUP BY location order by ts DESC") tdSql.checkRows(3) tdSql.checkData(0, 0, "2018-10-03 14:38:16") @@ -49,6 +50,12 @@ class TDTestCase: tdSql.error("SELECT SUM(current) as s, AVG(voltage) FROM meters WHERE groupId > 1 INTERVAL(1s) GROUP BY location order by s ASC") tdSql.error("SELECT SUM(current) as s, AVG(voltage) FROM meters WHERE groupId > 1 INTERVAL(1s) GROUP BY location order by s DESC") + + #add for TD-3170 + tdSql.query("select avg(current) from meters group by t3;") + tdSql.checkData(0, 0, 11.6) + tdSql.query("select avg(current) from meters group by t4;") + tdSql.query("select avg(current) from meters group by t3,t4;") def stop(self): tdSql.close() diff --git a/tests/pytest/tools/taosdemoTest.py b/tests/pytest/tools/taosdemoTest.py index 1cb2f71d8fc98e703795a839eb3441cf6e044d5d..c450570d2446a236ad8fa98c0817d2e31bcae795 100644 --- a/tests/pytest/tools/taosdemoTest.py +++ b/tests/pytest/tools/taosdemoTest.py @@ -24,7 +24,7 @@ class TDTestCase: tdLog.debug("start to execute %s" % __file__) tdSql.init(conn.cursor(), logSql) - self.numberOfTables = 
10000 + self.numberOfTables = 1000 self.numberOfRecords = 100 def getBuildPath(self): diff --git a/tests/script/general/cache/new_metrics.sim b/tests/script/general/cache/new_metrics.sim index a13dd23bb64c490f52fed7abe946ec7dfec90adb..00cbe843f955c618f8dfca56d9633eb911038d5e 100644 --- a/tests/script/general/cache/new_metrics.sim +++ b/tests/script/general/cache/new_metrics.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/cache/restart_metrics.sim b/tests/script/general/cache/restart_metrics.sim index f5035e1251c11c7c4010160cfac967d9b2617c21..4f2dae5fd95eba03de931234f6a71204948cb78a 100644 --- a/tests/script/general/cache/restart_metrics.sim +++ b/tests/script/general/cache/restart_metrics.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 @@ -51,7 +51,7 @@ print =============== step2 system sh/exec.sh -n dnode1 -s stop sleep 3000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start print =============== step3 diff --git a/tests/script/general/cache/restart_table.sim b/tests/script/general/cache/restart_table.sim index fcdb2fb593bac7d3eb13b82c7abfe268eafe36dc..9b0d8f9c5b483b79da41fb4e86c17902d328a89d 100644 --- a/tests/script/general/cache/restart_table.sim +++ b/tests/script/general/cache/restart_table.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 @@ -35,7 +35,7 @@ print =============== step2 system sh/exec.sh -n dnode1 -s stop sleep 3000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start print =============== step3 diff --git a/tests/script/general/column/commit.sim b/tests/script/general/column/commit.sim index e1b98d38147fa4d5721f4e38915f68370378998d..008bec3bf9dc6c0bd8ca0a4c2b5816a9a22c93b2 100644 --- a/tests/script/general/column/commit.sim +++ b/tests/script/general/column/commit.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/column/metrics.sim b/tests/script/general/column/metrics.sim index a46ab6560ae4d76c74408639ad6055eb093cb143..580e2320cd139af4bd9174f7eecef6dbdef97396 100644 --- a/tests/script/general/column/metrics.sim +++ b/tests/script/general/column/metrics.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/column/table.sim b/tests/script/general/column/table.sim index 7c9302aa083fd03d6b53190ff38927a5f115d688..46d5de1e82f957ac8ccd223ae28f80286a27a827 100644 --- a/tests/script/general/column/table.sim +++ b/tests/script/general/column/table.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system 
sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/compress/compress.sim b/tests/script/general/compress/compress.sim index 0df2a0d4cba3c6fb2b28ac8506e498c650c988b7..cecfe61229667086d83e963b8833ae9b0fabc2da 100644 --- a/tests/script/general/compress/compress.sim +++ b/tests/script/general/compress/compress.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c comp -v 1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/compress/compress2.sim b/tests/script/general/compress/compress2.sim index 007d1ec3394d0cc141bc8ece6ee3968407eb4c20..1e6868eaa6382218e4f4d1b8fb3f69c8762f77c8 100644 --- a/tests/script/general/compress/compress2.sim +++ b/tests/script/general/compress/compress2.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c comp -v 2 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/compress/uncompress.sim b/tests/script/general/compress/uncompress.sim index 9c71c8c1cc642557f929022f0d46a33399e10abf..49945dcb79ef69b31aa8cc4e8b0009e2ceb691f1 100644 --- a/tests/script/general/compress/uncompress.sim +++ b/tests/script/general/compress/uncompress.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c comp -v 1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/compute/avg.sim b/tests/script/general/compute/avg.sim index 027cfbe61deb28dc1767d8a571a0def3c300b628..db452b0344b6cbe14f313e613e737415780aeb27 100644 --- a/tests/script/general/compute/avg.sim +++ b/tests/script/general/compute/avg.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/bottom.sim b/tests/script/general/compute/bottom.sim index 5c76d5d171139cdebe8796fffe390e48809c4cb9..7af67ecbf0237bd591d84a243fd88b80e167e4f7 100644 --- a/tests/script/general/compute/bottom.sim +++ b/tests/script/general/compute/bottom.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/count.sim b/tests/script/general/compute/count.sim index fc481088a5774ea318f62a22d11dc5b375f53b7e..cf84918f5b7c2bd5a396b6457c75b4c59b62bb19 100644 --- a/tests/script/general/compute/count.sim +++ b/tests/script/general/compute/count.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/diff.sim b/tests/script/general/compute/diff.sim index 433a9356096c4fba8f7049db25c201ae55748e42..bc303a9ca59152fb34836da6d37535d7e049d754 100644 --- a/tests/script/general/compute/diff.sim +++ b/tests/script/general/compute/diff.sim @@ -1,7 +1,7 
@@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/diff2.sim b/tests/script/general/compute/diff2.sim index 7406771ac6769ffd077eab73f682c1d82a5d3d40..55fa1daa95516729fb37b2e541067fe7abda19e6 100644 --- a/tests/script/general/compute/diff2.sim +++ b/tests/script/general/compute/diff2.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/first.sim b/tests/script/general/compute/first.sim index f12ed2b73a78aba273804452a9121a193baa686c..fce334167b7b553bdd056b2372a2fae30f2af79b 100644 --- a/tests/script/general/compute/first.sim +++ b/tests/script/general/compute/first.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/interval.sim b/tests/script/general/compute/interval.sim index 79f34752223f93ac9a3d6446a934a54a16c60f08..c21003a6463d00d53a2b75bd402f5a2a905247a0 100644 --- a/tests/script/general/compute/interval.sim +++ b/tests/script/general/compute/interval.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/last.sim b/tests/script/general/compute/last.sim index 7b7c45044f1164bbb1510ae898c8a1ee26421b62..9f20f8c5aa63416119e2148da575166602d0a226 100644 --- a/tests/script/general/compute/last.sim +++ b/tests/script/general/compute/last.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/last_row.sim b/tests/script/general/compute/last_row.sim index 03fa9bc4af66d8585b3f2052f01092eb5ee7562e..3b28b0baa54d179693db7be1e7386a8701b8ff7f 100644 --- a/tests/script/general/compute/last_row.sim +++ b/tests/script/general/compute/last_row.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/leastsquare.sim b/tests/script/general/compute/leastsquare.sim index 20e2cc9d1757fafa25a950a44c0813c70f21aedc..1c8af7fe7fdceb6666000e79bb89ee7c11ac3cac 100644 --- a/tests/script/general/compute/leastsquare.sim +++ b/tests/script/general/compute/leastsquare.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/max.sim b/tests/script/general/compute/max.sim index ba87416292458ffdc3e1c9e37723f08b2d57a03b..f9665a823db130a0eb5b4a7eb169a5900948c8a8 100644 --- a/tests/script/general/compute/max.sim +++ 
b/tests/script/general/compute/max.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/min.sim b/tests/script/general/compute/min.sim index e981f8335f1242ea5258647298d18e56b3dcc164..4a9904e06594f63f4de7e13c193a2e8561ccbf10 100644 --- a/tests/script/general/compute/min.sim +++ b/tests/script/general/compute/min.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/compute/null.sim b/tests/script/general/compute/null.sim index de2a834684e0a555a9a5ea694f074b59a8a21af1..cd00b7a69dae3c039794b19573bd25cbb436be1a 100644 --- a/tests/script/general/compute/null.sim +++ b/tests/script/general/compute/null.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/percentile.sim b/tests/script/general/compute/percentile.sim index 9798aa2f5c6e1c1a2271c255fbd6cce81b4dc4e6..b0f4fff8d7e9d36d62fab8a587e359ad9ab4f5ec 100644 --- a/tests/script/general/compute/percentile.sim +++ b/tests/script/general/compute/percentile.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/stddev.sim b/tests/script/general/compute/stddev.sim index 2f6ffb097a91854a714a6c9b24ace8d43726c913..772ec8386a8d61167e4a208eef4c9b8830893f9c 100644 --- a/tests/script/general/compute/stddev.sim +++ b/tests/script/general/compute/stddev.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/sum.sim b/tests/script/general/compute/sum.sim index 230248a37093e7e163eba6fe9fc5ca63d386d298..8fad9927504221b0313892360ab30b5d1e8b59c9 100644 --- a/tests/script/general/compute/sum.sim +++ b/tests/script/general/compute/sum.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/compute/top.sim b/tests/script/general/compute/top.sim index a4f482b12759f9586a4660f0622b463b7a46b9da..1e9d302b7c9b8d4e8c690ff372d3c802fa2a8d7a 100644 --- a/tests/script/general/compute/top.sim +++ b/tests/script/general/compute/top.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 sql connect diff --git a/tests/script/general/db/backup/keep.sim b/tests/script/general/db/backup/keep.sim index 943022deba516fc2dbbc64bdf984659da0233cff..4d157b39857a639a83d24e1d8eb027492a4d40de 100644 --- a/tests/script/general/db/backup/keep.sim +++ 
b/tests/script/general/db/backup/keep.sim @@ -6,9 +6,9 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode3 -c numOfMnodes -v 1 diff --git a/tests/script/general/db/delete_reuse1.sim b/tests/script/general/db/delete_reuse1.sim index b18bb285d1c002b6607a0a6574798baf6a59bb3d..9d02e041c461000982079b86b5359a0b5d3107a3 100644 --- a/tests/script/general/db/delete_reuse1.sim +++ b/tests/script/general/db/delete_reuse1.sim @@ -5,10 +5,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/db/delete_reuse2.sim b/tests/script/general/db/delete_reuse2.sim index c82457ec42921cccf90d3351a09de41d8bf7197c..889c8f46f74783ff084e1dfaa06fb1fc9d900d5d 100644 --- a/tests/script/general/db/delete_reuse2.sim +++ b/tests/script/general/db/delete_reuse2.sim @@ -5,10 +5,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/db/delete_writing1.sim b/tests/script/general/db/delete_writing1.sim index 98e9a3d66f54bc40ac87feb3271403bace0319e8..010bd707434afe3fe85c2f8d0ecbe77e67d6b252 100644 --- a/tests/script/general/db/delete_writing1.sim +++ b/tests/script/general/db/delete_writing1.sim @@ -5,10 +5,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/db/len.sim b/tests/script/general/db/len.sim index 561245e666920bbe8b32d3fa3c7999f02f2553f7..8c08d2b09afe713907f6e8f104874ee19b7a2fae 100644 --- a/tests/script/general/db/len.sim +++ b/tests/script/general/db/len.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel 
-v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2000 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/db/show_create_db.sim b/tests/script/general/db/show_create_db.sim index edaa971c5cac45d1cb95e48acf84c9c6058901c1..f2e8011a0e789792a1a9a345e7f59eb3a8d4a0e7 100644 --- a/tests/script/general/db/show_create_db.sim +++ b/tests/script/general/db/show_create_db.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/db/show_create_table.sim b/tests/script/general/db/show_create_table.sim index 0d7408748a48a41e06dbd1a892aa03595f221383..5f2ae61343f51e52a39b11ebce75de2f2c305cad 100644 --- a/tests/script/general/db/show_create_table.sim +++ b/tests/script/general/db/show_create_table.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/db/tables.sim b/tests/script/general/db/tables.sim index 50b6805ecad44a74a7c421301c7bc6bdc625f21d..b32fae25e0a12084e473cd82f482501eb3d99716 100644 --- a/tests/script/general/db/tables.sim +++ b/tests/script/general/db/tables.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode2 -c maxVgroupsPerDb -v 4 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4 diff --git a/tests/script/general/db/vnodes.sim b/tests/script/general/db/vnodes.sim index b123e91ad120e26aad4648794397d2e06b627f8e..96f68f16322a678ad6a6002a7f4fa6b092254616 100644 --- a/tests/script/general/db/vnodes.sim +++ b/tests/script/general/db/vnodes.sim @@ -5,7 +5,7 @@ $maxTables = 4 $totalRows = $totalVnodes * $maxTables system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v $maxTables system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v $totalVnodes system sh/cfg.sh -n dnode1 -c maxVnodeConnections -v 100000 @@ -44,4 +44,4 @@ if $data00 != $totalRows then return -1 endi -system sh/exec.sh -n dnode1 -s stop -x SIGINT \ No newline at end of file +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/general/field/2.sim b/tests/script/general/field/2.sim index 812301dfbda072e302b8db029c6ee4a514683198..dc39e5ad602276e52bda3cce0e823a3deb3f124d 100644 --- a/tests/script/general/field/2.sim +++ b/tests/script/general/field/2.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/3.sim b/tests/script/general/field/3.sim index a531caaca0471887edf1e573f69cd5fa9839facf..b45e3a005b4af41494ef24cf93b130d2b41c8dc1 100644 --- a/tests/script/general/field/3.sim +++ b/tests/script/general/field/3.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/4.sim b/tests/script/general/field/4.sim 
index c530ff62d1f1a892550a7d755dc35a58c0a9b302..e219be87783cfea2f071db481a1f8eb457f20f5e 100644 --- a/tests/script/general/field/4.sim +++ b/tests/script/general/field/4.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/5.sim b/tests/script/general/field/5.sim index a676281313abd59e933513b9687d88f91a83ff5d..e02bbd122ff00a0f727ff8706e4a5cc3cf28a57b 100644 --- a/tests/script/general/field/5.sim +++ b/tests/script/general/field/5.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/6.sim b/tests/script/general/field/6.sim index f187d7db108a3b8c787479acf8a1511bbede1125..a852230cea7fd53f1fbc8e6cd44a67ba137ab3b3 100644 --- a/tests/script/general/field/6.sim +++ b/tests/script/general/field/6.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/bigint.sim b/tests/script/general/field/bigint.sim index c9bda687e19e28adec072bd593b5305eb8e4c2de..3bb120c6410a2ea2e9c671bb4aca56fc8014124f 100644 --- a/tests/script/general/field/bigint.sim +++ b/tests/script/general/field/bigint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/binary.sim b/tests/script/general/field/binary.sim index 36a43272ca094a85c2d95b51495ecaa33fa32c18..d601750b0d96b94bd60751de62c7fd139e1882be 100644 --- a/tests/script/general/field/binary.sim +++ b/tests/script/general/field/binary.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/bool.sim b/tests/script/general/field/bool.sim index d8e613572ace057fd0750e2ea584779a338954c5..796ed4e0aaa4c1b0b61462ef95ffaf98a671f2ed 100644 --- a/tests/script/general/field/bool.sim +++ b/tests/script/general/field/bool.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/double.sim b/tests/script/general/field/double.sim index f7dac3192afaf8083df902e6447bcfdbb3e89e5a..ef86585e5f69fdd63c417ef823a57f3502436e40 100644 --- a/tests/script/general/field/double.sim +++ b/tests/script/general/field/double.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/float.sim b/tests/script/general/field/float.sim index 3ab5602c55f8e5a12c9c978583c7b08891d91757..a01bcbdd4ca2a7a7b7009ded641cd88357fbcc07 100644 --- a/tests/script/general/field/float.sim +++ b/tests/script/general/field/float.sim @@ -1,6 
+1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/int.sim b/tests/script/general/field/int.sim index 5c3cec99cfab308559b6450c7c6e7f0487207d10..c04fe5d2b1d4cc2d43de0111d3e22dcd177fa8fc 100644 --- a/tests/script/general/field/int.sim +++ b/tests/script/general/field/int.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/single.sim b/tests/script/general/field/single.sim index d572f6df2af770f43f0d72b6a9053bfcc5898561..0cfb92aad5c795e14a304146419a97157176c3ac 100644 --- a/tests/script/general/field/single.sim +++ b/tests/script/general/field/single.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/smallint.sim b/tests/script/general/field/smallint.sim index 8bd99723217db4800645ef0ca1dffa72b2d8ec95..1d5566812efcd8e59ccc339e53577246c61c7a63 100644 --- a/tests/script/general/field/smallint.sim +++ b/tests/script/general/field/smallint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/field/tinyint.sim b/tests/script/general/field/tinyint.sim index 3a2d7bc44e55432f48a318f1a59bf31bea556986..f10e3d293a9d847f4f71a585603598a69ddc5e1c 100644 --- a/tests/script/general/field/tinyint.sim +++ b/tests/script/general/field/tinyint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/http/grafana.sim b/tests/script/general/http/grafana.sim index ca2f62a368fdcbb7d1370d3b6b04ccb5bcca6fa0..414b859bd3dcaa78ab7d814afd660c9894857cc3 100644 --- a/tests/script/general/http/grafana.sim +++ b/tests/script/general/http/grafana.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c http -v 1 #system sh/cfg.sh -n dnode1 -c adminRowLimit -v 10 system sh/cfg.sh -n dnode1 -c httpDebugFlag -v 135 diff --git a/tests/script/general/http/grafana_bug.sim b/tests/script/general/http/grafana_bug.sim index ce039af44c22975b164db15852e836175252ed8f..e66cd29deacc5a02d24568c236ffed79e114eb25 100644 --- a/tests/script/general/http/grafana_bug.sim +++ b/tests/script/general/http/grafana_bug.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 system sh/cfg.sh -n dnode1 -c http -v 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 #system sh/cfg.sh -n dnode1 -c adminRowLimit -v 10 system sh/cfg.sh -n dnode1 -c httpDebugFlag -v 135 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/http/prepare.sim b/tests/script/general/http/prepare.sim index 
6803643caf7d0b97822c18582597db90f74c5001..4bf6b6119833563f85ee8f8de1a2393bf07a2e71 100644 --- a/tests/script/general/http/prepare.sim +++ b/tests/script/general/http/prepare.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c http -v 1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/http/restful_insert.sim b/tests/script/general/http/restful_insert.sim index 90bf7e1b1568f02f3f897bedbbba889be649a169..b77a1dd49785bf9ad1f86e803f63e2ba0e40a8d7 100644 --- a/tests/script/general/http/restful_insert.sim +++ b/tests/script/general/http/restful_insert.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c http -v 1 system sh/cfg.sh -n dnode1 -c httpEnableRecordSql -v 1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/http/restful_limit.sim b/tests/script/general/http/restful_limit.sim index c925656b3612e9a52cffc5f53a5a3fb7290ab2e5..48a4fdf7d3f90d8d58337992b0b8198cb43e953c 100644 --- a/tests/script/general/http/restful_limit.sim +++ b/tests/script/general/http/restful_limit.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/http/telegraf.sim b/tests/script/general/http/telegraf.sim index f3426971864e6cf510ff6600b7731cf81f64f952..9fc153b2329bac70efe2506814af4a60b89966ae 100644 --- a/tests/script/general/http/telegraf.sim +++ b/tests/script/general/http/telegraf.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh sleep 2000 system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c http -v 1 system sh/cfg.sh -n dnode1 -c httpEnableRecordSql -v 1 system sh/cfg.sh -n dnode1 -c telegrafUseFieldNum -v 0 diff --git a/tests/script/general/import/basic.sim b/tests/script/general/import/basic.sim index 50c4059c52c65cf353f5987d44be78cc7649733d..f72d132ca39cdf0138b0c79d94857d6d646bccb5 100644 --- a/tests/script/general/import/basic.sim +++ b/tests/script/general/import/basic.sim @@ -19,10 +19,10 @@ system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode4 -c maxtablesPerVnode -v 2000 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 print ========= start dnode1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/import/commit.sim b/tests/script/general/import/commit.sim index 0aa63b14ff4af7c4c410dd2363b9ad8c98da39a9..3b4055d7128442ee67621ffafa9869527b7a3801 100644 --- a/tests/script/general/import/commit.sim +++ b/tests/script/general/import/commit.sim @@ -19,10 +19,10 @@ system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode4 -c maxtablesPerVnode -v 2000 -system sh/cfg.sh -n 
dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 print ========= start dnode1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/import/large.sim b/tests/script/general/import/large.sim index 3b82c0355aa17266bbd9addbfe13ca0d7235cb8d..23fbcc75eae55623e3ed2afa8b8ebf42c77a1012 100644 --- a/tests/script/general/import/large.sim +++ b/tests/script/general/import/large.sim @@ -19,10 +19,10 @@ system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 2000 system sh/cfg.sh -n dnode4 -c maxtablesPerVnode -v 2000 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 print ========= start dnode1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/insert/basic.sim b/tests/script/general/insert/basic.sim index c688342fc5cf3c3b77cdfcd45270bc1ccb685f0f..88eb30a1ad819a26cdc9d61cb692fde9f1446cd7 100644 --- a/tests/script/general/insert/basic.sim +++ b/tests/script/general/insert/basic.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_block1_file.sim b/tests/script/general/insert/query_block1_file.sim index 63f46d84f15d813acb1c1166331ab61ba15734a2..636b9530f9729a98e6e290a3c360e6dfc94945ba 100644 --- a/tests/script/general/insert/query_block1_file.sim +++ b/tests/script/general/insert/query_block1_file.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_block1_memory.sim b/tests/script/general/insert/query_block1_memory.sim index 516085f93f8c297b305ef5908010e898673f3a1d..823e466ee939263ed3fdbd0bd2fdd82a597a71e0 100644 --- a/tests/script/general/insert/query_block1_memory.sim +++ b/tests/script/general/insert/query_block1_memory.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_block2_file.sim b/tests/script/general/insert/query_block2_file.sim index a1fa920c0f389300b621cdaf953a330830282128..5b7438875e8bfff0eb63ed308f7e06b2e7428da7 100644 --- a/tests/script/general/insert/query_block2_file.sim +++ b/tests/script/general/insert/query_block2_file.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_block2_memory.sim b/tests/script/general/insert/query_block2_memory.sim index 
9ce0b942d43cb34ad0a8f2e537144283438f3a12..fb41981c89cd08bc8597de5ba48d379d07858af8 100644 --- a/tests/script/general/insert/query_block2_memory.sim +++ b/tests/script/general/insert/query_block2_memory.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_file_memory.sim b/tests/script/general/insert/query_file_memory.sim index d43328a65a521654c9d2ae871c76436d8a1251c2..f920c215c084f288fe7ce93752ac501b43a0cb9c 100644 --- a/tests/script/general/insert/query_file_memory.sim +++ b/tests/script/general/insert/query_file_memory.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/query_multi_file.sim b/tests/script/general/insert/query_multi_file.sim index 3b70dd621484f68bf34ed520e366930e67bd8f26..bbca53d309146a91dc87167756371b4ffbf2617b 100644 --- a/tests/script/general/insert/query_multi_file.sim +++ b/tests/script/general/insert/query_multi_file.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/insert/tcp.sim b/tests/script/general/insert/tcp.sim index 50383efb498a40df5f37542415971eebce615c5c..002d84dcaea4f81adb1911e5eb9df691ee4f3605 100644 --- a/tests/script/general/insert/tcp.sim +++ b/tests/script/general/insert/tcp.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/parser/alter.sim b/tests/script/general/parser/alter.sim index e604d2122e56b4b2d87ca6cd87597abcac02672c..31d255115ee7253a73d97e1b27decd14636038d6 100644 --- a/tests/script/general/parser/alter.sim +++ b/tests/script/general/parser/alter.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/alter1.sim b/tests/script/general/parser/alter1.sim index 26ed029050641d156705cb1a22fe110ce5536ee6..a52202fc1abe044151f8af1903159bb1422e8b61 100644 --- a/tests/script/general/parser/alter1.sim +++ b/tests/script/general/parser/alter1.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/alter_stable.sim b/tests/script/general/parser/alter_stable.sim index 17fc8d984a8899c136cb8451f0fa1e9038bf2cbd..8a7f4fa924268fdace68881f86d30cbdbd131935 100644 --- a/tests/script/general/parser/alter_stable.sim +++ b/tests/script/general/parser/alter_stable.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git 
a/tests/script/general/parser/auto_create_tb.sim b/tests/script/general/parser/auto_create_tb.sim index 926eb7547694860810e4018d814637c788787e54..a902469fde690722698351f37dda7e89adff4b84 100644 --- a/tests/script/general/parser/auto_create_tb.sim +++ b/tests/script/general/parser/auto_create_tb.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/between_and.sim b/tests/script/general/parser/between_and.sim index 2e031c4917cf396044a399493ef0be2849baa830..cdced47cb65aea79618540b57e159b741bf9288a 100644 --- a/tests/script/general/parser/between_and.sim +++ b/tests/script/general/parser/between_and.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/binary_escapeCharacter.sim b/tests/script/general/parser/binary_escapeCharacter.sim index dc54add763e5cf34595c479373020c7c4baae63c..f0589d154f9fc91da50e8d83e76da1d32939bcc1 100644 --- a/tests/script/general/parser/binary_escapeCharacter.sim +++ b/tests/script/general/parser/binary_escapeCharacter.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/col_arithmetic_operation.sim b/tests/script/general/parser/col_arithmetic_operation.sim index 3911c2dca6da3f13dd908b4a0e1cc9f6f7279cfa..7611bd582fc76d0e2c3fe2fe51bc2dcd74e7fbbe 100644 --- a/tests/script/general/parser/col_arithmetic_operation.sim +++ b/tests/script/general/parser/col_arithmetic_operation.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/columnValue.sim b/tests/script/general/parser/columnValue.sim index 4e6c6640042a80ffafd008538f2c46e5f43c0318..c98542fbf26a2d6098ca01c48c38d9fdca92b03f 100644 --- a/tests/script/general/parser/columnValue.sim +++ b/tests/script/general/parser/columnValue.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/commit.sim b/tests/script/general/parser/commit.sim index 4085ef620d26ac7c869d6f2a298022ac2fe19564..dfe521b92bff36d50f3ec0b3ae8a82c2a9fff304 100644 --- a/tests/script/general/parser/commit.sim +++ b/tests/script/general/parser/commit.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxTablesperVnode -v 100 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/constCol.sim b/tests/script/general/parser/constCol.sim index 4a8e443281dbaa0892106a8fe91bbdd6d61c3e8a..716d36e82bcc4d4505426b2c3784de03d2f77c30 100644 --- 
a/tests/script/general/parser/constCol.sim +++ b/tests/script/general/parser/constCol.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c dDebugFlag -v 135 system sh/cfg.sh -n dnode1 -c mDebugFlag -v 135 diff --git a/tests/script/general/parser/create_db.sim b/tests/script/general/parser/create_db.sim index ea1cc17a6a779391c8dfa064bf98bf70715d9e70..9ca84af136ad16735c9faf4f10ba913775f53103 100644 --- a/tests/script/general/parser/create_db.sim +++ b/tests/script/general/parser/create_db.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/create_mt.sim b/tests/script/general/parser/create_mt.sim index 9278fbfba415a89f79763b5fb8bbda67b0eb89c5..830b72f93f45835be1d21622dd595a4bbbb4f0dc 100644 --- a/tests/script/general/parser/create_mt.sim +++ b/tests/script/general/parser/create_mt.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/create_tb.sim b/tests/script/general/parser/create_tb.sim index 48b7a0fb1994f192cf984b37665fdc6281e9937f..d30a1392ef74deea0f57807e2828d4dc3f6930a3 100644 --- a/tests/script/general/parser/create_tb.sim +++ b/tests/script/general/parser/create_tb.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/create_tb_with_tag_name.sim b/tests/script/general/parser/create_tb_with_tag_name.sim index bbd5fc11e1eb662053860bca4f0d4210c1d1fbbc..130f4097f6547e50127afd86d8d73902a5653ae2 100644 --- a/tests/script/general/parser/create_tb_with_tag_name.sim +++ b/tests/script/general/parser/create_tb_with_tag_name.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/dbtbnameValidate.sim b/tests/script/general/parser/dbtbnameValidate.sim index 5fc67334c4b03329a7807c448eea38ebb4f2bdce..f2e6de81f1cd6bedf3b455bb35b68f669cd889e1 100644 --- a/tests/script/general/parser/dbtbnameValidate.sim +++ b/tests/script/general/parser/dbtbnameValidate.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/fill.sim b/tests/script/general/parser/fill.sim index aac79e1a3ca5a52a89d4f29ba43efc7be228c0f9..7bf00145d619f2455a35686b9fd7424efbcc3f2a 100644 --- a/tests/script/general/parser/fill.sim +++ b/tests/script/general/parser/fill.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git 
a/tests/script/general/parser/fill_stb.sim b/tests/script/general/parser/fill_stb.sim index a9547b8a9408a52aa3cda64301326b81c2e4655f..ba8ddbdf6ac6e9035398b3ac7c26861c02771e99 100644 --- a/tests/script/general/parser/fill_stb.sim +++ b/tests/script/general/parser/fill_stb.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/fill_us.sim b/tests/script/general/parser/fill_us.sim index a429df059bad3cd36301ecd4685db89d74090ba0..8cd2c333475a0d0140eb5c0c8ee0fa4186fccc97 100644 --- a/tests/script/general/parser/fill_us.sim +++ b/tests/script/general/parser/fill_us.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/first_last.sim b/tests/script/general/parser/first_last.sim index aeff740a5f790533e26fda99205d3cffc876deb1..f47f67ddc668c94f423cb3a42d97be3d52c8dcda 100644 --- a/tests/script/general/parser/first_last.sim +++ b/tests/script/general/parser/first_last.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxTablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/function.sim b/tests/script/general/parser/function.sim index af16bfd4f18a40d9964b6f4f7d4f6ea0840937a2..8eb6aa55f2d06f4d90e14cd4035d2c460b6b1b00 100644 --- a/tests/script/general/parser/function.sim +++ b/tests/script/general/parser/function.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/groupby.sim b/tests/script/general/parser/groupby.sim index 19b14e327c0cd180957748b656b8ad15c0ad35cd..a83c975be22e24684a50caccca4baa90bcae0b16 100644 --- a/tests/script/general/parser/groupby.sim +++ b/tests/script/general/parser/groupby.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/import.sim b/tests/script/general/parser/import.sim index d626f4fa74eb3f53a9f9118800494a05320678a7..4468ab87a923fc65eeb22eff97dc7d56bfdc7dc9 100644 --- a/tests/script/general/parser/import.sim +++ b/tests/script/general/parser/import.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/import_file.sim b/tests/script/general/parser/import_file.sim index e50fc92e28ee498989f8b71d1bc5dd50faa7baa3..217679047aa823119269d6a082270d75b9e2975f 100644 --- a/tests/script/general/parser/import_file.sim +++ b/tests/script/general/parser/import_file.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 
+system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 500 sql connect diff --git a/tests/script/general/parser/insert_multiTbl.sim b/tests/script/general/parser/insert_multiTbl.sim index e9ee4fcf98666ddf81816c367927c8e528de3f42..b17323280eb0297fc648fc6d06b38856b7a2299e 100644 --- a/tests/script/general/parser/insert_multiTbl.sim +++ b/tests/script/general/parser/insert_multiTbl.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 500 sql connect diff --git a/tests/script/general/parser/insert_tb.sim b/tests/script/general/parser/insert_tb.sim index f212325f26315f815a1a9f69ea84844b57579081..1e431aef3dc8355b4766abb81db6a94ee34052bf 100644 --- a/tests/script/general/parser/insert_tb.sim +++ b/tests/script/general/parser/insert_tb.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/interp.sim b/tests/script/general/parser/interp.sim index 13b6a08024206b99b410ac06a913d14078c11bfc..3fb91e36c66985d90776b33b89607fa9a272d500 100644 --- a/tests/script/general/parser/interp.sim +++ b/tests/script/general/parser/interp.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/join.sim b/tests/script/general/parser/join.sim index 56f115051cd4e558839d958a5177164b9fcf7a91..392228e12122bcbcd0d336ec9dbf32a0cf9ef4c5 100644 --- a/tests/script/general/parser/join.sim +++ b/tests/script/general/parser/join.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c debugFlag -v 135 system sh/cfg.sh -n dnode1 -c rpcDebugFlag -v 135 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 diff --git a/tests/script/general/parser/join_multivnode.sim b/tests/script/general/parser/join_multivnode.sim index 9d7cdfe3d7598a77ade1861fe6a96513f6c828f1..2322496a94467d44e3fdc5eabe29dded23e7dbde 100644 --- a/tests/script/general/parser/join_multivnode.sim +++ b/tests/script/general/parser/join_multivnode.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/lastrow.sim b/tests/script/general/parser/lastrow.sim index d1eadfb67a43c027b54c66967d8818454cc81d81..2b8f294d5d058f4b7cc8c45380862180e35a5899 100644 --- a/tests/script/general/parser/lastrow.sim +++ b/tests/script/general/parser/lastrow.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/limit.sim b/tests/script/general/parser/limit.sim index 17636dfb74d117187db66f5b66918fdd4ba9500b..23b85095c54c9bb439c0766e379b8950778f8d90 100644 --- 
a/tests/script/general/parser/limit.sim +++ b/tests/script/general/parser/limit.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/limit1.sim b/tests/script/general/parser/limit1.sim index 2c40f0af2bfd3b38a43cefa4501d992dcbcfa661..e37bea92207d5a3da34033bbe135570344751375 100644 --- a/tests/script/general/parser/limit1.sim +++ b/tests/script/general/parser/limit1.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/limit1_tblocks100.sim b/tests/script/general/parser/limit1_tblocks100.sim index 45ead58ba067be400f60c39a15cc58adb2b75a05..4546ffdb7910a7dd999a119743fa36682fe0eade 100644 --- a/tests/script/general/parser/limit1_tblocks100.sim +++ b/tests/script/general/parser/limit1_tblocks100.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/limit2.sim b/tests/script/general/parser/limit2.sim index 0e7e13b6de0ea5eb40fd0609f9591b839157f5d6..336f9234c82ab51b1ff478e21f99e058074a2818 100644 --- a/tests/script/general/parser/limit2.sim +++ b/tests/script/general/parser/limit2.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c rowsInFileBlock -v 255 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/limit2_tblocks100.sim b/tests/script/general/parser/limit2_tblocks100.sim index 19cea74e708beadcd1c9185b8b7d911f3e4f872b..11f7a15eb06a3de4bc5ab2eda6acade17abe5000 100644 --- a/tests/script/general/parser/limit2_tblocks100.sim +++ b/tests/script/general/parser/limit2_tblocks100.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c rowsInFileBlock -v 255 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/mixed_blocks.sim b/tests/script/general/parser/mixed_blocks.sim index 8208963858721c88262af385fe4d9336012d63e8..c20cf9a915e49d014e29ce4bb817802f22346f9e 100644 --- a/tests/script/general/parser/mixed_blocks.sim +++ b/tests/script/general/parser/mixed_blocks.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/nchar.sim b/tests/script/general/parser/nchar.sim index 3dcfca7503cdf82b8896a715e1393e614a498c34..786cee651b793b23ec8519be400554567af49852 100644 --- a/tests/script/general/parser/nchar.sim +++ b/tests/script/general/parser/nchar.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c 
walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/null_char.sim b/tests/script/general/parser/null_char.sim index 4e7c8ab3548a54a465801c602c0d6572b347b284..cb65290d2548c4ca6e7377d18b33d5c1768e818e 100644 --- a/tests/script/general/parser/null_char.sim +++ b/tests/script/general/parser/null_char.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/projection_limit_offset.sim b/tests/script/general/parser/projection_limit_offset.sim index a92493b7f4a9dfa180836360c7c39b37a4991c83..ffbcb28ffd9b4e15f707509dc5cc808ef3f8ce4a 100644 --- a/tests/script/general/parser/projection_limit_offset.sim +++ b/tests/script/general/parser/projection_limit_offset.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/selectResNum.sim b/tests/script/general/parser/selectResNum.sim index 20b502447836409ca06c6acaebd7bd55921f7404..dfd204e15240c93c1f9bcde32b8eb65c0918604a 100644 --- a/tests/script/general/parser/selectResNum.sim +++ b/tests/script/general/parser/selectResNum.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 200 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/select_across_vnodes.sim b/tests/script/general/parser/select_across_vnodes.sim index 6c473a35d14cccf5cb24792547649500a9935563..9bf61ee61d16a5220b6ae501b8c9d345f9131c1e 100644 --- a/tests/script/general/parser/select_across_vnodes.sim +++ b/tests/script/general/parser/select_across_vnodes.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 5 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/select_distinct_tag.sim b/tests/script/general/parser/select_distinct_tag.sim index 7ea9dc444fb3befc4409a0bc0c74a3710838ebaa..d8e92d4bc5ed3e3a1def0b33faf23ec66047227d 100644 --- a/tests/script/general/parser/select_distinct_tag.sim +++ b/tests/script/general/parser/select_distinct_tag.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 5 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/select_from_cache_disk.sim b/tests/script/general/parser/select_from_cache_disk.sim index 3d2cc0b70060ed9124205a00e3ea9922f2ed203b..7f8af52c6bd395416f982554083a81ee1939f059 100644 --- a/tests/script/general/parser/select_from_cache_disk.sim +++ b/tests/script/general/parser/select_from_cache_disk.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 
system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/select_with_tags.sim b/tests/script/general/parser/select_with_tags.sim index 5428f98593ae473f9df1038ba4d68a1a2ac38386..38a514a51b0fce41e003a1040bf264eeca3bf29a 100644 --- a/tests/script/general/parser/select_with_tags.sim +++ b/tests/script/general/parser/select_with_tags.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/set_tag_vals.sim b/tests/script/general/parser/set_tag_vals.sim index 92380ace8419c7756975fa4ce19482b845f804bf..74184f94d47b07684a233ef692769094141a3f56 100644 --- a/tests/script/general/parser/set_tag_vals.sim +++ b/tests/script/general/parser/set_tag_vals.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/single_row_in_tb.sim b/tests/script/general/parser/single_row_in_tb.sim index bc9362904163c7abf9d32f517a31e503801fcab4..5de2a51f0f81d9286184c11e97d682a2b82ebdcd 100644 --- a/tests/script/general/parser/single_row_in_tb.sim +++ b/tests/script/general/parser/single_row_in_tb.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/slimit.sim b/tests/script/general/parser/slimit.sim index bfb97b52618b7167434680c41d16f23786312fea..0af31f982604a3b6c6e0901cd94795fc503edd4b 100644 --- a/tests/script/general/parser/slimit.sim +++ b/tests/script/general/parser/slimit.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/slimit1.sim b/tests/script/general/parser/slimit1.sim index 901da4cab226bfaba705f7a0976731f017b8d941..2dede439ec38c862afcce5e1658fa02824282087 100644 --- a/tests/script/general/parser/slimit1.sim +++ b/tests/script/general/parser/slimit1.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/slimit_alter_tags.sim b/tests/script/general/parser/slimit_alter_tags.sim index ad557e891b0cb0b715da1c494c98e4a16213db8e..53af0f3e8470b0708f2193b09ed1797c8abc9a52 100644 --- a/tests/script/general/parser/slimit_alter_tags.sim +++ b/tests/script/general/parser/slimit_alter_tags.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 2 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/stableOp.sim 
b/tests/script/general/parser/stableOp.sim index f4b1bd4b8aa67df48024b8f9fa2f4015bbb14662..8647657e7bee1ea73dcdbdc6f346e2279dd58cd5 100644 --- a/tests/script/general/parser/stableOp.sim +++ b/tests/script/general/parser/stableOp.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 diff --git a/tests/script/general/parser/tags_dynamically_specifiy.sim b/tests/script/general/parser/tags_dynamically_specifiy.sim index 87b278da095463e2c51bb4c0c8bec8b22d2e873c..f6b3dabf153e502b598d1bea8545db3232ceef2d 100644 --- a/tests/script/general/parser/tags_dynamically_specifiy.sim +++ b/tests/script/general/parser/tags_dynamically_specifiy.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/tags_filter.sim b/tests/script/general/parser/tags_filter.sim index f0ac3bdcbafcc06c7040215fa4d9e54948ac0758..3d3e79b6f52928af9d3333810f57300e52e0ebb0 100644 --- a/tests/script/general/parser/tags_filter.sim +++ b/tests/script/general/parser/tags_filter.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/tbnameIn.sim b/tests/script/general/parser/tbnameIn.sim index 2ee5f38ab1b48a485be06376da08612bee9b98e8..003a86f90b2f36602b4e999aee2974ef259d3670 100644 --- a/tests/script/general/parser/tbnameIn.sim +++ b/tests/script/general/parser/tbnameIn.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 100 sql connect diff --git a/tests/script/general/parser/timestamp.sim b/tests/script/general/parser/timestamp.sim index 7d7362bcb5200f96f6b216430d5f712812071e66..0a87bce51dedecf0c39179ffc7b1aa864b5e7823 100644 --- a/tests/script/general/parser/timestamp.sim +++ b/tests/script/general/parser/timestamp.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/topbot.sim b/tests/script/general/parser/topbot.sim index f5c78d07a1d362ffdd5ebe5c989d88cc35a33e72..3e73e4967be505d80ebf47809e63c89bc2846a04 100644 --- a/tests/script/general/parser/topbot.sim +++ b/tests/script/general/parser/topbot.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 200 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/parser/union.sim b/tests/script/general/parser/union.sim index 1b184245c38245a991e8d4cb16f6aab65342ac6b..d50daea6566af4dd11c51e195e5b5a7f31ee4ad1 100644 --- a/tests/script/general/parser/union.sim +++ b/tests/script/general/parser/union.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system 
sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c debugFlag -v 135 system sh/cfg.sh -n dnode1 -c rpcDebugFlag -v 135 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 diff --git a/tests/script/general/parser/where.sim b/tests/script/general/parser/where.sim index 5f9e0ec2083de0ee4acdd3f727189a229e6eaa38..ace1ca6f5bf1f62ecc78c676da4a9b4624f12a5c 100644 --- a/tests/script/general/parser/where.sim +++ b/tests/script/general/parser/where.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stable/disk.sim b/tests/script/general/stable/disk.sim index 35088c757c83c315b51881b4e1eaaf5f28e74529..1faae78e898107af6d3a1140413ffe16b0680e26 100644 --- a/tests/script/general/stable/disk.sim +++ b/tests/script/general/stable/disk.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stable/dnode3.sim b/tests/script/general/stable/dnode3.sim index 2859f644bb883483385fdf2421b0996a4416b9e6..872cfb8d2e5022872e30eaabd7db67a4546a0204 100644 --- a/tests/script/general/stable/dnode3.sim +++ b/tests/script/general/stable/dnode3.sim @@ -4,10 +4,10 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/general/stable/metrics.sim b/tests/script/general/stable/metrics.sim index 5bda2ad42e48362084a5d8d8559109fc7a879d92..a3dca3f1a5a5f2234520e987ab356f6ed68f0afc 100644 --- a/tests/script/general/stable/metrics.sim +++ b/tests/script/general/stable/metrics.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stable/refcount.sim b/tests/script/general/stable/refcount.sim index e609ccc0552978a9570be5635482d269d7115af7..6629dc1177c6021fce45f135f153475f6cb858a3 100644 --- a/tests/script/general/stable/refcount.sim +++ b/tests/script/general/stable/refcount.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stable/show.sim b/tests/script/general/stable/show.sim index 836a848ce0cfa715c6fd84a746009259f39692fb..5fe05b41eb8fb6f0080eeea35cc63506faf55e5c 100644 --- 
a/tests/script/general/stable/show.sim +++ b/tests/script/general/stable/show.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/stable/values.sim b/tests/script/general/stable/values.sim index 5e5416e4e92928760f20b7509ed6d7eaad572216..fb2c908da269872ddd85489d9d98510e1bb58749 100644 --- a/tests/script/general/stable/values.sim +++ b/tests/script/general/stable/values.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stable/vnode3.sim b/tests/script/general/stable/vnode3.sim index 0889a8f0b7da653687264e727e144e96b661f834..61948b506302f803aee2f73eb5e528e14f1174db 100644 --- a/tests/script/general/stable/vnode3.sim +++ b/tests/script/general/stable/vnode3.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stream/metrics_del.sim b/tests/script/general/stream/metrics_del.sim index 030f9fc527623c67619801202042c2a604cb8778..321658cd8da686642e35936e0d622079aa9d7f1f 100644 --- a/tests/script/general/stream/metrics_del.sim +++ b/tests/script/general/stream/metrics_del.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/stream/metrics_replica1_vnoden.sim b/tests/script/general/stream/metrics_replica1_vnoden.sim index d7541bac662081277fddcc4991539119e2814e2e..4629063c4460c10dccc844b6b39e512012638379 100644 --- a/tests/script/general/stream/metrics_replica1_vnoden.sim +++ b/tests/script/general/stream/metrics_replica1_vnoden.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 1000 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 3 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/stream/restart_stream.sim b/tests/script/general/stream/restart_stream.sim index ce06f24df72d81c5b4da45cde75ff7d8bbd507d0..5da69075361da7c2fad3ed9fde7384493fac9519 100644 --- a/tests/script/general/stream/restart_stream.sim +++ b/tests/script/general/stream/restart_stream.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 @@ -96,7 +96,7 @@ endi print =============== step4 system sh/exec.sh -n dnode1 -s stop system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start print =============== step5 diff --git a/tests/script/general/stream/stream_3.sim b/tests/script/general/stream/stream_3.sim index 
632fe78c7f4ffd71da58ebc7824ee3fb5031a7c0..31490dc5ac44b93f0daf02cd0ba3b4d77a3be20b 100644 --- a/tests/script/general/stream/stream_3.sim +++ b/tests/script/general/stream/stream_3.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 @@ -108,7 +108,7 @@ print =============== step7 system sh/exec.sh -n dnode1 -s stop system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 4000 diff --git a/tests/script/general/stream/stream_restart.sim b/tests/script/general/stream/stream_restart.sim index dd7bdf1a914337101c94fd84f100f8bf0f0ca1a3..4bf6760703ef744572a0b172cb860141f8768ff5 100644 --- a/tests/script/general/stream/stream_restart.sim +++ b/tests/script/general/stream/stream_restart.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/stream/table_del.sim b/tests/script/general/stream/table_del.sim index 7ddce53c4f222de6a8462373322716a82d200bc2..3cbce538d5a87553289b834bd15dea1f43da35fb 100644 --- a/tests/script/general/stream/table_del.sim +++ b/tests/script/general/stream/table_del.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/stream/table_replica1_vnoden.sim b/tests/script/general/stream/table_replica1_vnoden.sim index 3c30897d2b76ceee358e300b048e38306f112c00..be67a31b4e6782e81289b03fb267db501f341520 100644 --- a/tests/script/general/stream/table_replica1_vnoden.sim +++ b/tests/script/general/stream/table_replica1_vnoden.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 1000 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 3 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/table/bigint.sim b/tests/script/general/table/bigint.sim index 0b2841f0f8f781cf42a27ea5db4e5ba28b59402a..d75f406d77688374d75fd317508dd114f965f73a 100644 --- a/tests/script/general/table/bigint.sim +++ b/tests/script/general/table/bigint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/binary.sim b/tests/script/general/table/binary.sim index fd2aa5ec44762a9de3c4d564ba75b7142da332c9..47915530efc618f959c4f4ce17ea130aaa6ddf30 100644 --- a/tests/script/general/table/binary.sim +++ b/tests/script/general/table/binary.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/bool.sim b/tests/script/general/table/bool.sim index 59297e90a9529856cf0996b4cbd7b2c0b2c5deb4..e49637448f8f6afea890e58da47b81db20b35768 100644 --- a/tests/script/general/table/bool.sim +++ 
b/tests/script/general/table/bool.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/column2.sim b/tests/script/general/table/column2.sim index 85d59fef49603400031358bd1b05cf653ab90324..441766f2d4e6a2ff565b56f10b623e35b7c4c2ad 100644 --- a/tests/script/general/table/column2.sim +++ b/tests/script/general/table/column2.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/column_name.sim b/tests/script/general/table/column_name.sim index 480bcd06107a8f6617f8380f1e496b6c13435d5e..47fcfab5a8979bfb91290429a0a6490368371685 100644 --- a/tests/script/general/table/column_name.sim +++ b/tests/script/general/table/column_name.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/column_num.sim b/tests/script/general/table/column_num.sim index b055027f595b4aa0358bb68678ad0ac61750b439..a18173bc8f866fc984a1b10879990d69eaf0a311 100644 --- a/tests/script/general/table/column_num.sim +++ b/tests/script/general/table/column_num.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/column_value.sim b/tests/script/general/table/column_value.sim index 92ce02ade5dad14768aded9d0ab5737a71d472d0..1edf8c2992c4e5fb3dacdbac627a8687e1fc549f 100644 --- a/tests/script/general/table/column_value.sim +++ b/tests/script/general/table/column_value.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/date.sim b/tests/script/general/table/date.sim index 7dea76dbc40473d1b97c31e07b73922c85a55108..23188e12e0a9b148877c87b67079c5429455eece 100644 --- a/tests/script/general/table/date.sim +++ b/tests/script/general/table/date.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/db.table.sim b/tests/script/general/table/db.table.sim index 717f5158c49fc98189ce1547d4c3bfdb5ed1fa51..906396402a1abad594e08387878bf7c3674f55ad 100644 --- a/tests/script/general/table/db.table.sim +++ b/tests/script/general/table/db.table.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/delete_reuse1.sim b/tests/script/general/table/delete_reuse1.sim index 26c82e201b4b7ec0de59691516730d02a3f3693e..db594254873d47564e94927b04765c32798866bd 100644 --- a/tests/script/general/table/delete_reuse1.sim +++ b/tests/script/general/table/delete_reuse1.sim 
@@ -4,10 +4,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/table/delete_reuse2.sim b/tests/script/general/table/delete_reuse2.sim index 2e3f01a3a23a3cb0243bb78e377602bb43a8390a..f2784a3f5f2b3cb9ae9729b8ed254502eec7766a 100644 --- a/tests/script/general/table/delete_reuse2.sim +++ b/tests/script/general/table/delete_reuse2.sim @@ -4,10 +4,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/table/delete_writing.sim b/tests/script/general/table/delete_writing.sim index 342915f0e802a784a3a9e3ae144e6ce3b719d8ed..b7350f26a70bc18023cfff6497ffddc2d5b8ced8 100644 --- a/tests/script/general/table/delete_writing.sim +++ b/tests/script/general/table/delete_writing.sim @@ -4,10 +4,10 @@ system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 1 diff --git a/tests/script/general/table/describe.sim b/tests/script/general/table/describe.sim index ca5cf6f6e04925f861112063bfbab18524ecae37..e59371e530a51a171efa64393925e6d311230432 100644 --- a/tests/script/general/table/describe.sim +++ b/tests/script/general/table/describe.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/double.sim b/tests/script/general/table/double.sim index c46029e39135cebc03a30668ec7cd784287a0fcc..ab3f2428bde489b3024e332861bad1d38cb99179 100644 --- a/tests/script/general/table/double.sim +++ b/tests/script/general/table/double.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/fill.sim b/tests/script/general/table/fill.sim index fd79e09ba16f2a43f6a16a79c45a0b6a072094ee..069eeff6cf0cdf35a55f33fa3e5bbf8135cc7921 100644 --- a/tests/script/general/table/fill.sim 
+++ b/tests/script/general/table/fill.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/float.sim b/tests/script/general/table/float.sim index bbeb56e37099b60d7774e7a969dafd8123529843..2d0ea0e5eab06ad127ef445d140c0cd52a50532e 100644 --- a/tests/script/general/table/float.sim +++ b/tests/script/general/table/float.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/int.sim b/tests/script/general/table/int.sim index 142c8b4f04fdf05f0c1310ed0f34f5b3dd842b95..f30b5b28f5457ccfbf83d48816f2417cbd3e3d46 100644 --- a/tests/script/general/table/int.sim +++ b/tests/script/general/table/int.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/limit.sim b/tests/script/general/table/limit.sim index 45a20f680d0d4b8f15c2a20a9abce7d3c2075ba0..dd38453d0cb0580e446992ae36f022e5e2ac1185 100644 --- a/tests/script/general/table/limit.sim +++ b/tests/script/general/table/limit.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 129 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 8 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/table/smallint.sim b/tests/script/general/table/smallint.sim index 53dfbe15bfa5c54e575cc3f129cb6f4268050903..f622ce7853062edb15dca4344101cd6a96299225 100644 --- a/tests/script/general/table/smallint.sim +++ b/tests/script/general/table/smallint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/table.sim b/tests/script/general/table/table.sim index 5d6f2ee1a3db7c54c5aa3d9254a8142f22296e64..c9806c40c649695a489bdcc659e085b03f7d674b 100644 --- a/tests/script/general/table/table.sim +++ b/tests/script/general/table/table.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/table_len.sim b/tests/script/general/table/table_len.sim index f4c27926fb1815e1e60625cf888ff4c5c6aa3cfe..d95c9ab0aa9cdc4db66d65d2334088550c05bfa9 100644 --- a/tests/script/general/table/table_len.sim +++ b/tests/script/general/table/table_len.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/tinyint.sim b/tests/script/general/table/tinyint.sim index 5ad8a6933a51932b127b8a64ea4a3619b25fabe4..afa931fc79802e5790f90492576c90acb51e06dc 100644 --- a/tests/script/general/table/tinyint.sim +++ 
b/tests/script/general/table/tinyint.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/table/vgroup.sim b/tests/script/general/table/vgroup.sim index f7806e316e68324c157b0ba4f34ba76e347da90d..d306a4731db2023e87ac0e69378e448bbd555c56 100644 --- a/tests/script/general/table/vgroup.sim +++ b/tests/script/general/table/vgroup.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxVgroupsPerDb -v 4 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/tag/3.sim b/tests/script/general/tag/3.sim index 878fdc04142fb54b22a9a023a85b71ce6300164f..20185f5f01403b838799427d9ac39fcafe653d86 100644 --- a/tests/script/general/tag/3.sim +++ b/tests/script/general/tag/3.sim @@ -1,6 +1,6 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/4.sim b/tests/script/general/tag/4.sim index d5ac6476d9ef8a02b41465e69405b16522e9a21c..ee3c8efa6c47be1fc91b1114294b6ebf832eabdc 100644 --- a/tests/script/general/tag/4.sim +++ b/tests/script/general/tag/4.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/5.sim b/tests/script/general/tag/5.sim index ae6d49e9f8b0502d2ef177e77a60c64ea07ef35b..895b1a9492644290e5c30abc901849e1adf33e6c 100644 --- a/tests/script/general/tag/5.sim +++ b/tests/script/general/tag/5.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/6.sim b/tests/script/general/tag/6.sim index 71957dad9f7fe920615c66fddd0e8565a31753f2..9190998bb316bc63fd47022029b4c8deb61c0727 100644 --- a/tests/script/general/tag/6.sim +++ b/tests/script/general/tag/6.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/bigint.sim b/tests/script/general/tag/bigint.sim index c8e6e72825aeeb436c0e8918f437bfa42a424e3f..3e5d528980e3f8716ae5dea0c1d4090e02d0420a 100644 --- a/tests/script/general/tag/bigint.sim +++ b/tests/script/general/tag/bigint.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/binary.sim b/tests/script/general/tag/binary.sim index 771e7dc263a43c12db2ff3db362e7208c7469a91..960f45675d28349d170544729055ca3b815a2ccb 100644 --- a/tests/script/general/tag/binary.sim +++ b/tests/script/general/tag/binary.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 
+system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/binary_binary.sim b/tests/script/general/tag/binary_binary.sim index 5660b1b320f5a4bf95a3d39cc5dcee70db0f8885..3a0fb568482fa19a142f4b9019bd09c5208b4252 100644 --- a/tests/script/general/tag/binary_binary.sim +++ b/tests/script/general/tag/binary_binary.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/bool.sim b/tests/script/general/tag/bool.sim index 828e89f3c7c9144ce444b974042079c173eb10a5..e37cba669ba27ec4046e8483938763b90360de31 100644 --- a/tests/script/general/tag/bool.sim +++ b/tests/script/general/tag/bool.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/bool_binary.sim b/tests/script/general/tag/bool_binary.sim index fc25399c5a6e41e9c56ab93c37b6aca4af38baf1..9f6e4f734432bd39c6250adbc0fe26883f104267 100644 --- a/tests/script/general/tag/bool_binary.sim +++ b/tests/script/general/tag/bool_binary.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/bool_int.sim b/tests/script/general/tag/bool_int.sim index aa443cca620ca9ba032e5df15f5e52d1ee014cab..60345c2d68b9216acc78e8bb8617c698013f4402 100644 --- a/tests/script/general/tag/bool_int.sim +++ b/tests/script/general/tag/bool_int.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/column.sim b/tests/script/general/tag/column.sim index 36610d78da826f2f037a2202bd8321b78c6833bf..5a0cd169c5a77f0dfd6f81b615bbd09bb8e5a629 100644 --- a/tests/script/general/tag/column.sim +++ b/tests/script/general/tag/column.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/create.sim b/tests/script/general/tag/create.sim index d387b4345fc890e8d63e35b3dfc31dfdc3121ae5..95b416654394946287cc8f9cdb95b984edbf3a88 100644 --- a/tests/script/general/tag/create.sim +++ b/tests/script/general/tag/create.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/double.sim b/tests/script/general/tag/double.sim index 514c35dc47c4cd4dfec51afafda6182981159044..f17043393ffe51f7cdce4fb053168746430ab498 100644 --- a/tests/script/general/tag/double.sim +++ b/tests/script/general/tag/double.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git 
a/tests/script/general/tag/float.sim b/tests/script/general/tag/float.sim index bfc7e0f569702423f97ac3b101ef354dd6ee5c6c..35bf7d6090140ba73cb0c63efd57d828fb6b21bb 100644 --- a/tests/script/general/tag/float.sim +++ b/tests/script/general/tag/float.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/int.sim b/tests/script/general/tag/int.sim index 2518aeb285b4a18a3842354979117fa9313a024d..4eea02c9cd4438f16cdf759fef0ade88eaabc8b1 100644 --- a/tests/script/general/tag/int.sim +++ b/tests/script/general/tag/int.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/int_binary.sim b/tests/script/general/tag/int_binary.sim index cb7aa16209f4b58f9e0d5c7323aad177a0f4467b..9a756976769c6294f65f61c5318033274b1145f2 100644 --- a/tests/script/general/tag/int_binary.sim +++ b/tests/script/general/tag/int_binary.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/int_float.sim b/tests/script/general/tag/int_float.sim index afd0a3644a4bc2de59050b2eccbec6cfe0a629d2..a03c4b7148ab14c9c0b1e28d3fc28dc86df0c2d9 100644 --- a/tests/script/general/tag/int_float.sim +++ b/tests/script/general/tag/int_float.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/smallint.sim b/tests/script/general/tag/smallint.sim index 598b397890003a3e5e07e1470e4e27ac4dbb1373..e69eef05f274175b55ba78e4c9f69a5e102eb6f8 100644 --- a/tests/script/general/tag/smallint.sim +++ b/tests/script/general/tag/smallint.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/tag/tinyint.sim b/tests/script/general/tag/tinyint.sim index 3c173a66bf57475516871fc03e337e5ce65989cd..2d70dc7d4880d4b1540da557f779fe06b877dabc 100644 --- a/tests/script/general/tag/tinyint.sim +++ b/tests/script/general/tag/tinyint.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/user/monitor.sim b/tests/script/general/user/monitor.sim index 016848c06d1a5a08f8e78a2e2ea8cdc5acc68744..90aad59932115e04263da8cc1e8b91effb09eb07 100644 --- a/tests/script/general/user/monitor.sim +++ b/tests/script/general/user/monitor.sim @@ -2,7 +2,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 print ========== step1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c monitor -v 1 system sh/cfg.sh -n dnode1 -c monitorInterval -v 1 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/general/vector/metrics_field.sim 
b/tests/script/general/vector/metrics_field.sim index 84c1ed37e7ea9d924e53275962f8381cf32a615c..2719805c63af87d4088a9b1fd87249f09e51b957 100644 --- a/tests/script/general/vector/metrics_field.sim +++ b/tests/script/general/vector/metrics_field.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/metrics_mix.sim b/tests/script/general/vector/metrics_mix.sim index 2c7fc86f9f3ab2af95ffaad34d276dfe10a20844..7c9bb3b668c67fbc75cffc9b15bad1b276142917 100644 --- a/tests/script/general/vector/metrics_mix.sim +++ b/tests/script/general/vector/metrics_mix.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/metrics_query.sim b/tests/script/general/vector/metrics_query.sim index 46afe7df20ff8b71346ad2b0834b3f0fac52602e..fd635a3104f70f4fcbcd71914b476dfc3fb69ea8 100644 --- a/tests/script/general/vector/metrics_query.sim +++ b/tests/script/general/vector/metrics_query.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/metrics_tag.sim b/tests/script/general/vector/metrics_tag.sim index be13ac764b7cac6e00753429a2f41b6025044fa0..1d412d35d3669ced6437e6025f8415ddd3a1cf1e 100644 --- a/tests/script/general/vector/metrics_tag.sim +++ b/tests/script/general/vector/metrics_tag.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/metrics_time.sim b/tests/script/general/vector/metrics_time.sim index 0b82153860ed7ea6312fbc1c74ef79acecc8dc70..d0152439bff2c9ab5450d870b2238d0b137a2fa4 100644 --- a/tests/script/general/vector/metrics_time.sim +++ b/tests/script/general/vector/metrics_time.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/multi.sim b/tests/script/general/vector/multi.sim index 2ca9b7f48f046ea1ad659312983f2bf90d36bef4..1101b0b0dbc4484e8ccef3aaab5ce8fe54b03d0b 100644 --- a/tests/script/general/vector/multi.sim +++ b/tests/script/general/vector/multi.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git a/tests/script/general/vector/single.sim b/tests/script/general/vector/single.sim index eee5364a4fcea4de6fcf354982af29c480e5571f..e979a0ffb71e78ed2a3a79cb32e5b34ab98fd21a 100644 --- a/tests/script/general/vector/single.sim +++ b/tests/script/general/vector/single.sim @@ -1,7 +1,7 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sleep 2000 diff --git 
a/tests/script/general/vector/table_field.sim b/tests/script/general/vector/table_field.sim
index 65c50dadc2acdbe1dd59f811dec458c3d8ff239d..d86eb99331f9c65c013d4d9bdd5798672629eab9 100644
--- a/tests/script/general/vector/table_field.sim
+++ b/tests/script/general/vector/table_field.sim
@@ -1,7 +1,7 @@
 system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
 
 sleep 2000
diff --git a/tests/script/general/vector/table_mix.sim b/tests/script/general/vector/table_mix.sim
index 12808fd6a617c6f75b1daac33ecfbff82eaa953d..5c4fb52888d3a02608bdc90d4c213c3a4f4ac0ff 100644
--- a/tests/script/general/vector/table_mix.sim
+++ b/tests/script/general/vector/table_mix.sim
@@ -1,7 +1,7 @@
 system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
 
 sleep 2000
diff --git a/tests/script/general/vector/table_query.sim b/tests/script/general/vector/table_query.sim
index 3e9d2d0b772ffd2083333f7e764c3bfdf99e10ff..9ef18255a9582982cddcd8c8c699adeb6f01f1b6 100644
--- a/tests/script/general/vector/table_query.sim
+++ b/tests/script/general/vector/table_query.sim
@@ -1,7 +1,7 @@
 system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
 
 sleep 2000
diff --git a/tests/script/general/vector/table_time.sim b/tests/script/general/vector/table_time.sim
index 552bdb2a99199119b6a4cd1605ef04d2fde39ea6..c38546b1170fdbe5249a4220fbf112ea8eb25b18 100644
--- a/tests/script/general/vector/table_time.sim
+++ b/tests/script/general/vector/table_time.sim
@@ -1,7 +1,7 @@
 system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/exec.sh -n dnode1 -s start
 
 sleep 2000
diff --git a/tests/script/unique/big/tcp.sim b/tests/script/unique/big/tcp.sim
index 3c5cf92c7f183a84b2f1be0ce389bf86f482a42a..b282e2e2223d89dde7cf6ce364b31537593a6cb4 100644
--- a/tests/script/unique/big/tcp.sim
+++ b/tests/script/unique/big/tcp.sim
@@ -1,7 +1,7 @@
 system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 30000
 system sh/cfg.sh -n dnode1 -c dDebugFlag -v 131
 
diff --git a/tests/script/unique/cluster/cache.sim b/tests/script/unique/cluster/cache.sim
index 7a5afae79d7fc501724ce61bc6afb388aa4df51c..33aaea425c783c8362b4372efad96d384b6dbc70 100644
--- a/tests/script/unique/cluster/cache.sim
+++ b/tests/script/unique/cluster/cache.sim
@@ -4,8 +4,8 @@ system sh/stop_dnodes.sh
 
 system sh/deploy.sh -n dnode1 -i 1
 system sh/deploy.sh -n dnode2 -i 2
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
-system sh/cfg.sh -n dnode2 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
+system sh/cfg.sh -n dnode2 -c walLevel -v 1
 system sh/cfg.sh -n dnode1 -c httpMaxThreads -v 2
 system sh/cfg.sh -n dnode2 -c httpMaxThreads -v 2
 system sh/cfg.sh -n dnode1 -c monitor -v 1
@@ -55,3 +55,11 @@ if $rows < 10 then
 endi
 
 #sql create table sys.st as select avg(taosd), avg(system) from sys.cpu interval(30s)
+
+sql show log.vgroups
+if $data05 != master then
+  return -1
+endi
+if $data15 != master then
+  return -1
+endi
diff --git a/tests/script/unique/stable/dnode2.sim
b/tests/script/unique/stable/dnode2.sim index 5c227f8cece29d31232ba1e030dbd30b3698096d..3ca8c4ee20bb74890da8f42c73521186306e3097 100644 --- a/tests/script/unique/stable/dnode2.sim +++ b/tests/script/unique/stable/dnode2.sim @@ -1,8 +1,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/unique/stable/dnode3.sim b/tests/script/unique/stable/dnode3.sim index 436ae73595bef47631113498f472d43e173aa7a9..d0708c81542f13634466053d7159c119687dfa04 100644 --- a/tests/script/unique/stable/dnode3.sim +++ b/tests/script/unique/stable/dnode3.sim @@ -2,9 +2,9 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/metrics_balance.sim b/tests/script/unique/stream/metrics_balance.sim index b78cfc33a1d32c68feadc7696a449eadbdc70c36..ff48c2236709635c8d1a790104b0185144a96866 100644 --- a/tests/script/unique/stream/metrics_balance.sim +++ b/tests/script/unique/stream/metrics_balance.sim @@ -8,8 +8,8 @@ system sh/cfg.sh -n dnode1 -c statusInterval -v 1 system sh/cfg.sh -n dnode2 -c statusInterval -v 1 system sh/cfg.sh -n dnode1 -c balanceInterval -v 10 system sh/cfg.sh -n dnode2 -c balanceInterval -v 10 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 0 system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 0 system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4 diff --git a/tests/script/unique/stream/metrics_replica1_dnode2.sim b/tests/script/unique/stream/metrics_replica1_dnode2.sim index bbc4d5174cbade0959f5586e2ed15b81bebf4f5b..20c37cefc39f8fa6393d49934adb046f409fca25 100644 --- a/tests/script/unique/stream/metrics_replica1_dnode2.sim +++ b/tests/script/unique/stream/metrics_replica1_dnode2.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/unique/stream/metrics_replica2_dnode2.sim b/tests/script/unique/stream/metrics_replica2_dnode2.sim index e9944daf3726be0433c4d757c8f6674869baa672..aa8c1871017982cecc695abc8f64d732a8a7fc4e 100644 --- a/tests/script/unique/stream/metrics_replica2_dnode2.sim +++ b/tests/script/unique/stream/metrics_replica2_dnode2.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system 
sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sql connect diff --git a/tests/script/unique/stream/metrics_replica2_dnode2_vnoden.sim b/tests/script/unique/stream/metrics_replica2_dnode2_vnoden.sim index f60355cd6ac2037e97ab1fb4bf0ac04895db2955..be2fcefe66ed6ca2e24a44cd22fa072201137b89 100644 --- a/tests/script/unique/stream/metrics_replica2_dnode2_vnoden.sim +++ b/tests/script/unique/stream/metrics_replica2_dnode2_vnoden.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/unique/stream/metrics_replica2_dnode3.sim b/tests/script/unique/stream/metrics_replica2_dnode3.sim index 981f5e9b70aae8796a29bbc40b05b764b87fa2eb..f7b17610c380d9f90a2cefd4af86ea766facdffa 100644 --- a/tests/script/unique/stream/metrics_replica2_dnode3.sim +++ b/tests/script/unique/stream/metrics_replica2_dnode3.sim @@ -6,9 +6,9 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/metrics_replica3_dnode4.sim b/tests/script/unique/stream/metrics_replica3_dnode4.sim index 902e9db16b6421c209734716d1c97dac00a17c54..402712800313ff5b96f970d12ffe007f77bc26f7 100644 --- a/tests/script/unique/stream/metrics_replica3_dnode4.sim +++ b/tests/script/unique/stream/metrics_replica3_dnode4.sim @@ -8,10 +8,10 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/metrics_vnode_stop.sim b/tests/script/unique/stream/metrics_vnode_stop.sim index f1a0981d70a1f6638af6174af9bfaf84d02a0673..cd84cb3cdf5f8096f4986a222cc371db3900f765 100644 --- a/tests/script/unique/stream/metrics_vnode_stop.sim +++ b/tests/script/unique/stream/metrics_vnode_stop.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel 
-v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 2 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 2 system sh/exec.sh -n dnode1 -s start @@ -99,8 +99,8 @@ system sh/exec.sh -n dnode1 -s stop system sh/exec.sh -n dnode2 -s stop system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/exec.sh -n dnode2 -s start sleep 2000 diff --git a/tests/script/unique/stream/table_balance.sim b/tests/script/unique/stream/table_balance.sim index e8dec54e3ed98ebfea67d7f61d2dbbb675f9b3b4..45e054e2efdfbd7f3d01e3a860c5ac227f3327fc 100644 --- a/tests/script/unique/stream/table_balance.sim +++ b/tests/script/unique/stream/table_balance.sim @@ -6,8 +6,8 @@ system sh/cfg.sh -n dnode1 -c statusInterval -v 1 system sh/cfg.sh -n dnode2 -c statusInterval -v 1 system sh/cfg.sh -n dnode1 -c balanceInterval -v 10 system sh/cfg.sh -n dnode2 -c balanceInterval -v 10 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 0 system sh/cfg.sh -n dnode2 -c mnodeEqualVnodeNum -v 0 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/table_replica1_dnode2.sim b/tests/script/unique/stream/table_replica1_dnode2.sim index aaab2990b4a80cfe626930cce4c6afc1161c450d..ccc6026e9c92975ccdd4fd12366a11f50a818d3f 100644 --- a/tests/script/unique/stream/table_replica1_dnode2.sim +++ b/tests/script/unique/stream/table_replica1_dnode2.sim @@ -2,8 +2,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/unique/stream/table_replica2_dnode2.sim b/tests/script/unique/stream/table_replica2_dnode2.sim index da24b5ab4e825d2795a7ec1f3e2ccbee563e81ed..947fa0d2f9093c802a9c99c74edddeffca102d38 100644 --- a/tests/script/unique/stream/table_replica2_dnode2.sim +++ b/tests/script/unique/stream/table_replica2_dnode2.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/exec.sh -n dnode1 -s start sql connect diff --git a/tests/script/unique/stream/table_replica2_dnode2_vnoden.sim b/tests/script/unique/stream/table_replica2_dnode2_vnoden.sim index 0717e6f965fbc3d70caf09102b0fb6df15e76a5e..75300362393eaa543740307d4d11f9a4eabbbc50 100644 --- a/tests/script/unique/stream/table_replica2_dnode2_vnoden.sim +++ b/tests/script/unique/stream/table_replica2_dnode2_vnoden.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n 
dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/exec.sh -n dnode1 -s start diff --git a/tests/script/unique/stream/table_replica2_dnode3.sim b/tests/script/unique/stream/table_replica2_dnode3.sim index 10d9feec53e15b92e01479227d10016c22ed8bb7..49eb3563b3964f05f31d72a8fd1ff12f2b5b3a03 100644 --- a/tests/script/unique/stream/table_replica2_dnode3.sim +++ b/tests/script/unique/stream/table_replica2_dnode3.sim @@ -6,9 +6,9 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/table_replica3_dnode4.sim b/tests/script/unique/stream/table_replica3_dnode4.sim index 3b9552084bbc968f6a6076129668208641b0608a..2cc443c72fc656b87ca8c1d330381ed5078cd755 100644 --- a/tests/script/unique/stream/table_replica3_dnode4.sim +++ b/tests/script/unique/stream/table_replica3_dnode4.sim @@ -8,10 +8,10 @@ system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 system sh/deploy.sh -n dnode4 -i 4 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 -system sh/cfg.sh -n dnode3 -c walLevel -v 0 -system sh/cfg.sh -n dnode4 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 +system sh/cfg.sh -n dnode3 -c walLevel -v 1 +system sh/cfg.sh -n dnode4 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4 system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4 diff --git a/tests/script/unique/stream/table_vnode_stop.sim b/tests/script/unique/stream/table_vnode_stop.sim index 229d814e42ced35d75eec6a4ccc2fbaa66e6f654..625de32a8d7a1e5336dd10f313565bdbc0daf0fc 100644 --- a/tests/script/unique/stream/table_vnode_stop.sim +++ b/tests/script/unique/stream/table_vnode_stop.sim @@ -4,8 +4,8 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 system sh/cfg.sh -n dnode1 -c numOfMnodes -v 2 system sh/cfg.sh -n dnode2 -c numOfMnodes -v 2 system sh/exec.sh -n dnode1 -s start @@ -100,8 +100,8 @@ system sh/exec.sh -n dnode1 -s stop system sh/exec.sh -n dnode2 -s stop system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 -system sh/cfg.sh -n dnode1 -c walLevel -v 0 -system sh/cfg.sh -n dnode2 -c walLevel -v 0 +system sh/cfg.sh -n dnode1 -c walLevel -v 1 +system sh/cfg.sh -n dnode2 -c walLevel -v 1 sleep 2000 system sh/exec.sh -n dnode2 -s start diff --git a/tests/script/unique/vnode/backup/replica4.sim b/tests/script/unique/vnode/backup/replica4.sim index bccc17e682258b29d08d493a8c457682287ccfa1..c0ff267c734b58e1f198932c86e78c817625981e 100644 --- a/tests/script/unique/vnode/backup/replica4.sim +++ b/tests/script/unique/vnode/backup/replica4.sim @@ -10,10 +10,10 @@ system 
sh/deploy.sh -n dnode2 -i 2
 system sh/deploy.sh -n dnode3 -i 3
 system sh/deploy.sh -n dnode4 -i 4
 
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
-system sh/cfg.sh -n dnode2 -c walLevel -v 0
-system sh/cfg.sh -n dnode3 -c walLevel -v 0
-system sh/cfg.sh -n dnode4 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
+system sh/cfg.sh -n dnode2 -c walLevel -v 1
+system sh/cfg.sh -n dnode3 -c walLevel -v 1
+system sh/cfg.sh -n dnode4 -c walLevel -v 1
 
 system sh/exec.sh -n dnode1 -s start
 $x = 0
diff --git a/tests/script/unique/vnode/backup/replica5.sim b/tests/script/unique/vnode/backup/replica5.sim
index d29d11fdafdcf9b368e3f94350fd959168764445..1223cf6b585f7affeb40e2f625a3f6cbd79dae80 100644
--- a/tests/script/unique/vnode/backup/replica5.sim
+++ b/tests/script/unique/vnode/backup/replica5.sim
@@ -12,11 +12,11 @@ system sh/deploy.sh -n dnode3 -i 3
 system sh/deploy.sh -n dnode4 -i 4
 system sh/deploy.sh -n dnode5 -i 5
 
-system sh/cfg.sh -n dnode1 -c walLevel -v 0
-system sh/cfg.sh -n dnode2 -c walLevel -v 0
-system sh/cfg.sh -n dnode3 -c walLevel -v 0
-system sh/cfg.sh -n dnode4 -c walLevel -v 0
-system sh/cfg.sh -n dnode5 -c walLevel -v 0
+system sh/cfg.sh -n dnode1 -c walLevel -v 1
+system sh/cfg.sh -n dnode2 -c walLevel -v 1
+system sh/cfg.sh -n dnode3 -c walLevel -v 1
+system sh/cfg.sh -n dnode4 -c walLevel -v 1
+system sh/cfg.sh -n dnode5 -c walLevel -v 1
 
 system sh/exec.sh -n dnode1 -s start
 
diff --git a/tests/test-all.sh b/tests/test-all.sh
index 3ebdffebf238706d6c3cad18f4ed996b85de49cd..4f7afe7d17bc57d6f62a5d68cac3277914dedb50 100755
--- a/tests/test-all.sh
+++ b/tests/test-all.sh
@@ -160,9 +160,10 @@ function runPyCaseOneByOnefq {
 totalFailed=0
 totalPyFailed=0
 totalJDBCFailed=0
+totalUnitFailed=0
 corepath=`grep -oP '.*(?=core_)' /proc/sys/kernel/core_pattern||grep -oP '.*(?=core-)' /proc/sys/kernel/core_pattern`
 
-if [ "$2" != "jdbc" ] && [ "$2" != "python" ]; then
+if [ "$2" != "jdbc" ] && [ "$2" != "python" ] && [ "$2" != "unit" ]; then
   echo "### run TSIM test case ###"
   cd $tests_dir/script
 
@@ -231,7 +232,7 @@ if [ "$2" != "jdbc" ] && [ "$2" != "python" ]; then
   fi
 fi
 
-if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] ; then
+if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] && [ "$2" != "unit" ]; then
   echo "### run Python test case ###"
   cd $tests_dir
 
@@ -300,8 +301,8 @@ if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] ; then
 
 fi
 
-if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
-  echo "### run JDBC test case ###"
+if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$2" != "unit" ] && [ "$1" == "full" ]; then
+  echo "### run JDBC test cases ###"
 
   cd $tests_dir
 
@@ -318,7 +319,7 @@ if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
   nohup build/bin/taosd -c /etc/taos/ > /dev/null 2>&1 &
   sleep 30
 
-  cd $tests_dir/../src/connector/jdbc 
+  cd $tests_dir/../src/connector/jdbc
   mvn test > jdbc-out.log 2>&1
   tail -n 20 jdbc-out.log
 
@@ -343,4 +344,40 @@ if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
   dohavecore 1
 fi
 
-exit $(($totalFailed + $totalPyFailed + $totalJDBCFailed))
+if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$2" != "jdbc" ] && [ "$1" == "full" ]; then
+  echo "### run Unit tests ###"
+
+  stopTaosd
+  cd $tests_dir
+
+  if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
+    cd ../../
+  else
+    cd ../
+  fi
+
+  pwd
+  cd debug/build/bin
+  nohup ./taosd -c /etc/taos/ > /dev/null 2>&1 &
+  sleep 30
+
+  pwd
+  ./queryTest > unittest-out.log 2>&1
+  tail -n 20 unittest-out.log
+
+  totalUnitTests=`grep "Running" unittest-out.log | awk '{print $3}'`
+  totalUnitSuccess=`grep 'PASSED' unittest-out.log | awk '{print $4}'`
+  totalUnitFailed=`expr $totalUnitTests - $totalUnitSuccess`
+
+  if [ "$totalUnitSuccess" -gt "0" ]; then
+    echo -e "\n${GREEN} ### Total $totalUnitSuccess Unit test succeed! ### ${NC}"
+  fi
+
+  if [ "$totalUnitFailed" -ne "0" ]; then
+    echo -e "\n${RED} ### Total $totalUnitFailed Unit test failed! ### ${NC}"
+  fi
+  dohavecore 1
+fi
+
+
+exit $(($totalFailed + $totalPyFailed + $totalJDBCFailed + $totalUnitFailed))
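
Usage note (a sketch, not part of the patch itself): with the guards added to test-all.sh above, the unit-test block only runs when the first argument is "full" and the second argument is none of "sim", "python" or "jdbc"; it also appears to assume a googletest-style binary at debug/build/bin/queryTest whose output contains lines like "Running N tests" and "[  PASSED  ] M tests" for the grep/awk counters. A local invocation would then look roughly like:

    cd tests
    ./test-all.sh full          # run the TSIM, Python, JDBC and unit stages
    ./test-all.sh full unit     # skip TSIM/Python/JDBC and run only the unit stage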