提交 bcb3a303 编写于 作者: G Ganlin Zhao

Merge branch 'develop' into feature/TD-5623

......@@ -6,7 +6,7 @@
TDengine 采用 SQL 作为查询语言。应用程序可以通过 C/C++, Java, Go, Python 连接器发送 SQL 语句,用户可以通过 TDengine 提供的命令行(Command Line Interface, CLI)工具 TAOS Shell 手动执行 SQL 即席查询(Ad-Hoc Query)。TDengine 支持如下查询功能:
- 单列、多列数据查询
- 标签和数值的多种过滤条件:>, <, =, <>, like 等
- 聚合结果的分组(Group by)、排序(Order by)、约束输出(Limit/Offset)
- 数值列及聚合结果的四则运算
- 时间戳对齐的连接查询(Join Query: 隐式连接)操作
......
......@@ -138,7 +138,7 @@ select * from meters where ts > now - 1d and current > 10;
订阅的`topic`实际上是它的名字,因为订阅功能是在客户端API中实现的,所以没必要保证它全局唯一,但需要它在一台客户端机器上唯一。
如果名为`topic`的订阅不存在,参数`restart`没有意义;但如果用户程序创建这个订阅后退出,当它再次启动并重新使用这个`topic`时,`restart`就会被用于决定是从头开始读取数据,还是接续上次的位置进行读取。本例中,如果`restart`是**true**(非零值),用户程序肯定会读到所有数据。但如果这个订阅之前就存在了,并且已经读取了一部分数据,且`restart`是**false**(**0**),用户程序就不会读到之前已经读取的数据了。
`taos_subscribe`的最后一个参数是以毫秒为单位的轮询周期。在同步模式下,如果前后两次调用`taos_consume`的时间间隔小于此时间,`taos_consume`会阻塞,直到间隔超过此时间。异步模式下,这个时间是两次调用回调函数的最小时间间隔。
......@@ -179,7 +179,8 @@ void print_result(TAOS_RES* res, int blockFetch) {
  } else {
    while ((row = taos_fetch_row(res))) {
      char temp[256];
      taos_print_row(temp, row, fields, num_fields);
      puts(temp);
      nRows++;
    }
  }
......@@ -211,14 +212,14 @@ taos_unsubscribe(tsub, keep);
则可以在示例代码所在目录执行以下命令来编译并启动示例程序:
```bash
$ make
$ ./subscribe -sql='select * from meters where current > 10;'
```
示例程序启动后,打开另一个终端窗口,启动 TDengine 的 shell 向 **D1001** 插入一条电流为 12A 的数据:
```sql
$ taos
> use test;
> insert into D1001 values(now, 12, 220, 1);
......@@ -313,7 +314,7 @@ public class SubscribeDemo {
运行示例程序,首先,它会消费符合查询条件的所有历史数据:
```bash
# java -jar subscribe.jar
ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2
......@@ -333,16 +334,16 @@ taos> insert into d1001 values("2020-08-15 12:40:00.000", 12.4, 220, 1);
因为这条数据的电流大于10A,示例程序会将其消费:
```
ts: 1597466400000 current: 12.4 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid: 2
```
## <a class="anchor" id="cache"></a>缓存(Cache)
TDengine采用时间驱动缓存管理策略(First-In-First-Out,FIFO),又称为写驱动的缓存管理机制。这种策略有别于读驱动的数据缓存模式(Least-Recent-Used,LRU),直接将最近写入的数据保存在系统的缓存中。当缓存达到临界值的时候,将最早的数据批量写入磁盘。一般意义上来说,对于物联网数据的使用,用户最为关心最近产生的数据,即当前状态。TDengine充分利用了这一特性,将最近到达的(当前状态)数据保存在缓存中。
TDengine通过查询函数向用户提供毫秒级的数据获取能力。直接将最近到达的数据保存在缓存中,可以更加快速地响应用户针对最近一条或一批数据的查询分析,整体上提供更快的数据库查询响应能力。从这个意义上来说,可通过设置合适的配置参数将TDengine作为数据缓存来使用,而不需要再部署额外的缓存系统,可有效地简化系统架构,降低运维的成本。需要注意的是,TDengine重启以后系统的缓存将被清空,之前缓存的数据均会被批量写入磁盘,缓存的数据将不会像专门的key-value缓存系统再将之前缓存的数据重新加载到缓存中。
TDengine分配固定大小的内存空间作为缓存空间,缓存空间可根据应用的需求和硬件资源配置。通过适当的设置缓存空间,TDengine可以提供极高性能的写入和查询的支持。TDengine中每个虚拟节点(virtual node)创建时分配独立的缓存池。每个虚拟节点管理自己的缓存池,不同虚拟节点间不共享缓存池。每个虚拟节点内部所属的全部表共享该虚拟节点的缓存池。
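正因为最新到达的数据常驻缓存,针对“当前状态”的查询通常可以直接命中缓存。下面以 Java(JDBC)为例给出一个示意片段(假设已按 Java Connector 章节的方式建立了连接 `conn`,库名 test 和超级表 meters 均为假设),通过 `last_row` 函数获取最新一条记录:

```java
try (Statement stmt = conn.createStatement();
     // last_row(*) 返回最新到达的一条记录,通常可直接由缓存返回,无需访问磁盘
     ResultSet rs = stmt.executeQuery("select last_row(*) from test.meters")) {
    while (rs.next()) {
        System.out.println("最新时间戳: " + rs.getTimestamp(1) + ", 最新电流: " + rs.getFloat(2));
    }
}
```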
......
# Java Connector
## 安装
Java连接器支持的系统有: Linux 64/Windows x64/Windows x86。
**安装前准备:**
- 已安装TDengine服务器端
- 已安装好TDengine应用驱动,具体请参照 [安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver) 章节
TDengine 为了方便 Java 应用使用,提供了遵循 JDBC 标准(3.0)API 规范的 `taos-jdbcdriver` 实现。目前可以通过 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 搜索并下载。
由于 TDengine 的应用驱动是使用C语言开发的,使用 taos-jdbcdriver 驱动包时需要依赖系统对应的本地函数库。
- libtaos.so 在 Linux 系统中成功安装 TDengine 后,依赖的本地函数库 libtaos.so 文件会被自动拷贝至 /usr/lib/libtaos.so,该目录包含在 Linux 自动扫描路径上,无需单独指定。
- taos.dll 在 Windows 系统中安装完客户端之后,驱动包依赖的 taos.dll 文件会自动拷贝到系统默认搜索路径 C:/Windows/System32 下,同样无需要单独指定。
注意:在 Windows 环境开发时需要安装 TDengine 对应的 [windows 客户端](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client),Linux 服务器安装完 TDengine 之后默认已安装 client,也可以单独安装 [Linux 客户端](https://www.taosdata.com/cn/getting-started/#快速上手) 连接远程 TDengine Server。
### 如何获取 TAOS-JDBCDriver
**maven仓库**
目前 taos-jdbcdriver 已经发布到 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 仓库,且各大仓库都已同步。
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
maven 项目中使用如下 pom.xml 配置即可:
```xml-dtd
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.18</version>
</dependency>
```
**源码编译打包**
下载 TDengine 源码之后,进入 taos-jdbcdriver 源码目录 `src/connector/jdbc` 执行 `mvn clean package -Dmaven.test.skip=true` 即可生成相应 jar 包。
### 示例程序
示例程序源码位于install_directory/examples/JDBC,有如下目录:
- JDBCDemo:JDBC示例源程序
- JDBCConnectorChecker:JDBC安装校验源程序及jar包
- Springbootdemo:springboot示例源程序
- SpringJdbcTemplate:SpringJDBC模板
### 安装验证
运行如下指令:
```Bash
cd {install_directory}/examples/JDBC/JDBCConnectorChecker
java -jar JDBCConnectorChecker.jar -host <fqdn>
```
验证通过将打印出成功信息。
## Java连接器的使用
`taos-jdbcdriver` 的实现包括 2 种形式: JDBC-JNI 和 JDBC-RESTful(taos-jdbcdriver-2.0.18 开始支持 JDBC-RESTful)。 JDBC-JNI 通过调用客户端 libtaos.so(或 taos.dll )的本地方法实现, JDBC-RESTful 则在内部封装了 RESTful 接口实现。
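两种形式对应不同的 JDBC URL 前缀和 Driver 类。下面是一段示意代码(主机名 taosdemo.com、数据库 test 均为假设),展示两种连接方式的写法:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnTypeDemo {
    public static void main(String[] args) throws Exception {
        // JDBC-JNI:依赖本地库 libtaos.so / taos.dll,缺省端口 6030
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        Connection jni = DriverManager.getConnection(
                "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata");

        // JDBC-RESTful:内部封装 RESTful 接口,不依赖本地库,缺省端口 6041
        Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
        Connection restful = DriverManager.getConnection(
                "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata");

        jni.close();
        restful.close();
    }
}
```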
......@@ -20,7 +86,7 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致
* 对每个 Connection 的实例,至多只能有一个打开的 ResultSet 实例;如果在 ResultSet 还没关闭的情况下执行了新的查询,taos-jdbcdriver 会自动关闭上一个 ResultSet。
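下面的片段示意这一行为(假设已建立连接 `conn`,表名 test.weather 为假设):

```java
Statement stmt = conn.createStatement();
ResultSet rs1 = stmt.executeQuery("select * from test.weather");
// rs1 尚未关闭时再次执行查询,taos-jdbcdriver 会自动关闭 rs1
ResultSet rs2 = stmt.executeQuery("select count(*) from test.weather");
// 此后应只使用 rs2,继续读取 rs1 将因其已被关闭而报错
```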
### JDBC-JNI和JDBC-RESTful的对比
<table >
<tr align="center"><th>对比项</th><th>JDBC-JNI</th><th>JDBC-RESTful</th></tr>
......@@ -51,33 +117,34 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致
注意:与 JNI 方式不同,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,RESTful 下所有对表名、超级表名的引用都需要指定数据库名前缀。
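也就是说,在 JDBC-RESTful 连接下应始终使用“库名.表名”的写法,示意如下(变量 restfulConn、库名 power、超级表 meters 均为假设):

```java
Statement stmt = restfulConn.createStatement();
// stmt.execute("use power");  // RESTful 下 USE 不生效,不能依赖它
ResultSet rs = stmt.executeQuery("select count(*) from power.meters"); // 正确:显式带上数据库名前缀
```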
### <a class="anchor" id="version"></a>TAOS-JDBCDriver 版本以及支持的 TDengine 版本和 JDK 版本
| taos-jdbcdriver 版本 | TDengine 版本 | JDK 版本 |
| -------------------- | ----------------- | -------- |
| 2.0.31 | 2.1.3.0 及以上 | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.2 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.1 | 1.6.1.x 及以上 | 1.8.x |
### TDengine DataType 和 Java DataType
TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:
| TDengine DataType | Java DataType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
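按照上表的对应关系,可以用 ResultSet 的各类 getXXX 方法取出对应的 Java 类型,下面是一个示意片段(假设已创建 Statement 对象 stmt,表 test.d1001 及其列为假设):

```java
ResultSet rs = stmt.executeQuery("select ts, current, voltage, phase from test.d1001");
while (rs.next()) {
    java.sql.Timestamp ts = rs.getTimestamp("ts"); // TIMESTAMP -> java.sql.Timestamp
    float current = rs.getFloat("current");        // FLOAT     -> java.lang.Float
    int voltage = rs.getInt("voltage");            // INT       -> java.lang.Integer
    int phase = rs.getInt("phase");                // INT       -> java.lang.Integer
    System.out.printf("%s, %.1f, %d, %d%n", ts, current, voltage, phase);
}
```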
### 获取连接
......@@ -112,12 +179,12 @@ Connection conn = DriverManager.getConnection(jdbcUrl);
**注意**:使用 JDBC-JNI 的 driver 时,taos-jdbcdriver 驱动包需要依赖系统对应的本地函数库。
* libtaos.so
Linux 系统中成功安装 TDengine 后,依赖的本地函数库 libtaos.so 文件会被自动拷贝至 /usr/lib/libtaos.so,该目录包含在 Linux 自动扫描路径上,无需单独指定。
* taos.dll
Windows 系统中安装完客户端之后,驱动包依赖的 taos.dll 文件会自动拷贝到系统默认搜索路径 C:/Windows/System32 下,同样无需要单独指定。
> 在 Windows 环境开发时需要安装 TDengine 对应的 [windows 客户端][14],Linux 服务器安装完 TDengine 之后默认已安装 client,也可以单独安装 [Linux 客户端][15] 连接远程 TDengine Server。
JDBC-JNI 的使用请参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1955.html)
......@@ -166,8 +233,7 @@ properties 中的配置参数如下:
#### 使用客户端配置文件建立连接
当使用 JDBC-JNI 连接 TDengine 集群时,可以使用客户端配置文件,在客户端配置文件中指定集群的 firstEp、secondEp参数。如下所示:
1. 在 Java 应用中不指定 hostname 和 port
......@@ -214,7 +280,7 @@ TDengine 中,只要保证 firstEp 和 secondEp 中一个节点有效,就可
例如:在 url 中指定了 password 为 taosdata,在 Properties 中指定了 password 为 taosdemo,那么,JDBC 会使用 url 中的 password 建立连接。
> 更多详细配置请参考[客户端配置](https://www.taosdata.com/cn/documentation/administrator/#client)
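下面的示意片段演示了 url 与 Properties 同时指定同名参数时的优先级规则(属性键名以实际使用的 taos-jdbcdriver 版本文档为准,仅为说明用途):

```java
Properties props = new Properties();
props.setProperty("user", "root");
props.setProperty("password", "taosdemo");          // Properties 中指定的 password
String url = "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata"; // url 中也指定了 password
// 两处同时指定时以 url 为准,最终使用 taosdata 建立连接
Connection conn = DriverManager.getConnection(url, props);
```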
### 创建数据库和表
......@@ -242,8 +308,8 @@ int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now
System.out.println("insert " + affectedRows + " rows.");
```
> now 为系统内部函数,默认为客户端所在计算机当前时间。
> `now + 1s` 代表客户端当前时间往后加 1 秒,数字后面代表时间单位:a(毫秒),s(秒),m(分),h(小时),d(天),w(周),n(月),y(年)。
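下面的示意片段在一条 insert 语句中混用了不同的时间单位(表 tb 的结构沿用上文示例,即“时间戳 + 整型 + 浮点型”三列):

```java
// now + 500a 表示当前时间加 500 毫秒,now + 1m 表示加 1 分钟
int affectedRows = stmt.executeUpdate(
        "insert into tb values (now, 21, 10.1) (now + 500a, 22, 10.2) (now + 1m, 23, 10.3)");
System.out.println("insert " + affectedRows + " rows.");
```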
### 查询数据
......@@ -464,7 +530,7 @@ conn.close();
```
> 通过 HikariDataSource.getConnection() 获取连接后,使用完成后需要调用 close() 方法,实际上它并不会关闭连接,只是放回连接池中。
> 更多 HikariCP 使用问题请查看[官方说明](https://github.com/brettwooldridge/HikariCP)
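下面的示意片段用 try-with-resources 自动调用 close()(假设名为 hikariDataSource 的数据源已按上文方式创建,变量名仅为示意),连接会被归还到连接池而不是真正断开:

```java
try (Connection connection = hikariDataSource.getConnection();   // 从连接池借出连接
     Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery("select server_status()")) {
    while (rs.next()) {
        System.out.println("server_status: " + rs.getInt(1));
    }
} // try 结束时自动调用 close(),连接被放回连接池
```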
**Druid**
......@@ -505,13 +571,13 @@ public static void main(String[] args) throws Exception {
}
```
> 更多 druid 使用问题请查看[官方说明](https://github.com/alibaba/druid)
**注意事项**
* TDengine `v1.6.4.1` 版本开始提供了一个专门用于心跳检测的函数 `select server_status()`,所以在使用连接池时推荐使用 `select server_status()` 进行 Validation Query。
如下所示,`select server_status()` 执行成功会返回 `1`
```sql
taos> select server_status();
server_status()|
================
......@@ -521,43 +587,10 @@ Query OK, 1 row(s) in set (0.000141s)
## 在框架中使用
* Spring JdbcTemplate 中使用 taos-jdbcdriver,可参考 [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/SpringJdbcTemplate)
* Springboot + Mybatis 中使用,可参考 [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/springbootdemo)
......@@ -567,7 +600,7 @@ TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对
**原因**:程序没有找到依赖的本地函数库 taos。
**解决方法**windows 下可以将 C:\TDengine\driver\taos.dll 拷贝到 C:\Windows\System32\ 目录下,linux 下将建立如下软链 `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` 即可。
**解决方法**Windows 下可以将 C:\TDengine\driver\taos.dll 拷贝到 C:\Windows\System32\ 目录下,Linux 下将建立如下软链 `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` 即可。
* java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
......@@ -575,21 +608,5 @@ TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对
**解决方法**:重新安装 64 位 JDK。
[1]: https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver
[2]: https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver
[3]: https://github.com/taosdata/TDengine
[4]: https://www.taosdata.com/blog/2019/12/03/jdbcdriver%e6%89%be%e4%b8%8d%e5%88%b0%e5%8a%a8%e6%80%81%e9%93%be%e6%8e%a5%e5%ba%93/
[5]: https://github.com/brettwooldridge/HikariCP
[6]: https://github.com/alibaba/druid
[7]: https://github.com/taosdata/TDengine/issues
[8]: https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver
[9]: https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver
[10]: https://maven.aliyun.com/mvn/search
[11]: https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/SpringJdbcTemplate
[12]: https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/springbootdemo
[13]: https://www.taosdata.com/cn/documentation/administrator/#client
[14]: https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client
[15]: https://www.taosdata.com/cn/getting-started/#%E5%AE%A2%E6%88%B7%E7%AB%AF
* 其它问题请参考 [Issues](https://github.com/taosdata/TDengine/issues)
......@@ -23,7 +23,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
* 在没有安装TDengine服务端软件的系统中使用连接器(除RESTful外)访问 TDengine 数据库,需要安装相应版本的客户端安装包来使应用驱动(Linux系统中文件名为libtaos.so,Windows系统中为taos.dll)被安装在系统中,否则会产生无法找到相应库文件的错误。
* 所有执行 SQL 语句的 API,例如 C/C++ Connector 中的 `taos_query`、`taos_query_a`、`taos_subscribe` 等,以及其它语言中与它们对应的API,每次都只能执行一条 SQL 语句,如果实际参数中包含了多条语句,它们的行为是未定义的。
* 升级 TDengine 到 2.0.8.0 版本的用户,必须更新 JDBC。连接 TDengine 必须升级 taos-jdbcdriver 到 2.0.12 及以上。详细的版本依赖关系请参见 [taos-jdbcdriver 文档](https://www.taosdata.com/cn/documentation/connector/java#version)
* 无论选用何种编程语言的连接器,2.0 及以上版本的 TDengine 推荐数据库应用的每个线程都建立一个独立的连接,或基于线程建立连接池,以避免连接内的“USE statement”状态量在线程之间相互干扰(但连接的查询和写入操作都是线程安全的)。
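以 Java 连接器为例,一个简单的做法是用线程本地变量为每个线程维护独立连接,示意如下(URL 中的地址与账号均为假设,需引入 java.sql 相关类):

```java
ThreadLocal<Connection> threadLocalConn = ThreadLocal.withInitial(() -> {
    try {
        // 每个线程首次使用时建立属于自己的连接,避免 "USE db" 等状态在线程间互相干扰
        return DriverManager.getConnection(
                "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata");
    } catch (SQLException e) {
        throw new RuntimeException(e);
    }
});
```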
## <a class="anchor" id="driver"></a>安装连接器驱动步骤
......@@ -32,7 +32,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
**Linux**
**1. 从[涛思官网](https://www.taosdata.com/cn/all-downloads/)下载<!-- REPLACE_OPEN_TO_ENTERPRISE__YOU_CAN_GET_CONNECTOR_DRIVER_FROM_TAOS -->:**
* X64硬件环境:TDengine-client-2.x.x.x-Linux-x64.tar.gz
......@@ -46,7 +46,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
`tar -xzvf TDengine-client-xxxxxxxxx.tar.gz`
其中xxxxxxxxx需要替换为实际版本的字符串。
**3. 执行安装脚本**
......@@ -68,7 +68,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
**Windows x64/x86**
**1. 从[涛思官网](https://www.taosdata.com/cn/all-downloads/)下载<!-- REPLACE_OPEN_TO_ENTERPRISE__YOU_CAN_GET_CONNECTOR_DRIVER_WINDOWS_FROM_TAOS -->:**
* X64硬件环境:TDengine-client-2.X.X.X-Windows-x64.exe
......@@ -99,13 +99,13 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
**2.卸载:运行unins000.exe可卸载TDengine应用驱动。**
### 安装验证
以上安装和配置完成后,并确认TDengine服务已经正常启动运行,此时可以执行taos客户端进行登录。
**Linux环境:**
Linux shell下直接执行 taos,应该就能正常连接到TDengine服务,进入到taos shell界面,示例如下:
```mysql
$ taos
......@@ -146,7 +146,10 @@ taos>
| **OS类型** | Linux | Win64 | Win32 | Linux | Linux |
| **支持与否** | **支持** | **支持** | **支持** | **支持** | **开发中** |
C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine头文件 *taos.h*,里面列出了提供的API的函数原型。安装后,taos.h位于:
- Linux:`/usr/local/taos/include`
- Windows:`C:\TDengine\include`
```C
#include <taos.h>
......@@ -156,9 +159,22 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
* 在编译时需要链接TDengine动态库。Linux 为 *libtaos.so* ,安装后,位于 _/usr/local/taos/driver_。Windows为 taos.dll,安装后位于 *C:\TDengine*
* 如未特别说明,当API的返回值是整数时,_0_ 代表成功,其它是代表失败原因的错误码,当返回值是指针时, _NULL_ 表示失败。
* 在 taoserror.h中有所有的错误码,以及对应的原因描述。
### 示例程序
使用C/C++连接器的示例代码请参见 https://github.com/taosdata/TDengine/tree/develop/tests/examples/c 。
示例程序源码也可以在安装目录下的 examples/c 路径下找到:
**apitest.c、asyncdemo.c、demo.c、prepare.c、stream.c、subscribe.c**
该目录下有makefile,在Linux环境下,直接执行make就可以编译得到执行文件。
在一台机器上启动TDengine服务,执行这些示例程序,按照提示输入TDengine服务的FQDN,就可以正常运行,并打印出信息。
**提示:**在ARM环境下编译时,请将makefile中的-msse4.2打开,这个选项只有在x64/x86硬件平台上才能支持。
### 基础API
基础API用于完成创建数据库连接等工作,为其它API的执行提供运行时环境。
......@@ -187,7 +203,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
- user:用户名
- pass:密码
- db:数据库名字,如果用户没有提供,也可以正常连接,用户可以通过该连接创建新的数据库,如果用户提供了数据库名字,则说明该数据库用户已经创建好,缺省使用该数据库
- port:TDengine管理主节点的端口号
返回值为空表示失败。应用程序需要保存返回的参数,以便后续API调用。
......@@ -201,7 +217,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
- `void taos_close(TAOS *taos)`
关闭连接其中`taos``taos_connect`函数返回的指针。
### 同步查询API
......@@ -237,13 +253,13 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
- `TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)`
获取查询结果集每列数据的属性(列的名称、列的数据类型、列的长度),与taos_num_fields配合使用,可用来解析`taos_fetch_row`返回的一个元组(一行)的数据。 `TAOS_FIELD` 的结构如下:
```c
typedef struct taosField {
char name[65]; // 列名
uint8_t type; // 数据类型
int16_t bytes; // 长度,单位是字节
} TAOS_FIELD;
```
......@@ -440,18 +456,30 @@ TDengine提供时间驱动的实时流式计算API。可以每隔一指定的时
取消订阅。 如参数 `keepProgress` 不为0,API会保留订阅的进度信息,后续调用 `taos_subscribe` 时可以基于此进度继续;否则将删除进度信息,后续只能重新开始读取数据。
<!-- REPLACE_OPEN_TO_ENTERPRISE__JAVA_CONNECTOR_DOC -->
## <a class="anchor" id="python"></a>Python Connector
Python连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1963.html)
### 安装准备
* 应用驱动安装请参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)
* 已安装python 2.7 or >= 3.4
* 已安装pip 或 pip3
**安装**:参见下面具体步骤
**示例程序**:位于install_directory/examples/python
### 安装
Python连接器支持的系统有:Linux 64/Windows x64
安装前准备:
### Python客户端安装
- 已安装好TDengine应用驱动,请参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)
- 已安装python 2.7 or >= 3.4
- 已安装pip
### Python连接器安装
**Linux**
用户可以在源代码的src/connector/python(或者tar.gz的/connector/python)文件夹下找到connector安装包。用户可以通过pip命令安装:
......@@ -461,9 +489,10 @@ Python连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/
`pip3 install src/connector/python/`
**Windows**
在已安装Windows TDengine 客户端的情况下, 将文件"C:\TDengine\driver\taos.dll" 拷贝到 "C:\Windows\system32" 目录下, 然后进入Windows *cmd* 命令行界面
```bash
cd C:\TDengine\connector\python
python -m pip install .
```
......@@ -471,7 +500,39 @@ python -m pip install .
* 如果机器上没有pip命令,用户可将src/connector/python下的taos文件夹拷贝到应用程序的目录使用。
对于Windows 客户端,安装TDengine Windows 客户端后,将C:\TDengine\driver\taos.dll拷贝到C:\Windows\System32目录下即可。
### 示例程序
示例程序源码位于install_directory/examples/Python,有:
**read_example.py Python示例源程序**
用户可以参考read_example.py这个程序来设计用户自己的写入、查询程序。
在安装了对应的应用驱动后,通过import taos引入taos类。主要步骤如下:
- 通过taos.connect获取TDengineConnection对象,这个对象可以一个程序只申请一个,在多线程中共享。
- 通过TDengineConnection对象的 .cursor()方法获取一个新的游标对象,这个游标对象必须保证每个线程独享。
- 通过游标对象的execute()方法,执行写入或查询的SQL语句
- 如果执行的是写入语句,execute返回的是成功写入的行数信息affected rows
- 如果执行的是查询语句,则execute执行成功后,需要通过fetchall方法去拉取结果集。 具体方法可以参考示例代码。
### 安装验证
运行如下指令:
```bash
cd {install_directory}/examples/python/PYTHONConnectorChecker
python3 PythonChecker.py -host <fqdn>
```
验证通过将打印出成功信息。
### Python连接器的使用
#### 代码示例
......@@ -486,7 +547,7 @@ import taos
conn = taos.connect(host="127.0.0.1", user="root", password="taosdata", config="/etc/taos")
c1 = conn.cursor()
```
* *host* 是TDengine 服务端所在IP,*config* 为客户端配置文件所在目录
* 写入数据
......@@ -599,18 +660,46 @@ conn.close()
注意:与标准连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。
### 安装
RESTful接口不依赖于任何TDengine的库,因此客户端不需要安装任何TDengine的库,只要客户端的开发语言支持HTTP协议即可。
### 验证
在已经安装TDengine服务器端的情况下,可以按照如下方式进行验证。
下面以Ubuntu环境中使用curl工具(请确认已经安装)为例,验证RESTful接口是否正常工作。
下面示例是列出所有的数据库,请把h1.taosdata.com和6041(缺省值)替换为实际运行的TDengine服务fqdn和端口号:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
返回值结果如下表示验证通过:
```json
{
"status": "succ",
"head": ["name","created_time","ntables","vgroups","replica","quorum","days","keep1,keep2,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","precision","status"],
"data": [
["log","2020-09-02 17:23:00.039",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,"us","ready"],
],
"rows": 1
}
```
### RESTful连接器的使用
#### HTTP请求格式
```
http://<fqdn>:<port>/rest/sql
```
参数说明:
- fqdn: 集群中的任一台主机的FQDN或IP地址
- port: 配置文件中httpPort配置项,缺省为6041
例如:http://h1.taos.com:6041/rest/sql 是指向地址为 h1.taos.com:6041 的 URL。
HTTP请求的Header里需带有身份认证信息,TDengine支持Basic认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。
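只要开发语言支持 HTTP,就可以直接调用该接口。下面以 Java 标准库为例给出一个示意程序(主机名 h1.taosdata.com、端口 6041 为假设;Basic 认证串即上文 curl 示例中的 cm9vdDp0YW9zZGF0YQ==):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class RestSqlDemo {
    public static void main(String[] args) throws Exception {
        // 对 "用户名:密码" 做 Base64 编码,得到 Basic 认证串
        String token = Base64.getEncoder()
                .encodeToString("root:taosdata".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://h1.taosdata.com:6041/rest/sql").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + token);
        conn.setDoOutput(true);

        // 请求体就是一条 SQL 语句
        try (OutputStream out = conn.getOutputStream()) {
            out.write("show databases;".getBytes(StandardCharsets.UTF_8));
        }
        // 打印返回的 JSON 结果
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            while (sc.hasNextLine()) {
                System.out.println(sc.nextLine());
            }
        }
    }
}
```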
......@@ -662,8 +751,8 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
说明:
- status: 告知操作结果是成功还是失败。
- head: 表的定义,如果不返回结果集,则仅有一列“affected_rows”。(从 2.0.17.0 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。)
- column_meta: 从 2.0.17.0 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。
- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有[[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。
- rows: 表明总共多少行数据。
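作为参考,下面给出一段用 Gson 解析该返回结果的示意代码(假设工程中已引入 com.google.code.gson 依赖,变量 body 为上一步取得的 JSON 字符串):

```java
// 以下类均来自 com.google.gson 包(JsonParser / JsonObject / JsonArray / JsonElement)
JsonObject resp = JsonParser.parseString(body).getAsJsonObject();
System.out.println("status: " + resp.get("status").getAsString()); // succ 或 error

JsonArray columnMeta = resp.getAsJsonArray("column_meta"); // 每列依次为 [列名, 列类型, 类型长度]
JsonArray data = resp.getAsJsonArray("data");              // 逐行数据,列顺序与 column_meta 一致

for (JsonElement rowElement : data) {
    JsonArray row = rowElement.getAsJsonArray();
    for (int i = 0; i < row.size(); i++) {
        String colName = columnMeta.get(i).getAsJsonArray().get(0).getAsString();
        System.out.print(colName + "=" + row.get(i) + "  ");
    }
    System.out.println();
}
```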
......@@ -684,16 +773,16 @@ column_meta 中的列类型说明:
HTTP请求中需要带有授权码`<TOKEN>`,用于身份识别。授权码通常由管理员提供,可简单的通过发送`HTTP GET`请求来获取授权码,操作如下:
```bash
curl http://<fqdn>:<port>/rest/login/<username>/<password>
```
其中,`fqdn` 是 TDengine 数据库的 FQDN 或 IP 地址,`port` 是 TDengine 服务的端口号,`username`为数据库用户名,`password`为数据库密码,返回值为`JSON`格式,各字段含义如下:
- status:请求结果的标志位
- code:返回值代码
- desc授权码
获取授权码示例:
......@@ -802,10 +891,10 @@ HTTP请求URL采用`sqlutc`时,返回结果集的时间戳将采用UTC时间
下面仅列出一些与RESTful接口有关的配置参数,其他系统参数请看配置文件里的说明。注意:配置修改后,需要重启taosd服务才能生效
- httpPort: 对外提供RESTful服务的端口号,默认绑定到 6041(实际取值是 serverPort + 11,因此可以通过修改 serverPort 参数的设置来修改)
- httpMaxThreads: 启动的线程数量,默认为2(2.0.17.0版本开始,默认值改为CPU核数的一半向下取整)
- restfulRowLimit: 返回结果集(JSON格式)的最大条数,默认值为10240
- httpEnableCompress: 是否支持压缩,默认不支持,目前TDengine仅支持gzip压缩格式
- httpDebugFlag: 日志开关,默认131。131:仅错误和报警信息,135:调试信息,143:非常详细的调试信息
## <a class="anchor" id="csharp"></a>CSharp Connector
......@@ -817,6 +906,12 @@ C#连接器支持的系统有:Linux 64/Windows x64/Windows x86
* .NET接口文件TDengineDrivercs.cs和参考程序示例TDengineTest.cs均位于Windows客户端install_directory/examples/C#目录下。
* 在Windows系统上,C#应用程序可以使用TDengine的原生C接口来执行所有数据库操作,后续版本将提供ORM(Dapper)框架驱动。
### 示例程序
示例程序源码位于install_directory/examples/C#,有:
TDengineTest.cs C#示例源程序
### 安装验证
运行install_directory/examples/C#/C#Checker/C#Checker.exe
......@@ -856,32 +951,52 @@ https://www.taosdata.com/blog/2020/11/02/1901.html
### 安装准备
Go连接器支持的系统有:
| **CPU类型** | x64(64bit) | | | aarch64 | aarch32 |
| --------------- | ------------ | -------- | -------- | -------- | ---------- |
| **OS类型** | Linux | Win64 | Win32 | Linux | Linux |
| **支持与否** | **支持** | **支持** | **支持** | **支持** | **开发中** |
安装前准备:
- 已安装好TDengine应用驱动,参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)
### 示例程序
使用 Go 连接器的示例代码请参考 https://github.com/taosdata/TDengine/tree/develop/tests/examples/go 以及[视频教程](https://www.taosdata.com/blog/2020/11/11/1951.html)
示例程序源码也位于安装目录下的 examples/go/taosdemo.go 文件中
**提示:建议Go版本是1.13及以上,并开启模块支持:**
```sh
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.io,direct
```
在taosdemo.go所在目录下进行编译和执行:
```sh
go mod init *demo*
go build
./demo -h fqdn -p serverPort
```
### Go连接器的使用
TDengine提供了GO驱动程序包`taosSql`.`taosSql`实现了GO语言的内置接口`database/sql/driver`。用户只需按如下方式引入包就可以在应用程序中访问TDengine。
```go
import (
"database/sql"
_ "github.com/taosdata/driver-go/taosSql"
)
```
**提示**:下划线与双引号之间必须有一个空格。
### 常用API
- `sql.Open(DRIVER_NAME string, dataSourceName string) *DB`
该API用来打开DB,返回一个类型为\*DB的对象,一般情况下,DRIVER_NAME设置为字符串`taosSql`, dataSourceName设置为字符串`user:password@/tcp(host:port)/dbname`,如果客户想要用多个goroutine并发访问TDengine, 那么需要在各个goroutine中分别创建一个sql.Open对象并用之访问TDengine
该API用来打开DB,返回一个类型为\*DB的对象,一般情况下,DRIVER_NAME设置为字符串`taosSql`dataSourceName设置为字符串`user:password@/tcp(host:port)/dbname`,如果客户想要用多个goroutine并发访问TDengine, 那么需要在各个goroutine中分别创建一个sql.Open对象并用之访问TDengine
**注意**: 该API成功创建的时候,并没有做权限等检查,只有在真正执行Query或者Exec的时候才能真正的去创建连接,并同时检查user/password/host/port是不是合法。 另外,由于整个驱动程序大部分实现都下沉到taosSql所依赖的libtaos中。所以,sql.Open本身特别轻量。
**注意**: 该API成功创建的时候,并没有做权限等检查,只有在真正执行Query或者Exec的时候才能真正的去创建连接,并同时检查user/password/host/port是不是合法。另外,由于整个驱动程序大部分实现都下沉到taosSql所依赖的libtaos动态库中。所以,sql.Open本身特别轻量。
- `func (db *DB) Exec(query string, args ...interface{}) (Result, error)`
......@@ -957,12 +1072,12 @@ npm install td2.0-connector
手动安装以下工具:
- 安装Visual Studio相关:[Visual Studio Build 工具](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) 或者 [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
- 安装 [Python](https://www.python.org/downloads/) 2.7(`v3.x.x` 暂不支持) 并执行 `npm config set python python2.7`
- 进入`cmd`命令行界面`npm config set msvs_version 2017`
如果以上步骤不能成功执行可以参考微软的node.js用户手册[Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules)
如果在Windows 10 ARM 上使用ARM64 Node.js,还需添加 "Visual C++ compilers and libraries for ARM64" 和 "Visual C++ ATL for ARM64"。
### 示例程序
......@@ -979,7 +1094,7 @@ Node-example-raw.js
1. 新建安装验证目录,例如:`~/tdengine-test`,拷贝github上nodejsChecker.js源程序。下载地址:(https://github.com/taosdata/TDengine/tree/develop/tests/examples/nodejs/nodejsChecker.js)。
2. 在命令中执行以下命令:
```bash
npm init -y
......@@ -991,23 +1106,19 @@ node nodejsChecker.js host=localhost
### Node.js连接器的使用
以下是Node.js 连接器的一些基本使用方法,详细的使用方法可参考[TDengine Node.js connector](http://docs.taosdata.com/node)
#### 建立连接
使用node.js连接器时,必须先`require td2.0-connector`,然后使用 `taos.connect` 函数建立到服务端的连接。例如如下代码:
```javascript
const taos = require('td2.0-connector');
var conn = taos.connect({host:"127.0.0.1", user:"root", password:"taosdata", config:"/etc/taos",port:0})
var conn = taos.connect({host:"taosdemo.com", user:"root", password:"taosdata", config:"/etc/taos",port:6030})
var cursor = conn.cursor(); // Initializing a new cursor
```
建立了一个到hostname为taosdemo.com,端口为6030(TDengine的默认端口号)的连接。连接指定了用户名(root)和密码(taosdata)。taos.connect 函数必须提供的参数是`host`,其它参数在没有提供的情况下会使用如下的默认值。taos.connect返回了`cursor` 对象,使用cursor来执行SQL语句。
#### 执行SQL和插入数据
......@@ -1061,6 +1172,14 @@ promise.then(function(result) {
result.pretty();
})
```
#### 关闭连接
在完成插入、查询等操作后,要关闭连接。代码如下:
```js
conn.close();
```
#### 异步函数
异步查询数据库的操作和上面类似,只需要在`cursor.execute`, `TaosQuery.execute`等函数后面加上`_a`
......
......@@ -23,7 +23,7 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tde
#### 配置数据源
用户可以直接通过 localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示:
![img](page://images/connections/add_datasource1.jpg)
......@@ -35,7 +35,7 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tde
![img](page://images/connections/add_datasource3.jpg)
* Host: TDengine 集群的中任意一台服务器的 IP 地址与 TDengine RESTful 接口的端口号(6041),默认 http://localhost:6041
* User:TDengine 用户名。
* Password:TDengine 用户密码。
......@@ -64,7 +64,7 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tde
#### 导入 Dashboard
在 Grafana 插件目录 /usr/local/taos/connector/grafanaplugin/dashboard 下提供了一个 `tdengine-grafana.json` 可导入的 dashboard。
点击左侧 `Import` 按钮,并上传 `tdengine-grafana.json` 文件:
......@@ -140,13 +140,13 @@ conn<-dbConnect(drv,"jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata","ro
- dbWriteTable(conn, "test", iris, overwrite=FALSE, append=TRUE):将数据框iris写入表test中,overwrite必须设置为false,append必须设为TRUE,且数据框iris要与表test的结构一致。
- dbGetQuery(conn, "select count(*) from test"):查询语句
- dbSendUpdate(conn, "use db"):执行任何非查询sql语句。例如dbSendUpdate(conn, "use db"), 写入数据dbSendUpdate(conn, "insert into t1 values(now, 99)")等。
- dbReadTable(conn, "test"):读取表test中数据
- dbDisconnect(conn):关闭连接
- dbRemoveTable(conn, "test"):删除表test
TDengine客户端暂不支持如下函数:
- dbExistsTable(conn, "test"):是否存在表test
- dbListTables(conn):显示连接中的所有表
......@@ -6,7 +6,7 @@ TDengine can quickly integrate with [Grafana](https://www.grafana.com/), an open
### Install Grafana
TDengine currently supports Grafana 6.2 and above. You can download and install the package from Grafana website according to the current operating system. The download address is as follows:
https://grafana.com/grafana/download.
......@@ -64,7 +64,7 @@ According to the default prompt, query the average system memory usage at the sp
#### Import Dashboard
A `tdengine-grafana.json` importable dashboard is provided under the Grafana plug-in directory `/usr/local/taos/connector/grafanaplugin/dashboard`.
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
......
......@@ -821,17 +821,22 @@ static void doSetupSDataBlock(SSqlRes* pRes, SSDataBlock* pBlock, SFilterInfo* p
// filter data if needed
if (pFilterInfo) {
//doSetFilterColumnInfo(pFilterInfo, numOfFilterCols, pBlock);
doSetFilterColInfo(pFilterInfo, pBlock);
filterSetColFieldData(pFilterInfo, pBlock->info.numOfCols, pBlock->pDataBlock);
bool gotNchar = false;
filterConverNcharColumns(pFilterInfo, pBlock->info.rows, &gotNchar);
int8_t* p = calloc(pBlock->info.rows, sizeof(int8_t));
int8_t* p = NULL;
//bool all = doFilterDataBlock(pFilterInfo, numOfFilterCols, pBlock->info.rows, p);
bool all = filterExecute(pFilterInfo, pBlock->info.rows, p);
bool all = filterExecute(pFilterInfo, pBlock->info.rows, &p, NULL, 0);
if (gotNchar) {
filterFreeNcharColumns(pFilterInfo);
}
if (!all) {
doCompactSDataBlock(pBlock, pBlock->info.rows, p);
if (p) {
doCompactSDataBlock(pBlock, pBlock->info.rows, p);
} else {
pBlock->info.rows = 0;
pBlock->pBlockStatis = NULL; // clean the block statistics info
}
}
tfree(p);
......
......@@ -32,7 +32,7 @@ typedef enum {
typedef struct {
int8_t msgType;
int8_t sver;
int8_t sver; // sver 2 for WAL SDataRow/SMemRow compatibility
int8_t reserved[2];
int32_t len;
uint64_t version;
......
......@@ -56,6 +56,8 @@ typedef struct SShellArguments {
int abort;
int port;
int pktLen;
int pktNum;
char* pktType;
char* netTestRole;
} SShellArguments;
......
......@@ -50,6 +50,8 @@ static struct argp_option options[] = {
{"timezone", 't', "TIMEZONE", 0, "Time zone of the shell, default is local."},
{"netrole", 'n', "NETROLE", 0, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync."},
{"pktlen", 'l', "PKTLEN", 0, "Packet length used for net test, default is 1000 bytes."},
{"pktnum", 'N', "PKTNUM", 0, "Packet numbers used for net test, default is 100."},
{"pkttype", 'S', "PKTTYPE", 0, "Packet type used for net test, default is TCP."},
{0}};
static error_t parse_opt(int key, char *arg, struct argp_state *state) {
......@@ -146,6 +148,17 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
return -1;
}
break;
case 'N':
if (arg) {
arguments->pktNum = atoi(arg);
} else {
fprintf(stderr, "Invalid packet number\n");
return -1;
}
break;
case 'S':
arguments->pktType = arg;
break;
case OPT_ABORT:
arguments->abort = 1;
break;
......
......@@ -85,6 +85,8 @@ SShellArguments args = {
.threadNum = 5,
.commands = NULL,
.pktLen = 1000,
.pktNum = 100,
.pktType = "TCP",
.netTestRole = NULL
};
......@@ -118,7 +120,7 @@ int main(int argc, char* argv[]) {
printf("Failed to init taos");
exit(EXIT_FAILURE);
}
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen);
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen, args.pktNum, args.pktType);
exit(0);
}
......
......@@ -58,6 +58,10 @@ void printHelp() {
printf("%s%s%s\n", indent, indent, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync.");
printf("%s%s\n", indent, "-l");
printf("%s%s%s\n", indent, indent, "Packet length used for net test, default is 1000 bytes.");
printf("%s%s\n", indent, "-N");
printf("%s%s%s\n", indent, indent, "Packet numbers used for net test, default is 100.");
printf("%s%s\n", indent, "-S");
printf("%s%s%s\n", indent, indent, "Packet type used for net test, default is TCP.");
printf("%s%s\n", indent, "-V");
printf("%s%s%s\n", indent, indent, "Print program version.");
......@@ -184,6 +188,22 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
exit(EXIT_FAILURE);
}
}
else if (strcmp(argv[i], "-N") == 0) {
if (i < argc - 1) {
arguments->pktNum = atoi(argv[++i]);
} else {
fprintf(stderr, "option -N requires an argument\n");
exit(EXIT_FAILURE);
}
}
else if (strcmp(argv[i], "-S") == 0) {
if (i < argc - 1) {
arguments->pktType = argv[++i];
} else {
fprintf(stderr, "option -S requires an argument\n");
exit(EXIT_FAILURE);
}
}
else if (strcmp(argv[i], "-V") == 0) {
printVersion();
exit(EXIT_SUCCESS);
......
......@@ -602,7 +602,6 @@ SSDataBlock* doSLimit(void* param, bool* newgroup);
int32_t doCreateFilterInfo(SColumnInfo* pCols, int32_t numOfCols, int32_t numOfFilterCols, SSingleColumnFilterInfo** pFilterInfo, uint64_t qId);
void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, SSDataBlock* pBlock);
void doSetFilterColInfo(SFilterInfo *pFilters, SSDataBlock* pBlock);
bool doFilterDataBlock(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols, int32_t numOfRows, int8_t* p);
void doCompactSDataBlock(SSDataBlock* pBlock, int32_t numOfRows, int8_t* p);
......
......@@ -31,10 +31,11 @@ extern "C" {
#define FILTER_DEFAULT_GROUP_UNIT_SIZE 2
#define FILTER_DUMMY_EMPTY_OPTR 127
#define FILTER_DUMMY_RANGE_OPTR 126
#define MAX_NUM_STR_SIZE 40
#define FILTER_RM_UNIT_MIN_ROWS 100
enum {
FLD_TYPE_COLUMN = 1,
FLD_TYPE_VALUE = 2,
......@@ -70,6 +71,12 @@ enum {
FI_STATUS_CLONED = 8,
};
enum {
FI_STATUS_BLK_ALL = 1,
FI_STATUS_BLK_EMPTY = 2,
FI_STATUS_BLK_ACTIVE = 4,
};
enum {
RANGE_TYPE_UNIT = 1,
RANGE_TYPE_VAR_HASH = 2,
......@@ -98,7 +105,7 @@ typedef struct SFilterColRange {
typedef bool (*rangeCompFunc) (const void *, const void *, const void *, const void *, __compar_fn_t);
typedef int32_t(*filter_desc_compare_func)(const void *, const void *);
typedef bool(*filter_exec_func)(void *, int32_t, int8_t*);
typedef bool(*filter_exec_func)(void *, int32_t, int8_t**, SDataStatis *, int16_t);
typedef struct SFilterRangeCompare {
int64_t s;
......@@ -197,6 +204,7 @@ typedef struct SFilterComUnit {
void *colData;
void *valData;
void *valData2;
uint16_t colId;
uint16_t dataSize;
uint8_t dataType;
uint8_t optr;
......@@ -224,7 +232,11 @@ typedef struct SFilterInfo {
uint8_t *unitRes; // result
uint8_t *unitFlags; // got result
SFilterRangeCtx **colRange;
filter_exec_func func;
filter_exec_func func;
uint8_t blkFlag;
uint16_t blkGroupNum;
uint16_t *blkUnits;
int8_t *blkUnitRes;
SFilterPCtx pctx;
} SFilterInfo;
......@@ -265,12 +277,13 @@ typedef struct SFilterInfo {
#define CHK_RET(c, r) do { if (c) { return r; } } while (0)
#define CHK_JMP(c) do { if (c) { goto _return; } } while (0)
#define CHK_LRETV(c,...) do { if (c) { qError(__VA_ARGS__); return; } } while (0)
#define CHK_LRET(c, r,...) do { if (c) { qError(__VA_ARGS__); return r; } } while (0)
#define CHK_LRET(c, r,...) do { if (c) { if (r) {qError(__VA_ARGS__); } else { qDebug(__VA_ARGS__); } return r; } } while (0)
#define FILTER_GET_FIELD(i, id) (&((i)->fields[(id).type].fields[(id).idx]))
#define FILTER_GET_COL_FIELD(i, idx) (&((i)->fields[FLD_TYPE_COLUMN].fields[idx]))
#define FILTER_GET_COL_FIELD_TYPE(fi) (((SSchema *)((fi)->desc))->type)
#define FILTER_GET_COL_FIELD_SIZE(fi) (((SSchema *)((fi)->desc))->bytes)
#define FILTER_GET_COL_FIELD_ID(fi) (((SSchema *)((fi)->desc))->colId)
#define FILTER_GET_COL_FIELD_DESC(fi) ((SSchema *)((fi)->desc))
#define FILTER_GET_COL_FIELD_DATA(fi, ri) ((char *)(fi)->data + ((SSchema *)((fi)->desc))->bytes * (ri))
#define FILTER_GET_VAL_FIELD_TYPE(fi) (((tVariant *)((fi)->desc))->nType)
......@@ -280,10 +293,12 @@ typedef struct SFilterInfo {
#define FILTER_GROUP_UNIT(i, g, uid) ((i)->units + (g)->unitIdxs[uid])
#define FILTER_UNIT_LEFT_FIELD(i, u) FILTER_GET_FIELD(i, (u)->left)
#define FILTER_UNIT_RIGHT_FIELD(i, u) FILTER_GET_FIELD(i, (u)->right)
#define FILTER_UNIT_RIGHT2_FIELD(i, u) FILTER_GET_FIELD(i, (u)->right2)
#define FILTER_UNIT_DATA_TYPE(u) ((u)->compare.type)
#define FILTER_UNIT_COL_DESC(i, u) FILTER_GET_COL_FIELD_DESC(FILTER_UNIT_LEFT_FIELD(i, u))
#define FILTER_UNIT_COL_DATA(i, u, ri) FILTER_GET_COL_FIELD_DATA(FILTER_UNIT_LEFT_FIELD(i, u), ri)
#define FILTER_UNIT_COL_SIZE(i, u) FILTER_GET_COL_FIELD_SIZE(FILTER_UNIT_LEFT_FIELD(i, u))
#define FILTER_UNIT_COL_ID(i, u) FILTER_GET_COL_FIELD_ID(FILTER_UNIT_LEFT_FIELD(i, u))
#define FILTER_UNIT_VAL_DATA(i, u) FILTER_GET_VAL_FIELD_DATA(FILTER_UNIT_RIGHT_FIELD(i, u))
#define FILTER_UNIT_COL_IDX(u) ((u)->left.idx)
#define FILTER_UNIT_OPTR(u) ((u)->compare.optr)
......@@ -309,8 +324,8 @@ typedef struct SFilterInfo {
extern int32_t filterInitFromTree(tExprNode* tree, SFilterInfo **pinfo, uint32_t options);
extern bool filterExecute(SFilterInfo *info, int32_t numOfRows, int8_t* p);
extern int32_t filterSetColFieldData(SFilterInfo *info, int16_t colId, void *data);
extern bool filterExecute(SFilterInfo *info, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols);
extern int32_t filterSetColFieldData(SFilterInfo *info, int32_t numOfCols, SArray* pDataBlock);
extern int32_t filterGetTimeRange(SFilterInfo *info, STimeWindow *win);
extern int32_t filterConverNcharColumns(SFilterInfo* pFilterInfo, int32_t rows, bool *gotNchar);
extern int32_t filterFreeNcharColumns(SFilterInfo* pFilterInfo);
......
......@@ -2951,12 +2951,13 @@ void filterRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSingleColumnFilterInf
void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock, bool ascQuery) {
int32_t numOfRows = pBlock->info.rows;
int8_t *p = calloc(numOfRows, sizeof(int8_t));
int8_t *p = NULL;
bool all = true;
if (pRuntimeEnv->pTsBuf != NULL) {
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0);
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0);
p = calloc(numOfRows, sizeof(int8_t));
TSKEY* k = (TSKEY*) pColInfoData->pData;
for (int32_t i = 0; i < numOfRows; ++i) {
int32_t offset = ascQuery? i:(numOfRows - i - 1);
......@@ -2979,11 +2980,16 @@ void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock
// save the cursor status
pRuntimeEnv->current->cur = tsBufGetCursor(pRuntimeEnv->pTsBuf);
} else {
all = filterExecute(pRuntimeEnv->pQueryAttr->pFilters, numOfRows, p);
all = filterExecute(pRuntimeEnv->pQueryAttr->pFilters, numOfRows, &p, pBlock->pBlockStatis, pRuntimeEnv->pQueryAttr->numOfCols);
}
if (!all) {
doCompactSDataBlock(pBlock, numOfRows, p);
if (p) {
doCompactSDataBlock(pBlock, numOfRows, p);
} else {
pBlock->info.rows = 0;
pBlock->pBlockStatis = NULL; // clean the block statistics info
}
}
tfree(p);
......@@ -3032,15 +3038,6 @@ void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFi
}
}
void doSetFilterColInfo(SFilterInfo * pFilters, SSDataBlock* pBlock) {
for (int32_t j = 0; j < pBlock->info.numOfCols; ++j) {
SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, j);
filterSetColFieldData(pFilters, pColInfo->info.colId, pColInfo->pData);
}
}
int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTableScanInfo, SSDataBlock* pBlock,
uint32_t* status) {
*status = BLK_DATA_NO_NEEDED;
......@@ -3195,7 +3192,7 @@ int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTa
}
if (pQueryAttr->pFilters != NULL) {
doSetFilterColInfo(pQueryAttr->pFilters, pBlock);
filterSetColFieldData(pQueryAttr->pFilters, pBlock->info.numOfCols, pBlock->pDataBlock);
}
if (pQueryAttr->pFilters != NULL || pRuntimeEnv->pTsBuf != NULL) {
......
......@@ -157,7 +157,7 @@ __compar_fn_t gDataCompare[] = {compareInt32Val, compareInt8Val, compareInt16Val
compareDoubleVal, compareLenPrefixedStr, compareStrPatternComp, compareFindItemInSet, compareWStrPatternComp,
compareLenPrefixedWStr, compareUint8Val, compareUint16Val, compareUint32Val, compareUint64Val,
setCompareBytes1, setCompareBytes2, setCompareBytes4, setCompareBytes8
};
};
int8_t filterGetCompFuncIdx(int32_t type, int32_t optr) {
int8_t comparFn = 0;
......@@ -1521,7 +1521,7 @@ void filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t options)
int32_t type = FILTER_UNIT_DATA_TYPE(unit);
int32_t len = 0;
int32_t tlen = 0;
char str[256] = {0};
char str[512] = {0};
SFilterField *left = FILTER_UNIT_LEFT_FIELD(info, unit);
SSchema *sch = left->desc;
......@@ -1539,6 +1539,24 @@ void filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t options)
strcat(str, "NULL");
}
strcat(str, "]");
if (unit->compare.optr2) {
strcat(str, " && ");
sprintf(str + strlen(str), "[%d][%s] %s [", sch->colId, sch->name, gOptrStr[unit->compare.optr2].str);
if (unit->right2.type == FLD_TYPE_VALUE && FILTER_UNIT_OPTR(unit) != TSDB_RELATION_IN) {
SFilterField *right = FILTER_UNIT_RIGHT2_FIELD(info, unit);
char *data = right->data;
if (IS_VAR_DATA_TYPE(type)) {
tlen = varDataLen(data);
data += VARSTR_HEADER_SIZE;
}
converToStr(str + strlen(str), type, data, tlen > 32 ? 32 : tlen, &tlen);
} else {
strcat(str, "NULL");
}
strcat(str, "]");
}
qDebug("%s", str); //TODO
}
......@@ -1556,37 +1574,63 @@ void filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t options)
return;
}
qDebug("%s - RANGE info:", msg);
qDebug("RANGE Num:%u", info->colRangeNum);
for (uint16_t i = 0; i < info->colRangeNum; ++i) {
SFilterRangeCtx *ctx = info->colRange[i];
qDebug("Column ID[%d] RANGE: isnull[%d],notnull[%d],range[%d]", ctx->colId, ctx->isnull, ctx->notnull, ctx->isrange);
if (ctx->isrange) {
SFilterRangeNode *r = ctx->rs;
while (r) {
char str[256] = {0};
int32_t tlen = 0;
if (FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_NULL)) {
strcat(str,"(NULL)");
} else {
FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str,"(") : strcat(str,"[");
converToStr(str + strlen(str), ctx->type, &r->ra.s, tlen > 32 ? 32 : tlen, &tlen);
FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str,")") : strcat(str,"]");
}
strcat(str, " - ");
if (FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_NULL)) {
strcat(str, "(NULL)");
} else {
FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str,"(") : strcat(str,"[");
converToStr(str + strlen(str), ctx->type, &r->ra.e, tlen > 32 ? 32 : tlen, &tlen);
FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str,")") : strcat(str,"]");
if (options == 1) {
qDebug("%s - RANGE info:", msg);
qDebug("RANGE Num:%u", info->colRangeNum);
for (uint16_t i = 0; i < info->colRangeNum; ++i) {
SFilterRangeCtx *ctx = info->colRange[i];
qDebug("Column ID[%d] RANGE: isnull[%d],notnull[%d],range[%d]", ctx->colId, ctx->isnull, ctx->notnull, ctx->isrange);
if (ctx->isrange) {
SFilterRangeNode *r = ctx->rs;
while (r) {
char str[256] = {0};
int32_t tlen = 0;
if (FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_NULL)) {
strcat(str,"(NULL)");
} else {
FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str,"(") : strcat(str,"[");
converToStr(str + strlen(str), ctx->type, &r->ra.s, tlen > 32 ? 32 : tlen, &tlen);
FILTER_GET_FLAG(r->ra.sflag, RANGE_FLG_EXCLUDE) ? strcat(str,")") : strcat(str,"]");
}
strcat(str, " - ");
if (FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_NULL)) {
strcat(str, "(NULL)");
} else {
FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str,"(") : strcat(str,"[");
converToStr(str + strlen(str), ctx->type, &r->ra.e, tlen > 32 ? 32 : tlen, &tlen);
FILTER_GET_FLAG(r->ra.eflag, RANGE_FLG_EXCLUDE) ? strcat(str,")") : strcat(str,"]");
}
qDebug("range: %s", str);
r = r->next;
}
qDebug("range: %s", str);
r = r->next;
}
}
return;
}
qDebug("%s - Block Filter info:", msg);
if (FILTER_GET_FLAG(info->blkFlag, FI_STATUS_BLK_ALL)) {
qDebug("Flag:%s", "ALL");
return;
} else if (FILTER_GET_FLAG(info->blkFlag, FI_STATUS_BLK_EMPTY)) {
qDebug("Flag:%s", "EMPTY");
return;
} else if (FILTER_GET_FLAG(info->blkFlag, FI_STATUS_BLK_ACTIVE)){
qDebug("Flag:%s", "ACTIVE");
}
qDebug("GroupNum:%d", info->blkGroupNum);
uint16_t *unitIdx = info->blkUnits;
for (uint16_t i = 0; i < info->blkGroupNum; ++i) {
qDebug("Group[%d] UnitNum: %d:", i, *unitIdx);
uint16_t unitNum = *(unitIdx++);
for (uint16_t m = 0; m < unitNum; ++m) {
qDebug("uidx[%d]", *(unitIdx++));
}
}
}
}
......@@ -1674,7 +1718,9 @@ void filterFreeInfo(SFilterInfo *info) {
CHK_RETV(info == NULL);
tfree(info->cunits);
tfree(info->blkUnitRes);
tfree(info->blkUnits);
for (int32_t i = 0; i < FLD_TYPE_MAX; ++i) {
for (uint16_t f = 0; f < info->fields[i].num; ++f) {
filterFreeField(&info->fields[i].fields[f], i);
......@@ -2485,7 +2531,9 @@ int32_t filterGenerateComInfo(SFilterInfo *info) {
uint16_t n = 0;
info->cunits = malloc(info->unitNum * sizeof(*info->cunits));
info->blkUnitRes = malloc(sizeof(*info->blkUnitRes) * info->unitNum);
info->blkUnits = malloc(sizeof(*info->blkUnits) * (info->unitNum + 1) * info->groupNum);
for (uint16_t i = 0; i < info->unitNum; ++i) {
SFilterUnit *unit = &info->units[i];
......@@ -2493,6 +2541,7 @@ int32_t filterGenerateComInfo(SFilterInfo *info) {
info->cunits[i].rfunc = filterGetRangeCompFuncFromOptrs(unit->compare.optr, unit->compare.optr2);
info->cunits[i].optr = FILTER_UNIT_OPTR(unit);
info->cunits[i].colData = NULL;
info->cunits[i].colId = FILTER_UNIT_COL_ID(info, unit);
if (unit->right.type == FLD_TYPE_VALUE) {
info->cunits[i].valData = FILTER_UNIT_VAL_DATA(info, unit);
......@@ -2541,36 +2590,317 @@ int32_t filterUpdateComUnits(SFilterInfo *info) {
}
static FORCE_INLINE bool filterExecuteImplAll(void *info, int32_t numOfRows, int8_t* p) {
int32_t filterRmUnitByRange(SFilterInfo *info, SDataStatis *pDataStatis, int32_t numOfCols, int32_t numOfRows) {
int32_t rmUnit = 0;
memset(info->blkUnitRes, 0, sizeof(*info->blkUnitRes) * info->unitNum);
for (int32_t k = 0; k < info->unitNum; ++k) {
int32_t index = -1;
SFilterComUnit *cunit = &info->cunits[k];
if (FILTER_NO_MERGE_DATA_TYPE(cunit->dataType)) {
continue;
}
for(int32_t i = 0; i < numOfCols; ++i) {
if (pDataStatis[i].colId == cunit->colId) {
index = i;
break;
}
}
if (index == -1) {
continue;
}
if (pDataStatis[index].numOfNull <= 0) {
if (cunit->optr == TSDB_RELATION_ISNULL) {
info->blkUnitRes[k] = -1;
rmUnit = 1;
continue;
}
if (cunit->optr == TSDB_RELATION_NOTNULL) {
info->blkUnitRes[k] = 1;
rmUnit = 1;
continue;
}
} else {
if (pDataStatis[index].numOfNull == numOfRows) {
if (cunit->optr == TSDB_RELATION_ISNULL) {
info->blkUnitRes[k] = 1;
rmUnit = 1;
continue;
}
info->blkUnitRes[k] = -1;
rmUnit = 1;
continue;
}
}
if (cunit->optr == TSDB_RELATION_ISNULL || cunit->optr == TSDB_RELATION_NOTNULL
|| cunit->optr == TSDB_RELATION_IN || cunit->optr == TSDB_RELATION_LIKE
|| cunit->optr == TSDB_RELATION_NOT_EQUAL) {
continue;
}
SDataStatis* pDataBlockst = &pDataStatis[index];
void *minVal, *maxVal;
if (cunit->dataType == TSDB_DATA_TYPE_FLOAT) {
float minv = (float)(*(double *)(&pDataBlockst->min));
float maxv = (float)(*(double *)(&pDataBlockst->max));
minVal = &minv;
maxVal = &maxv;
} else {
minVal = &pDataBlockst->min;
maxVal = &pDataBlockst->max;
}
bool minRes = false, maxRes = false;
if (cunit->rfunc >= 0) {
minRes = (*gRangeCompare[cunit->rfunc])(minVal, minVal, cunit->valData, cunit->valData2, gDataCompare[cunit->func]);
maxRes = (*gRangeCompare[cunit->rfunc])(maxVal, maxVal, cunit->valData, cunit->valData2, gDataCompare[cunit->func]);
if (minRes && maxRes) {
info->blkUnitRes[k] = 1;
rmUnit = 1;
} else if ((!minRes) && (!maxRes)) {
minRes = filterDoCompare(gDataCompare[cunit->func], TSDB_RELATION_LESS_EQUAL, minVal, cunit->valData);
maxRes = filterDoCompare(gDataCompare[cunit->func], TSDB_RELATION_GREATER_EQUAL, maxVal, cunit->valData2);
if (minRes && maxRes) {
continue;
}
info->blkUnitRes[k] = -1;
rmUnit = 1;
}
} else {
minRes = filterDoCompare(gDataCompare[cunit->func], cunit->optr, minVal, cunit->valData);
maxRes = filterDoCompare(gDataCompare[cunit->func], cunit->optr, maxVal, cunit->valData);
if (minRes && maxRes) {
info->blkUnitRes[k] = 1;
rmUnit = 1;
} else if ((!minRes) && (!maxRes)) {
if (cunit->optr == TSDB_RELATION_EQUAL) {
minRes = filterDoCompare(gDataCompare[cunit->func], TSDB_RELATION_GREATER, minVal, cunit->valData);
maxRes = filterDoCompare(gDataCompare[cunit->func], TSDB_RELATION_LESS, maxVal, cunit->valData);
if (minRes || maxRes) {
info->blkUnitRes[k] = -1;
rmUnit = 1;
}
continue;
}
info->blkUnitRes[k] = -1;
rmUnit = 1;
}
}
}
CHK_LRET(rmUnit == 0, TSDB_CODE_SUCCESS, "NO Block Filter APPLY");
info->blkGroupNum = info->groupNum;
uint16_t *unitNum = info->blkUnits;
uint16_t *unitIdx = unitNum + 1;
int32_t all = 0, empty = 0;
for (uint32_t g = 0; g < info->groupNum; ++g) {
SFilterGroup *group = &info->groups[g];
*unitNum = group->unitNum;
all = 0;
empty = 0;
for (uint32_t u = 0; u < group->unitNum; ++u) {
uint16_t uidx = group->unitIdxs[u];
if (info->blkUnitRes[uidx] == 1) {
--(*unitNum);
all = 1;
continue;
} else if (info->blkUnitRes[uidx] == -1) {
*unitNum = 0;
empty = 1;
break;
}
*(unitIdx++) = uidx;
}
if (*unitNum == 0) {
--info->blkGroupNum;
assert(empty || all);
if (empty) {
FILTER_SET_FLAG(info->blkFlag, FI_STATUS_BLK_EMPTY);
} else {
FILTER_SET_FLAG(info->blkFlag, FI_STATUS_BLK_ALL);
goto _return;
}
continue;
}
unitNum = unitIdx;
++unitIdx;
}
if (info->blkGroupNum) {
FILTER_CLR_FLAG(info->blkFlag, FI_STATUS_BLK_EMPTY);
FILTER_SET_FLAG(info->blkFlag, FI_STATUS_BLK_ACTIVE);
}
_return:
filterDumpInfoToString(info, "Block Filter", 2);
return TSDB_CODE_SUCCESS;
}
bool filterExecuteBasedOnStatisImpl(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
uint16_t *unitIdx = NULL;
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
//FILTER_UNIT_CLR_F(info);
unitIdx = info->blkUnits;
for (uint32_t g = 0; g < info->blkGroupNum; ++g) {
uint16_t unitNum = *(unitIdx++);
for (uint32_t u = 0; u < unitNum; ++u) {
SFilterComUnit *cunit = &info->cunits[*(unitIdx + u)];
void *colData = (char *)cunit->colData + cunit->dataSize * i;
//if (FILTER_UNIT_GET_F(info, uidx)) {
// p[i] = FILTER_UNIT_GET_R(info, uidx);
//} else {
uint8_t optr = cunit->optr;
if (isNull(colData, cunit->dataType)) {
(*p)[i] = optr == TSDB_RELATION_ISNULL ? true : false;
} else {
if (optr == TSDB_RELATION_NOTNULL) {
(*p)[i] = 1;
} else if (optr == TSDB_RELATION_ISNULL) {
(*p)[i] = 0;
} else if (cunit->rfunc >= 0) {
(*p)[i] = (*gRangeCompare[cunit->rfunc])(colData, colData, cunit->valData, cunit->valData2, gDataCompare[cunit->func]);
} else {
(*p)[i] = filterDoCompare(gDataCompare[cunit->func], cunit->optr, colData, cunit->valData);
}
//FILTER_UNIT_SET_R(info, uidx, p[i]);
//FILTER_UNIT_SET_F(info, uidx);
}
if ((*p)[i] == 0) {
break;
}
}
if ((*p)[i]) {
break;
}
unitIdx += unitNum;
}
if ((*p)[i] == 0) {
all = false;
}
}
return all;
}
int32_t filterExecuteBasedOnStatis(SFilterInfo *info, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols, bool* all) {
if (statis && numOfRows >= FILTER_RM_UNIT_MIN_ROWS) {
info->blkFlag = 0;
filterRmUnitByRange(info, statis, numOfCols, numOfRows);
if (info->blkFlag) {
if (FILTER_GET_FLAG(info->blkFlag, FI_STATUS_BLK_ALL)) {
*all = true;
goto _return;
} else if (FILTER_GET_FLAG(info->blkFlag, FI_STATUS_BLK_EMPTY)) {
*all = false;
goto _return;
}
assert(info->unitNum > 1);
*all = filterExecuteBasedOnStatisImpl(info, numOfRows, p, statis, numOfCols);
goto _return;
}
}
return 1;
_return:
info->blkFlag = 0;
return TSDB_CODE_SUCCESS;
}
static FORCE_INLINE bool filterExecuteImplAll(void *info, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
return true;
}
static FORCE_INLINE bool filterExecuteImplEmpty(void *info, int32_t numOfRows, int8_t* p) {
static FORCE_INLINE bool filterExecuteImplEmpty(void *info, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
return false;
}
static FORCE_INLINE bool filterExecuteImplIsNull(void *pinfo, int32_t numOfRows, int8_t* p) {
static FORCE_INLINE bool filterExecuteImplIsNull(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
if (filterExecuteBasedOnStatis(info, numOfRows, p, statis, numOfCols, &all) == 0) {
return all;
}
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
uint16_t uidx = info->groups[0].unitIdxs[0];
void *colData = (char *)info->cunits[uidx].colData + info->cunits[uidx].dataSize * i;
p[i] = isNull(colData, info->cunits[uidx].dataType);
if (p[i] == 0) {
(*p)[i] = isNull(colData, info->cunits[uidx].dataType);
if ((*p)[i] == 0) {
all = false;
}
}
return all;
}
static FORCE_INLINE bool filterExecuteImplNotNull(void *pinfo, int32_t numOfRows, int8_t* p) {
static FORCE_INLINE bool filterExecuteImplNotNull(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
if (filterExecuteBasedOnStatis(info, numOfRows, p, statis, numOfCols, &all) == 0) {
return all;
}
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
uint16_t uidx = info->groups[0].unitIdxs[0];
void *colData = (char *)info->cunits[uidx].colData + info->cunits[uidx].dataSize * i;
p[i] = !isNull(colData, info->cunits[uidx].dataType);
if (p[i] == 0) {
(*p)[i] = !isNull(colData, info->cunits[uidx].dataType);
if ((*p)[i] == 0) {
all = false;
}
}
......@@ -2578,7 +2908,7 @@ static FORCE_INLINE bool filterExecuteImplNotNull(void *pinfo, int32_t numOfRows
return all;
}
bool filterExecuteImplRange(void *pinfo, int32_t numOfRows, int8_t* p) {
bool filterExecuteImplRange(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
uint16_t dataSize = info->cunits[0].dataSize;
......@@ -2587,6 +2917,12 @@ bool filterExecuteImplRange(void *pinfo, int32_t numOfRows, int8_t* p) {
void *valData = info->cunits[0].valData;
void *valData2 = info->cunits[0].valData2;
__compar_fn_t func = gDataCompare[info->cunits[0].func];
if (filterExecuteBasedOnStatis(info, numOfRows, p, statis, numOfCols, &all) == 0) {
return all;
}
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
if (isNull(colData, info->cunits[0].dataType)) {
......@@ -2595,9 +2931,9 @@ bool filterExecuteImplRange(void *pinfo, int32_t numOfRows, int8_t* p) {
continue;
}
p[i] = (*rfunc)(colData, colData, valData, valData2, func);
(*p)[i] = (*rfunc)(colData, colData, valData, valData2, func);
if (p[i] == 0) {
if ((*p)[i] == 0) {
all = false;
}
......@@ -2607,9 +2943,15 @@ bool filterExecuteImplRange(void *pinfo, int32_t numOfRows, int8_t* p) {
return all;
}
bool filterExecuteImplMisc(void *pinfo, int32_t numOfRows, int8_t* p) {
bool filterExecuteImplMisc(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
if (filterExecuteBasedOnStatis(info, numOfRows, p, statis, numOfCols, &all) == 0) {
return all;
}
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
uint16_t uidx = info->groups[0].unitIdxs[0];
......@@ -2619,9 +2961,9 @@ bool filterExecuteImplMisc(void *pinfo, int32_t numOfRows, int8_t* p) {
continue;
}
p[i] = filterDoCompare(gDataCompare[info->cunits[uidx].func], info->cunits[uidx].optr, colData, info->cunits[uidx].valData);
(*p)[i] = filterDoCompare(gDataCompare[info->cunits[uidx].func], info->cunits[uidx].optr, colData, info->cunits[uidx].valData);
if (p[i] == 0) {
if ((*p)[i] == 0) {
all = false;
}
}
......@@ -2630,10 +2972,16 @@ bool filterExecuteImplMisc(void *pinfo, int32_t numOfRows, int8_t* p) {
}
bool filterExecuteImpl(void *pinfo, int32_t numOfRows, int8_t* p) {
bool filterExecuteImpl(void *pinfo, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
SFilterInfo *info = (SFilterInfo *)pinfo;
bool all = true;
if (filterExecuteBasedOnStatis(info, numOfRows, p, statis, numOfCols, &all) == 0) {
return all;
}
*p = calloc(numOfRows, sizeof(int8_t));
for (int32_t i = 0; i < numOfRows; ++i) {
//FILTER_UNIT_CLR_F(info);
......@@ -2650,33 +2998,33 @@ bool filterExecuteImpl(void *pinfo, int32_t numOfRows, int8_t* p) {
uint8_t optr = cunit->optr;
if (isNull(colData, cunit->dataType)) {
p[i] = optr == TSDB_RELATION_ISNULL ? true : false;
(*p)[i] = optr == TSDB_RELATION_ISNULL ? true : false;
} else {
if (optr == TSDB_RELATION_NOTNULL) {
p[i] = 1;
(*p)[i] = 1;
} else if (optr == TSDB_RELATION_ISNULL) {
p[i] = 0;
(*p)[i] = 0;
} else if (cunit->rfunc >= 0) {
p[i] = (*gRangeCompare[cunit->rfunc])(colData, colData, cunit->valData, cunit->valData2, gDataCompare[cunit->func]);
(*p)[i] = (*gRangeCompare[cunit->rfunc])(colData, colData, cunit->valData, cunit->valData2, gDataCompare[cunit->func]);
} else {
p[i] = filterDoCompare(gDataCompare[cunit->func], cunit->optr, colData, cunit->valData);
(*p)[i] = filterDoCompare(gDataCompare[cunit->func], cunit->optr, colData, cunit->valData);
}
//FILTER_UNIT_SET_R(info, uidx, p[i]);
//FILTER_UNIT_SET_F(info, uidx);
}
if (p[i] == 0) {
if ((*p)[i] == 0) {
break;
}
}
if (p[i]) {
if ((*p)[i]) {
break;
}
}
if (p[i] == 0) {
if ((*p)[i] == 0) {
all = false;
}
}
......@@ -2684,8 +3032,9 @@ bool filterExecuteImpl(void *pinfo, int32_t numOfRows, int8_t* p) {
return all;
}
FORCE_INLINE bool filterExecute(SFilterInfo *info, int32_t numOfRows, int8_t* p) {
return (*info->func)(info, numOfRows, p);
FORCE_INLINE bool filterExecute(SFilterInfo *info, int32_t numOfRows, int8_t** p, SDataStatis *statis, int16_t numOfCols) {
return (*info->func)(info, numOfRows, p, statis, numOfCols);
}
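With this change the filter entry point allocates the per-row result bitmap itself (hence `int8_t **p`) and may short-circuit a whole block from `SDataStatis`. Below is a minimal caller-side sketch of that contract, assuming hypothetical names (`rowRes`, `pStatis`, `pDataBlock`) and relying on the surrounding TDengine declarations; it is an illustration, not code from this commit.

```c
// Column data is assumed to have been bound beforehand:
//   filterSetColFieldData(info, numOfCols, pDataBlock);
int8_t *rowRes  = NULL;  // allocated inside filterExecute via calloc when needed
bool    keepAll = filterExecute(info, numOfRows, &rowRes, pStatis, numOfCols);

if (keepAll) {
  // every row of the block passes; rowRes may be NULL and can be ignored
} else if (rowRes != NULL) {
  for (int32_t r = 0; r < numOfRows; ++r) {
    if (rowRes[r]) {
      // row r passes the filter
    }
  }
} else {
  // block was rejected wholesale from the statistics; nothing to scan
}
tfree(rowRes);  // caller releases the result bitmap
```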
int32_t filterSetExecFunc(SFilterInfo *info) {
......@@ -2767,7 +3116,7 @@ _return:
return TSDB_CODE_SUCCESS;
}
int32_t filterSetColFieldData(SFilterInfo *info, int16_t colId, void *data) {
int32_t filterSetColFieldData(SFilterInfo *info, int32_t numOfCols, SArray* pDataBlock) {
CHK_LRET(info == NULL, TSDB_CODE_QRY_APP_ERROR, "info NULL");
CHK_LRET(info->fields[FLD_TYPE_COLUMN].num <= 0, TSDB_CODE_QRY_APP_ERROR, "no column fileds");
......@@ -2778,10 +3127,14 @@ int32_t filterSetColFieldData(SFilterInfo *info, int16_t colId, void *data) {
for (uint16_t i = 0; i < info->fields[FLD_TYPE_COLUMN].num; ++i) {
SFilterField* fi = &info->fields[FLD_TYPE_COLUMN].fields[i];
SSchema* sch = fi->desc;
if (sch->colId == colId) {
fi->data = data;
break;
for (int32_t j = 0; j < numOfCols; ++j) {
SColumnInfoData* pColInfo = taosArrayGet(pDataBlock, j);
if (sch->colId == pColInfo->info.colId) {
fi->data = pColInfo->pData;
break;
}
}
}
......@@ -2880,16 +3233,17 @@ bool filterRangeExecute(SFilterInfo *info, SDataStatis *pDataStatis, int32_t num
// no statistics data, load the true data block
if (index == -1) {
return true;
break;
}
// not support pre-filter operation on binary/nchar data type
if (FILTER_NO_MERGE_DATA_TYPE(ctx->type)) {
return true;
break;
}
if ((pDataStatis[index].numOfNull <= 0) && (ctx->isnull && !ctx->notnull && !ctx->isrange)) {
return false;
ret = false;
break;
}
// all data in current column are NULL, no need to check its boundary value
......@@ -2897,7 +3251,8 @@ bool filterRangeExecute(SFilterInfo *info, SDataStatis *pDataStatis, int32_t num
// if isNULL query exists, load the null data column
if ((ctx->notnull || ctx->isrange) && (!ctx->isnull)) {
return false;
ret = false;
break;
}
continue;
......
......@@ -20,7 +20,7 @@
extern "C" {
#endif
void taosNetTest(char *role, char *host, int port, int pkgLen);
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen, int32_t pkgNum, char *pkgType);
#ifdef __cplusplus
}
......
......@@ -27,6 +27,10 @@
#include "syncMsg.h"
#define MAX_PKG_LEN (64 * 1000)
#define MAX_SPEED_PKG_LEN (1024 * 1024 * 1024)
#define MIN_SPEED_PKG_LEN 1024
#define MAX_SPEED_PKG_NUM 10000
#define MIN_SPEED_PKG_NUM 1
#define BUFFER_SIZE (MAX_PKG_LEN + 1024)
extern int32_t tsRpcMaxUdpSize;
......@@ -466,6 +470,7 @@ static void taosNetTestRpc(char *host, int32_t startPort, int32_t pkgLen) {
sendpkgLen = pkgLen;
}
tsRpcForceTcp = 1;
int32_t ret = taosNetCheckRpc(host, port, sendpkgLen, spi, NULL);
if (ret < 0) {
printf("failed to test TCP port:%d\n", port);
......@@ -479,6 +484,7 @@ static void taosNetTestRpc(char *host, int32_t startPort, int32_t pkgLen) {
sendpkgLen = pkgLen;
}
tsRpcForceTcp = 0;
ret = taosNetCheckRpc(host, port, pkgLen, spi, NULL);
if (ret < 0) {
printf("failed to test UDP port:%d\n", port);
......@@ -542,12 +548,122 @@ static void taosNetTestServer(char *host, int32_t startPort, int32_t pkgLen) {
}
}
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen) {
static void taosNetTestFqdn(char *host) {
int code = 0;
uint64_t startTime = taosGetTimestampUs();
uint32_t ip = taosGetIpv4FromFqdn(host);
if (ip == 0xffffffff) {
uError("failed to get IP address from %s since %s", host, strerror(errno));
code = -1;
}
uint64_t endTime = taosGetTimestampUs();
uint64_t el = endTime - startTime;
printf("check convert fqdn spend, status: %d\tcost: %" PRIu64 " us\n", code, el);
return;
}
static void taosNetCheckSpeed(char *host, int32_t port, int32_t pkgLen,
int32_t pkgNum, char *pkgType) {
// record config
int32_t compressTmp = tsCompressMsgSize;
int32_t maxUdpSize = tsRpcMaxUdpSize;
int32_t forceTcp = tsRpcForceTcp;
if (0 == strcmp("tcp", pkgType)){
tsRpcForceTcp = 1;
tsRpcMaxUdpSize = 0; // force tcp
} else {
tsRpcForceTcp = 0;
tsRpcMaxUdpSize = INT_MAX;
}
tsCompressMsgSize = -1;
SRpcEpSet epSet;
SRpcMsg reqMsg;
SRpcMsg rspMsg;
void * pRpcConn;
char secretEncrypt[32] = {0};
char spi = 0;
pRpcConn = taosNetInitRpc(secretEncrypt, spi);
if (NULL == pRpcConn) {
uError("failed to init client rpc");
return;
}
printf("check net spend, host:%s port:%d pkgLen:%d pkgNum:%d pkgType:%s\n\n", host, port, pkgLen, pkgNum, pkgType);
int32_t totalSucc = 0;
uint64_t startT = taosGetTimestampUs();
for (int32_t i = 1; i <= pkgNum; i++) {
uint64_t startTime = taosGetTimestampUs();
memset(&epSet, 0, sizeof(SRpcEpSet));
epSet.inUse = 0;
epSet.numOfEps = 1;
epSet.port[0] = port;
strcpy(epSet.fqdn[0], host);
reqMsg.msgType = TSDB_MSG_TYPE_NETWORK_TEST;
reqMsg.pCont = rpcMallocCont(pkgLen);
reqMsg.contLen = pkgLen;
reqMsg.code = 0;
reqMsg.handle = NULL; // rpc handle returned to app
reqMsg.ahandle = NULL; // app handle set by client
strcpy(reqMsg.pCont, "nettest speed");
rpcSendRecv(pRpcConn, &epSet, &reqMsg, &rspMsg);
int code = 0;
if ((rspMsg.code != 0) || (rspMsg.msgType != TSDB_MSG_TYPE_NETWORK_TEST + 1)) {
uError("ret code 0x%x %s", rspMsg.code, tstrerror(rspMsg.code));
code = -1;
}else{
totalSucc ++;
}
rpcFreeCont(rspMsg.pCont);
uint64_t endTime = taosGetTimestampUs();
uint64_t el = endTime - startTime;
printf("progress:%5d/%d\tstatus:%d\tcost:%8.2lf ms\tspeed:%8.2lf MB/s\n", i, pkgNum, code, el/1000.0, pkgLen/(el/1000000.0)/1024.0/1024.0);
}
int64_t endT = taosGetTimestampUs();
uint64_t elT = endT - startT;
printf("\ntotal succ:%5d/%d\tcost:%8.2lf ms\tspeed:%8.2lf MB/s\n", totalSucc, pkgNum, elT/1000.0, pkgLen/(elT/1000000.0)/1024.0/1024.0*totalSucc);
rpcClose(pRpcConn);
// return config
tsCompressMsgSize = compressTmp;
tsRpcMaxUdpSize = maxUdpSize;
tsRpcForceTcp = forceTcp;
return;
}
static void taosNetTestSpeed(char *host, int32_t port, int32_t pkgLen,
int32_t pkgNum, char *pkgType) {
if (0 == strcmp("fqdn", pkgType)){
taosNetTestFqdn(host);
return;
}
taosNetCheckSpeed(host, port, pkgLen, pkgNum, pkgType);
return;
}
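The extended `taosNetTest` signature wires the package count and package type through to these speed checks; the "speed" role clamps the length and count into the `MIN_/MAX_SPEED_PKG_*` bounds defined above. A small sketch of possible invocations, with illustrative host, port, and sizes (none of them defaults from the source):

```c
// TCP throughput check: 100 packets of 1 MB each against a hypothetical server.
taosNetTest("speed", "td-server-1", 6030, 1024 * 1024, 100, "tcp");

// Same check over UDP with smaller packets.
taosNetTest("speed", "td-server-1", 6030, 64 * 1024, 100, "udp");

// FQDN resolution timing only; length and count are ignored for this type.
taosNetTest("speed", "td-server-1", 0, 0, 1, "fqdn");
```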
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen,
int32_t pkgNum, char *pkgType) {
tscEmbedded = 1;
if (host == NULL) host = tsLocalFqdn;
if (port == 0) port = tsServerPort;
if (pkgLen <= 10) pkgLen = 1000;
if (pkgLen > MAX_PKG_LEN) pkgLen = MAX_PKG_LEN;
if (0 == strcmp("speed", role)){
if (pkgLen <= MIN_SPEED_PKG_LEN) pkgLen = MIN_SPEED_PKG_LEN;
if (pkgLen > MAX_SPEED_PKG_LEN) pkgLen = MAX_SPEED_PKG_LEN;
if (pkgNum <= MIN_SPEED_PKG_NUM) pkgNum = MIN_SPEED_PKG_NUM;
if (pkgNum > MAX_SPEED_PKG_NUM) pkgNum = MAX_SPEED_PKG_NUM;
}else{
if (pkgLen <= 10) pkgLen = 1000;
if (pkgLen > MAX_PKG_LEN) pkgLen = MAX_PKG_LEN;
}
if (0 == strcmp("client", role)) {
taosNetTestClient(host, port, pkgLen);
......@@ -560,7 +676,11 @@ void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen) {
taosNetCheckSync(host, port);
} else if (0 == strcmp("startup", role)) {
taosNetTestStartup(host, port);
} else {
} else if (0 == strcmp("speed", role)) {
tscEmbedded = 0;
char type[10] = {0};
taosNetTestSpeed(host, port, pkgLen, pkgNum, strtolower(type, pkgType));
}else {
taosNetTestStartup(host, port);
}
......
......@@ -17,6 +17,7 @@
#define TAOS_RANDOM_FILE_FAIL_TEST
#include "os.h"
#include "taoserror.h"
#include "taosmsg.h"
#include "tchecksum.h"
#include "tfile.h"
#include "twal.h"
......@@ -114,7 +115,7 @@ void walRemoveAllOldFiles(void *handle) {
#if defined(WAL_CHECKSUM_WHOLE)
static void walUpdateChecksum(SWalHead *pHead) {
pHead->sver = 1;
pHead->sver = 2;
pHead->cksum = 0;
pHead->cksum = taosCalcChecksum(0, (uint8_t *)pHead, sizeof(*pHead) + pHead->len);
}
......@@ -122,7 +123,7 @@ static void walUpdateChecksum(SWalHead *pHead) {
static int walValidateChecksum(SWalHead *pHead) {
if (pHead->sver == 0) { // for compatible with wal before sver 1
return taosCheckChecksumWhole((uint8_t *)pHead, sizeof(*pHead));
} else if (pHead->sver == 1) {
} else if (pHead->sver >= 1) {
uint32_t cksum = pHead->cksum;
pHead->cksum = 0;
return taosCheckChecksum((uint8_t *)pHead, sizeof(*pHead) + pHead->len, cksum);
......@@ -281,7 +282,7 @@ static int32_t walSkipCorruptedRecord(SWal *pWal, SWalHead *pHead, int64_t tfd,
return TSDB_CODE_SUCCESS;
}
if (pHead->sver == 1) {
if (pHead->sver >= 1) {
if (tfRead(tfd, pHead->cont, pHead->len) < pHead->len) {
wError("vgId:%d, read to end of corrupted wal file, offset:%" PRId64, pWal->vgId, pos);
return TSDB_CODE_WAL_FILE_CORRUPTED;
......@@ -306,7 +307,115 @@ static int32_t walSkipCorruptedRecord(SWal *pWal, SWalHead *pHead, int64_t tfd,
return TSDB_CODE_WAL_FILE_CORRUPTED;
}
// Add SMemRowType ahead of SDataRow
static void expandSubmitBlk(SSubmitBlk *pDest, SSubmitBlk *pSrc, int32_t *lenExpand) {
// copy the header firstly
memcpy(pDest, pSrc, sizeof(SSubmitBlk));
int32_t nRows = htons(pDest->numOfRows);
int32_t dataLen = htonl(pDest->dataLen);
if ((nRows <= 0) || (dataLen <= 0)) {
return;
}
char *pDestData = pDest->data;
char *pSrcData = pSrc->data;
for (int32_t i = 0; i < nRows; ++i) {
memRowSetType(pDestData, SMEM_ROW_DATA);
memcpy(memRowDataBody(pDestData), pSrcData, dataRowLen(pSrcData));
pDestData = POINTER_SHIFT(pDestData, memRowTLen(pDestData));
pSrcData = POINTER_SHIFT(pSrcData, dataRowLen(pSrcData));
++(*lenExpand);
}
pDest->dataLen = htonl(dataLen + nRows * sizeof(uint8_t));
}
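The expansion prepends exactly one SMemRow type byte to every legacy SDataRow, so a block's payload grows by `nRows * sizeof(uint8_t)` — the same arithmetic `walSMemRowCheck` uses below when sizing its temporary buffer. A minimal sketch of that bookkeeping, with a hypothetical helper name:

```c
// Sketch: post-expansion payload length of one submit block (fields are in network order).
static int32_t expandedBlkDataLen(const SSubmitBlk *pBlk) {
  int32_t nRows   = htons(pBlk->numOfRows);
  int32_t dataLen = htonl(pBlk->dataLen);
  if (nRows <= 0 || dataLen <= 0) {
    return dataLen;  // nothing to expand
  }
  return dataLen + nRows * (int32_t)sizeof(uint8_t);  // one type byte per row
}
```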
// Check SDataRow by comparing the SDataRow len and SSubmitBlk dataLen
static bool walIsSDataRow(void *pBlkData, int nRows, int32_t dataLen) {
if ((nRows <= 0) || (dataLen <= 0)) {
return true;
}
int32_t len = 0, kvLen = 0;
for (int i = 0; i < nRows; ++i) {
len += dataRowLen(pBlkData);
if (len > dataLen) {
return false;
}
/**
* For SDataRow between version [2.1.5.0 and 2.1.6.X], it would never conflict.
* For SKVRow between version [2.1.5.0 and 2.1.6.X], it may conflict in below scenario
* - with 1st type byte 0x01 and sversion 0x0101(257), thus do further check
*/
if (dataRowLen(pBlkData) == 257) {
SMemRow memRow = pBlkData;
SKVRow kvRow = memRowKvBody(memRow);
int nCols = kvRowNCols(kvRow);
uint16_t calcTsOffset = (uint16_t)(TD_KV_ROW_HEAD_SIZE + sizeof(SColIdx) * nCols);
uint16_t realTsOffset = (kvRowColIdx(kvRow))->offset;
if (calcTsOffset == realTsOffset) {
kvLen += memRowKvTLen(memRow);
}
}
pBlkData = POINTER_SHIFT(pBlkData, dataRowLen(pBlkData));
}
if (len != dataLen) {
return false;
}
if (kvLen == dataLen) {
return false;
}
return true;
}
// for WAL SMemRow/SDataRow compatibility
static int walSMemRowCheck(SWalHead *pHead) {
if ((pHead->sver < 2) && (pHead->msgType == TSDB_MSG_TYPE_SUBMIT)) {
SSubmitMsg *pMsg = (SSubmitMsg *)pHead->cont;
int32_t numOfBlocks = htonl(pMsg->numOfBlocks);
if (numOfBlocks <= 0) {
return 0;
}
int32_t nTotalRows = 0;
SSubmitBlk *pBlk = (SSubmitBlk *)pMsg->blocks;
for (int32_t i = 0; i < numOfBlocks; ++i) {
int32_t dataLen = htonl(pBlk->dataLen);
int32_t nRows = htons(pBlk->numOfRows);
nTotalRows += nRows;
if (!walIsSDataRow(pBlk->data, nRows, dataLen)) {
return 0;
}
pBlk = (SSubmitBlk *)POINTER_SHIFT(pBlk, sizeof(SSubmitBlk) + dataLen);
}
ASSERT(nTotalRows >= 0);
SWalHead *pWalHead = (SWalHead *)calloc(sizeof(SWalHead) + pHead->len + nTotalRows * sizeof(uint8_t), 1);
if (pWalHead == NULL) {
return -1;
}
memcpy(pWalHead, pHead, sizeof(SWalHead) + sizeof(SSubmitMsg));
SSubmitMsg *pDestMsg = (SSubmitMsg *)pWalHead->cont;
SSubmitBlk *pDestBlks = (SSubmitBlk *)pDestMsg->blocks;
SSubmitBlk *pSrcBlks = (SSubmitBlk *)pMsg->blocks;
int32_t lenExpand = 0;
for (int32_t i = 0; i < numOfBlocks; ++i) {
expandSubmitBlk(pDestBlks, pSrcBlks, &lenExpand);
pDestBlks = POINTER_SHIFT(pDestBlks, htonl(pDestBlks->dataLen) + sizeof(SSubmitBlk));
pSrcBlks = POINTER_SHIFT(pSrcBlks, htonl(pSrcBlks->dataLen) + sizeof(SSubmitBlk));
}
if (lenExpand > 0) {
pDestMsg->header.contLen = htonl(pDestMsg->length) + lenExpand;
pDestMsg->length = htonl(pDestMsg->header.contLen);
pWalHead->len = pWalHead->len + lenExpand;
}
memcpy(pHead, pWalHead, sizeof(SWalHead) + pWalHead->len);
tfree(pWalHead);
}
return 0;
}
static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, char *name, int64_t fileId) {
int32_t size = WAL_MAX_SIZE;
......@@ -346,7 +455,7 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
}
#if defined(WAL_CHECKSUM_WHOLE)
if ((pHead->sver == 0 && !walValidateChecksum(pHead)) || pHead->sver < 0 || pHead->sver > 1) {
if ((pHead->sver == 0 && !walValidateChecksum(pHead)) || pHead->sver < 0 || pHead->sver > 2) {
wError("vgId:%d, file:%s, wal head cksum is messed up, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
code = walSkipCorruptedRecord(pWal, pHead, tfd, &offset);
......@@ -379,7 +488,7 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
continue;
}
if (pHead->sver == 1 && !walValidateChecksum(pHead)) {
if ((pHead->sver >= 1) && !walValidateChecksum(pHead)) {
wError("vgId:%d, file:%s, wal whole cksum is messed up, hver:%" PRIu64 " len:%d offset:%" PRId64, pWal->vgId, name,
pHead->version, pHead->len, offset);
code = walSkipCorruptedRecord(pWal, pHead, tfd, &offset);
......@@ -432,6 +541,13 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
pWal->version = pHead->version;
//wInfo("writeFp: %ld", offset);
if (0 != walSMemRowCheck(pHead)) {
wError("vgId:%d, restore wal, fileId:%" PRId64 " hver:%" PRIu64 " wver:%" PRIu64 " len:%d offset:%" PRId64,
pWal->vgId, fileId, pHead->version, pWal->version, pHead->len, offset);
tfClose(tfd);
tfree(buffer);
return TAOS_SYSTEM_ERROR(errno);
}
(*writeFp)(pVnode, pHead, TAOS_QTYPE_WAL, NULL);
}
......
......@@ -35,6 +35,8 @@ class TDTestCase:
ret = tdSql.query('select server_status() as result')
tdSql.checkData(0, 0, 1)
time.sleep(1)
ret = tdSql.query('show dnodes')
dnodeId = tdSql.getData(0, 0);
......@@ -43,6 +45,7 @@ class TDTestCase:
ret = tdSql.execute('alter dnode "%s" debugFlag 135' % dnodeId)
tdLog.info('alter dnode "%s" debugFlag 135 -> ret: %d' % (dnodeId, ret))
time.sleep(1)
ret = tdSql.query('show mnodes')
tdSql.checkRows(1)
......@@ -63,6 +66,9 @@ class TDTestCase:
tdSql.execute('create stable st (ts timestamp, f int) tags(t int)')
tdSql.execute('create table ct1 using st tags(1)');
tdSql.execute('create table ct2 using st tags(2)');
time.sleep(1)
ret = tdSql.query('show vnodes "{}"'.format(dnodeEndpoint))
tdSql.checkRows(1)
tdSql.checkData(0, 0, 2)
......
......@@ -212,6 +212,7 @@ python3 test.py -f tools/taosdemoAllTest/taosdemoTestInsertWithJson.py
python3 test.py -f tools/taosdemoAllTest/taosdemoTestQueryWithJson.py
#query
python3 test.py -f query/distinctOneColTb.py
python3 ./test.py -f query/filter.py
python3 ./test.py -f query/filterCombo.py
python3 ./test.py -f query/queryNormal.py
......@@ -284,7 +285,7 @@ python3 ./test.py -f alter/alterTabAddTagWithNULL.py
python3 ./test.py -f alter/alterTimestampColDataProcess.py
# client
#python3 ./test.py -f client/client.py
python3 ./test.py -f client/client.py
python3 ./test.py -f client/version.py
python3 ./test.py -f client/alterDatabase.py
python3 ./test.py -f client/noConnectionErrorTest.py
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
def tb_all_query(self, num, sql="tb_all", where=""):
tdSql.query(
f"select distinct ts2 from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cint from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cbigint from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct csmallint from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct ctinyint from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cfloat from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cdouble from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cbool from {sql} {where}")
if num < 2:
tdSql.checkRows(num)
else:
tdSql.checkRows(2)
tdSql.query(
f"select distinct cbinary from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cnchar from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct ts2 as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cint as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cbigint as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct csmallint as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct ctinyint as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cfloat as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cdouble as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cbool as a from {sql} {where}")
if num < 2:
tdSql.checkRows(num)
else:
tdSql.checkRows(2)
tdSql.query(
f"select distinct cbinary as a from {sql} {where}")
tdSql.checkRows(num)
tdSql.query(
f"select distinct cnchar as a from {sql} {where}")
tdSql.checkRows(num)
def tb_all_query_sub(self, num, sql="tb_all", where="",colName = "c1"):
tdSql.query(
f"select distinct {colName} from {sql} {where}")
tdSql.checkRows(num)
def errorCheck(self, sql='tb_all'):
tdSql.error(f"select distinct from {sql}")
tdSql.error(f"distinct ts2 from {sql}")
tdSql.error(f"distinct c1 from")
tdSql.error(f"select distinct ts2, avg(cint) from {sql}")
## the following line is going to core dump
#tdSql.error(f"select avg(cint), distinct ts2 from {sql}")
tdSql.error(f"select distinct ts2 from {sql} group by cint")
tdSql.error(f"select distinct ts2 from {sql} interval(1s)")
tdSql.error(f"select distinct ts2 from {sql} slimit 1 soffset 1")
tdSql.error(f"select distinct ts2 from {sql} slimit 1")
##order by is not supported but not being prohibited
#tdSql.error(f"select distinct ts2 from {sql} order by desc")
#tdSql.error(f"select distinct ts2 from {sql} order by asc")
##distinct should not use on first ts, but it can be applied
#tdSql.error(f"select distinct ts from {sql}")
##distinct should not be used in inner query, but error did not occur
# tdSql.error(f"select distinct ts2 from (select distinct ts2 from {sql})")
def query_all_tb(self, whereNum=0,inNum=0,maxNum=0,tbName="tb_all"):
self.tb_all_query(maxNum,sql=tbName)
self.tb_all_query(num=whereNum,where="where ts2 = '2021-08-06 18:01:50.550'",sql=tbName)
self.tb_all_query(num=whereNum,where="where cint = 1073741820",sql=tbName)
self.tb_all_query(num=whereNum,where="where cbigint = 1125899906842620",sql=tbName)
self.tb_all_query(num=whereNum,where="where csmallint = 150",sql=tbName)
self.tb_all_query(num=whereNum,where="where ctinyint = 10",sql=tbName)
self.tb_all_query(num=whereNum,where="where cfloat = 0.10",sql=tbName)
self.tb_all_query(num=whereNum,where="where cdouble = 0.130",sql=tbName)
self.tb_all_query(num=whereNum,where="where cbool = true",sql=tbName)
self.tb_all_query(num=whereNum,where="where cbinary = 'adc1'",sql=tbName)
self.tb_all_query(num=whereNum,where="where cnchar = '双精度浮'",sql=tbName)
self.tb_all_query(num=inNum,where="where ts2 in ( '2021-08-06 18:01:50.550' , '2021-08-06 18:01:50.551' )",sql=tbName)
self.tb_all_query(num=inNum,where="where cint in (1073741820,1073741821)",sql=tbName)
self.tb_all_query(num=inNum,where="where cbigint in (1125899906842620,1125899906842621)",sql=tbName)
self.tb_all_query(num=inNum,where="where csmallint in (150,151)",sql=tbName)
self.tb_all_query(num=inNum,where="where ctinyint in (10,11)",sql=tbName)
self.tb_all_query(num=inNum,where="where cfloat in (0.10,0.11)",sql=tbName)
self.tb_all_query(num=inNum,where="where cdouble in (0.130,0.131)",sql=tbName)
self.tb_all_query(num=inNum,where="where cbinary in ('adc0','adc1')",sql=tbName)
self.tb_all_query(num=inNum,where="where cnchar in ('双','双精度浮')",sql=tbName)
self.tb_all_query(num=inNum,where="where cbool in (0, 1)",sql=tbName)
self.tb_all_query(num=whereNum,where="where cnchar like '双__浮'",sql=tbName)
self.tb_all_query(num=maxNum,where="where cnchar like '双%'",sql=tbName)
self.tb_all_query(num=maxNum,where="where cbinary like 'adc_'",sql=tbName)
self.tb_all_query(num=maxNum,where="where cbinary like 'a%'",sql=tbName)
self.tb_all_query(num=whereNum,where="limit 1",sql=tbName)
self.tb_all_query(num=whereNum,where="limit 1 offset 1",sql=tbName)
#subquery
self.tb_all_query(num=maxNum,sql=f'(select * from {tbName})')
#subquery with where in outer query
self.tb_all_query(num=whereNum,where="where ts2 = '2021-08-06 18:01:50.550'",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cint = 1073741820",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cbigint = 1125899906842620",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where csmallint = 150",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where ctinyint = 10",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cfloat = 0.10",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cdouble = 0.130",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cbool = true",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cbinary = 'adc1'",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cnchar = '双精度浮'",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where ts2 in ( '2021-08-06 18:01:50.550' , '2021-08-06 18:01:50.551' )",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cint in (1073741820,1073741821)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cbigint in (1125899906842620,1125899906842621)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where csmallint in (150,151)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where ctinyint in (10,11)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cfloat in (0.10,0.11)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cdouble in (0.130,0.131)",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cbinary in ('adc0','adc1')",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cnchar in ('双','双精度浮')",sql=f'(select * from {tbName})')
self.tb_all_query(num=inNum,where="where cbool in (0, 1)",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="where cnchar like '双__浮'",sql=f'(select * from {tbName})')
self.tb_all_query(num=maxNum,where="where cnchar like '双%'",sql=f'(select * from {tbName})')
self.tb_all_query(num=maxNum,where="where cbinary like 'adc_'",sql=f'(select * from {tbName})')
self.tb_all_query(num=maxNum,where="where cbinary like 'a%'",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="limit 1",sql=f'(select * from {tbName})')
self.tb_all_query(num=whereNum,where="limit 1 offset 1",sql=f'(select * from {tbName})')
#table query with inner query has error
tdSql.error('select distinct ts2 from (select )')
#table query with error option
self.errorCheck()
def run(self):
tdSql.prepare()
tdLog.notice(
"==============phase1 distinct col1 with no values==========")
tdSql.execute("create stable if not exists stb_all (ts timestamp, ts2 timestamp, cint int, cbigint bigint, csmallint smallint, ctinyint tinyint,cfloat float, cdouble double, cbool bool, cbinary binary(32), cnchar nchar(32)) tags(tint int)")
tdSql.execute(
"create table if not exists tb_all using stb_all tags(1)")
self.query_all_tb()
tdLog.notice(
"==============phase1 finished ==========\n\n\n")
tdLog.notice(
"==============phase2 distinct col1 all values are null==========")
tdSql.execute(
"insert into tb_all values(now,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)")
tdSql.execute(
"insert into tb_all values(now,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)")
tdSql.execute(
"insert into tb_all values(now,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)")
# normal query
self.query_all_tb()
tdLog.notice(
"==============phase2 finished ==========\n\n\n")
tdLog.notice(
"==============phase3 distinct with distinct values ==========\n\n\n")
tdSql.execute(
"insert into tb_all values(now+1s,'2021-08-06 18:01:50.550',1073741820,1125899906842620,150,10,0.10,0.130, true,'adc0','双')")
tdSql.execute(
"insert into tb_all values(now+2s,'2021-08-06 18:01:50.551',1073741821,1125899906842621,151,11,0.11,0.131,false,'adc1','双精度浮')")
tdSql.execute(
"insert into tb_all values(now+3s,'2021-08-06 18:01:50.552',1073741822,1125899906842622,152,12,0.12,0.132,NULL,'adc2','双精度')")
tdSql.execute(
"insert into tb_all values(now+4s,'2021-08-06 18:01:50.553',1073741823,1125899906842623,153,13,0.13,0.133,NULL,'adc3','双精')")
tdSql.execute(
"insert into tb_all values(now+5s,'2021-08-06 18:01:50.554',1073741824,1125899906842624,154,14,0.14,0.134,NULL,'adc4','双精度浮点型')")
# normal query
self.query_all_tb(maxNum=5,inNum=2,whereNum=1)
tdLog.notice(
"==============phase3 finishes ==========\n\n\n")
tdLog.notice(
"==============phase4 distinct with some values the same values ==========\n\n\n")
tdSql.execute(
"insert into tb_all values(now+10s,'2021-08-06 18:01:50.550',1073741820,1125899906842620,150,10,0.10,0.130, true,'adc0','双')")
tdSql.execute(
"insert into tb_all values(now+20s,'2021-08-06 18:01:50.551',1073741821,1125899906842621,151,11,0.11,0.131,false,'adc1','双精度浮')")
tdSql.execute(
"insert into tb_all values(now+30s,'2021-08-06 18:01:50.552',1073741822,1125899906842622,152,12,0.12,0.132,NULL,'adc2','双精度')")
tdSql.execute(
"insert into tb_all values(now+40s,'2021-08-06 18:01:50.553',1073741823,1125899906842623,153,13,0.13,0.133,NULL,'adc3','双精')")
tdSql.execute(
"insert into tb_all values(now+50s,'2021-08-06 18:01:50.554',1073741824,1125899906842624,154,14,0.14,0.134,NULL,'adc4','双精度浮点型')")
# normal query
self.query_all_tb(maxNum=5,inNum=2,whereNum=1)
tdLog.notice(
"==============phase4 finishes ==========\n\n\n")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
......@@ -145,26 +145,26 @@ class taosdemoPerformace:
binPath = buildPath + "/build/bin/"
os.system(
"%staosdemo -f %s > taosdemoperf.txt 2>&1" %
"%staosdemo -f %s > /dev/null 2>&1" %
(binPath, self.generateJson()))
self.createTableTime = self.getCMDOutput(
"grep 'Spent' taosdemoperf.txt | awk 'NR==1{print $2}'")
"grep 'Spent' insert_res.txt | awk 'NR==1{print $2}'")
self.insertRecordsTime = self.getCMDOutput(
"grep 'Spent' taosdemoperf.txt | awk 'NR==2{print $2}'")
"grep 'Spent' insert_res.txt | awk 'NR==2{print $2}'")
self.recordsPerSecond = self.getCMDOutput(
"grep 'Spent' taosdemoperf.txt | awk 'NR==2{print $16}'")
"grep 'Spent' insert_res.txt | awk 'NR==2{print $16}'")
self.commitID = self.getCMDOutput("git rev-parse --short HEAD")
delay = self.getCMDOutput(
"grep 'delay' taosdemoperf.txt | awk '{print $4}'")
"grep 'delay' insert_res.txt | awk '{print $4}'")
self.avgDelay = delay[:-4]
delay = self.getCMDOutput(
"grep 'delay' taosdemoperf.txt | awk '{print $6}'")
"grep 'delay' insert_res.txt | awk '{print $6}'")
self.maxDelay = delay[:-4]
delay = self.getCMDOutput(
"grep 'delay' taosdemoperf.txt | awk '{print $8}'")
"grep 'delay' insert_res.txt | awk '{print $8}'")
self.minDelay = delay[:-3]
os.system("[ -f taosdemoperf.txt ] && rm taosdemoperf.txt")
os.system("[ -f insert_res.txt ] && rm insert_res.txt")
def createTablesAndStoreData(self):
cursor = self.conn2.cursor()
......@@ -185,7 +185,7 @@ class taosdemoPerformace:
cursor.close()
cursor1 = self.conn.cursor()
# cursor1.execute("drop database if exists %s" % self.insertDB)
cursor1.execute("drop database if exists %s" % self.insertDB)
cursor1.close()
if __name__ == '__main__':
......
......@@ -3,6 +3,7 @@ system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/cfg.sh -n dnode1 -c maxtablespervnode -v 4
system sh/cfg.sh -n dnode1 -c cache -v 1
system sh/exec.sh -n dnode1 -s start
sleep 100
......@@ -93,6 +94,47 @@ sql insert into tb3_2 values ('2021-05-06 18:19:28',15,NULL,15,NULL,15,NULL,true
sql insert into tb3_2 values ('2021-06-06 18:19:28',NULL,16.0,NULL,16,NULL,16.0,NULL,'16',NULL)
sql insert into tb3_2 values ('2021-07-06 18:19:28',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
sql create table stb4 (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool, c8 binary(10), c9 nchar(9),c10 binary(16300)) TAGS(t1 int, t2 binary(10), t3 double)
sql create table tb4_0 using stb4 tags(0,'0',0.0)
sql create table tb4_1 using stb4 tags(1,'1',1.0)
sql create table tb4_2 using stb4 tags(2,'2',2.0)
sql create table tb4_3 using stb4 tags(3,'3',3.0)
sql create table tb4_4 using stb4 tags(4,'4',4.0)
$i = 0
$ts0 = 1625850000000
$blockNum = 5
$delta = 0
$tbname0 = tb4_
$a = 0
$b = 200
$c = 400
while $i < $blockNum
$x = 0
$rowNum = 1200
while $x < $rowNum
$ts = $ts0 + $x
$a = $a + 1
$b = $b + 1
$c = $c + 1
$d = $x / 10
$tin = $rowNum
$binary = 'binary . $c
$binary = $binary . '
$nchar = 'nchar . $c
$nchar = $nchar . '
$tbname = 'tb4_ . $i
$tbname = $tbname . '
sql insert into $tbname values ( $ts , $a , $b , $c , $d , $d , $c , true, $binary , $nchar , $binary )
$x = $x + 1
endw
$i = $i + 1
$ts0 = $ts0 + 259200000
endw
sleep 100
sql connect
......
......@@ -1959,6 +1959,117 @@ if $rows != 14 then
return -1
endi
sql select ts,c1 from stb4 where c1 = 200;
if $rows != 1 then
return -1
endi
if $data00 != @21-07-10 01:00:00.199@ then
return -1
endi
sql select ts,c1 from stb4 where c1 != 200;
if $rows != 5999 then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 >= 200 and c2 > 500 and c3 < 800 and c4 between 33 and 37 and c4 != 35 and c2 < 555 and c1 < 339 and c1 in (331,333,335);
if $rows != 3 then
return -1
endi
if $data00 != @21-07-10 01:00:00.330@ then
return -1
endi
if $data10 != @21-07-10 01:00:00.332@ then
return -1
endi
if $data20 != @21-07-10 01:00:00.334@ then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 > -3 and c1 < 5;
if $rows != 4 then
return -1
endi
if $data00 != @21-07-10 01:00:00.000@ then
return -1
endi
if $data10 != @21-07-10 01:00:00.001@ then
return -1
endi
if $data20 != @21-07-10 01:00:00.002@ then
return -1
endi
if $data30 != @21-07-10 01:00:00.003@ then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 >= 2 and c1 < 5;
if $rows != 3 then
return -1
endi
if $data00 != @21-07-10 01:00:00.001@ then
return -1
endi
if $data10 != @21-07-10 01:00:00.002@ then
return -1
endi
if $data20 != @21-07-10 01:00:00.003@ then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 >= -3 and c1 < 1300;
if $rows != 1299 then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 >= 1298 and c1 < 1300 or c2 > 210 and c2 < 213;
if $rows != 4 then
return -1
endi
if $data00 != @21-07-10 01:00:00.010@ then
return -1
endi
if $data10 != @21-07-10 01:00:00.011@ then
return -1
endi
if $data20 != @21-07-13 01:00:00.097@ then
return -1
endi
if $data30 != @21-07-13 01:00:00.098@ then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 >= -3;
if $rows != 6000 then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 < 1400;
if $rows != 1399 then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 < 1100;
if $rows != 1099 then
return -1
endi
sql select ts,c1,c2,c3,c4 from stb4 where c1 in(10,100, 1100,3300) and c1 != 10;
if $rows != 3 then
return -1
endi
if $data00 != @21-07-10 01:00:00.099@ then
return -1
endi
if $data10 != @21-07-10 01:00:01.099@ then
return -1
endi
if $data20 != @21-07-16 01:00:00.899@ then
return -1
endi
print "ts test"
......
......@@ -80,7 +80,7 @@ cd ../../../debug; make
./test.sh -f general/parser/set_tag_vals.sim
./test.sh -f general/parser/tags_filter.sim
./test.sh -f general/parser/slimit_alter_tags.sim
#./test.sh -f general/parser/join.sim
./test.sh -f general/parser/join.sim
./test.sh -f general/parser/join_multivnode.sim
./test.sh -f general/parser/binary_escapeCharacter.sim
./test.sh -f general/parser/repeatAlter.sim
......
......@@ -24,9 +24,9 @@ function stopTaosd {
function dohavecore(){
corefile=`find $corepath -mmin 1`
core_file=`echo $corefile|cut -d " " -f2`
proc=`echo $corefile|cut -d "_" -f3`
if [ -n "$corefile" ];then
core_file=`echo $corefile|cut -d " " -f2`
proc=`file $core_file|awk -F "execfn:" '/execfn:/{print $2}'|tr -d \' |awk '{print $1}'|tr -d \,`
echo 'taosd or taos has generated core'
rm case.log
if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]] && [[ $1 == 1 ]]; then
......@@ -46,7 +46,7 @@ function dohavecore(){
fi
fi
if [[ $1 == 1 ]];then
echo '\n'|gdb /usr/local/taos/bin/$proc $core_file -ex "bt 10" -ex quit
echo '\n'|gdb $proc $core_file -ex "bt 10" -ex quit
exit 8
fi
fi
......