Commit 513f284d authored by S shenglian zhou

Merge branch 'develop' into feature/szhou/schemaless

build/
.ycm_extra_conf.py
.vscode/
.idea/
cmake-build-debug/
......
@@ -2,18 +2,18 @@
## <a class="anchor" id="intro"></a>TDengine Introduction

TDengine is an innovative big data processing product that TAOS Data launched to meet the fast-growing IoT big data market and its technical challenges. It does not depend on any third-party software, nor is it an optimized or repackaged open-source database or stream computing product; it was developed from scratch after absorbing the strengths of traditional relational databases, NoSQL databases, stream computing engines, message queues, and similar software, and it has unique advantages in time-series big data processing.

One of TDengine's modules is a time-series database. Beyond that, to reduce development complexity and maintenance effort, TDengine also provides caching, message queuing, subscription, stream computing, and other functions, forming a full-stack solution for IoT and Industrial Internet big data and an efficient, easy-to-use IoT big data platform. Compared with typical big data platforms such as Hadoop, it has the following distinctive features:

* __10x+ performance improvement__: with an innovative data storage structure, a single core can handle at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, over ten times faster than existing general-purpose databases.
* __Hardware or cloud service cost cut to 1/5__: thanks to this performance, computing resources are less than 1/5 of a typical big data solution; with columnar storage and advanced compression, storage space is less than 1/10 of a general-purpose database.
* __Full-stack time-series processing engine__: database, message queue, cache, and stream computing are built into one product, so applications no longer need to integrate Kafka/Redis/HBase/Spark/HDFS, greatly reducing development and maintenance complexity and cost.
* __Powerful analysis functions__: data from ten years ago or one second ago can be queried simply by specifying a time range. Data can be aggregated over the time axis or across devices, and ad-hoc queries can be issued at any time from Shell, Python, R, or MATLAB.
* __Seamless integration with third-party tools__: TDengine integrates with Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R, and more without a single line of code. OPC, Hadoop, Spark, and others will follow, and BI tools will also connect seamlessly.
* __Zero operations cost, zero learning cost__: cluster installation is quick and simple, with no database/table sharding and real-time backup. It offers standard-SQL-like syntax, RESTful support, and Python/Java/C/C++/C#/Go/Node.js connectors; being similar to MySQL, it has essentially zero learning cost.

With TDengine, the total cost of ownership of a typical IoT, connected-vehicle, or Industrial Internet big data platform can be cut substantially. Note, however, that because it fully exploits the characteristics of IoT time-series data, it is not suitable for general-purpose data such as web crawlers, Weibo, WeChat, e-commerce, ERP, or CRM.

![TDengine ecosystem](page://images/eco_system.png)

<center>Figure 1. The TDengine ecosystem</center>
@@ -21,42 +21,47 @@ One of TDengine's modules is a time-series database. Beyond that, to reduce development
## <a class="anchor" id="scenes"></a>Overall Applicable Scenarios of TDengine

As an IoT big data platform, TDengine's typical applicable scenarios are in the IoT domain, where the user has a meaningful volume of data. The rest of this document focuses on systems in this domain; systems outside it, such as CRM or ERP, are out of scope.

### Characteristics and Requirements of Data Sources

From the data source perspective, designers can assess TDengine's applicability to the target application system from the following angles.
|Data Source Characteristics and Requirements|Not Applicable|Might Be Applicable|Very Applicable|Notes|
|---|---|---|---|---|
|Huge total data volume| | | √ |TDengine scales out well in capacity and couples this with a high-compression storage structure, achieving industry-leading storage efficiency.|
|Occasionally or continuously high data ingestion rate| | | √ |TDengine's performance far exceeds comparable products; it can continuously ingest large volumes of data on the same hardware, and ships with a benchmarking tool that is easy to run in the user's environment.|
|Huge number of data sources| | | √ |TDengine includes optimizations specifically for large numbers of data sources, covering both writes and queries, and is particularly good at handling massive numbers (tens of millions or more) of data sources efficiently.|
### System Architecture Requirements

|System Architecture Requirements|Not Applicable|Might Be Applicable|Very Applicable|Notes|
|---|---|---|---|---|
|A simple, reliable architecture| | | √ |TDengine's architecture is very simple and reliable, with a built-in message queue, cache, stream computing, and monitoring, so no extra third-party products need to be integrated.|
|Fault tolerance and high reliability| | | √ |TDengine's clustering automatically provides fault tolerance, disaster recovery, and other high-reliability features.|
|Standards compliance| | | √ |TDengine exposes its main functionality through standard SQL and follows established standards.|
### System Function Requirements

|System Function Requirements|Not Applicable|Might Be Applicable|Very Applicable|Notes|
|---|---|---|---|---|
|Complete built-in data processing algorithms| | √ | |TDengine implements general-purpose data processing algorithms, but cannot yet cover every industry-specific requirement, so special kinds of processing may still need to be handled at the application layer.|
|Heavy cross-table query workloads| | √ | |This kind of processing is better left to a relational database, or handled by combining TDengine with a relational database.|
### System Performance Requirements

|System Performance Requirements|Not Applicable|Might Be Applicable|Very Applicable|Notes|
|---|---|---|---|---|
|Large overall processing capacity| | | √ |TDengine clusters scale processing capacity simply by adding servers.|
|High-speed data processing| | | √ |TDengine's storage and processing are designed and optimized specifically for IoT, typically delivering several times the speed of comparable products.|
|Fast processing of fine-grained data| | | √ |Here TDengine's performance fully matches relational and NoSQL data processing systems.|
### System Maintenance Requirements

|System Maintenance Requirements|Not Applicable|Might Be Applicable|Very Applicable|Notes|
|---|---|---|---|---|
|Reliable day-to-day operation| | | √ |TDengine's architecture is very stable and reliable, routine maintenance is simple, and the demands on operators are modest, minimizing human error and accidents.|
|Controllable operations learning curve| | | √ |As above.|
|A large talent pool in the market| √ | | |As a new-generation product, TDengine still has few experienced practitioners in the market; however, the learning cost is low, and as the vendor we also provide training and operations support.|
@@ -191,7 +191,7 @@ cdf548465318
1. Port mapping (-p) maps network ports opened inside the container to specified host ports. Mounting a local directory (-v) keeps data synchronized between the host and the container, so data is not lost when the container is removed.

```bash
$ docker run -d -v /etc/taos:/etc/taos -p 6041:6041 tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
......
@@ -13,7 +13,7 @@ TDengine uses a relational data model, so databases and tables must be created. Therefore, for an

```mysql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```

The statement above creates a database named power whose data is kept for 365 days (data older than that is deleted automatically), with one data file every 10 days, 6 memory blocks, and updates allowed. For detailed syntax and parameters, see the [TAOS SQL data management](https://www.taosdata.com/cn/documentation/taos-sql#management) section.

After creating the database, switch to it with the SQL command USE, for example:
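A minimal illustration, using the power database created above:

```mysql
USE power;
```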
......
@@ -178,15 +178,9 @@ Connection conn = DriverManager.getConnection(jdbcUrl);

The example above uses the JDBC-JNI driver to establish a connection to hostname taosdemo.com, port 6030 (TDengine's default port), and database test. The URL specifies the user name (user) root and the password (password) taosdata.

**Note**: with the JDBC-JNI driver, the taos-jdbcdriver package depends on the platform's native client library (libtaos.so on Linux; taos.dll on Windows).
> When developing on Windows, install the matching TDengine [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client). On a Linux server the client is installed by default along with TDengine; you can also install the [Linux client](https://www.taosdata.com/cn/getting-started/#%E5%AE%A2%E6%88%B7%E7%AB%AF) separately to connect to a remote TDengine Server.

For JDBC-JNI usage, see the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
@@ -194,9 +188,9 @@ The standard JDBC URL format for TDengine is:

`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`

The configuration parameters in the URL are:

* user: TDengine login user name, default value 'root'
* password: login password, default value 'taosdata'
* cfgdir: client configuration directory; defaults to `/etc/taos` on Linux OS and `C:/TDengine/cfg` on Windows OS
* charset: character set used by the client, defaulting to the system character set.
* locale: client locale, defaulting to the current system locale.
* timezone: time zone used by the client, defaulting to the current system time zone.
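For illustration, a JDBC-JNI URL and a JDBC-RESTful URL following this specification might look as below (taosdemo.com and the test database are the placeholder values used earlier on this page; 6041 is assumed to be the default RESTful port):

```
jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata
jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata
```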
@@ -222,9 +216,9 @@ public Connection getConn() throws Exception{

The example above establishes a connection to hostname taosdemo.com, port 6030, and database test; the commented lines show the JDBC-RESTful variant. The URL specifies the user name (user) root and the password (password) taosdata, while connProps carries the character set, locale, time zone, and so on.

The configuration parameters in properties are:

* TSDBDriver.PROPERTY_KEY_USER: TDengine login user name, default value 'root'
* TSDBDriver.PROPERTY_KEY_PASSWORD: login password, default value 'taosdata'
* TSDBDriver.PROPERTY_KEY_CONFIG_DIR: client configuration directory; defaults to `/etc/taos` on Linux OS and `C:/TDengine/cfg` on Windows OS
* TSDBDriver.PROPERTY_KEY_CHARSET: character set used by the client, defaulting to the system character set.
* TSDBDriver.PROPERTY_KEY_LOCALE: client locale, defaulting to the current system locale.
* TSDBDriver.PROPERTY_KEY_TIME_ZONE: time zone used by the client, defaulting to the current system time zone.
......
@@ -315,6 +315,10 @@ TDengine's asynchronous APIs all use a non-blocking calling model. Applications can use multiple threads

1. Call `taos_stmt_init` to create a parameter binding object;
2. Call `taos_stmt_prepare` to parse the INSERT statement;
3. If the INSERT statement reserves a placeholder for the table name but not for TAGS, call `taos_stmt_set_tbname` to set the table name;
   * Starting with version 2.1.6.0, for writing into many sub-tables of one super table at the same time (with little data per sub-table, possibly a single row), a dedicated optimized interface `taos_stmt_set_sub_tbname` is provided. It saves overall processing time by loading table meta in advance and avoiding repeated parsing of the SQL statement (this optimization does not support the auto-create-table syntax); see the C sketch under `taos_stmt_set_sub_tbname` below. Use it as follows:
     1. First call `taos_load_table_info` to load the table meta of all required super tables and sub-tables;
     2. Then call `taos_stmt_set_tbname` on the first sub-table of a super table to set its name;
     3. Set the names of subsequent sub-tables with `taos_stmt_set_sub_tbname`.
4. If the INSERT statement reserves placeholders for both the table name and TAGS (for example, when it uses auto-create-table), call `taos_stmt_set_tbname_tags` to set the table name and TAG values;
5. Call `taos_stmt_bind_param_batch` to set the VALUES column-wise, or call `taos_stmt_bind_param` to set them one row at a time;
6. Call `taos_stmt_add_batch` to add the currently bound parameters to the batch;
@@ -358,6 +362,12 @@ typedef struct TAOS_BIND {

  (new in version 2.1.1.0, only for replacing parameter values in INSERT statements)
  When the table name in the SQL statement uses a `?` placeholder, this function can bind a concrete table name.

- `int taos_stmt_set_sub_tbname(TAOS_STMT* stmt, const char* name)`

  (new in version 2.1.6.0, only for replacing the names of the 2nd through nth target sub-tables, all under the same super table, in an INSERT statement)
  When the table name in the SQL statement uses a `?` placeholder and one batch writes to multiple sub-tables of the same super table, this function can bind the names of every sub-table except the first.
  *Note:* the client must first call `taos_load_table_info` to load the table meta of all required super tables and sub-tables, then call `taos_stmt_set_tbname` for the first sub-table of a super table, and `taos_stmt_set_sub_tbname` for the following sub-tables.
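To make the call order concrete, here is a minimal sketch in C. It is not taken from the TDengine sources: the super table schema `meters(ts timestamp, current float)`, the pre-existing sub-table names d1001/d1002, and the timestamp values are illustrative assumptions, and error paths skip cleanup for brevity.

```c
#include <string.h>
#include <taos.h>

// Sketch of the 2.1.6.0 fast path: preload meta once, bind the first
// sub-table with taos_stmt_set_tbname(), later ones with
// taos_stmt_set_sub_tbname(), then execute one batched INSERT.
static int insert_two_subtables(TAOS *taos) {
  // 1. preload table meta of the super table and all target sub-tables
  if (taos_load_table_info(taos, "meters,d1001,d1002") != 0) return -1;

  TAOS_STMT *stmt = taos_stmt_init(taos);
  if (stmt == NULL) return -1;
  if (taos_stmt_prepare(stmt, "INSERT INTO ? VALUES (?, ?)", 0) != 0) return -1;

  const char *subtables[] = {"d1001", "d1002"};
  for (int i = 0; i < 2; ++i) {
    // 2./3. first sub-table via set_tbname, the rest via set_sub_tbname
    int code = (i == 0) ? taos_stmt_set_tbname(stmt, subtables[i])
                        : taos_stmt_set_sub_tbname(stmt, subtables[i]);
    if (code != 0) return -1;

    // bind one (ts, current) row for this sub-table
    int64_t ts = 1626861392589LL + i;       // assumed sample timestamp (ms)
    float   current = 10.5f + (float)i;     // assumed sample value
    TAOS_BIND params[2];
    memset(params, 0, sizeof(params));
    params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
    params[0].buffer        = &ts;
    params[0].buffer_length = sizeof(ts);
    params[0].length        = &params[0].buffer_length;
    params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
    params[1].buffer        = &current;
    params[1].buffer_length = sizeof(current);
    params[1].length        = &params[1].buffer_length;
    if (taos_stmt_bind_param(stmt, params) != 0) return -1;
    if (taos_stmt_add_batch(stmt) != 0) return -1;
  }

  int ret = taos_stmt_execute(stmt);
  taos_stmt_close(stmt);
  return ret == 0 ? 0 : -1;
}
```

The point of the fast path is that only the first sub-table pays the full name-binding cost; the remaining sub-tables reuse the preloaded meta and the already-parsed statement.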
- `int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)`

  (new in version 2.1.2.0, only for replacing parameter values in INSERT statements)
......
@@ -217,7 +217,7 @@ taosd -C

| 99 | queryBufferSize | | **S** | MB | Amount of memory reserved for all concurrent queries. | | | A rule of thumb: multiply the maximum expected number of concurrent queries by the number of tables involved, then by 170. (Before version 2.0.15 this parameter was in bytes.) |
| 100 | ratioOfQueryCores | | **S** | | Maximum number of query threads. | | | The minimum 0 means a single query thread; the maximum 2 means up to twice as many query threads as CPU cores. The default 1 means as many query threads as CPU cores. Fractions are allowed, e.g. 0.5 means up to half as many query threads as CPU cores. |
| 101 | update | | **S** | | Allow updating existing data rows | 0 \| 1 | 0 | Since version 2.0.8.0 |
| 102 | cacheLast | | **S** | | Whether to cache the most recent data of sub-tables in memory | 0: off; 1: cache the last row of each sub-table; 2: cache the last non-NULL value of each column of each sub-table; 3: both. (The 0-3 range is supported since version 2.1.2.0; before that only [0, 1].) | 0 | Not supported in taos.cfg before versions 2.1.2.0 and 2.0.20.7. |
| 103 | numOfCommitThreads | YES | **S** | | Maximum number of commit (write) threads | | | |
| 104 | maxWildCardsLength | | **C** | bytes | Maximum allowed length of the wildcard string in the LIKE operator | 0-16384 | 100 | Added in version 2.1.6.1. |
......
@@ -1285,6 +1285,19 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions

Note: (this function is new in version 2.1.3.0) the number of output rows is the total number of rows in the range minus one, and the first row has no output. DERIVATIVE can be used on a super table when GROUP BY splits the data into separate timelines (i.e. GROUP BY tbname).

Example:
```mysql
taos> select derivative(current, 10m, 0) from t1;
ts | derivative(current, 10m, 0) |
========================================================
2021-08-20 10:11:22.790 | 0.500000000 |
2021-08-20 11:11:22.791 | 0.166666620 |
2021-08-20 12:11:22.791 | 0.000000000 |
2021-08-20 13:11:22.792 | 0.166666620 |
2021-08-20 14:11:22.792 | -0.666666667 |
Query OK, 5 row(s) in set (0.004883s)
```
- **SPREAD**

```mysql
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
......
@@ -71,7 +71,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series

## [Connector](/connector)

- [C/C++ Connector](/connector#c-cpp): primary method to connect to TDengine server through libtaos client library
- [Java Connector(JDBC)](/connector/java): driver for connecting to the server from Java applications using the JDBC API
- [Python Connector](/connector#python): driver for connecting to TDengine server from Python applications
- [RESTful Connector](/connector#restful): a simple way to interact with TDengine via HTTP
- [Go Connector](/connector#go): driver for connecting to TDengine server from Go applications
......
This diff is collapsed.
@@ -284,3 +284,5 @@ keepColumnName 1

# 0 no query allowed, queries are disabled
# queryBufferSize -1
# percent of redundant data in tsdb meta that triggers meta-data compaction; 0 means do not compact
# tsdbMetaCompactRatio 0
@@ -214,6 +214,7 @@ SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t function
                          int16_t size);
size_t  tscNumOfExprs(SQueryInfo* pQueryInfo);
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo);
SExprInfo *tscExprGet(SQueryInfo* pQueryInfo, int32_t index);
int32_t tscExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepcopy);
int32_t tscExprCopyAll(SArray* dst, const SArray* src, bool deepcopy);
......
@@ -643,7 +643,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
  for(int32_t j = 0; j < numOfExpr; ++j) {
    pCtx[j].pOutput += (pCtx[j].outputBytes * numOfRows);
    if (pCtx[j].functionId == TSDB_FUNC_TOP || pCtx[j].functionId == TSDB_FUNC_BOTTOM) {
      if (j > 0) pCtx[j].ptsOutputBuf = pCtx[j - 1].pOutput;
    }
  }
......
@@ -2590,13 +2590,12 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
      // set the first column ts for diff query
      if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
        SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0};
        SExprInfo*   pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
                                           TSDB_KEYSIZE, getNewResColId(pCmd), TSDB_KEYSIZE, false);

        SColumnList ids = createColumnList(1, 0, 0);
        insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
      }

      SExprInfo* pExpr = tscExprAppend(pQueryInfo, functionId, &index, resultType, resultSize, getNewResColId(pCmd), intermediateResSize, false);
@@ -2869,7 +2868,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
      const int32_t TS_COLUMN_INDEX = PRIMARYKEY_TIMESTAMP_COL_INDEX;
      SColumnList   ids = createColumnList(1, index.tableIndex, TS_COLUMN_INDEX);
      insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
                        aAggs[TSDB_FUNC_TS].name, pExpr);

      colIndex += 1;  // the first column is ts
@@ -5864,10 +5863,12 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
        pQueryInfo->order.orderColId = pSchema[index.columnIndex].colId;
      } else if (isTopBottomQuery(pQueryInfo)) {
        /* order of top/bottom query in interval is not valid */
        int32_t pos = tscExprTopBottomIndex(pQueryInfo);
        assert(pos > 0);
        SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
        assert(pExpr->base.functionId == TSDB_FUNC_TS);

        pExpr = tscExprGet(pQueryInfo, pos);
        if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
          return invalidOperationMsg(pMsgBuf, msg5);
        }
@@ -5959,10 +5960,12 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
        }
      } else {
        /* order of top/bottom query in interval is not valid */
        int32_t pos = tscExprTopBottomIndex(pQueryInfo);
        assert(pos > 0);
        SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
        assert(pExpr->base.functionId == TSDB_FUNC_TS);

        pExpr = tscExprGet(pQueryInfo, pos);
        if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
          return invalidOperationMsg(pMsgBuf, msg5);
        }
......
@@ -2678,7 +2678,7 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
  return 0;
}

static void decompressQueryColData(SSqlObj *pSql, SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int32_t compLen) {
  int32_t decompLen = 0;
  int32_t numOfCols = pQueryInfo->fieldsInfo.numOfOutput;
  int32_t *compSizes;
@@ -2715,6 +2715,9 @@ static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char *
    pData = *data + compLen + numOfCols * sizeof(int32_t);
  }
tscDebug("0x%"PRIx64" decompress col data, compressed size:%d, decompressed size:%d",
pSql->self, (int32_t)(compLen + numOfCols * sizeof(int32_t)), decompLen);
  int32_t tailLen = pRes->rspLen - sizeof(SRetrieveTableRsp) - decompLen;
  memmove(*data + decompLen, pData, tailLen);
  memmove(*data, outputBuf, decompLen);
@@ -2749,7 +2752,7 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
  //Decompress col data if compressed from server
  if (pRetrieve->compressed) {
    int32_t compLen = htonl(pRetrieve->compLen);
    decompressQueryColData(pSql, pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
  }

  STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
......
@@ -2419,6 +2419,19 @@ size_t tscNumOfExprs(SQueryInfo* pQueryInfo) {
  return taosArrayGetSize(pQueryInfo->exprList);
}
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo){
size_t numOfExprs = tscNumOfExprs(pQueryInfo);
for(int32_t i = 0; i < numOfExprs; ++i) {
SExprInfo* pExpr = tscExprGet(pQueryInfo, i);
if (pExpr == NULL)
continue;
if (pExpr->base.functionId == TSDB_FUNC_TOP || pExpr->base.functionId == TSDB_FUNC_BOTTOM) {
return i;
}
}
return -1;
}
// todo REFACTOR
void tscExprAddParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes) {
  assert (pExpr != NULL || argument != NULL || bytes != 0);
......
This diff is collapsed.
@@ -517,6 +517,7 @@ void tdAppendMemRowToDataCol(SMemRow row, STSchema *pSchema, SDataCols *pCols, b
  }
}

//TODO: refactor this function to eliminate additional memory copy
int tdMergeDataCols(SDataCols *target, SDataCols *source, int rowsToMerge, int *pOffset, bool forceSetNull) {
  ASSERT(rowsToMerge > 0 && rowsToMerge <= source->numOfRows);
  ASSERT(target->numOfCols == source->numOfCols);
......
@@ -76,12 +76,11 @@ int32_t tsMaxBinaryDisplayWidth = 30;
int32_t tsCompressMsgSize = -1;

/* denote if server needs to compress the retrieved column data before adding to the rpc response message body.
 * 0: all data are compressed
 * -1: all data are not compressed
 * other values: if any retrieved column size is greater than tsCompressColData, all data will be compressed.
 */
int32_t tsCompressColData = -1;

// client
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
@@ -95,7 +94,7 @@ int32_t tsMaxNumOfOrderedResults = 100000;

// 10 ms for sliding time, the value will be changed in case of time precision changes
int32_t tsMinSlidingTime = 10;

// the maximum number of distinct query results
int32_t tsMaxNumOfDistinctResults = 1000 * 10000;
// 1 us for interval time range, changed accordingly
@@ -150,6 +149,7 @@ int32_t tsMaxVgroupsPerDb = 0;
int32_t tsMinTablePerVnode = TSDB_TABLES_STEP;
int32_t tsMaxTablePerVnode = TSDB_DEFAULT_TABLES;
int32_t tsTableIncStepPerVnode = TSDB_TABLES_STEP;
int32_t tsTsdbMetaCompactRatio = TSDB_META_COMPACT_RATIO;
// tsdb config
@@ -1006,10 +1006,10 @@ static void doInitGlobalConfig(void) {
  cfg.option = "compressColData";
  cfg.ptr = &tsCompressColData;
  cfg.valType = TAOS_CFG_VTYPE_INT32;
  cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
  cfg.minValue = -1;
  cfg.maxValue = 100000000.0f;
  cfg.ptrLength = 0;
  cfg.unitType = TAOS_CFG_UTYPE_NONE;
  taosInitConfigOption(cfg);
@@ -1581,6 +1581,16 @@ static void doInitGlobalConfig(void) {
  cfg.unitType = TAOS_CFG_UTYPE_NONE;
  taosInitConfigOption(cfg);
cfg.option = "tsdbMetaCompactRatio";
cfg.ptr = &tsTsdbMetaCompactRatio;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.minValue = 0;
cfg.maxValue = 100;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
  assert(tsGlobalConfigNum <= TSDB_CFG_MAX_NUM);
#ifdef TD_TSZ
  // lossy compress
......
@@ -38,12 +38,12 @@ void tVariantCreate(tVariant *pVar, SStrToken *token) {
  switch (token->type) {
    case TSDB_DATA_TYPE_BOOL: {
      if (strncasecmp(token->z, "true", 4) == 0) {
        pVar->i64 = TSDB_TRUE;
      } else if (strncasecmp(token->z, "false", 5) == 0) {
        pVar->i64 = TSDB_FALSE;
      } else {
        return;
      }
      break;
......
@@ -10,7 +10,8 @@ import sys

_datetime_epoch = datetime.utcfromtimestamp(0)

def _is_not_none(obj):
    return obj != None

class TaosBind(ctypes.Structure):
    _fields_ = [
        ("buffer_type", c_int),
@@ -299,27 +300,14 @@ class TaosMultiBind(ctypes.Structure):
        self.buffer = cast(buffer, c_void_p)
        self.num = len(values)

    def _str_to_buffer(self, values):
        self.num = len(values)
        is_null = [1 if v == None else 0 for v in values]
        self.is_null = cast((c_byte * self.num)(*is_null), c_char_p)

        if sum(is_null) == self.num:
            self.length = (c_int32 * len(values))(0 * self.num)
            return
        if sys.version_info < (3, 0):
            _bytes = [bytes(value) if value is not None else None for value in values]
            buffer_length = max(len(b) + 1 for b in _bytes if b is not None)
@@ -347,9 +335,26 @@ class TaosMultiBind(ctypes.Structure):
            )
            self.length = (c_int32 * len(values))(*[len(b) if b is not None else 0 for b in _bytes])
            self.buffer_length = buffer_length
    def binary(self, values):
        self.buffer_type = FieldType.C_BINARY
        self._str_to_buffer(values)

    def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
        try:
            buffer = cast(values, c_void_p)
        except:
            buffer_type = c_int64 * len(values)
            buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])

        self.buffer_type = FieldType.C_TIMESTAMP
        self.buffer = cast(buffer, c_void_p)
        self.buffer_length = sizeof(c_int64)
        self.num = len(values)
        self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)

    def nchar(self, values):
        # type: (list[str]) -> None
        self.buffer_type = FieldType.C_NCHAR
        self._str_to_buffer(values)
    def tinyint_unsigned(self, values):
        self.buffer_type = FieldType.C_TINYINT_UNSIGNED
......
@@ -3,6 +3,9 @@

"""Constants in TDengine python
"""

import ctypes, struct

class FieldType(object):
    """TDengine Field Types"""

@@ -33,8 +36,8 @@ class FieldType(object):
    C_INT_UNSIGNED_NULL = 4294967295
    C_BIGINT_NULL = -9223372036854775808
    C_BIGINT_UNSIGNED_NULL = 18446744073709551615
    C_FLOAT_NULL = ctypes.c_float(struct.unpack("<f", b"\x00\x00\xf0\x7f")[0])
    C_DOUBLE_NULL = ctypes.c_double(struct.unpack("<d", b"\x00\x00\x00\x00\x00\xff\xff\x7f")[0])
    C_BINARY_NULL = bytearray([int("0xff", 16)])
    # Timestamp precision definition
    C_TIMESTAMP_MILLI = 0
......
from taos import *
conn = connect()
dbname = "pytest_taos_stmt_multi"
conn.execute("drop database if exists %s" % dbname)
conn.execute("create database if not exists %s" % dbname)
conn.select_db(dbname)
conn.execute(
"create table if not exists log(ts timestamp, bo bool, nil tinyint, \
ti tinyint, si smallint, ii int, bi bigint, tu tinyint unsigned, \
su smallint unsigned, iu int unsigned, bu bigint unsigned, \
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
)
stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
params = new_multi_binds(16)
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
params[1].bool((True, None, False))
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
params[3].tinyint([0, 127, None])
params[4].smallint([3, None, 2])
params[5].int([3, 4, None])
params[6].bigint([3, 4, None])
params[7].tinyint_unsigned([3, 4, None])
params[8].smallint_unsigned([3, 4, None])
params[9].int_unsigned([3, 4, None])
params[10].bigint_unsigned([3, 4, None])
params[11].float([3, None, 1])
params[12].double([3, None, 1.2])
params[13].binary(["abc", "dddafadfadfadfadfa", None])
# params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
params[14].nchar([None, None, None])
params[15].timestamp([None, None, 1626861392591])
stmt.bind_param_batch(params)
stmt.execute()
result = stmt.use_result()
assert result.affected_rows == 3
result.close()
result = conn.query("select * from log")
for row in result:
print(row)
result.close()
stmt.close()
conn.close()
@@ -277,6 +277,7 @@ do { \
#define TSDB_MAX_TABLES 10000000
#define TSDB_DEFAULT_TABLES 1000000
#define TSDB_TABLES_STEP 1000
#define TSDB_META_COMPACT_RATIO 0 // disable tsdb meta compact by default
#define TSDB_MIN_DAYS_PER_FILE 1
#define TSDB_MAX_DAYS_PER_FILE 3650
......
@@ -190,7 +190,9 @@ static void parse_args(
        fprintf(stderr, "password reading error\n");
      }
      taosSetConsoleEcho(true);
      if (EOF == getchar()) {
        fprintf(stderr, "getchar() return EOF\n");
      }
    } else {
      tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
      strcpy(argv[i], "-p");
......
@@ -659,7 +659,21 @@ static FILE * g_fpOfInsertResult = NULL;
    fprintf(stderr, "PERF: "fmt, __VA_ARGS__); } while(0)

#define errorPrint(fmt, ...) \
  do {\
struct tm Tm, *ptm;\
struct timeval timeSecs; \
time_t curTime;\
gettimeofday(&timeSecs, NULL); \
curTime = timeSecs.tv_sec;\
ptm = localtime_r(&curTime, &Tm);\
fprintf(stderr, " \033[31m");\
fprintf(stderr, "%02d/%02d %02d:%02d:%02d.%06d %08" PRId64 " ",\
ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_hour,\
ptm->tm_min, ptm->tm_sec, (int32_t)timeSecs.tv_usec,\
taosGetSelfPthreadId());\
fprintf(stderr, "ERROR: "fmt, __VA_ARGS__);\
fprintf(stderr, " \033[0m");\
} while(0)
// for strncpy buffer overflow
#define min(a, b) (((a) < (b)) ? (a) : (b))
@@ -5202,7 +5216,8 @@ static int64_t generateStbRowData(
        tmpLen = strlen(tmp);
        tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, INT_BUFF_LEN));
      } else {
        errorPrint( "Not support data type: %s\n",
            stbInfo->columns[i].dataType);
        return -1;
      }
@@ -5215,8 +5230,7 @@ static int64_t generateStbRowData(
    return 0;
  }

  tstrncpy(pstr + dataLen - 1, ")", 2);

  verbosePrint("%s() LN%d, dataLen:%"PRId64"\n", __func__, __LINE__, dataLen);
  verbosePrint("%s() LN%d, recBuf:\n\t%s\n", __func__, __LINE__, recBuf);
......
@@ -62,6 +62,20 @@ typedef struct {

#define errorPrint(fmt, ...) \
  do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)
static bool isStringNumber(char *input)
{
int len = strlen(input);
if (0 == len) {
return false;
}
for (int i = 0; i < len; i++) {
if (!isdigit(input[i]))
return false;
}
return true;
}
// -------------------------- SHOW DATABASE INTERFACE-----------------------
enum _show_db_index {
@@ -472,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
      g_args.table_batch = atoi(arg);
      break;
    case 'T':
if (!isStringNumber(arg)) {
errorPrint("%s", "\n\t-T need a number following!\n");
exit(EXIT_FAILURE);
}
      g_args.thread_num = atoi(arg);
      break;
    case OPT_ABORT:
......
@@ -63,12 +63,12 @@ int taosSetConsoleEcho(bool on)
  }

  if (on)
    term.c_lflag |= ECHOFLAGS;
  else
    term.c_lflag &= ~ECHOFLAGS;

  err = tcsetattr(STDIN_FILENO, TCSAFLUSH, &term);
  if (err == -1 || err == EINTR) {
    perror("Cannot set the attribution of the terminal");
    return -1;
  }
......
@@ -43,9 +43,7 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
#define GET_NUM_OF_RESULTS(_r) (((_r)->outputBuf) == NULL? 0:((_r)->outputBuf)->info.rows)

#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData ? 1 : 0)

enum {
  // when query starts to execute, this status will set
@@ -614,6 +612,7 @@ int32_t getNumOfResult(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx* pCtx, int3
void finalizeQueryResult(SOperatorInfo* pOperator, SQLFunctionCtx* pCtx, SResultRowInfo* pResultRowInfo, int32_t* rowCellInfoOffset);
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows);
void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity);
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput);

void freeParam(SQueryParam *param);
int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param);
......
@@ -3591,7 +3591,7 @@ void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i
    // set the timestamp output buffer for top/bottom/diff query
    int32_t fid = pCtx[i].functionId;
    if (fid == TSDB_FUNC_TOP || fid == TSDB_FUNC_BOTTOM || fid == TSDB_FUNC_DIFF || fid == TSDB_FUNC_DERIVATIVE) {
      if (i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
    }
  }
@@ -3619,14 +3619,15 @@ void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOf
    }
  }

  for (int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
    SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
    pBInfo->pCtx[i].pOutput = pColInfo->pData + pColInfo->info.bytes * pDataBlock->info.rows;

    // re-establish output buffer pointer.
    int32_t functionId = pBInfo->pCtx[i].functionId;
    if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
      if (i > 0) pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
    }
  }
}
@@ -3644,7 +3645,35 @@ void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity) {
    }
  }
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput) {
bool needCopyTs = false;
int32_t tsNum = 0;
char *src = NULL;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
needCopyTs = true;
if (i > 0 && pCtx[i-1].functionId == TSDB_FUNC_TS_DUMMY){
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i - 1); // find ts data
src = pColRes->pData;
}
}else if(functionId == TSDB_FUNC_TS_DUMMY) {
tsNum++;
}
}
if (!needCopyTs) return;
if (tsNum < 2) return;
if (src == NULL) return;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if(functionId == TSDB_FUNC_TS_DUMMY) {
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i);
memcpy(pColRes->pData, src, pColRes->info.bytes * pRes->info.rows);
}
}
}
void initCtxOutputBuffer(SQLFunctionCtx* pCtx, int32_t size) {
  for (int32_t j = 0; j < size; ++j) {
@@ -3826,7 +3855,7 @@ void setResultRowOutputBufInitCtx(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pRe
    }

    if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF) {
      if (i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
    }

    if (!pResInfo->initialized) {
@@ -3887,7 +3916,7 @@ void setResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pResult, SQLF
    int32_t functionId = pCtx[i].functionId;
    if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
      if (i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
    }

    /*
@@ -5708,6 +5737,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
    pRes->info.rows = getNumOfResult(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
    if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
      copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
      clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
      return pRes;
    }
@@ -5733,8 +5763,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
    if (*newgroup) {
      if (pRes->info.rows > 0) {
        pProjectInfo->existDataBlock = pBlock;
        break;
      } else { // init output buffer for a new group data
        for (int32_t j = 0; j < pOperator->numOfOutput; ++j) {
          aAggs[pInfo->pCtx[j].functionId].xFinalize(&pInfo->pCtx[j]);
@@ -5764,7 +5793,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
      break;
    }
  }

  copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
  clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
  return (pInfo->pRes->info.rows > 0)? pInfo->pRes:NULL;
}
......
@@ -357,7 +357,7 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
  }

  (*pRsp)->precision = htons(pQueryAttr->precision);
  (*pRsp)->compressed = (int8_t)((tsCompressColData != -1) && checkNeedToCompressQueryCol(pQInfo));

  if (GET_NUM_OF_RESULTS(&(pQInfo->runtimeEnv)) > 0 && pQInfo->code == TSDB_CODE_SUCCESS) {
    doDumpQueryResult(pQInfo, (*pRsp)->data, (*pRsp)->compressed, &compLen);
@@ -367,8 +367,12 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
  if ((*pRsp)->compressed && compLen != 0) {
    int32_t numOfCols = pQueryAttr->pExpr2 ? pQueryAttr->numOfExpr2 : pQueryAttr->numOfOutput;
    int32_t origSize  = pQueryAttr->resultRowSize * s;
    int32_t compSize  = compLen + numOfCols * sizeof(int32_t);
    *contLen = *contLen - origSize + compSize;
    *pRsp = (SRetrieveTableRsp *)rpcReallocCont(*pRsp, *contLen);
    qDebug("QInfo:0x%"PRIx64" compress col data, uncompressed size:%d, compressed size:%d, ratio:%.2f",
        pQInfo->qId, origSize, compSize, (float)origSize / (float)compSize);
  }
  (*pRsp)->compLen = htonl(compLen);
......
@@ -42,8 +42,9 @@ typedef struct {
typedef struct {
  pthread_rwlock_t lock;

  SFSStatus* cstatus;        // current status
  SHashObj*  metaCache;      // meta cache
  SHashObj*  metaCacheComp;  // meta cache for compact
  bool       intxn;
  SFSStatus* nstatus;        // new status
} STsdbFS;
......
@@ -14,6 +14,8 @@
 */
#include "tsdbint.h"

extern int32_t tsTsdbMetaCompactRatio;

#define TSDB_MAX_SUBBLOCKS 8
static FORCE_INLINE int TSDB_KEY_FID(TSKEY key, int32_t days, int8_t precision) {
  if (key < 0) {
@@ -55,8 +57,9 @@ typedef struct {
#define TSDB_COMMIT_TXN_VERSION(ch) FS_TXN_VERSION(REPO_FS(TSDB_COMMIT_REPO(ch)))

static int  tsdbCommitMeta(STsdbRepo *pRepo);
static int  tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact);
static int  tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid);
static int  tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile);
static int  tsdbCommitTSData(STsdbRepo *pRepo);
static void tsdbStartCommit(STsdbRepo *pRepo);
static void tsdbEndCommit(STsdbRepo *pRepo, int eno);
@@ -261,6 +264,35 @@ int tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf) {

// =================== Commit Meta Data
static int tsdbInitCommitMetaFile(STsdbRepo *pRepo, SMFile* pMf, bool open) {
STsdbFS * pfs = REPO_FS(pRepo);
SMFile * pOMFile = pfs->cstatus->pmf;
SDiskID did;
// Create/Open a meta file or open the existing file
if (pOMFile == NULL) {
// Create a new meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(pMf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
if (open && tsdbCreateMFile(pMf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMf));
} else {
tsdbInitMFileEx(pMf, pOMFile);
if (open && tsdbOpenMFile(pMf, O_WRONLY) < 0) {
tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
}
return 0;
}
static int tsdbCommitMeta(STsdbRepo *pRepo) {
  STsdbFS *  pfs = REPO_FS(pRepo);
  SMemTable *pMem = pRepo->imem;
@@ -269,34 +301,25 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
  SActObj *  pAct = NULL;
  SActCont * pCont = NULL;
  SListNode *pNode = NULL;

  ASSERT(pOMFile != NULL || listNEles(pMem->actList) > 0);

  if (listNEles(pMem->actList) <= 0) {
    // no meta data to commit, just keep the old meta file
    tsdbUpdateMFile(pfs, pOMFile);
    if (tsTsdbMetaCompactRatio > 0) {
      if (tsdbInitCommitMetaFile(pRepo, &mf, false) < 0) {
        return -1;
      }
      int ret = tsdbCompactMetaFile(pRepo, pfs, &mf);
      if (ret < 0) tsdbError("compact meta file error");

      return ret;
    }
    return 0;
  } else {
    if (tsdbInitCommitMetaFile(pRepo, &mf, true) < 0) {
      return -1;
    }
  }
@@ -305,7 +328,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
    pAct = (SActObj *)pNode->data;
    if (pAct->act == TSDB_UPDATE_META) {
      pCont = (SActCont *)POINTER_SHIFT(pAct, sizeof(SActObj));
      if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len, false) < 0) {
        tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
                  tstrerror(terrno));
        tsdbCloseMFile(&mf);
...@@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) { ...@@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
tsdbCloseMFile(&mf); tsdbCloseMFile(&mf);
tsdbUpdateMFile(pfs, &mf); tsdbUpdateMFile(pfs, &mf);
if (tsTsdbMetaCompactRatio > 0 && tsdbCompactMetaFile(pRepo, pfs, &mf) < 0) {
tsdbError("compact meta file error");
}
return 0; return 0;
} }
@@ -375,7 +402,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn) {
           pRtn->minFid, pRtn->midFid, pRtn->maxFid);
}

-static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen) {
+static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact) {
  char      buf[64] = "\0";
  void *    pBuf = buf;
  SKVRecord rInfo;

@@ -401,13 +428,18 @@ static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void
  }

  tsdbUpdateMFileMagic(pMFile, POINTER_SHIFT(cont, contLen - sizeof(TSCKSUM)));
- SKVRecord *pRecord = taosHashGet(pfs->metaCache, (void *)&uid, sizeof(uid));
+
+ SHashObj* cache = compact ? pfs->metaCacheComp : pfs->metaCache;
+
+ pMFile->info.nRecords++;
+
+ SKVRecord *pRecord = taosHashGet(cache, (void *)&uid, sizeof(uid));
  if (pRecord != NULL) {
    pMFile->info.tombSize += (pRecord->size + sizeof(SKVRecord));
  } else {
    pMFile->info.nRecords++;
  }

- taosHashPut(pfs->metaCache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
+ taosHashPut(cache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));

  return 0;
}
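// The added `compact` flag keeps compaction bookkeeping out of the live cache: while
// tsdbCompactMetaFile rewrites records it passes compact == true, so new offsets land
// in pfs->metaCacheComp and pfs->metaCache stays valid until the caches are swapped.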
@@ -442,6 +474,129 @@ static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid) {
  return 0;
}
static int tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile) {
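  // Compact only when the share of deleted records (nDels/nRecords) or of dead bytes
  // (tombSize/size) reaches tsTsdbMetaCompactRatio percent of the meta file.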
float delPercent = (float)(pMFile->info.nDels) / (float)(pMFile->info.nRecords);
float tombPercent = (float)(pMFile->info.tombSize) / (float)(pMFile->info.size);
float compactRatio = (float)(tsTsdbMetaCompactRatio)/100;
if (delPercent < compactRatio && tombPercent < compactRatio) {
return 0;
}
if (tsdbOpenMFile(pMFile, O_RDONLY) < 0) {
tsdbError("open meta file %s compact fail", pMFile->f.rname);
return -1;
}
tsdbInfo("begin compact tsdb meta file, ratio:%d, nDels:%" PRId64 ",nRecords:%" PRId64 ",tombSize:%" PRId64 ",size:%" PRId64,
tsTsdbMetaCompactRatio, pMFile->info.nDels,pMFile->info.nRecords,pMFile->info.tombSize,pMFile->info.size);
SMFile mf;
SDiskID did;
// first create tmp meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)) + 1);
if (tsdbCreateMFile(&mf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to compact meta data", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
// second iterator metaCache
int code = -1;
int64_t maxBufSize = 1024;
SKVRecord *pRecord;
void *pBuf = NULL;
pBuf = malloc((size_t)maxBufSize);
if (pBuf == NULL) {
goto _err;
}
// init Comp
assert(pfs->metaCacheComp == NULL);
pfs->metaCacheComp = taosHashInit(4096, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
if (pfs->metaCacheComp == NULL) {
goto _err;
}
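  // Walk every live record in metaCache and copy its payload from the old meta file
  // into the temporary file; tsdbUpdateMetaRecord(..., true) registers the new offset
  // in pfs->metaCacheComp.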
pRecord = taosHashIterate(pfs->metaCache, NULL);
while (pRecord) {
if (tsdbSeekMFile(pMFile, pRecord->offset + sizeof(SKVRecord), SEEK_SET) < 0) {
tsdbError("vgId:%d failed to seek file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (pRecord->size > maxBufSize) {
maxBufSize = pRecord->size;
void* tmp = realloc(pBuf, (size_t)maxBufSize);
if (tmp == NULL) {
goto _err;
}
pBuf = tmp;
}
int nread = (int)tsdbReadMFile(pMFile, pBuf, pRecord->size);
if (nread < 0) {
tsdbError("vgId:%d failed to read file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (nread < pRecord->size) {
tsdbError("vgId:%d failed to read file %s since file corrupted, expected read:%" PRId64 " actual read:%d",
REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile), pRecord->size, nread);
goto _err;
}
if (tsdbUpdateMetaRecord(pfs, &mf, pRecord->uid, pBuf, (int)pRecord->size, true) < 0) {
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pRecord->uid,
tstrerror(terrno));
goto _err;
}
pRecord = taosHashIterate(pfs->metaCache, pRecord);
}
code = 0;
_err:
if (code == 0) TSDB_FILE_FSYNC(&mf);
tsdbCloseMFile(&mf);
tsdbCloseMFile(pMFile);
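  // On success, swap the compacted file in via rename and adopt metaCacheComp as the
  // live record cache; on failure, drop the temporary file and the scratch cache.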
if (code == 0) {
// rename meta.tmp -> meta
tsdbInfo("vgId:%d meta file rename %s -> %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf), TSDB_FILE_FULL_NAME(pMFile));
taosRename(mf.f.aname,pMFile->f.aname);
tstrncpy(mf.f.aname, pMFile->f.aname, TSDB_FILENAME_LEN);
tstrncpy(mf.f.rname, pMFile->f.rname, TSDB_FILENAME_LEN);
// update current meta file info
pfs->nstatus->pmf = NULL;
tsdbUpdateMFile(pfs, &mf);
taosHashCleanup(pfs->metaCache);
pfs->metaCache = pfs->metaCacheComp;
pfs->metaCacheComp = NULL;
} else {
// remove meta.tmp file
remove(mf.f.aname);
taosHashCleanup(pfs->metaCacheComp);
pfs->metaCacheComp = NULL;
}
tfree(pBuf);
ASSERT(mf.info.nDels == 0);
ASSERT(mf.info.tombSize == 0);
tsdbInfo("end compact tsdb meta file,code:%d,nRecords:%" PRId64 ",size:%" PRId64,
code,mf.info.nRecords,mf.info.size);
return code;
}
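For intuition, the compaction trigger above reduces to the following self-contained check; a minimal sketch, with shouldCompact and its sample values hypothetical and ratioPercent standing in for tsTsdbMetaCompactRatio:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

// Restates the early-return test at the top of tsdbCompactMetaFile(): compact when
// either the deleted-record share or the dead-byte share reaches the threshold.
static bool shouldCompact(int64_t nDels, int64_t nRecords, int64_t tombSize, int64_t size,
                          int32_t ratioPercent) {
  float delPercent   = (float)nDels / (float)nRecords;  // share of deleted records
  float tombPercent  = (float)tombSize / (float)size;   // share of overwritten (dead) bytes
  float compactRatio = (float)ratioPercent / 100;
  return delPercent >= compactRatio || tombPercent >= compactRatio;
}

int main(void) {
  // 30% of the file is dead bytes; with a 20% threshold the file qualifies.
  printf("%d\n", shouldCompact(0, 1000, 300, 1000, 20));  // prints 1
  return 0;
}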
// =================== Commit Time-Series Data
static int tsdbCommitTSData(STsdbRepo *pRepo) {
  SMemTable *pMem = pRepo->imem;
......
@@ -215,6 +215,7 @@ STsdbFS *tsdbNewFS(STsdbCfg *pCfg) {
  }

  pfs->intxn = false;
+ pfs->metaCacheComp = NULL;

  pfs->nstatus = tsdbNewFSStatus(maxFSet);
  if (pfs->nstatus == NULL) {
......
@@ -16,11 +16,11 @@
#include "tsdbint.h"

static const char *TSDB_FNAME_SUFFIX[] = {
  "head",  // TSDB_FILE_HEAD
  "data",  // TSDB_FILE_DATA
  "last",  // TSDB_FILE_LAST
  "",      // TSDB_FILE_MAX
- "meta"   // TSDB_FILE_META
+ "meta",  // TSDB_FILE_META
};

static void tsdbGetFilename(int vid, int fid, uint32_t ver, TSDB_FILE_T ftype, char *fname);
......
@@ -43,6 +43,7 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable);
static int tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable);
static int tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid);
static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema);
+static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable);
static int tsdbAddSchema(STable *pTable, STSchema *pSchema);
static void tsdbFreeTableSchema(STable *pTable);
@@ -128,21 +129,16 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
  tsdbUnlockRepoMeta(pRepo);

  // Write to memtable action
- // TODO: refactor duplicate codes
- int tlen = 0;
- void *pBuf = NULL;
  if (newSuper || superChanged) {
-   tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, super);
-   pBuf = tsdbAllocBytes(pRepo, tlen);
-   if (pBuf == NULL) goto _err;
-   void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, super);
-   ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
+   // add insert new super table action
+   if (tsdbInsertNewTableAction(pRepo, super) != 0) {
+     goto _err;
+   }
  }

- tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, table);
- pBuf = tsdbAllocBytes(pRepo, tlen);
- if (pBuf == NULL) goto _err;
- void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, table);
- ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
+ // add insert new table action
+ if (tsdbInsertNewTableAction(pRepo, table) != 0) {
+   goto _err;
+ }

  if (tsdbCheckCommit(pRepo) < 0) return -1;
@@ -383,7 +379,7 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
    tdDestroyTSchemaBuilder(&schemaBuilder);
  }

- // Chage in memory
+ // Change in memory
  if (pNewSchema != NULL) { // change super table tag schema
    TSDB_WLOCK_TABLE(pTable->pSuper);
    STSchema *pOldSchema = pTable->pSuper->tagSchema;
@@ -426,6 +422,21 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
}

// ------------------ INTERNAL FUNCTIONS ------------------
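// Helper that replaces the encode-size / allocate / insert-action / assert sequence
// previously copy-pasted at each TSDB_UPDATE_META call site.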
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable) {
int tlen = 0;
void *pBuf = NULL;
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pTable);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) {
return -1;
}
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, pTable);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
return 0;
}
STsdbMeta *tsdbNewMeta(STsdbCfg *pCfg) {
  STsdbMeta *pMeta = (STsdbMeta *)calloc(1, sizeof(*pMeta));
  if (pMeta == NULL) {
@@ -617,6 +628,7 @@ int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
  if (pTable->lastCols == NULL) {
    return -1;
  }
+ // TODO: use binary search instead
  for (int16_t i = 0; i < pTable->maxColNum; ++i) {
    if (pTable->lastCols[i].colId == colId) {
      return i;
@@ -734,10 +746,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema,
  TSDB_WUNLOCK_TABLE(pCTable);

  if (insertAct) {
-   int tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pCTable);
-   void *buf = tsdbAllocBytes(pRepo, tlen);
-   ASSERT(buf != NULL);
-   tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, buf, pCTable);
+   if (tsdbInsertNewTableAction(pRepo, pCTable) != 0) {
+     tsdbError("vgId:%d table %s tid %d uid %" PRIu64 " tsdbInsertNewTableAction fail", REPO_ID(pRepo),
+               TABLE_CHAR_NAME(pTable), TABLE_TID(pTable), TABLE_UID(pTable));
+   }
  }
}
@@ -1250,8 +1262,14 @@ static int tsdbEncodeTable(void **buf, STable *pTable) {
    tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable));
    tlen += tdEncodeKVRow(buf, pTable->tagVal);
  } else {
-   tlen += taosEncodeFixedU8(buf, (uint8_t)taosArrayGetSize(pTable->schema));
-   for (int i = 0; i < taosArrayGetSize(pTable->schema); i++) {
+   uint32_t arraySize = (uint32_t)taosArrayGetSize(pTable->schema);
+   if (arraySize > UINT8_MAX) {
+     tlen += taosEncodeFixedU8(buf, 0);
+     tlen += taosEncodeFixedU32(buf, arraySize);
+   } else {
+     tlen += taosEncodeFixedU8(buf, (uint8_t)arraySize);
+   }
+   for (uint32_t i = 0; i < arraySize; i++) {
      STSchema *pSchema = taosArrayGetP(pTable->schema, i);
      tlen += tdEncodeSchema(buf, pSchema);
    }
@@ -1284,8 +1302,11 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) {
    buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable));
    buf = tdDecodeKVRow(buf, &(pTable->tagVal));
  } else {
-   uint8_t nSchemas;
-   buf = taosDecodeFixedU8(buf, &nSchemas);
+   uint32_t nSchemas = 0;
+   buf = taosDecodeFixedU8(buf, (uint8_t *)&nSchemas);
+   if (nSchemas == 0) {
+     buf = taosDecodeFixedU32(buf, &nSchemas);
+   }
    for (int i = 0; i < nSchemas; i++) {
      STSchema *pSchema;
      buf = tdDecodeSchema(buf, &pSchema);
@@ -1485,4 +1506,4 @@ static void tsdbFreeTableSchema(STable *pTable) {
    taosArrayDestroy(pTable->schema);
  }
}
\ No newline at end of file
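The schema-count change above keeps the on-disk table encoding backward compatible: counts up to 255 are still written as a single u8, while larger counts are written as a 0 sentinel followed by a u32 (a real table always has at least one schema, which frees 0 up as the escape value). Note that the patched decoder reads the u8 into a zero-initialized uint32_t through a cast, which relies on little-endian byte order. A minimal round-trip sketch of the scheme, independent of the taosEncode*/taosDecode* helpers (the names here are hypothetical):

#include <assert.h>
#include <stdint.h>
#include <string.h>

// Encode a count: one u8 for values <= 255, else a 0 escape byte followed by the
// full value as a u32.
static size_t encodeCount(uint8_t *buf, uint32_t n) {
  if (n > UINT8_MAX) {
    buf[0] = 0;
    memcpy(buf + 1, &n, sizeof(n));  // little-endian host assumed in this sketch
    return 1 + sizeof(n);
  }
  buf[0] = (uint8_t)n;
  return 1;
}

static size_t decodeCount(const uint8_t *buf, uint32_t *n) {
  if (buf[0] != 0) {  // common case: small count stored directly
    *n = buf[0];
    return 1;
  }
  memcpy(n, buf + 1, sizeof(*n));  // escape: read the u32 that follows
  return 1 + sizeof(*n);
}

int main(void) {
  uint8_t  buf[8];
  uint32_t out = 0;
  encodeCount(buf, 42);
  decodeCount(buf, &out);
  assert(out == 42);
  encodeCount(buf, 1000);  // > UINT8_MAX takes the escape path
  decodeCount(buf, &out);
  assert(out == 1000);
  return 0;
}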
@@ -14,23 +14,24 @@
 */

#include "tfunctional.h"
-#include "tarray.h"

tGenericSavedFunc* genericSavedFuncInit(GenericVaFunc func, int numOfArgs) {
  tGenericSavedFunc* pSavedFunc = malloc(sizeof(tGenericSavedFunc) + numOfArgs * (sizeof(void*)));
+ if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}

tI32SavedFunc* i32SavedFuncInit(I32VaFunc func, int numOfArgs) {
  tI32SavedFunc* pSavedFunc = malloc(sizeof(tI32SavedFunc) + numOfArgs * sizeof(void *));
+ if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}

tVoidSavedFunc* voidSavedFuncInit(VoidVaFunc func, int numOfArgs) {
  tVoidSavedFunc* pSavedFunc = malloc(sizeof(tVoidSavedFunc) + numOfArgs * sizeof(void*));
+ if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}
......
@@ -104,6 +104,21 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 1)
tdSql.checkData(1, 1, 2)
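# the ts columns flanking bottom() (result columns 0 and 3) should both carry the
# timestamp of each returned row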
tdSql.query("select ts,bottom(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
#TD-2457 bottom + interval + order by
tdSql.error('select top(col2,1) from test interval(1y) order by col2;')
......
@@ -54,6 +54,29 @@ class TDTestCase:
tdSql.query("select derivative(col, 10s, 0) from stb group by tbname")
tdSql.checkRows(10)
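# ts columns around derivative() should carry row timestamps, both on the super table
# with group by tbname and on a plain table; also check ts projected from a subquery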
tdSql.query("select ts,derivative(col, 10s, 1),ts from stb group by tbname")
tdSql.checkRows(4)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(3, 0, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 1, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 3, "2018-09-17 09:01:20.000")
tdSql.query("select ts,derivative(col, 10s, 1),ts from tb1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 1, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:20.009")
tdSql.query("select ts from(select ts,derivative(col, 10s, 0) from stb group by tbname)")
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname") tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname")
tdSql.query("select derivative(col, 10s, 1) from tb1") tdSql.query("select derivative(col, 10s, 1) from tb1")
......
@@ -95,6 +95,24 @@ class TDTestCase:
tdSql.error("select diff(col14) from test")
tdSql.query("select ts,diff(col1),ts from test1")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,diff(col1),ts from test group by tbname")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select diff(col1) from test1") tdSql.query("select diff(col1) from test1")
tdSql.checkRows(10) tdSql.checkRows(10)
......
@@ -117,7 +117,22 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 8.1)
tdSql.checkData(1, 1, 9.1)
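# same pattern for top(): the flanking ts columns should return each row's timestamp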
tdSql.query("select ts,top(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
#TD-2563 top + super_table + interval
tdSql.execute("create table meters(ts timestamp, c int) tags (d int)")
tdSql.execute("create table t1 using meters tags (1)")
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice
class TwoClients:
def initConnection(self):
self.host = "chr03"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.port = 6030
self.rowNum = 10
self.ts = 1537146000000
def run(self):
# new taos client
conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn1)
cur1 = conn1.cursor()
tdSql.init(cur1, True)
tdSql.execute("drop database if exists db3")
# insert data with taosc
for i in range(10):
os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
# check data correct
tdSql.execute("show databases")
tdSql.execute("use db3")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 20000)
tdSql.query("select count (*) from stb0")
tdSql.checkData(0, 0, 4000000)
# To insert data with the Python connector instead, uncomment the block below.
# for x in range(10):
# dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
# tdSql.execute("drop database if exists db3")
# tdSql.execute("create database db3 keep 3650 replica 2 ")
# tdSql.execute("use db3")
# tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
# col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(3000), tag1 int)''')
# rowNum2= 988
# for i in range(rowNum2):
# tdSql.execute("alter table test add column col%d %s ;" %( i+14, choice(dataType)) )
# rowNum3= 988
# for i in range(rowNum3):
# tdSql.execute("alter table test drop column col%d ;" %( i+14) )
# self.rowNum = 50
# self.rowNum2 = 2000
# self.ts = 1537146000000
# for j in range(self.rowNum2):
# tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
# for i in range(self.rowNum):
# tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
# % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
# # check data correct
# tdSql.execute("show databases")
# tdSql.execute("use db3")
# tdSql.query("select count (tbname) from test")
# tdSql.checkData(0, 0, 200)
# tdSql.query("select count (*) from test")
# tdSql.checkData(0, 0, 200000)
# clean up temporary files
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf manualTest/TD-5114/%s.sql" % testcaseFilename )
clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()
\ No newline at end of file
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db3",
"drop": "yes",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 20000,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 2000,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.ts = 1537146000000
def run(self):
tdSql.prepare()
print("======= Verify filter for bool, nchar and binary type =========")
tdLog.debug(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute("create table st1 using st tags(true, 'table1', '水表')")
for i in range(1, 6):
tdSql.execute(
"insert into st1 values(%d, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d, %f, %f)" %
(self.ts + i, i % 2, i, i, i, i, i, i, 1.0, 1.0))
# =============Data type keywords cannot be used in filter====================
# timestamp
tdSql.error("select * from st where timestamp = 1629417600")
# bool
tdSql.error("select * from st where bool = false")
#binary
tdSql.error("select * from st where binary = 'taosdata'")
# nchar
tdSql.error("select * from st where nchar = '涛思数据'")
# tinyint
tdSql.error("select * from st where tinyint = 127")
# smallint
tdSql.error("select * from st where smallint = 32767")
# int
tdSql.error("select * from st where INTEGER = 2147483647")
tdSql.error("select * from st where int = 2147483647")
# bigint
tdSql.error("select * from st where bigint = 2147483647")
# float
tdSql.error("select * from st where float = 3.4E38")
# double
tdSql.error("select * from st where double = 1.7E308")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())