Commit 181e2b0c authored by Cary Xu

Merge branch 'develop' into feature/TD-6117

......@@ -40,17 +40,19 @@ TDengine is an efficient platform for storing, querying, and analyzing time-series big data
* [Super Table Management](/taos-sql#super-table): create, drop, show, and alter super tables
* [Tag Management](/taos-sql#tags): add, drop, and modify tags
* [Data Writing](/taos-sql#insert): write single or multiple records into one table, multiple records into multiple tables, and historical data
* [Data Querying](/taos-sql#select): time ranges, value filtering, sorting, manual paging of query results, etc.
* [Data Querying](/taos-sql#select): time ranges, value filtering, sorting, nested queries, UNION, JOIN, manual paging of query results, etc.
* [SQL Functions](/taos-sql#functions): aggregation, selection, and computation functions such as avg, min, and diff
* [Time-Window Aggregation](/taos-sql#aggregation): aggregate table data after slicing it by time window or other criteria, for dimensionality reduction
* [Limits](/taos-sql#limitation): boundary limits on databases, tables, SQL, and more
* [UDF](/taos-sql/udf): how to create and manage user-defined functions
* [Error Codes](/taos-sql/error-code): TDengine 2.0 error codes and their decimal equivalents
## [Efficient Data Writing](/insert)
* [SQL Writing](/insert#sql): insert one or multiple records into one or multiple tables via the SQL INSERT command
* [Prometheus Writing](/insert#prometheus): configure Prometheus to write data directly, without any code
* [Telegraf Writing](/insert#telegraf): configure Telegraf to write collected data directly, without any code
* [SQL Writing](/insert#sql): insert one or multiple records into one or multiple tables via the SQL INSERT command
* [Schemaless Writing](/insert#schemaless): write data directly without pre-creating tables; metadata structures are maintained automatically on write
* [Prometheus Writing](/insert#prometheus): configure Prometheus to write data directly, without any code
* [Telegraf Writing](/insert#telegraf): configure Telegraf to write collected data directly, without any code
* [EMQ X Broker](/insert#emq): configure EMQ X to write MQTT data directly, without any code
* [HiveMQ Broker](/insert#hivemq): configure HiveMQ to write MQTT data directly, without any code
......
......@@ -2,28 +2,27 @@
## <a class="anchor" id="intro"></a>TDengine 简介
TDengine 是涛思数据面对高速增长的物联网大数据市场和技术挑战推出的创新性的大数据处理产品,它不依赖任何第三方软件,也不是优化或包装了一个开源的数据库或流式计算产品,而是在吸取众多传统关系型数据库、NoSQL 数据库、流式计算引擎、消息队列等软件的优点之后自主开发的产品,在时序空间大数据处理上,有着自己独到的优势。
TDengine 是涛思数据面对高速增长的物联网大数据市场和技术挑战推出的创新性的大数据处理产品,它不依赖任何第三方软件,也不是优化或包装了一个开源的数据库或流式计算产品,而是在吸取众多传统关系型数据库、NoSQL 数据库、流式计算引擎、消息队列等软件的优点之后自主开发的产品,TDengine 在时序空间大数据处理上,有着自己独到的优势。
TDengine 的模块之一是时序数据库。但除此之外,为减少研发的复杂度、系统维护的难度,TDengine 还提供缓存、消息队列、订阅、流式计算等功能,为物联网、工业互联网大数据的处理提供全栈的技术方案,是一个高效易用的物联网大数据平台。与 Hadoop 等典型的大数据平台相比,它具有如下鲜明的特点:
TDengine 的模块之一是时序数据库。但除此之外,为减少研发的复杂度、系统维护的难度,TDengine 还提供缓存、消息队列、订阅、流式计算等功能,为物联网和工业互联网大数据的处理提供全栈的技术方案,是一个高效易用的物联网大数据平台。与 Hadoop 等典型的大数据平台相比,TDengine 具有如下鲜明的特点:
* __10 倍以上的性能提升__:定义了创新的数据存储结构,单核每秒能处理至少 2 万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快十倍以上。
* __硬件或云服务成本降至 1/5__:由于超强性能,计算资源不到通用大数据方案的 1/5;通过列式存储和先进的压缩算法,存储空间不到通用数据库的 1/10。
* __硬件或云服务成本降至 1/5__:由于超强性能,计算资源不到通用大数据方案的 1/5;通过列式存储和先进的压缩算法,存储占用不到通用数据库的 1/10。
* __全栈时序数据处理引擎__:将数据库、消息队列、缓存、流式计算等功能融为一体,应用无需再集成 Kafka/Redis/HBase/Spark/HDFS 等软件,大幅降低应用开发和维护的复杂度成本。
* __强大的分析功能__:无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过 Shell, Python, R, MATLAB 随时进行。
* __与第三方工具无缝连接__:不用一行代码,即可与 Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R 等集成。后续将支持 OPC, Hadoop, Spark 等,BI 工具也将无缝连接。
* __零运维成本、零学习成本__:安装集群简单快捷,无需分库分表,实时备份。类标准 SQL,支持 RESTful,支持 Python/Java/C/C++/C#/Go/Node.js, 与 MySQL 相似,零学习成本。
* __高可用性和水平扩展__:通过分布式架构和一致性算法,通过多复制和集群特性,TDengine确保了高可用性和水平扩展性以支持关键任务应用程序。
* __零运维成本、零学习成本__:安装集群简单快捷,无需分库分表,实时备份。类似标准 SQL,支持 RESTful,支持 Python/Java/C/C++/C#/Go/Node.js, 与 MySQL 相似,零学习成本。
* __核心开源__:除了一些辅助功能外,TDengine的核心是开源的。企业再也不会被数据库绑定了。这使生态更加强大,产品更加稳定,开发者社区更加活跃。
采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。但需要指出的是,因充分利用了物联网时序数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。
![TDengine技术生态图](page://images/eco_system.png)
<center>图 1. TDengine技术生态图</center>
## <a class="anchor" id="scenes"></a>TDengine 总体适用场景
作为一个 IoT 大数据平台,TDengine 的典型适用场景是在 IoT 范畴,而且用户有一定的数据量。本文后续的介绍主要针对这个范畴里面的系统。范畴之外的系统,比如 CRM,ERP 等,不在本文讨论范围内。
### 数据源特点和需求
从数据源角度,设计人员可以从下面几个角度分析 TDengine 在目标应用系统里面的适用性。
......@@ -64,4 +63,3 @@ One of TDengine's modules is a time-series database. But beyond that, to reduce development
|Requires highly reliable system operation| | | √ | TDengine's system architecture is very stable and reliable, daily maintenance is simple and convenient, and the demands on operators are clear and concise, eliminating human errors and accidents to the greatest extent.|
|Requires controllable operation learning cost| | | √ |Same as above.|
|Requires an abundant talent pool in the market| √ | | | As a new-generation product, TDengine still has a limited supply of experienced practitioners in the talent market. However, the learning cost is low, and as the vendor we also provide operation training and support services.|
......@@ -250,7 +250,7 @@ A vnode (virtual data node) provides writing, querying, and
When a DB is created, the system does not allocate resources immediately. When a table is created, the system checks whether an already-allocated vnode with free table space exists; if so, the table is created in that vnode right away. If not, the system creates a new vnode on a dnode chosen from the cluster according to current load, and then creates the table there. If the DB has multiple replicas, the system creates not a single vnode but a vgroup (virtual data node group). There is no limit on the number of vnodes; it is constrained only by the computing and storage resources of the physical nodes.
The meda data of each table (including schema, tags, etc.) is also stored in the vnode rather than centrally in the mnode. This is in effect sharding of the meta data, which enables efficient, parallel tag filtering.
The meta data of each table (including schema, tags, etc.) is also stored in the vnode rather than centrally in the mnode. This is in effect sharding of the meta data, which enables efficient, parallel tag filtering.
### Data Partitioning
......
......@@ -56,7 +56,6 @@ measurement,tag_set field_set timestamp
- The suffix i16 indicates SMALLINT (INT16);
- The suffix i32 indicates INT (INT32);
- The suffix i64 indicates BIGINT (INT64);
- The suffix b indicates BOOL.
* t, T, true, True, TRUE, f, F, false, False are treated directly as BOOL values.
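For example, in the sample line below, the `i64` suffix in `c1=3i64` declares the value 3 as a BIGINT column, while the bare `false` in `c2=false` is parsed as a BOOL value.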
The timestamp in the timestamp position declares its time precision via a suffix, as follows:
......@@ -72,13 +71,15 @@ The timestamp in the timestamp position declares its time precision via a suffix, as follows:
st,t1=3i64,t2=4f64,t3="t3" c1=3i64,c3=L"passit",c2=false,c4=4f64 1626006833639000000ns
```
Note that using the wrong letter case in a data type suffix, or specifying an incorrect data type for a value, may trigger an error message and cause the write to fail.
### Schemaless Processing Logic
Schemaless processes line data according to the following rules:
1. When the tag_set contains an ID field, the value of that field is used as the name of the data subtable.
2. Without an ID field, the md5 hash of `measurement + tag_value1 + tag_value2 + ...` is used as the subtable name (see the example after this list).
3. If the specified super table does not exist, Schemaless creates it.
4. If the specified data subtable does not exist, Schemaless creates it using the tag values.
4. If the specified data subtable does not exist, Schemaless creates it under the subtable name determined by rule 1 or 2.
5. If a tag column or regular column in the data line does not exist yet, Schemaless adds the corresponding tag or regular column to the super table (columns are only ever added, never removed).
6. If some tag columns or regular columns of the super table are not assigned values in a data line, those columns are set to NULL in that row.
7. For BINARY or NCHAR columns, if a value in a data line exceeds the column's length limit, Schemaless raises the column's maximum character length (it only grows, never shrinks) so the data is stored intact.
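To illustrate rules 1 and 2 with the `st` example line shown earlier: if its tag_set additionally contained `ID=child1`, the data would be written to a subtable named child1; without an ID field, it goes to a subtable whose name is the md5 hash of `st` concatenated with the tag values `3i64`, `4f64`, and `"t3"`.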
......
......@@ -776,7 +776,7 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
- data: the actual returned data, presented row by row; if no result set is returned, only [[affected_rows]] is present. The order of columns in each row of data is exactly the same as the column order described in column_meta.
- rows: the total number of data rows.
Column types in column_meta:
<a class="anchor" id="column_meta"></a>Column types in column_meta:
* 1:BOOL
* 2:TINYINT
* 3:SMALLINT
......
......@@ -568,6 +568,35 @@ The COMPACT command starts defragmentation on one or more specified VGroups; the system will
Note that defragmentation consumes a large amount of disk I/O. While it is running, it may therefore affect the node's write and query performance, and in extreme cases may even briefly block writes.
<a class="anchor" id="tsz_compress"></a>
## Lossy Compression of Floating-Point Numbers
In smart IoT application scenarios such as the Internet of Vehicles, massive amounts of floating-point data are often collected and stored. Compressing such data more efficiently not only saves storage hardware, but also improves system performance by reducing the volume of disk I/O.
Starting from version 2.1.6.0, TDengine provides a new compression algorithm named TSZ which, whether configured as lossy or lossless, significantly improves the compression ratio of floating-point data. The feature is currently released as an optional module and must be enabled with a specific build parameter (i.e. it is not yet included in the regular installation packages).
**Note that once enabled, this feature takes effect globally, i.e. it applies to all FLOAT and DOUBLE data in the system. Moreover, data written after lossy floating-point compression is enabled cannot be loaded by versions without this feature, and may therefore cause the database service to report an error and exit.**
### Building a TDengine Version with TSZ Compression Support
The TSZ module lives in a separate code repository, https://github.com/taosdata/TSZ. A TDengine build containing this module can be created with the following steps:
1. TDengine plugins can currently only be fetched and compiled over SSH, so first set up an environment that can pull GitHub code via SSH.
2. `git clone git@github.com:taosdata/TDengine -b your_branchname --recurse-submodules`: the `--recurse-submodules` flag downloads the source code of the dependent modules along with it.
3. `mkdir debug && cd debug`: enter a separate build directory.
4. `cmake .. -DTSZ_ENABLED=true`: the `-DTSZ_ENABLED=true` parameter adds TSZ plugin support to the build. If TSZ compilation is activated successfully, the CMake output will include the words `build with TSZ enabled`.
5. After a successful build, the plugin with TSZ floating-point compression is compiled into TDengine, and the feature can be used by adjusting the configuration parameters in taos.cfg.
### Enabling the TSZ Compression Algorithm via the Configuration File
To enable the TSZ compression algorithm, besides enabling the TSZ module at build time, the following parameters must also be set in the taos.cfg configuration file:
* lossyColumns: the floating-point types to compress lossily. The value is a string: empty - lossy compression disabled; float - lossy compression for FLOAT only; double - lossy compression for DOUBLE only; float|double - lossy compression for both FLOAT and DOUBLE. The default is empty, i.e. lossy compression is disabled.
* fPrecision: the compression precision for FLOAT values; mantissa parts smaller than this value are truncated. The value is a FLOAT, minimum 0.0, maximum 100,000.0. Default: 0.00000001 (1E-8).
* dPrecision: the compression precision for DOUBLE values; mantissa parts smaller than this value are truncated. The value is a DOUBLE, minimum 0.0, maximum 100,000.0. Default: 0.0000000000000001 (1E-16).
* maxRange: the maximum fluctuation range of the data. Usually no adjustment is needed; when the data has particular characteristics, it can be combined with the range parameter to achieve extremely high compression ratios. Default: 500.
* range: the approximate fluctuation range of the data. Usually no adjustment is needed; when the data has particular characteristics, it can be combined with the maxRange parameter to achieve extremely high compression ratios. Default: 100.
**Note:** any change to the parameter values in the cfg file requires a restart of taosd to take effect. These are global configuration options; once set, they apply to the FLOAT and DOUBLE fields of all tables in all databases.
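For example, adding the lines `lossyColumns float|double` and `fPrecision 0.000001` to taos.cfg and then restarting taosd would enable lossy compression for both FLOAT and DOUBLE columns, truncating FLOAT mantissa parts below 1E-6 while leaving dPrecision at its 1E-16 default.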
## <a class="anchor" id="directories"></a>文件目录结构
安装TDengine后,默认会在操作系统中生成下列目录或文件:
......
# UDF (User-Defined Functions)
In some application scenarios, the query logic an application needs cannot be expressed directly with the system's built-in functions. With the UDF capability, TDengine can plug in user-written processing code and use it in queries, conveniently covering the needs of such special application scenarios.
Starting from version 2.2.0.0, TDengine supports defining UDFs in C/C++. The following explains how to use UDFs with the help of examples.
## Defining UDFs in C/C++
TDengine provides three UDF source code examples:
* [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
* [abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
* [sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c)
### Scalar Function Without Intermediate Variables
[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is the structurally simplest UDF implementation. For each item of an input data column (possibly filtered by a WHERE clause), it outputs the value plus 1, and it requires the input column type to be INT.
This processing logic is defined in the function `void add_one(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`. Functions of this kind, which implement a UDF's basic computation logic, are called udfNormalFunc: the scalar computation function over a block of rows. Note that udfNormalFunc's parameter list is fixed; it is used to exchange data with the engine according to this contract.
- The parameters of udfNormalFunc have the following meanings:
* data: holds the input data.
* itype: the type of the input data, expressed as a short integer; the value corresponding to each data type is listed in [Column types in column_meta](https://www.taosdata.com/cn/documentation/connector#column_meta). For example, 4 denotes INT.
* iBytes: the number of bytes each input value occupies.
* numOfRows: the total number of input rows.
* ts: the primary-key timestamp column of the input.
* dataOutput: the buffer for output data.
* interBuf: an intermediate temporary buffer used by the system; user logic normally does not need to touch interBuf.
* tsOutput: the primary-key timestamp column of the output.
* numOfOutput: the number of output values.
* oType: the type of the output data; its values have the same meaning as those of itype.
* oBytes: the number of bytes each output value occupies.
* buf: the buffer for intermediate variables used during the computation.
The buf parameter relies on a custom struct, SUdfInit. In this example, since add_one's computation needs no intermediate variable cache, SUdfInit can be defined as an empty struct.
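As a reference point, here is a simplified sketch of what this udfNormalFunc can look like. NULL-value handling and safety checks are omitted, so treat it as an illustration and consult the linked add_one.c for the authoritative version:
```c
// Simplified sketch of add_one's udfNormalFunc (see add_one.c for the real one).
typedef struct SUdfInit {
} SUdfInit;  // add_one keeps no intermediate state, so the struct stays empty

void add_one(char* data, short itype, short ibytes, int numOfRows, long long* ts,
             char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput,
             short otype, short obytes, SUdfInit* buf) {
  // The input column is declared as INT (itype == 4), so each value occupies ibytes == 4 bytes.
  for (int i = 0; i < numOfRows; ++i) {
    *(int*)(dataOutput + i * obytes) = *(int*)(data + i * ibytes) + 1;  // output = input + 1
  }
  *numOfOutput = numOfRows;  // a scalar UDF emits exactly one value per input row
}
```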
### Aggregate Function Without Intermediate Variables
[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) implements an aggregate function whose job is to take the maximum absolute value over a group of data.
Its computation works as follows: the data involved in the query is divided into multiple row blocks, and udfNormalFunc (in this example's implementation, the actual function name is `abs_max`) is invoked on each block; the per-block results are then aggregated by udfMergeFunc (here, the actual function name is `abs_max_merge`) to produce each subtable's aggregate result. If the query involves a super table, udfFinalizeFunc (here, the actual function name is `abs_max_finalize`) finally aggregates the subtable results into the super table's result.
Note that udfNormalFunc, udfMergeFunc, and udfFinalizeFunc must by convention share the same name prefix, namely the actual name of udfNormalFunc. The `_merge` suffix of udfMergeFunc and the `_finalize` suffix of udfFinalizeFunc are part of the UDF implementation rules; the system locates the corresponding functionality by these name suffixes.
- udfMergeFunc aggregates intermediate computation results. In this example, its implementation is `void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`, whose parameters have the following meanings:
* data: the outputs of udfNormalFunc concatenated together, which thus become the input of udfMergeFunc.
* numOfRows: the number of rows of data in data.
* dataOutput: the buffer for output data.
* numOfOutput: the number of output values.
* buf: the buffer for intermediate variables used during the computation.
- udfFinalizeFunc performs the final aggregation of the results. In this example, its implementation is `void abs_max_finalize(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`, whose parameters have the following meanings:
* dataOutput: the buffer for output data; for udfFinalizeFunc, the input data also comes from here.
* interBuf: the intermediate temporary buffer used by the system, with the same meaning as the parameter of the same name in udfNormalFunc.
* numOfOutput: the number of output values.
* buf: the buffer for intermediate variables used during the computation.
Again, because abs_max's computation needs no intermediate variable cache, SUdfInit can likewise be defined as an empty struct.
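For orientation, a sketch of what such a merge step can look like, assuming (as in the sample implementation) that each partial result is an int64_t that is already non-negative:
```c
#include <stdint.h>

typedef struct SUdfInit SUdfInit;  // opaque here; abs_max leaves it empty

// Sketch of abs_max's udfMergeFunc: `data` holds numOfRows block-level partial
// results produced by abs_max; merging just keeps the largest of them.
void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput,
                   int32_t* numOfOutput, SUdfInit* buf) {
  if (numOfRows <= 0) {
    *numOfOutput = 0;
    return;
  }
  int64_t r = *(int64_t*)data;
  for (int32_t i = 1; i < numOfRows; ++i) {
    int64_t v = *(int64_t*)(data + i * sizeof(int64_t));
    if (v > r) r = v;  // inputs are already absolute values, so a plain max suffices
  }
  *(int64_t*)dataOutput = r;  // the merged block yields a single value
  *numOfOutput = 1;
}
```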
### Aggregate Function With Intermediate Variables
[sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c) is also an aggregate function; its job is to output a multiple of the sum of a group of data.
For demonstration purposes, this user-defined function's implementation makes use of the intermediate variable buffer buf. Therefore, in this source file SUdfInit is no longer an empty struct; it defines the concrete contents stored in the buffer.
Precisely because an intermediate variable buffer is used, it must be initialized and its resources released. Concretely, these correspond to udfInitFunc (in this example, the actual function name is `sum_double_init`) and udfDestroyFunc (here, the actual function name is `sum_double_destroy`). Their naming rule likewise takes udfNormalFunc's actual name as the prefix, with `_init` and `_destroy` as the suffixes; the system calls the functions with these names during initialization and resource release.
- udfInitFunc initializes the variables and contents of the intermediate buffer. In this example, its implementation is `int sum_double_init(SUdfInit* buf)`, whose parameter has the following meaning:
* buf: the buffer for intermediate variables used during the computation.
- udfDestroyFunc releases the variables and contents of the intermediate buffer. In this example, its implementation is `void sum_double_destroy(SUdfInit* buf)`, whose parameter has the following meaning:
* buf: the buffer for intermediate variables used during the computation.
Note that a UDF implementation must handle the intermediate variable buffer with care; misuse may cause memory leaks or excessive resource consumption, and can even crash the system service process.
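As a purely illustrative sketch, an init/destroy pair might look like the following; the SUdfInit layout shown here is invented for the example and is not the exact struct used in sum_double.c:
```c
#include <stdint.h>
#include <stdlib.h>

// Hypothetical SUdfInit layout: one heap buffer that survives across row blocks.
typedef struct SUdfInit {
  char* ptr;  // scratch buffer holding the running aggregate (illustrative only)
} SUdfInit;

// udfInitFunc: called once before the computation starts; returns 0 on success.
int sum_double_init(SUdfInit* buf) {
  buf->ptr = calloc(1, sizeof(int64_t));  // running sum starts at 0
  return (buf->ptr != NULL) ? 0 : -1;
}

// udfDestroyFunc: called once after the computation ends. Anything acquired in
// the init function must be released here, otherwise every query leaks it.
void sum_double_destroy(SUdfInit* buf) {
  free(buf->ptr);
  buf->ptr = NULL;
}
```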
### Summary of UDF Implementation Rules
Depending on the type of UDF to be implemented, the functions the user needs to provide differ:
* Scalar function without intermediate variables: the SUdfInit struct may be empty; udfNormalFunc must be implemented.
* Aggregate function without intermediate variables: the SUdfInit struct may be empty; udfNormalFunc, udfMergeFunc, and udfFinalizeFunc must be implemented.
* Scalar function with intermediate variables: the SUdfInit struct must be concretely defined; udfNormalFunc, udfInitFunc, and udfDestroyFunc must be implemented.
* Aggregate function with intermediate variables: the SUdfInit struct must be concretely defined; udfNormalFunc, udfInitFunc, udfDestroyFunc, udfMergeFunc, and udfFinalizeFunc must be implemented.
## Compiling a UDF
The C source code of a user-defined function cannot be used by the TDengine system directly; it must first be compiled into a .so shared library, which can then be loaded into TDengine.
For example, with the user-defined function source file add_one.c prepared according to the rules described in the previous section, the following command compiles it into a dynamic library:
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
```
This produces the dynamic library file add_one.so, ready for use when creating the UDF below.
## Managing and Using UDFs in the System
### Creating a UDF
Users can load a UDF library residing on the client host into the system via SQL (this cannot be done through the RESTful interface or the HTTP management page). Once created, the functions can be used in SQL statements by all users of the current TDengine cluster. UDFs are stored on the system's MNode, so UDFs that have been created remain available even after the TDengine system restarts.
When creating a UDF, scalar functions and aggregate functions must be distinguished. Declaring the wrong function category may cause errors when the function is invoked via SQL.
- Create a scalar function: `CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize(B);`
* X: the function name under which the scalar function will later be invoked in SQL statements; it must match the actual name of udfNormalFunc in the function implementation;
* Y: the file path of the dynamic library containing the UDF implementation (i.e. the path where the library file is stored on the current client host, usually pointing to a .so file);
* Z: the data type of the function's computation result, expressed as a number whose meaning matches the itype parameter of udfNormalFunc above;
* B: the size in bytes of the intermediate temporary buffer used by the system, minimum 0, maximum 512; 128 is usually a reasonable setting.
- Create an aggregate function: `CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize(B);`
* X: the function name under which the aggregate function will later be invoked in SQL statements; it must match the actual name of udfNormalFunc in the function implementation;
* Y: the file path of the dynamic library containing the UDF implementation (i.e. the path where the library file is stored on the current client host, usually pointing to a .so file);
* Z: the data type of the function's computation result, expressed as a number whose meaning matches the itype parameter of udfNormalFunc above;
* B: the size in bytes of the intermediate temporary buffer used by the system, minimum 0, maximum 512; 128 is usually a reasonable setting.
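For example, assuming the add_one.so built earlier was saved under /root (a hypothetical path), the statement `CREATE FUNCTION add_one AS '/root/add_one.so' OUTPUTTYPE INT bufsize 128;` would register it as a scalar function invocable as add_one; an aggregate UDF such as abs_max would be registered analogously with `CREATE AGGREGATE FUNCTION`.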
### Managing UDFs
- Drop the user-defined function with the given name: `DROP FUNCTION ids(X);`
* X: this parameter has the same meaning as the X parameter of the CREATE statement.
- Show all UDFs currently available in the system: `SHOW FUNCTIONS;`
### Invoking a UDF
In SQL statements, a user-defined function can be invoked directly by the function name it was given when it was created in the system. For example:
```sql
SELECT X(c) FROM table/stable;
```
This invokes the user-defined function named X on the data column named c. User-defined functions in SQL statements can be combined with query features such as WHERE.
## UDF Usage Restrictions
In the current version, UDFs are subject to the following restrictions:
1. When creating and invoking UDFs, both the server side and the client side support Linux only;
2. UDFs cannot be mixed with the system's built-in SQL functions;
3. UDFs only support a single data column as input;
4. Once created successfully, a UDF is persisted on the MNode;
5. UDFs cannot be created through the RESTful interface;
6. The function name a UDF defines in SQL must be identical to the interface-function name prefix in the .so library implementation, i.e. it must be the name of udfNormalFunc, and it must not collide with any existing built-in SQL function in TDengine.
......@@ -730,6 +730,34 @@ Query OK, 1 row(s) in set (0.001091s)
5. Starting from version 2.0.17.0, filter conditions support the BETWEEN AND syntax; for example, `WHERE col2 BETWEEN 1.5 AND 3.25` expresses the condition "1.5 ≤ col2 ≤ 3.25".
6. Starting from version 2.1.4.0, filter conditions support the IN operator, e.g. `WHERE city IN ('Beijing', 'Shanghai')`. Notes: BOOL values may be written as `{true, false}` or `{0, 1}`, but not as integers other than 0 and 1; FLOAT and DOUBLE are subject to floating-point precision, so a value in the set matches a row value only if they are equal within precision limits; TIMESTAMP is supported for non-primary-key columns.<!-- REPLACE_OPEN_TO_ENTERPRISE__IN_OPERATOR_AND_UNSIGNED_INTEGER -->
<a class="anchor" id="join"></a>
### JOIN Clause
Starting from version 2.2.0.0, TDengine fully supports natural join operations within inner joins (INNER JOIN), i.e. natural joins between regular tables, between super tables, and between subqueries. The main difference between a natural join and an inner join is that a natural join requires the joined fields to carry the same name in the different tables/super tables involved; that is, in join conditions TDengine requires equality relations between identically named data columns or tag columns.
In a JOIN between two regular tables, only the equality relation between primary-key timestamps may be used. For example:
```sql
SELECT *
FROM temp_tb_1 t1, pressure_tb_1 t2
WHERE t1.ts = t2.ts
```
In a JOIN between super tables, besides the condition that the primary-key timestamps match, equality relations on tag columns that establish a one-to-one correspondence must also be included. For example:
```sql
SELECT *
FROM temp_stable t1, temp_stable t2
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
```
Similarly, JOIN operations can also be applied to the results of multiple subqueries.
Note that JOIN operations are subject to the following restrictions:
1. A single statement can involve at most 10 tables/super tables in JOIN operations.
2. FILL is not supported in query statements that contain JOIN.
3. Arithmetic operations on aggregation results across the joined tables are not yet supported.
4. GROUP BY on only a subset of the joined tables is not supported.
5. The filter conditions on different tables in a JOIN query cannot be combined with OR.
<a class="anchor" id="nested"></a>
### Nested Queries
......@@ -757,7 +785,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
* The outer query does not support GROUP BY.
<a class="anchor" id="union"></a>
### UNION ALL Operator
### UNION ALL Clause
```mysql
SELECT ...
......@@ -1486,12 +1514,6 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P
TAOS SQL supports GROUP BY on tags and on TBNAME, and also on regular columns, provided that only a single column is used and it has fewer than 100,000 unique values.
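For example, `SELECT COUNT(*) FROM stb GROUP BY TBNAME;` (with stb standing in for a hypothetical super table) returns one count per subtable.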
**Restrictions on JOIN Operations**
TAOS SQL supports joining the columns of two tables by primary-key timestamp; arithmetic operations on aggregation results across the two tables are not yet supported.
The filter conditions on different tables in a JOIN query cannot be combined with OR.
**Applicability of IS NOT NULL and Non-Empty Expressions**
IS NOT NULL applies to columns of all types. The non-empty expression <>"" applies only to columns of non-numeric types.
......
......@@ -6,17 +6,16 @@ TDengine is an innovative Big Data processing product launched by TAOS Data in t
One of the modules of TDengine is the time-series database. However, in addition to this, to reduce the complexity of research and development and the difficulty of system operation, TDengine also provides functions such as caching, message queuing, subscription, stream computing, etc. TDengine provides a full-stack technical solution for the processing of IoT and Industrial Internet BigData. It is an efficient and easy-to-use IoT Big Data platform. Compared with typical Big Data platforms such as Hadoop, TDengine has the following distinct characteristics:
- **Performance improvement over 10 times**: An innovative data storage structure is defined, with each single core can process at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, which is more than 10 times faster than other existing general database.
- **Performance improvement over 10 times**: An innovative data storage structure is defined, with which every single core can process at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, more than 10 times faster than other existing general databases.
- **Reduce the cost of hardware or cloud services to 1/5**: Due to its ultra-high performance, TDengine's consumption of computing resources is less than 1/5 of that of other common Big Data solutions; through columnar storage and advanced compression algorithms, the storage consumption is less than 1/10 of that of other general databases.
- **Full-stack time-series data processing engine**: Integrate database, message queue, cache, stream computing, and other functions, and the applications do not need to integrate with software such as Kafka/Redis/HBase/Spark/HDFS, thus greatly reducing the complexity cost of application development and maintenance.
- **Highly Available and Horizontal Scalable **: With the distributed architecture and consistency algorithm, via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support the mission-critical applications.
- **Highly Available and Horizontal Scalable**: With the distributed architecture and consistency algorithm, via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support mission-critical applications.
- **Zero operation cost & zero learning cost**: Installing clusters is simple and quick, with real-time backup built-in, and no need to split libraries or tables. Similar to standard SQL, TDengine can support RESTful, Python/Java/C/C++/C#/Go/Node.js, and similar to MySQL with zero learning cost.
- **Core is Open Sourced:** Except some auxiliary features, the core of TDengine is open sourced. Enterprise won't be locked by the database anymore. Ecosystem is more strong, product is more stable, and developer communities are more active.
- **Core is Open Sourced:** Except for some auxiliary features, the core of TDengine is open-sourced. Enterprises won't be locked in by the database anymore. The ecosystem is stronger, products are more stable, and developer communities are more active.
With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
![TDengine Technology Ecosystem](page://images/eco_system.png)
<center>Figure 1. TDengine Technology Ecosystem</center>
## <a class="anchor" id="scenes"></a>Overall Scenarios of TDengine
......@@ -62,4 +61,4 @@ From the perspective of data sources, designers can analyze the applicability of
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Require system with high-reliability | | | √ | TDengine has a very robust and reliable system architecture to implement simple and convenient daily operation with streamlined experiences for operators, thus human errors and accidents are eliminated to the greatest extent. |
| Require controllable operation learning cost | | | √ | As above. |
| Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counselling services. |
\ No newline at end of file
| Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from the market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counseling services. |
......@@ -413,11 +413,11 @@ See [video tutorials](https://www.taosdata.com/blog/2020/11/11/1963.html) for th
Users can find the connector package for python2 and python3 in the source code src/connector/python (or tar.gz/connector/python) folder. Users can install it through `pip` command:
`pip install src/connector/python/linux/python2/`
`pip install src/connector/python/`
or
`pip3 install src/connector/python/linux/python3/`
`pip3 install src/connector/python/`
#### Windows
......
......@@ -128,8 +128,12 @@ function install_lib() {
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib
${csudo} ln -s ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
fi
${csudo} ldconfig
if [ "$osType" != "Darwin" ]; then
${csudo} ldconfig
else
${csudo} update_dyld_shared_cache
fi
}
function install_header() {
......
......@@ -13,13 +13,13 @@ IF (TD_LINUX)
# set the static lib name
ADD_LIBRARY(taos_static STATIC ${SRC})
TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m rt ${VAR_TSZ})
TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m rt cJson ${VAR_TSZ})
SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
SET_TARGET_PROPERTIES(taos_static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
# generate dynamic library (*.so)
ADD_LIBRARY(taos SHARED ${SRC})
TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m rt)
TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m rt cJson)
IF (TD_LINUX_64)
TARGET_LINK_LIBRARIES(taos lua)
ENDIF ()
......@@ -39,13 +39,13 @@ ELSEIF (TD_DARWIN)
# set the static lib name
ADD_LIBRARY(taos_static STATIC ${SRC})
TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m lua)
TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m lua cJson)
SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
SET_TARGET_PROPERTIES(taos_static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
# generate dynamic library (*.dylib)
ADD_LIBRARY(taos SHARED ${SRC})
TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m lua)
TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m lua cJson)
SET_TARGET_PROPERTIES(taos PROPERTIES CLEAN_DIRECT_OUTPUT 1)
#set version of .dylib
......@@ -63,26 +63,26 @@ ELSEIF (TD_WINDOWS)
CONFIGURE_FILE("${TD_COMMUNITY_DIR}/src/client/src/taos.rc.in" "${TD_COMMUNITY_DIR}/src/client/src/taos.rc")
ADD_LIBRARY(taos_static STATIC ${SRC})
TARGET_LINK_LIBRARIES(taos_static trpc tutil query)
TARGET_LINK_LIBRARIES(taos_static trpc tutil query cJson)
# generate dynamic library (*.dll)
ADD_LIBRARY(taos SHARED ${SRC} ${TD_COMMUNITY_DIR}/src/client/src/taos.rc)
IF (NOT TD_GODLL)
SET_TARGET_PROPERTIES(taos PROPERTIES LINK_FLAGS /DEF:${TD_COMMUNITY_DIR}/src/client/src/taos.def)
ENDIF ()
TARGET_LINK_LIBRARIES(taos trpc tutil query lua)
TARGET_LINK_LIBRARIES(taos trpc tutil query lua cJson)
ELSEIF (TD_DARWIN)
SET(CMAKE_MACOSX_RPATH 1)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/jni/linux)
ADD_LIBRARY(taos_static STATIC ${SRC})
TARGET_LINK_LIBRARIES(taos_static query trpc tutil pthread m lua)
TARGET_LINK_LIBRARIES(taos_static query trpc tutil pthread m lua cJson)
SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
# generate dynamic library (*.dylib)
ADD_LIBRARY(taos SHARED ${SRC})
TARGET_LINK_LIBRARIES(taos query trpc tutil pthread m lua)
TARGET_LINK_LIBRARIES(taos query trpc tutil pthread m lua cJson)
SET_TARGET_PROPERTIES(taos PROPERTIES CLEAN_DIRECT_OUTPUT 1)
......
......@@ -60,17 +60,25 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
tscDebugL("0x%"PRIx64" SQL: %s", pSql->self, pSql->sqlstr);
pCmd->resColumnId = TSDB_RES_COL_ID;
taosAcquireRef(tscObjRef, pSql->self);
int32_t code = tsParseSql(pSql, true);
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) return;
if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
taosReleaseRef(tscObjRef, pSql->self);
return;
}
if (code != TSDB_CODE_SUCCESS) {
pSql->res.code = code;
tscAsyncResultOnError(pSql);
taosReleaseRef(tscObjRef, pSql->self);
return;
}
SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
executeQuery(pSql, pQueryInfo);
taosReleaseRef(tscObjRef, pSql->self);
}
// TODO return the correct error code to client in tscQueueAsyncError
......
......@@ -2128,11 +2128,12 @@ int32_t tscParseLines(char* lines[], int numLines, SArray* points, SArray* faile
int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
int32_t code = 0;
SSmlLinesInfo* info = calloc(1, sizeof(SSmlLinesInfo));
SSmlLinesInfo* info = tcalloc(1, sizeof(SSmlLinesInfo));
info->id = genLinesSmlId();
if (numLines <= 0 || numLines > 65536) {
tscError("SML:0x%"PRIx64" taos_insert_lines numLines should be between 1 and 65536. numLines: %d", info->id, numLines);
tfree(info);
code = TSDB_CODE_TSC_APP_ERROR;
return code;
}
......@@ -2140,7 +2141,7 @@ int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
for (int i = 0; i < numLines; ++i) {
if (lines[i] == NULL) {
tscError("SML:0x%"PRIx64" taos_insert_lines line %d is NULL", info->id, i);
free(info);
tfree(info);
code = TSDB_CODE_TSC_APP_ERROR;
return code;
}
......@@ -2149,7 +2150,7 @@ int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
SArray* lpPoints = taosArrayInit(numLines, sizeof(TAOS_SML_DATA_POINT));
if (lpPoints == NULL) {
tscError("SML:0x%"PRIx64" taos_insert_lines failed to allocate memory", info->id);
free(info);
tfree(info);
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
......@@ -2177,7 +2178,7 @@ cleanup:
taosArrayDestroy(lpPoints);
free(info);
tfree(info);
return code;
}
......@@ -3,6 +3,7 @@
#include <stdlib.h>
#include <string.h>
#include "cJSON.h"
#include "hash.h"
#include "taos.h"
......@@ -12,9 +13,12 @@
#include "tscParseLine.h"
#define MAX_TELNET_FILEDS_NUM 2
#define OTS_TIMESTAMP_COLUMN_NAME "ts"
#define OTS_METRIC_VALUE_COLUMN_NAME "value"
#define OTD_MAX_FIELDS_NUM 2
#define OTD_JSON_SUB_FIELDS_NUM 2
#define OTD_JSON_FIELDS_NUM 4
#define OTD_TIMESTAMP_COLUMN_NAME "ts"
#define OTD_METRIC_VALUE_COLUMN_NAME "value"
/* telnet style API parser */
static uint64_t HandleId = 0;
......@@ -77,12 +81,12 @@ static int32_t parseTelnetTimeStamp(TAOS_SML_KV **pTS, int *num_kvs, const char
const char *start, *cur;
int32_t ret = TSDB_CODE_SUCCESS;
int len = 0;
char key[] = OTS_TIMESTAMP_COLUMN_NAME;
char key[] = OTD_TIMESTAMP_COLUMN_NAME;
char *value = NULL;
start = cur = *index;
//allocate fields for timestamp and value
*pTS = tcalloc(MAX_TELNET_FILEDS_NUM, sizeof(TAOS_SML_KV));
*pTS = tcalloc(OTD_MAX_FIELDS_NUM, sizeof(TAOS_SML_KV));
while(*cur != '\0') {
if (*cur == ' ') {
......@@ -123,7 +127,7 @@ static int32_t parseTelnetMetricValue(TAOS_SML_KV **pKVs, int *num_kvs, const ch
const char *start, *cur;
int32_t ret = TSDB_CODE_SUCCESS;
int len = 0;
char key[] = OTS_METRIC_VALUE_COLUMN_NAME;
char key[] = OTD_METRIC_VALUE_COLUMN_NAME;
char *value = NULL;
start = cur = *index;
......@@ -405,7 +409,7 @@ cleanup:
tscDebug("OTD:0x%"PRIx64" taos_insert_telnet_lines finish inserting %d lines. code: %d", info->id, numLines, code);
points = TARRAY_GET_START(lpPoints);
numPoints = taosArrayGetSize(lpPoints);
for (int i=0; i<numPoints; ++i) {
for (int i = 0; i < numPoints; ++i) {
destroySmlDataPoint(points+i);
}
......@@ -422,3 +426,548 @@ int taos_telnet_insert(TAOS* taos, TAOS_SML_DATA_POINT* points, int numPoint) {
tfree(info);
return code;
}
/* telnet style API parser */
int32_t parseMetricFromJSON(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInfo* info) {
cJSON *metric = cJSON_GetObjectItem(root, "metric");
if (!cJSON_IsString(metric)) {
return TSDB_CODE_TSC_INVALID_JSON;
}
size_t stableLen = strlen(metric->valuestring);
if (stableLen > TSDB_TABLE_NAME_LEN) {
tscError("OTD:0x%"PRIx64" Metric cannot exceeds 193 characters in JSON", info->id);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}
pSml->stableName = tcalloc(stableLen + 1, sizeof(char));
if (pSml->stableName == NULL){
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
if (isdigit(metric->valuestring[0])) {
tscError("OTD:0x%"PRIx64" Metric cannnot start with digit in JSON", info->id);
tfree(pSml->stableName);
return TSDB_CODE_TSC_INVALID_JSON;
}
tstrncpy(pSml->stableName, metric->valuestring, stableLen + 1);
return TSDB_CODE_SUCCESS;
}
int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* info) {
int32_t size = cJSON_GetArraySize(root);
if (size != OTD_JSON_SUB_FIELDS_NUM) {
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON *value = cJSON_GetObjectItem(root, "value");
if (!cJSON_IsNumber(value)) {
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON *type = cJSON_GetObjectItem(root, "type");
if (!cJSON_IsString(type)) {
return TSDB_CODE_TSC_INVALID_JSON;
}
*tsVal = value->valueint;
//if timestamp value is 0 use current system time
if (*tsVal == 0) {
*tsVal = taosGetTimestampNs();
return TSDB_CODE_SUCCESS;
}
size_t typeLen = strlen(type->valuestring);
if (typeLen == 1 && type->valuestring[0] == 's') {
//seconds
*tsVal = (int64_t)(*tsVal * 1e9);
} else if (typeLen == 2 && type->valuestring[1] == 's') {
switch (type->valuestring[0]) {
case 'm':
//milliseconds
*tsVal = convertTimePrecision(*tsVal, TSDB_TIME_PRECISION_MILLI, TSDB_TIME_PRECISION_NANO);
break;
case 'u':
//microseconds
*tsVal = convertTimePrecision(*tsVal, TSDB_TIME_PRECISION_MICRO, TSDB_TIME_PRECISION_NANO);
break;
case 'n':
//nanoseconds
*tsVal = *tsVal * 1;
break;
default:
return TSDB_CODE_TSC_INVALID_JSON;
}
}
return TSDB_CODE_SUCCESS;
}
int32_t parseTimestampFromJSON(cJSON *root, TAOS_SML_KV **pTS, int *num_kvs, SSmlLinesInfo* info) {
//Timestamp must be the first KV to parse
assert(*num_kvs == 0);
int64_t tsVal;
char key[] = OTD_TIMESTAMP_COLUMN_NAME;
cJSON *timestamp = cJSON_GetObjectItem(root, "timestamp");
if (cJSON_IsNumber(timestamp)) {
//timestamp value 0 indicates current system time
if (timestamp->valueint == 0) {
tsVal = taosGetTimestampNs();
} else {
tsVal = convertTimePrecision(timestamp->valueint, TSDB_TIME_PRECISION_MICRO, TSDB_TIME_PRECISION_NANO);
}
} else if (cJSON_IsObject(timestamp)) {
int32_t ret = parseTimestampFromJSONObj(timestamp, &tsVal, info);
if (ret != TSDB_CODE_SUCCESS) {
tscError("OTD:0x%"PRIx64" Failed to parse timestamp from JSON Obj", info->id);
return ret;
}
} else {
return TSDB_CODE_TSC_INVALID_JSON;
}
//allocate fields for timestamp and value
*pTS = tcalloc(OTD_MAX_FIELDS_NUM, sizeof(TAOS_SML_KV));
(*pTS)->key = tcalloc(sizeof(key), 1);
memcpy((*pTS)->key, key, sizeof(key));
(*pTS)->type = TSDB_DATA_TYPE_TIMESTAMP;
(*pTS)->length = (int16_t)tDataTypes[(*pTS)->type].bytes;
(*pTS)->value = tcalloc((*pTS)->length, 1);
memcpy((*pTS)->value, &tsVal, (*pTS)->length);
*num_kvs += 1;
return TSDB_CODE_SUCCESS;
}
int32_t convertJSONBool(TAOS_SML_KV *pVal, char* typeStr, int64_t valueInt, SSmlLinesInfo* info) {
if (strcasecmp(typeStr, "bool") != 0) {
tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON Bool", info->id, typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
pVal->type = TSDB_DATA_TYPE_BOOL;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(bool *)(pVal->value) = valueInt ? true : false;
return TSDB_CODE_SUCCESS;
}
int32_t convertJSONNumber(TAOS_SML_KV *pVal, char* typeStr, cJSON *value, SSmlLinesInfo* info) {
//tinyint
if (strcasecmp(typeStr, "i8") == 0 ||
strcasecmp(typeStr, "tinyint") == 0) {
if (!IS_VALID_TINYINT(value->valueint)) {
tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(tinyint)", info->id, value->valueint);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_TINYINT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(int8_t *)(pVal->value) = (int8_t)(value->valueint);
return TSDB_CODE_SUCCESS;
}
//smallint
if (strcasecmp(typeStr, "i16") == 0 ||
strcasecmp(typeStr, "smallint") == 0) {
if (!IS_VALID_SMALLINT(value->valueint)) {
tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(smallint)", info->id, value->valueint);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_SMALLINT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(int16_t *)(pVal->value) = (int16_t)(value->valueint);
return TSDB_CODE_SUCCESS;
}
//int
if (strcasecmp(typeStr, "i32") == 0 ||
strcasecmp(typeStr, "int") == 0) {
if (!IS_VALID_INT(value->valueint)) {
tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(int)", info->id, value->valueint);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_INT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(int32_t *)(pVal->value) = (int32_t)(value->valueint);
return TSDB_CODE_SUCCESS;
}
//bigint
if (strcasecmp(typeStr, "i64") == 0 ||
strcasecmp(typeStr, "bigint") == 0) {
if (!IS_VALID_BIGINT(value->valueint)) {
tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(bigint)", info->id, value->valueint);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_BIGINT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(int64_t *)(pVal->value) = (int64_t)(value->valueint);
return TSDB_CODE_SUCCESS;
}
//float
if (strcasecmp(typeStr, "f32") == 0 ||
strcasecmp(typeStr, "float") == 0) {
if (!IS_VALID_FLOAT(value->valuedouble)) {
tscError("OTD:0x%"PRIx64" JSON value(%f) cannot fit in type(float)", info->id, value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_FLOAT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(float *)(pVal->value) = (float)(value->valuedouble);
return TSDB_CODE_SUCCESS;
}
//double
if (strcasecmp(typeStr, "f64") == 0 ||
strcasecmp(typeStr, "double") == 0) {
if (!IS_VALID_DOUBLE(value->valuedouble)) {
tscError("OTD:0x%"PRIx64" JSON value(%f) cannot fit in type(double)", info->id, value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_DOUBLE;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(double *)(pVal->value) = (double)(value->valuedouble);
return TSDB_CODE_SUCCESS;
}
//if reach here means type is unsupported
tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON Number", info->id, typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
int32_t convertJSONString(TAOS_SML_KV *pVal, char* typeStr, cJSON *value, SSmlLinesInfo* info) {
if (strcasecmp(typeStr, "binary") == 0) {
pVal->type = TSDB_DATA_TYPE_BINARY;
} else if (strcasecmp(typeStr, "nchar") == 0) {
pVal->type = TSDB_DATA_TYPE_NCHAR;
} else {
tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON String", info->id, typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
pVal->length = (int16_t)strlen(value->valuestring);
pVal->value = tcalloc(pVal->length + 1, 1);
memcpy(pVal->value, value->valuestring, pVal->length);
return TSDB_CODE_SUCCESS;
}
int32_t parseValueFromJSONObj(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info) {
int32_t ret = TSDB_CODE_SUCCESS;
int32_t size = cJSON_GetArraySize(root);
if (size != OTD_JSON_SUB_FIELDS_NUM) {
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON *value = cJSON_GetObjectItem(root, "value");
if (value == NULL) {
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON *type = cJSON_GetObjectItem(root, "type");
if (!cJSON_IsString(type)) {
return TSDB_CODE_TSC_INVALID_JSON;
}
switch (value->type) {
case cJSON_True:
case cJSON_False: {
ret = convertJSONBool(pVal, type->valuestring, value->valueint, info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
break;
}
case cJSON_Number: {
ret = convertJSONNumber(pVal, type->valuestring, value, info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
break;
}
case cJSON_String: {
ret = convertJSONString(pVal, type->valuestring, value, info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
break;
}
default:
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
return TSDB_CODE_SUCCESS;
}
int32_t parseValueFromJSON(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info) {
int type = root->type;
switch (type) {
case cJSON_True:
case cJSON_False: {
pVal->type = TSDB_DATA_TYPE_BOOL;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(bool *)(pVal->value) = root->valueint ? true : false;
break;
}
case cJSON_Number: {
//convert default JSON Number type to float
pVal->type = TSDB_DATA_TYPE_FLOAT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
*(float *)(pVal->value) = (float)(root->valuedouble);
break;
}
case cJSON_String: {
//convert default JSON String type to nchar
pVal->type = TSDB_DATA_TYPE_NCHAR;
//pVal->length = wcslen((wchar_t *)root->valuestring) * TSDB_NCHAR_SIZE;
pVal->length = (int16_t)strlen(root->valuestring);
pVal->value = tcalloc(pVal->length + 1, 1);
memcpy(pVal->value, root->valuestring, pVal->length);
break;
}
case cJSON_Object: {
int32_t ret = parseValueFromJSONObj(root, pVal, info);
if (ret != TSDB_CODE_SUCCESS) {
tscError("OTD:0x%"PRIx64" Failed to parse timestamp from JSON Obj", info->id);
return ret;
}
break;
}
default:
return TSDB_CODE_TSC_INVALID_JSON;
}
return TSDB_CODE_SUCCESS;
}
int32_t parseMetricValueFromJSON(cJSON *root, TAOS_SML_KV **pKVs, int *num_kvs, SSmlLinesInfo* info) {
//skip timestamp
TAOS_SML_KV *pVal = *pKVs + 1;
char key[] = OTD_METRIC_VALUE_COLUMN_NAME;
cJSON *metricVal = cJSON_GetObjectItem(root, "value");
if (metricVal == NULL) {
return TSDB_CODE_TSC_INVALID_JSON;
}
int32_t ret = parseValueFromJSON(metricVal, pVal, info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
pVal->key = tcalloc(sizeof(key), 1);
memcpy(pVal->key, key, sizeof(key));
*num_kvs += 1;
return TSDB_CODE_SUCCESS;
}
int32_t parseTagsFromJSON(cJSON *root, TAOS_SML_KV **pKVs, int *num_kvs, char **childTableName, SSmlLinesInfo* info) {
int32_t ret = TSDB_CODE_SUCCESS;
cJSON *tags = cJSON_GetObjectItem(root, "tags");
if (tags == NULL || tags->type != cJSON_Object) {
return TSDB_CODE_TSC_INVALID_JSON;
}
//only pick up the first ID value as child table name
cJSON *id = cJSON_GetObjectItem(tags, "ID");
if (id != NULL) {
size_t idLen = strlen(id->valuestring);
ret = isValidChildTableName(id->valuestring, (int16_t)idLen);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
*childTableName = tcalloc(idLen + 1, sizeof(char));
memcpy(*childTableName, id->valuestring, idLen);
//remove all ID fields from the tags list (matching is case-insensitive)
while (id != NULL) {
cJSON_DeleteItemFromObject(tags, "ID");
id = cJSON_GetObjectItem(tags, "ID");
}
}
int32_t tagNum = cJSON_GetArraySize(tags);
//at least one tag pair required
if (tagNum <= 0) {
return TSDB_CODE_TSC_INVALID_JSON;
}
//allocate memory for tags
*pKVs = tcalloc(tagNum, sizeof(TAOS_SML_KV));
TAOS_SML_KV *pkv = *pKVs;
for (int32_t i = 0; i < tagNum; ++i) {
cJSON *tag = cJSON_GetArrayItem(tags, i);
if (tag == NULL) {
return TSDB_CODE_TSC_INVALID_JSON;
}
//key
size_t keyLen = strlen(tag->string);
pkv->key = tcalloc(keyLen + 1, sizeof(char));
strncpy(pkv->key, tag->string, keyLen);
//value
ret = parseValueFromJSON(tag, pkv, info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
*num_kvs += 1;
pkv++;
}
return ret;
}
int32_t tscParseJSONPayload(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInfo* info) {
int32_t ret = TSDB_CODE_SUCCESS;
if (!cJSON_IsObject(root)) {
tscError("OTD:0x%"PRIx64" data point needs to be JSON object", info->id);
return TSDB_CODE_TSC_INVALID_JSON;
}
int32_t size = cJSON_GetArraySize(root);
//the outermost JSON object must have exactly 4 fields
if (size != OTD_JSON_FIELDS_NUM) {
tscError("OTD:0x%"PRIx64" Invalid number of JSON fields in data point %d", info->id, size);
return TSDB_CODE_TSC_INVALID_JSON;
}
//Parse metric
ret = parseMetricFromJSON(root, pSml, info);
if (ret != TSDB_CODE_SUCCESS) {
tscError("OTD:0x%"PRIx64" Unable to parse metric from JSON payload", info->id);
return ret;
}
tscDebug("OTD:0x%"PRIx64" Parse metric from JSON payload finished", info->id);
//Parse timestamp
ret = parseTimestampFromJSON(root, &pSml->fields, &pSml->fieldNum, info);
if (ret) {
tscError("OTD:0x%"PRIx64" Unable to parse timestamp from JSON payload", info->id);
return ret;
}
tscDebug("OTD:0x%"PRIx64" Parse timestamp from JSON payload finished", info->id);
//Parse metric value
ret = parseMetricValueFromJSON(root, &pSml->fields, &pSml->fieldNum, info);
if (ret) {
tscError("OTD:0x%"PRIx64" Unable to parse metric value from JSON payload", info->id);
return ret;
}
tscDebug("OTD:0x%"PRIx64" Parse metric value from JSON payload finished", info->id);
//Parse tags
ret = parseTagsFromJSON(root, &pSml->tags, &pSml->tagNum, &pSml->childTableName, info);
if (ret) {
tscError("OTD:0x%"PRIx64" Unable to parse tags from JSON payload", info->id);
return ret;
}
tscDebug("OTD:0x%"PRIx64" Parse tags from JSON payload finished", info->id);
return TSDB_CODE_SUCCESS;
}
int32_t tscParseMultiJSONPayload(char* payload, SArray* points, SSmlLinesInfo* info) {
int32_t payloadNum, ret;
ret = TSDB_CODE_SUCCESS;
if (payload == NULL) {
tscError("OTD:0x%"PRIx64" empty JSON Payload", info->id);
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON *root = cJSON_Parse(payload);
//multiple data points must be sent in JSON array
if (cJSON_IsObject(root)) {
payloadNum = 1;
} else if (cJSON_IsArray(root)) {
payloadNum = cJSON_GetArraySize(root);
} else {
tscError("OTD:0x%"PRIx64" Invalid JSON Payload", info->id);
ret = TSDB_CODE_TSC_INVALID_JSON;
goto PARSE_JSON_OVER;
}
for (int32_t i = 0; i < payloadNum; ++i) {
TAOS_SML_DATA_POINT point = {0};
cJSON *dataPoint = (payloadNum == 1) ? root : cJSON_GetArrayItem(root, i);
ret = tscParseJSONPayload(dataPoint, &point, info);
if (ret != TSDB_CODE_SUCCESS) {
tscError("OTD:0x%"PRIx64" JSON data point parse failed", info->id);
destroySmlDataPoint(&point);
goto PARSE_JSON_OVER;
} else {
tscDebug("OTD:0x%"PRIx64" JSON data point parse success", info->id);
}
taosArrayPush(points, &point);
}
PARSE_JSON_OVER:
cJSON_Delete(root);
return ret;
}
int taos_insert_json_payload(TAOS* taos, char* payload) {
int32_t code = 0;
SSmlLinesInfo* info = tcalloc(1, sizeof(SSmlLinesInfo));
info->id = genUID();
if (payload == NULL) {
tscError("OTD:0x%"PRIx64" taos_insert_json_payload payload is NULL", info->id);
tfree(info);
code = TSDB_CODE_TSC_APP_ERROR;
return code;
}
SArray* lpPoints = taosArrayInit(1, sizeof(TAOS_SML_DATA_POINT));
if (lpPoints == NULL) {
tscError("OTD:0x%"PRIx64" taos_insert_json_payload failed to allocate memory", info->id);
tfree(info);
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
tscDebug("OTD:0x%"PRIx64" taos_insert_telnet_lines begin inserting %d points", info->id, 1);
code = tscParseMultiJSONPayload(payload, lpPoints, info);
size_t numPoints = taosArrayGetSize(lpPoints);
if (code != 0) {
goto cleanup;
}
TAOS_SML_DATA_POINT* points = TARRAY_GET_START(lpPoints);
code = tscSmlInsert(taos, points, (int)numPoints, info);
if (code != 0) {
tscError("OTD:0x%"PRIx64" taos_insert_json_payload error: %s", info->id, tstrerror((code)));
}
cleanup:
tscDebug("OTD:0x%"PRIx64" taos_insert_json_payload finish inserting 1 Point. code: %d", info->id, code);
points = TARRAY_GET_START(lpPoints);
numPoints = taosArrayGetSize(lpPoints);
for (int i = 0; i < numPoints; ++i) {
destroySmlDataPoint(points+i);
}
taosArrayDestroy(lpPoints);
tfree(info);
return code;
}
......@@ -281,6 +281,8 @@ static uint8_t convertRelationalOperator(SStrToken *pToken) {
return TSDB_RELATION_LIKE;
case TK_MATCH:
return TSDB_RELATION_MATCH;
case TK_NMATCH:
return TSDB_RELATION_NMATCH;
case TK_ISNULL:
return TSDB_RELATION_ISNULL;
case TK_NOTNULL:
......@@ -3782,6 +3784,9 @@ static int32_t doExtractColumnFilterInfo(SSqlCmd* pCmd, SQueryInfo* pQueryInfo,
case TK_MATCH:
pColumnFilter->lowerRelOptr = TSDB_RELATION_MATCH;
break;
case TK_NMATCH:
pColumnFilter->lowerRelOptr = TSDB_RELATION_NMATCH;
break;
case TK_ISNULL:
pColumnFilter->lowerRelOptr = TSDB_RELATION_ISNULL;
break;
......@@ -3846,15 +3851,17 @@ static int32_t tablenameListToString(tSqlExpr* pExpr, SStringBuilder* sb) {
}
static int32_t tablenameCondToString(tSqlExpr* pExpr, uint32_t opToken, SStringBuilder* sb) {
assert(opToken == TK_LIKE || opToken == TK_MATCH);
assert(opToken == TK_LIKE || opToken == TK_MATCH || opToken == TK_NMATCH);
if (opToken == TK_LIKE) {
taosStringBuilderAppendStringLen(sb, QUERY_COND_REL_PREFIX_LIKE, QUERY_COND_REL_PREFIX_LIKE_LEN);
taosStringBuilderAppendString(sb, pExpr->value.pz);
} else if (opToken == TK_MATCH) {
taosStringBuilderAppendStringLen(sb, QUERY_COND_REL_PREFIX_MATCH, QUERY_COND_REL_PREFIX_MATCH_LEN);
taosStringBuilderAppendString(sb, pExpr->value.pz);
} else if (opToken == TK_NMATCH) {
taosStringBuilderAppendStringLen(sb, QUERY_COND_REL_PREFIX_NMATCH, QUERY_COND_REL_PREFIX_NMATCH_LEN);
taosStringBuilderAppendString(sb, pExpr->value.pz);
}
return TSDB_CODE_SUCCESS;
}
......@@ -3903,12 +3910,13 @@ static int32_t checkColumnFilterInfo(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SCol
&& pExpr->tokenId != TK_NOTNULL
&& pExpr->tokenId != TK_LIKE
&& pExpr->tokenId != TK_MATCH
&& pExpr->tokenId != TK_NMATCH
&& pExpr->tokenId != TK_IN) {
ret = invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
goto _err_ret;
}
} else {
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH) {
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH || pExpr->tokenId == TK_NMATCH) {
ret = invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
goto _err_ret;
}
......@@ -3956,7 +3964,7 @@ static int32_t getTablenameCond(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, tSqlExpr*
if (pTableCond->tokenId == TK_IN) {
ret = tablenameListToString(pRight, sb);
} else if (pTableCond->tokenId == TK_LIKE || pTableCond->tokenId == TK_MATCH) {
} else if (pTableCond->tokenId == TK_LIKE || pTableCond->tokenId == TK_MATCH || pTableCond->tokenId == TK_NMATCH) {
if (pRight->tokenId != TK_STRING) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
}
......@@ -4410,7 +4418,7 @@ static bool validateJoinExprNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, tSqlExpr
}
static bool validTableNameOptr(tSqlExpr* pExpr) {
const char nameFilterOptr[] = {TK_IN, TK_LIKE, TK_MATCH};
const char nameFilterOptr[] = {TK_IN, TK_LIKE, TK_MATCH, TK_NMATCH};
for (int32_t i = 0; i < tListLen(nameFilterOptr); ++i) {
if (pExpr->tokenId == nameFilterOptr[i]) {
......@@ -4505,13 +4513,13 @@ static int32_t validateLikeExpr(tSqlExpr* pExpr, STableMeta* pTableMeta, int32_t
// check for match expression
static int32_t validateMatchExpr(tSqlExpr* pExpr, STableMeta* pTableMeta, int32_t index, char* msgBuf) {
const char* msg1 = "regular expression string should be less than %d characters";
const char* msg2 = "illegal column type for match";
const char* msg2 = "illegal column type for match/nmatch";
const char* msg3 = "invalid regular expression";
tSqlExpr* pLeft = pExpr->pLeft;
tSqlExpr* pRight = pExpr->pRight;
if (pExpr->tokenId == TK_MATCH) {
if (pExpr->tokenId == TK_MATCH || pExpr->tokenId == TK_NMATCH) {
if (pRight->value.nLen > tsMaxRegexStringLen) {
char tmp[64] = {0};
sprintf(tmp, msg1, tsMaxRegexStringLen);
......@@ -4519,10 +4527,14 @@ static int32_t validateMatchExpr(tSqlExpr* pExpr, STableMeta* pTableMeta, int32_
}
SSchema* pSchema = tscGetTableSchema(pTableMeta);
if ((!isTablenameToken(&pLeft->columnName)) && !IS_VAR_DATA_TYPE(pSchema[index].type)) {
if ((!isTablenameToken(&pLeft->columnName)) &&(pSchema[index].type != TSDB_DATA_TYPE_BINARY)) {
return invalidOperationMsg(msgBuf, msg2);
}
if (!(pRight->type == SQL_NODE_VALUE && pRight->value.nType == TSDB_DATA_TYPE_BINARY)) {
return invalidOperationMsg(msgBuf, msg3);
}
int errCode = 0;
regex_t regex;
char regErrBuf[256] = {0};
......@@ -4925,9 +4937,12 @@ static int32_t setTableCondForSTableQuery(SSqlCmd* pCmd, SQueryInfo* pQueryInfo,
STagCond* pTagCond = &pQueryInfo->tagCond;
pTagCond->tbnameCond.uid = pTableMetaInfo->pTableMeta->id.uid;
assert(pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH || pExpr->tokenId == TK_IN);
assert(pExpr->tokenId == TK_LIKE
|| pExpr->tokenId == TK_MATCH
|| pExpr->tokenId == TK_NMATCH
|| pExpr->tokenId == TK_IN);
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH) {
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH || pExpr->tokenId == TK_NMATCH) {
char* str = taosStringBuilderGetResult(sb, NULL);
pQueryInfo->tagCond.tbnameCond.cond = strdup(str);
pQueryInfo->tagCond.tbnameCond.len = (int32_t) strlen(str);
......@@ -8224,11 +8239,12 @@ static int32_t handleExprInHavingClause(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, S
&& pExpr->tokenId != TK_NOTNULL
&& pExpr->tokenId != TK_LIKE
&& pExpr->tokenId != TK_MATCH
&& pExpr->tokenId != TK_NMATCH
) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
}
} else {
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH) {
if (pExpr->tokenId == TK_LIKE || pExpr->tokenId == TK_MATCH || pExpr->tokenId == TK_NMATCH) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
}
......
......@@ -34,10 +34,12 @@ struct SSchema;
#define QUERY_COND_REL_PREFIX_IN "IN|"
#define QUERY_COND_REL_PREFIX_LIKE "LIKE|"
#define QUERY_COND_REL_PREFIX_MATCH "MATCH|"
#define QUERY_COND_REL_PREFIX_NMATCH "NMATCH|"
#define QUERY_COND_REL_PREFIX_IN_LEN 3
#define QUERY_COND_REL_PREFIX_LIKE_LEN 5
#define QUERY_COND_REL_PREFIX_MATCH_LEN 6
#define QUERY_COND_REL_PREFIX_NMATCH_LEN 7
typedef bool (*__result_filter_fn_t)(const void *, void *);
typedef void (*__do_filter_suppl_fn_t)(void *, void *);
......
......@@ -440,7 +440,16 @@ tExprNode* exprTreeFromTableName(const char* tbnameCond) {
memcpy(pVal->pz, tbnameCond + QUERY_COND_REL_PREFIX_MATCH_LEN, len);
pVal->nType = TSDB_DATA_TYPE_BINARY;
pVal->nLen = (int32_t)len;
} else if (strncmp(tbnameCond, QUERY_COND_REL_PREFIX_NMATCH, QUERY_COND_REL_PREFIX_NMATCH_LEN) == 0) {
right->nodeType = TSQL_NODE_VALUE;
expr->_node.optr = TSDB_RELATION_NMATCH;
tVariant* pVal = exception_calloc(1, sizeof(tVariant));
right->pVal = pVal;
size_t len = strlen(tbnameCond + QUERY_COND_REL_PREFIX_NMATCH_LEN) + 1;
pVal->pz = exception_malloc(len);
memcpy(pVal->pz, tbnameCond + QUERY_COND_REL_PREFIX_NMATCH_LEN, len);
pVal->nType = TSDB_DATA_TYPE_BINARY;
pVal->nLen = (int32_t)len;
} else if (strncmp(tbnameCond, QUERY_COND_REL_PREFIX_IN, QUERY_COND_REL_PREFIX_IN_LEN) == 0) {
right->nodeType = TSQL_NODE_VALUE;
expr->_node.optr = TSDB_RELATION_IN;
......
......@@ -1044,6 +1044,16 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_BYTE;
taosInitConfigOption(cfg);
cfg.option = "maxRegexStringLen";
cfg.ptr = &tsMaxRegexStringLen;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = 0;
cfg.maxValue = TSDB_MAX_FIELD_LEN;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_BYTE;
taosInitConfigOption(cfg);
cfg.option = "maxNumOfOrderedRes";
cfg.ptr = &tsMaxNumOfOrderedResults;
cfg.valType = TAOS_CFG_VTYPE_INT32;
......
......@@ -113,6 +113,7 @@
</includes>
<excludes>
<exclude>**/AppMemoryLeakTest.java</exclude>
<exclude>**/JDBCTypeAndTypeCompareTest.java</exclude>
<exclude>**/ConnectMultiTaosdByRestfulWithDifferentTokenTest.java</exclude>
<exclude>**/DatetimeBefore1970Test.java</exclude>
<exclude>**/FailOverTest.java</exclude>
......
package com.taosdata.jdbc.cases;
import org.junit.Test;
import java.sql.*;
public class JDBCTypeAndTypeCompareTest {
@Test
public void test() throws SQLException {
Connection conn = DriverManager.getConnection("jdbc:TAOS://192.168.17.156:6030/", "root", "taosdata");
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists test");
stmt.execute("create database if not exists test");
stmt.execute("use test");
stmt.execute("create table weather(ts timestamp, f1 int, f2 bigint, f3 float, f4 double, f5 smallint, f6 tinyint, f7 bool, f8 binary(10), f9 nchar(10) )");
stmt.execute("insert into weather values(now, 1, 2, 3.0, 4.0, 5, 6, true, 'test','test')");
ResultSet rs = stmt.executeQuery("select * from weather");
ResultSetMetaData meta = rs.getMetaData();
while (rs.next()) {
for (int i = 1; i <= meta.getColumnCount(); i++) {
String columnName = meta.getColumnName(i);
String columnTypeName = meta.getColumnTypeName(i);
Object value = rs.getObject(i);
System.out.printf("columnName : %s, columnTypeName: %s, JDBCType: %s\n", columnName, columnTypeName, value.getClass().getName());
}
}
stmt.close();
conn.close();
}
}
......@@ -835,8 +835,14 @@ def taos_insert_telnet_lines(connection, lines):
p_lines = lines_type(*lines)
errno = _libtaos.taos_insert_telnet_lines(connection, p_lines, num_of_lines)
if errno != 0:
raise LinesError("insert telnet lines error", errno)
raise TelnetLinesError("insert telnet lines error", errno)
def taos_insert_json_payload(connection, payload):
# type: (c_void_p, str) -> None
payload = payload.encode("utf-8")
errno = _libtaos.taos_insert_json_payload(connection, payload)
if errno != 0:
raise JsonPayloadError("insert json payload error", errno)
class CTaosInterface(object):
def __init__(self, config=None):
......
......@@ -154,6 +154,25 @@ class TaosConnection(object):
"""
return taos_insert_telnet_lines(self._conn, lines)
def insert_json_payload(self, payload):
"""OpenTSDB HTTP JSON format support
## Example
"{
"metric": "cpu_load_0",
"timestamp": 1626006833610123,
"value": 55.5,
"tags":
{
"host": "ubuntu",
"interface": "eth0",
"Id": "tb0"
}
}"
"""
return taos_insert_json_payload(self._conn, payload)
def cursor(self):
# type: () -> TaosCursor
"""Return a new Cursor object using the connection."""
......
......@@ -83,4 +83,14 @@ class ResultError(DatabaseError):
class LinesError(DatabaseError):
"""taos_insert_lines errors."""
pass
\ No newline at end of file
pass
class TelnetLinesError(DatabaseError):
"""taos_insert_telnet_lines errors."""
pass
class JsonPayloadError(DatabaseError):
"""taos_insert_json_payload errors."""
pass
......@@ -174,6 +174,8 @@ DLL_EXPORT int taos_insert_lines(TAOS* taos, char* lines[], int numLines);
DLL_EXPORT int taos_insert_telnet_lines(TAOS* taos, char* lines[], int numLines);
DLL_EXPORT int taos_insert_json_payload(TAOS* taos, char* payload);
#ifdef __cplusplus
}
#endif
......
......@@ -165,6 +165,7 @@ do { \
#define TSDB_RELATION_NOT 13
#define TSDB_RELATION_MATCH 14
#define TSDB_RELATION_NMATCH 15
#define TSDB_BINARY_OP_ADD 30
#define TSDB_BINARY_OP_SUBTRACT 31
......
......@@ -108,6 +108,9 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TSC_INVALID_TAG_LENGTH TAOS_DEF_ERROR_CODE(0, 0x021E) //"Invalid tag length")
#define TSDB_CODE_TSC_INVALID_COLUMN_LENGTH TAOS_DEF_ERROR_CODE(0, 0x021F) //"Invalid column length")
#define TSDB_CODE_TSC_DUP_TAG_NAMES TAOS_DEF_ERROR_CODE(0, 0x0220) //"duplicated tag names")
#define TSDB_CODE_TSC_INVALID_JSON TAOS_DEF_ERROR_CODE(0, 0x0221) //"Invalid JSON format")
#define TSDB_CODE_TSC_INVALID_JSON_TYPE TAOS_DEF_ERROR_CODE(0, 0x0222) //"Invalid JSON data type")
#define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0223) //"Value out of range")
// mnode
#define TSDB_CODE_MND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0300) //"Message not processed")
......
......@@ -38,179 +38,180 @@
#define TK_IS 20
#define TK_LIKE 21
#define TK_MATCH 22
#define TK_GLOB 23
#define TK_BETWEEN 24
#define TK_IN 25
#define TK_GT 26
#define TK_GE 27
#define TK_LT 28
#define TK_LE 29
#define TK_BITAND 30
#define TK_BITOR 31
#define TK_LSHIFT 32
#define TK_RSHIFT 33
#define TK_PLUS 34
#define TK_MINUS 35
#define TK_DIVIDE 36
#define TK_TIMES 37
#define TK_STAR 38
#define TK_SLASH 39
#define TK_REM 40
#define TK_CONCAT 41
#define TK_UMINUS 42
#define TK_UPLUS 43
#define TK_BITNOT 44
#define TK_SHOW 45
#define TK_DATABASES 46
#define TK_TOPICS 47
#define TK_FUNCTIONS 48
#define TK_MNODES 49
#define TK_DNODES 50
#define TK_ACCOUNTS 51
#define TK_USERS 52
#define TK_MODULES 53
#define TK_QUERIES 54
#define TK_CONNECTIONS 55
#define TK_STREAMS 56
#define TK_VARIABLES 57
#define TK_SCORES 58
#define TK_GRANTS 59
#define TK_VNODES 60
#define TK_DOT 61
#define TK_CREATE 62
#define TK_TABLE 63
#define TK_STABLE 64
#define TK_DATABASE 65
#define TK_TABLES 66
#define TK_STABLES 67
#define TK_VGROUPS 68
#define TK_DROP 69
#define TK_TOPIC 70
#define TK_FUNCTION 71
#define TK_DNODE 72
#define TK_USER 73
#define TK_ACCOUNT 74
#define TK_USE 75
#define TK_DESCRIBE 76
#define TK_DESC 77
#define TK_ALTER 78
#define TK_PASS 79
#define TK_PRIVILEGE 80
#define TK_LOCAL 81
#define TK_COMPACT 82
#define TK_LP 83
#define TK_RP 84
#define TK_IF 85
#define TK_EXISTS 86
#define TK_AS 87
#define TK_OUTPUTTYPE 88
#define TK_AGGREGATE 89
#define TK_BUFSIZE 90
#define TK_PPS 91
#define TK_TSERIES 92
#define TK_DBS 93
#define TK_STORAGE 94
#define TK_QTIME 95
#define TK_CONNS 96
#define TK_STATE 97
#define TK_COMMA 98
#define TK_KEEP 99
#define TK_CACHE 100
#define TK_REPLICA 101
#define TK_QUORUM 102
#define TK_DAYS 103
#define TK_MINROWS 104
#define TK_MAXROWS 105
#define TK_BLOCKS 106
#define TK_CTIME 107
#define TK_WAL 108
#define TK_FSYNC 109
#define TK_COMP 110
#define TK_PRECISION 111
#define TK_UPDATE 112
#define TK_CACHELAST 113
#define TK_PARTITIONS 114
#define TK_UNSIGNED 115
#define TK_TAGS 116
#define TK_USING 117
#define TK_NULL 118
#define TK_NOW 119
#define TK_SELECT 120
#define TK_UNION 121
#define TK_ALL 122
#define TK_DISTINCT 123
#define TK_FROM 124
#define TK_VARIABLE 125
#define TK_INTERVAL 126
#define TK_EVERY 127
#define TK_SESSION 128
#define TK_STATE_WINDOW 129
#define TK_FILL 130
#define TK_SLIDING 131
#define TK_ORDER 132
#define TK_BY 133
#define TK_ASC 134
#define TK_GROUP 135
#define TK_HAVING 136
#define TK_LIMIT 137
#define TK_OFFSET 138
#define TK_SLIMIT 139
#define TK_SOFFSET 140
#define TK_WHERE 141
#define TK_RESET 142
#define TK_QUERY 143
#define TK_SYNCDB 144
#define TK_ADD 145
#define TK_COLUMN 146
#define TK_MODIFY 147
#define TK_TAG 148
#define TK_CHANGE 149
#define TK_SET 150
#define TK_KILL 151
#define TK_CONNECTION 152
#define TK_STREAM 153
#define TK_COLON 154
#define TK_ABORT 155
#define TK_AFTER 156
#define TK_ATTACH 157
#define TK_BEFORE 158
#define TK_BEGIN 159
#define TK_CASCADE 160
#define TK_CLUSTER 161
#define TK_CONFLICT 162
#define TK_COPY 163
#define TK_DEFERRED 164
#define TK_DELIMITERS 165
#define TK_DETACH 166
#define TK_EACH 167
#define TK_END 168
#define TK_EXPLAIN 169
#define TK_FAIL 170
#define TK_FOR 171
#define TK_IGNORE 172
#define TK_IMMEDIATE 173
#define TK_INITIALLY 174
#define TK_INSTEAD 175
#define TK_KEY 176
#define TK_OF 177
#define TK_RAISE 178
#define TK_REPLACE 179
#define TK_RESTRICT 180
#define TK_ROW 181
#define TK_STATEMENT 182
#define TK_TRIGGER 183
#define TK_VIEW 184
#define TK_IPTOKEN 185
#define TK_SEMI 186
#define TK_NONE 187
#define TK_PREV 188
#define TK_LINEAR 189
#define TK_IMPORT 190
#define TK_TBNAME 191
#define TK_JOIN 192
#define TK_INSERT 193
#define TK_INTO 194
#define TK_VALUES 195
#define TK_NMATCH 23
#define TK_GLOB 24
#define TK_BETWEEN 25
#define TK_IN 26
#define TK_GT 27
#define TK_GE 28
#define TK_LT 29
#define TK_LE 30
#define TK_BITAND 31
#define TK_BITOR 32
#define TK_LSHIFT 33
#define TK_RSHIFT 34
#define TK_PLUS 35
#define TK_MINUS 36
#define TK_DIVIDE 37
#define TK_TIMES 38
#define TK_STAR 39
#define TK_SLASH 40
#define TK_REM 41
#define TK_CONCAT 42
#define TK_UMINUS 43
#define TK_UPLUS 44
#define TK_BITNOT 45
#define TK_SHOW 46
#define TK_DATABASES 47
#define TK_TOPICS 48
#define TK_FUNCTIONS 49
#define TK_MNODES 50
#define TK_DNODES 51
#define TK_ACCOUNTS 52
#define TK_USERS 53
#define TK_MODULES 54
#define TK_QUERIES 55
#define TK_CONNECTIONS 56
#define TK_STREAMS 57
#define TK_VARIABLES 58
#define TK_SCORES 59
#define TK_GRANTS 60
#define TK_VNODES 61
#define TK_DOT 62
#define TK_CREATE 63
#define TK_TABLE 64
#define TK_STABLE 65
#define TK_DATABASE 66
#define TK_TABLES 67
#define TK_STABLES 68
#define TK_VGROUPS 69
#define TK_DROP 70
#define TK_TOPIC 71
#define TK_FUNCTION 72
#define TK_DNODE 73
#define TK_USER 74
#define TK_ACCOUNT 75
#define TK_USE 76
#define TK_DESCRIBE 77
#define TK_DESC 78
#define TK_ALTER 79
#define TK_PASS 80
#define TK_PRIVILEGE 81
#define TK_LOCAL 82
#define TK_COMPACT 83
#define TK_LP 84
#define TK_RP 85
#define TK_IF 86
#define TK_EXISTS 87
#define TK_AS 88
#define TK_OUTPUTTYPE 89
#define TK_AGGREGATE 90
#define TK_BUFSIZE 91
#define TK_PPS 92
#define TK_TSERIES 93
#define TK_DBS 94
#define TK_STORAGE 95
#define TK_QTIME 96
#define TK_CONNS 97
#define TK_STATE 98
#define TK_COMMA 99
#define TK_KEEP 100
#define TK_CACHE 101
#define TK_REPLICA 102
#define TK_QUORUM 103
#define TK_DAYS 104
#define TK_MINROWS 105
#define TK_MAXROWS 106
#define TK_BLOCKS 107
#define TK_CTIME 108
#define TK_WAL 109
#define TK_FSYNC 110
#define TK_COMP 111
#define TK_PRECISION 112
#define TK_UPDATE 113
#define TK_CACHELAST 114
#define TK_PARTITIONS 115
#define TK_UNSIGNED 116
#define TK_TAGS 117
#define TK_USING 118
#define TK_NULL 119
#define TK_NOW 120
#define TK_SELECT 121
#define TK_UNION 122
#define TK_ALL 123
#define TK_DISTINCT 124
#define TK_FROM 125
#define TK_VARIABLE 126
#define TK_INTERVAL 127
#define TK_EVERY 128
#define TK_SESSION 129
#define TK_STATE_WINDOW 130
#define TK_FILL 131
#define TK_SLIDING 132
#define TK_ORDER 133
#define TK_BY 134
#define TK_ASC 135
#define TK_GROUP 136
#define TK_HAVING 137
#define TK_LIMIT 138
#define TK_OFFSET 139
#define TK_SLIMIT 140
#define TK_SOFFSET 141
#define TK_WHERE 142
#define TK_RESET 143
#define TK_QUERY 144
#define TK_SYNCDB 145
#define TK_ADD 146
#define TK_COLUMN 147
#define TK_MODIFY 148
#define TK_TAG 149
#define TK_CHANGE 150
#define TK_SET 151
#define TK_KILL 152
#define TK_CONNECTION 153
#define TK_STREAM 154
#define TK_COLON 155
#define TK_ABORT 156
#define TK_AFTER 157
#define TK_ATTACH 158
#define TK_BEFORE 159
#define TK_BEGIN 160
#define TK_CASCADE 161
#define TK_CLUSTER 162
#define TK_CONFLICT 163
#define TK_COPY 164
#define TK_DEFERRED 165
#define TK_DELIMITERS 166
#define TK_DETACH 167
#define TK_EACH 168
#define TK_END 169
#define TK_EXPLAIN 170
#define TK_FAIL 171
#define TK_FOR 172
#define TK_IGNORE 173
#define TK_IMMEDIATE 174
#define TK_INITIALLY 175
#define TK_INSTEAD 176
#define TK_KEY 177
#define TK_OF 178
#define TK_RAISE 179
#define TK_REPLACE 180
#define TK_RESTRICT 181
#define TK_ROW 182
#define TK_STATEMENT 183
#define TK_TRIGGER 184
#define TK_VIEW 185
#define TK_IPTOKEN 186
#define TK_SEMI 187
#define TK_NONE 188
#define TK_PREV 189
#define TK_LINEAR 190
#define TK_IMPORT 191
#define TK_TBNAME 192
#define TK_JOIN 193
#define TK_INSERT 194
#define TK_INTO 195
#define TK_VALUES 196
#define TK_SPACE 300
......
(This diff is collapsed.)
......@@ -17,6 +17,7 @@
#define TDENGINE_HTTP_UTIL_H
bool httpCheckUsedbSql(char *sql);
bool httpCheckAlterSql(char *sql);
void httpTimeToString(int32_t t, char *buf, int32_t buflen);
bool httpUrlMatch(HttpContext *pContext, int32_t pos, char *cmp);
......
......@@ -35,6 +35,7 @@ bool httpProcessData(HttpContext* pContext) {
if (!httpAlterContextState(pContext, HTTP_CONTEXT_STATE_READY, HTTP_CONTEXT_STATE_HANDLING)) {
httpTrace("context:%p, fd:%d, state:%s not in ready state, stop process request", pContext, pContext->fd,
httpContextStateStr(pContext->state));
pContext->error = true;
httpCloseContextByApp(pContext);
return false;
}
......
......@@ -1157,10 +1157,6 @@ static int32_t httpParseChar(HttpParser *parser, const char c, int32_t *again) {
httpOnError(parser, HTTP_CODE_INTERNAL_SERVER_ERROR, TSDB_CODE_HTTP_PARSE_ERROR_STATE);
}
if (ok != 0) {
pContext->error = true;
}
return ok;
}
......
......@@ -147,6 +147,8 @@ void httpSendErrorResp(HttpContext *pContext, int32_t errNo) {
httpCode = pContext->parser->httpCode;
}
pContext->error = true;
char *httpCodeStr = httpGetStatusDesc(httpCode);
httpSendErrorRespImp(pContext, httpCode, httpCodeStr, errNo & 0XFFFF, tstrerror(errNo));
}
......
......@@ -16,6 +16,7 @@
#define _DEFAULT_SOURCE
#include "os.h"
#include "tglobal.h"
#include "tsclient.h"
#include "httpLog.h"
#include "httpJson.h"
#include "httpRestHandle.h"
......@@ -62,13 +63,21 @@ void restStartSqlJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result)
httpJsonItemToken(jsonBuf);
httpJsonToken(jsonBuf, JsonArrStt);
SSqlObj *pObj = (SSqlObj *) result;
bool isAlterSql = (pObj->sqlstr == NULL) ? false : httpCheckAlterSql(pObj->sqlstr);
if (num_fields == 0) {
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
} else {
for (int32_t i = 0; i < num_fields; ++i) {
if (isAlterSql == true) {
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
} else {
for (int32_t i = 0; i < num_fields; ++i) {
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
}
}
}
......@@ -99,8 +108,14 @@ void restStartSqlJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result)
httpJsonItemToken(jsonBuf);
httpJsonToken(jsonBuf, JsonArrStt);
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
if (isAlterSql == true) {
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
} else {
httpJsonItemToken(jsonBuf);
httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
}
httpJsonItemToken(jsonBuf);
httpJsonInt(jsonBuf, fields[i].type);
httpJsonItemToken(jsonBuf);
......
......@@ -191,8 +191,6 @@ static void httpProcessHttpData(void *param) {
if (httpReadData(pContext)) {
(*(pThread->processData))(pContext);
atomic_fetch_add_32(&pServer->requestNum, 1);
} else {
httpReleaseContext(pContext/*, false*/);
}
}
}
......@@ -402,13 +400,17 @@ static bool httpReadData(HttpContext *pContext) {
} else if (nread < 0) {
if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK) {
httpDebug("context:%p, fd:%d, read from socket error:%d, wait another event", pContext, pContext->fd, errno);
return false; // later again
continue; // later again
} else {
httpError("context:%p, fd:%d, read from socket error:%d, close connect", pContext, pContext->fd, errno);
taosCloseSocket(pContext->fd);
httpReleaseContext(pContext/*, false */);
return false;
}
} else {
httpError("context:%p, fd:%d, nread:%d, wait another event", pContext, pContext->fd, nread);
taosCloseSocket(pContext->fd);
httpReleaseContext(pContext/*, false */);
return false;
}
}
......
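One behavioral note on the hunk above, summarized as comments:

    /* EINTR/EAGAIN/EWOULDBLOCK now retries inside the read loop (continue)
       rather than returning to the event loop; the remaining failure paths
       close the socket and release the context before returning false. */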
......@@ -405,7 +405,6 @@ void httpProcessRequestCb(void *param, TAOS_RES *result, int32_t code) {
if (pContext->session == NULL) {
httpSendErrorResp(pContext, TSDB_CODE_HTTP_SESSION_FULL);
httpCloseContextByApp(pContext);
} else {
httpExecCmd(pContext);
}
......
......@@ -21,6 +21,7 @@
#include "httpResp.h"
#include "httpSql.h"
#include "httpUtil.h"
#include "ttoken.h"
bool httpCheckUsedbSql(char *sql) {
if (strstr(sql, "use ") != NULL) {
......@@ -29,6 +30,17 @@ bool httpCheckUsedbSql(char *sql) {
return false;
}
bool httpCheckAlterSql(char *sql) {
int32_t index = 0;
do {
SStrToken t0 = tStrGetToken(sql, &index, false);
if (t0.type != TK_LP) {
return t0.type == TK_ALTER;
}
} while (1);
}
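/* Illustrative behavior of httpCheckAlterSql() given the token loop above:
     "alter table t add column c int"  -> true  (first non-'(' token is TK_ALTER)
     "select * from t"                 -> false (first token is TK_SELECT)
   Leading TK_LP '(' tokens are consumed before the decision is made. */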
void httpTimeToString(int32_t t, char *buf, int32_t buflen) {
memset(buf, 0, (size_t)buflen);
char ts[32] = {0};
......
......@@ -11,7 +11,7 @@
%left OR.
%left AND.
%right NOT.
%left EQ NE ISNULL NOTNULL IS LIKE MATCH GLOB BETWEEN IN.
%left EQ NE ISNULL NOTNULL IS LIKE MATCH NMATCH GLOB BETWEEN IN.
%left GT GE LT LE.
%left BITAND BITOR LSHIFT RSHIFT.
%left PLUS MINUS.
......@@ -753,6 +753,7 @@ expr(A) ::= expr(X) LIKE expr(Y). {A = tSqlExprCreate(X, Y, TK_LIKE); }
// match expression
expr(A) ::= expr(X) MATCH expr(Y). {A = tSqlExprCreate(X, Y, TK_MATCH); }
expr(A) ::= expr(X) NMATCH expr(Y). {A = tSqlExprCreate(X, Y, TK_NMATCH); }
//in expression
expr(A) ::= expr(X) IN LP exprlist(Y) RP. {A = tSqlExprCreate(X, (tSqlExpr*)Y, TK_IN); }
......@@ -919,5 +920,5 @@ cmd ::= KILL QUERY INTEGER(X) COLON(Z) INTEGER(Y). {X.n += (Z.n + Y.n); s
%fallback ID ABORT AFTER ASC ATTACH BEFORE BEGIN CASCADE CLUSTER CONFLICT COPY DATABASE DEFERRED
DELIMITERS DESC DETACH EACH END EXPLAIN FAIL FOR GLOB IGNORE IMMEDIATE INITIALLY INSTEAD
LIKE MATCH KEY OF OFFSET RAISE REPLACE RESTRICT ROW STATEMENT TRIGGER VIEW ALL
LIKE MATCH NMATCH KEY OF OFFSET RAISE REPLACE RESTRICT ROW STATEMENT TRIGGER VIEW ALL
NOW IPTOKEN SEMI NONE PREV LINEAR IMPORT TBNAME JOIN STABLE NULL INSERT INTO VALUES.
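With NMATCH accepted by the grammar, such a filter can be issued through the C client like any other statement; a sketch, assuming an open connection `taos`, with table and column names that simply mirror the sim tests further down:

    TAOS_RES *res = taos_query(taos,
        "select tbname from st where tbname nmatch '^ct[[:digit:]]'");
    if (taos_errno(res) != 0) {
      fprintf(stderr, "query failed: %s\n", taos_errstr(res));
    } else {
      TAOS_ROW    row;
      int         num_fields = taos_num_fields(res);
      TAOS_FIELD *fields     = taos_fetch_fields(res);
      char        buf[256];
      while ((row = taos_fetch_row(res)) != NULL) {
        taos_print_row(buf, row, fields, num_fields);  /* format one row into buf */
        printf("%s\n", buf);
      }
    }
    taos_free_result(res);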
......@@ -29,6 +29,7 @@ OptrStr gOptrStr[] = {
{TSDB_RELATION_NOT_EQUAL, "!="},
{TSDB_RELATION_LIKE, "like"},
{TSDB_RELATION_MATCH, "match"},
{TSDB_RELATION_MATCH, "nmatch"},
{TSDB_RELATION_ISNULL, "is null"},
{TSDB_RELATION_NOTNULL, "not null"},
{TSDB_RELATION_IN, "in"},
......@@ -157,7 +158,7 @@ int8_t filterGetRangeCompFuncFromOptrs(uint8_t optr, uint8_t optr2) {
__compar_fn_t gDataCompare[] = {compareInt32Val, compareInt8Val, compareInt16Val, compareInt64Val, compareFloatVal,
compareDoubleVal, compareLenPrefixedStr, compareStrPatternComp, compareFindItemInSet, compareWStrPatternComp,
compareLenPrefixedWStr, compareUint8Val, compareUint16Val, compareUint32Val, compareUint64Val,
setCompareBytes1, setCompareBytes2, setCompareBytes4, setCompareBytes8, compareStrRegexComp,
setCompareBytes1, setCompareBytes2, setCompareBytes4, setCompareBytes8, compareStrRegexCompMatch, compareStrRegexCompNMatch
};
int8_t filterGetCompFuncIdx(int32_t type, int32_t optr) {
......@@ -198,6 +199,8 @@ int8_t filterGetCompFuncIdx(int32_t type, int32_t optr) {
case TSDB_DATA_TYPE_BINARY: {
if (optr == TSDB_RELATION_MATCH) {
comparFn = 19;
} else if (optr == TSDB_RELATION_NMATCH) {
comparFn = 20;
} else if (optr == TSDB_RELATION_LIKE) { /* wildcard query using like operator */
comparFn = 7;
} else if (optr == TSDB_RELATION_IN) {
......@@ -212,6 +215,8 @@ int8_t filterGetCompFuncIdx(int32_t type, int32_t optr) {
case TSDB_DATA_TYPE_NCHAR: {
if (optr == TSDB_RELATION_MATCH) {
comparFn = 19;
} else if (optr == TSDB_RELATION_NMATCH) {
comparFn = 20;
} else if (optr == TSDB_RELATION_LIKE) {
comparFn = 9;
} else if (optr == TSDB_RELATION_IN) {
......@@ -1879,6 +1884,9 @@ bool filterDoCompare(__compar_fn_t func, uint8_t optr, void *left, void *right)
case TSDB_RELATION_MATCH: {
return ret == 0;
}
case TSDB_RELATION_NMATCH: {
return ret == 0;
}
case TSDB_RELATION_IN: {
return ret == 1;
}
......
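Both the MATCH and NMATCH branches above return ret == 0 because the inversion lives in the comparator itself rather than in filterDoCompare(); in sketch form:

    /* Convention assumed by the filter layer: a comparator returns 0 when the
       row satisfies the operator.
         compareStrRegexCompMatch(l, r)  == 0  <=>  the regex matches
         compareStrRegexCompNMatch(l, r) == 0  <=>  the regex does not match */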
(This diff is collapsed.)
......@@ -3710,6 +3710,9 @@ static bool tableFilterFp(const void* pNode, void* param) {
case TSDB_RELATION_MATCH: {
return ret == 0;
}
case TSDB_RELATION_NMATCH: {
return ret == 0;
}
case TSDB_RELATION_IN: {
return ret == 1;
}
......@@ -4045,6 +4048,8 @@ static int32_t setQueryCond(tQueryInfo *queryColInfo, SQueryCond* pCond) {
assert(0);
} else if (optr == TSDB_RELATION_MATCH) {
assert(0);
} else if (optr == TSDB_RELATION_NMATCH) {
assert(0);
}
return TSDB_CODE_SUCCESS;
......@@ -4202,7 +4207,9 @@ static void queryIndexlessColumn(SSkipList* pSkipList, tQueryInfo* pQueryInfo, S
if (pQueryInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
if (pQueryInfo->optr == TSDB_RELATION_IN) {
addToResult = pQueryInfo->compare(name, pQueryInfo->q);
} else if (pQueryInfo->optr == TSDB_RELATION_LIKE || pQueryInfo->optr == TSDB_RELATION_MATCH) {
} else if (pQueryInfo->optr == TSDB_RELATION_LIKE ||
pQueryInfo->optr == TSDB_RELATION_MATCH ||
pQueryInfo->optr == TSDB_RELATION_NMATCH) {
addToResult = !pQueryInfo->compare(name, pQueryInfo->q);
}
} else {
......@@ -4234,7 +4241,8 @@ void getTableListfromSkipList(tExprNode *pExpr, SSkipList *pSkipList, SArray *re
param->setupInfoFn(pExpr, param->pExtInfo);
tQueryInfo *pQueryInfo = pExpr->_node.info;
if (pQueryInfo->indexed && (pQueryInfo->optr != TSDB_RELATION_LIKE && pQueryInfo->optr != TSDB_RELATION_MATCH
if (pQueryInfo->indexed && (pQueryInfo->optr != TSDB_RELATION_LIKE
&& pQueryInfo->optr != TSDB_RELATION_MATCH && pQueryInfo->optr != TSDB_RELATION_NMATCH
&& pQueryInfo->optr != TSDB_RELATION_IN)) {
queryIndexedColumn(pSkipList, pQueryInfo, result);
} else {
......
......@@ -84,6 +84,8 @@ int32_t compareLenPrefixedStr(const void *pLeft, const void *pRight);
int32_t compareLenPrefixedWStr(const void *pLeft, const void *pRight);
int32_t compareStrPatternComp(const void* pLeft, const void* pRight);
int32_t compareStrRegexComp(const void* pLeft, const void* pRight);
int32_t compareStrRegexCompMatch(const void* pLeft, const void* pRight);
int32_t compareStrRegexCompNMatch(const void* pLeft, const void* pRight);
int32_t compareFindItemInSet(const void *pLeft, const void* pRight);
int32_t compareWStrPatternComp(const void* pLeft, const void* pRight);
......
......@@ -20,7 +20,7 @@
extern "C" {
#endif
#define TSDB_CFG_MAX_NUM 122
#define TSDB_CFG_MAX_NUM 123
#define TSDB_CFG_PRINT_LEN 23
#define TSDB_CFG_OPTION_LEN 24
#define TSDB_CFG_VALUE_LEN 41
......
......@@ -350,6 +350,14 @@ int32_t compareStrPatternComp(const void* pLeft, const void* pRight) {
return (ret == TSDB_PATTERN_MATCH) ? 0 : 1;
}
int32_t compareStrRegexCompMatch(const void* pLeft, const void* pRight) {
return compareStrRegexComp(pLeft, pRight);
}
int32_t compareStrRegexCompNMatch(const void* pLeft, const void* pRight) {
return compareStrRegexComp(pLeft, pRight) ? 0 : 1;
}
int32_t compareStrRegexComp(const void* pLeft, const void* pRight) {
size_t sz = varDataLen(pRight);
char *pattern = malloc(sz + 1);
......@@ -449,7 +457,9 @@ __compar_fn_t getComparFunc(int32_t type, int32_t optr) {
case TSDB_DATA_TYPE_DOUBLE: comparFn = compareDoubleVal; break;
case TSDB_DATA_TYPE_BINARY: {
if (optr == TSDB_RELATION_MATCH) {
comparFn = compareStrRegexComp;
comparFn = compareStrRegexCompMatch;
} else if (optr == TSDB_RELATION_NMATCH) {
comparFn = compareStrRegexCompNMatch;
} else if (optr == TSDB_RELATION_LIKE) { /* wildcard query using like operator */
comparFn = compareStrPatternComp;
} else if (optr == TSDB_RELATION_IN) {
......@@ -463,7 +473,9 @@ __compar_fn_t getComparFunc(int32_t type, int32_t optr) {
case TSDB_DATA_TYPE_NCHAR: {
if (optr == TSDB_RELATION_MATCH) {
comparFn = compareStrRegexComp;
comparFn = compareStrRegexCompMatch;
} else if (optr == TSDB_RELATION_NMATCH) {
comparFn = compareStrRegexCompNMatch;
} else if (optr == TSDB_RELATION_LIKE) {
comparFn = compareWStrPatternComp;
} else if (optr == TSDB_RELATION_IN) {
......
......@@ -116,6 +116,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_COL_NAMES, "duplicated column nam
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_TAG_LENGTH, "Invalid tag length")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_COLUMN_LENGTH, "Invalid column length")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_TAG_NAMES, "duplicated tag names")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON, "Invalid JSON format")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_TYPE, "Invalid JSON data type")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_VALUE_OUT_OF_RANGE, "Value out of range")
// mnode
TAOS_DEFINE_ERROR(TSDB_CODE_MND_MSG_NOT_PROCESSED, "Message not processed")
......
......@@ -195,6 +195,7 @@ static SKeyword keywordTable[] = {
{"INITIALLY", TK_INITIALLY},
{"INSTEAD", TK_INSTEAD},
{"MATCH", TK_MATCH},
{"NMATCH", TK_NMATCH},
{"KEY", TK_KEY},
{"OF", TK_OF},
{"RAISE", TK_RAISE},
......
......@@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.31</version>
<version>2.0.34</version>
</dependency>
</dependencies>
......
......@@ -7,6 +7,9 @@ public class JdbcDemo {
private static String host;
private static final String dbName = "test";
private static final String tbName = "weather";
private static final String user = "root";
private static final String password = "taosdata";
private Connection connection;
public static void main(String[] args) {
......@@ -30,10 +33,9 @@ public class JdbcDemo {
}
private void init() {
final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
final String url = "jdbc:TAOS://" + host + ":6030/?user=" + user + "&password=" + password;
// get connection
try {
Class.forName("com.taosdata.jdbc.TSDBDriver");
Properties properties = new Properties();
properties.setProperty("charset", "UTF-8");
properties.setProperty("locale", "en_US.UTF-8");
......@@ -42,8 +44,7 @@ public class JdbcDemo {
connection = DriverManager.getConnection(url, properties);
if (connection != null)
System.out.println("[ OK ] Connection established.");
} catch (ClassNotFoundException | SQLException e) {
System.out.println("[ ERROR! ] Connection establish failed.");
} catch (SQLException e) {
e.printStackTrace();
}
}
......@@ -74,7 +75,7 @@ public class JdbcDemo {
}
private void select() {
final String sql = "select * from "+ dbName + "." + tbName;
final String sql = "select * from " + dbName + "." + tbName;
executeQuery(sql);
}
......@@ -89,8 +90,6 @@ public class JdbcDemo {
}
}
/************************************************************************/
private void executeQuery(String sql) {
long start = System.currentTimeMillis();
try (Statement statement = connection.createStatement()) {
......@@ -117,7 +116,6 @@ public class JdbcDemo {
}
}
private void printSql(String sql, boolean succeed, long cost) {
System.out.println("[ " + (succeed ? "OK" : "ERROR!") + " ] time cost: " + cost + " ms, execute statement ====> " + sql);
}
......@@ -132,7 +130,6 @@ public class JdbcDemo {
long end = System.currentTimeMillis();
printSql(sql, false, (end - start));
e.printStackTrace();
}
}
......@@ -141,5 +138,4 @@ public class JdbcDemo {
System.exit(0);
}
}
......@@ -4,14 +4,15 @@ import java.sql.*;
import java.util.Properties;
public class JdbcRestfulDemo {
private static final String host = "127.0.0.1";
private static final String host = "localhost";
private static final String dbname = "test";
private static final String user = "root";
private static final String password = "taosdata";
public static void main(String[] args) {
try {
// load JDBC-restful driver
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
// use port 6041 in url when use JDBC-restful
String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata";
String url = "jdbc:TAOS-RS://" + host + ":6041/?user=" + user + "&password=" + password;
Properties properties = new Properties();
properties.setProperty("charset", "UTF-8");
......@@ -21,12 +22,12 @@ public class JdbcRestfulDemo {
Connection conn = DriverManager.getConnection(url, properties);
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists restful_test");
stmt.execute("create database if not exists restful_test");
stmt.execute("use restful_test");
stmt.execute("create table restful_test.weather(ts timestamp, temperature float) tags(location nchar(64))");
stmt.executeUpdate("insert into t1 using restful_test.weather tags('北京') values(now, 18.2)");
ResultSet rs = stmt.executeQuery("select * from restful_test.weather");
stmt.execute("drop database if exists " + dbname);
stmt.execute("create database if not exists " + dbname);
stmt.execute("use " + dbname);
stmt.execute("create table " + dbname + ".weather(ts timestamp, temperature float) tags(location nchar(64))");
stmt.executeUpdate("insert into t1 using " + dbname + ".weather tags('北京') values(now, 18.2)");
ResultSet rs = stmt.executeQuery("select * from " + dbname + ".weather");
ResultSetMetaData meta = rs.getMetaData();
while (rs.next()) {
for (int i = 1; i <= meta.getColumnCount(); i++) {
......@@ -38,8 +39,6 @@ public class JdbcRestfulDemo {
rs.close();
stmt.close();
conn.close();
} catch (ClassNotFoundException e) {
e.printStackTrace();
} catch (SQLException e) {
e.printStackTrace();
}
......
......@@ -34,9 +34,8 @@ public class SubscribeDemo {
System.out.println(usage);
return;
}
/*********************************************************************************************/
try {
Class.forName("com.taosdata.jdbc.TSDBDriver");
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
......
......@@ -60,12 +60,15 @@
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.28</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/taos-jdbcdriver-2.0.28-dist.jar</systemPath>-->
<version>2.0.34</version>
</dependency>
<dependency>
......
......@@ -4,7 +4,7 @@ import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@MapperScan(basePackages = {"com.taosdata.example.springbootdemo.dao"})
@MapperScan(basePackages = {"com.taosdata.example.springbootdemo"})
@SpringBootApplication
public class SpringbootdemoApplication {
......
......@@ -15,35 +15,21 @@ public class WeatherController {
@Autowired
private WeatherService weatherService;
/**
* create database and table
*
* @return
*/
@GetMapping("/lastOne")
public Weather lastOne() {
return weatherService.lastOne();
}
@GetMapping("/init")
public int init() {
return weatherService.init();
}
/**
* Pagination Query
*
* @param limit
* @param offset
* @return
*/
@GetMapping("/{limit}/{offset}")
public List<Weather> queryWeather(@PathVariable Long limit, @PathVariable Long offset) {
return weatherService.query(limit, offset);
}
/**
* upload single weather info
*
* @param temperature
* @param humidity
* @return
*/
@PostMapping("/{temperature}/{humidity}")
public int saveWeather(@PathVariable float temperature, @PathVariable float humidity) {
return weatherService.save(temperature, humidity);
......
......@@ -8,6 +8,8 @@ import java.util.Map;
public interface WeatherMapper {
Map<String, Object> lastOne();
void dropDB();
void createDB();
......
......@@ -9,20 +9,48 @@
<result column="humidity" jdbcType="FLOAT" property="humidity"/>
</resultMap>
<select id="lastOne" resultType="java.util.Map">
select last_row(*), location, groupid
from test.weather
</select>
<update id="dropDB">
drop database if exists test
</update>
<update id="createDB">
create database if not exists test
</update>
<update id="createSuperTable">
create table if not exists test.weather(ts timestamp, temperature float, humidity float) tags(location nchar(64), groupId int)
create table if not exists test.weather
(ts timestamp, temperature float, humidity float, note binary(64))
tags (location nchar(64), groupId int)
</update>
<update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
create table if not exists test.t#{groupId} using test.weather tags(#{location}, #{groupId})
</update>
<select id="select" resultMap="BaseResultMap">
......@@ -36,25 +64,29 @@
</select>
<insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
insert into test.t#{groupId} (ts, temperature, humidity) values (#{ts}, ${temperature}, ${humidity})
insert into test.t#{groupId} (ts, temperature, humidity, note)
values (#{ts}, ${temperature}, ${humidity}, #{note})
</insert>
<select id="getSubTables" resultType="String">
select tbname from test.weather
select tbname
from test.weather
</select>
<select id="count" resultType="int">
select count(*) from test.weather
select count(*)
from test.weather
</select>
<resultMap id="avgResultSet" type="com.taosdata.example.springbootdemo.domain.Weather">
<id column="ts" jdbcType="TIMESTAMP" property="ts" />
<result column="avg(temperature)" jdbcType="FLOAT" property="temperature" />
<result column="avg(humidity)" jdbcType="FLOAT" property="humidity" />
<id column="ts" jdbcType="TIMESTAMP" property="ts"/>
<result column="avg(temperature)" jdbcType="FLOAT" property="temperature"/>
<result column="avg(humidity)" jdbcType="FLOAT" property="humidity"/>
</resultMap>
<select id="avg" resultMap="avgResultSet">
select avg(temperature), avg(humidity)from test.weather interval(1m)
select avg(temperature), avg(humidity)
from test.weather interval(1m)
</select>
</mapper>
......@@ -11,6 +11,7 @@ public class Weather {
private Float temperature;
private Float humidity;
private String location;
private String note;
private int groupId;
public Weather() {
......@@ -61,4 +62,12 @@ public class Weather {
public void setGroupId(int groupId) {
this.groupId = groupId;
}
public String getNote() {
return note;
}
public void setNote(String note) {
this.note = note;
}
}
......@@ -29,6 +29,7 @@ public class WeatherService {
Weather weather = new Weather(new Timestamp(ts + (thirtySec * i)), 30 * random.nextFloat(), random.nextInt(100));
weather.setLocation(locations[random.nextInt(locations.length)]);
weather.setGroupId(i % locations.length);
weather.setNote("note-" + i);
weatherMapper.createTable(weather);
count += weatherMapper.insert(weather);
}
......@@ -58,4 +59,21 @@ public class WeatherService {
public List<Weather> avg() {
return weatherMapper.avg();
}
public Weather lastOne() {
Map<String, Object> result = weatherMapper.lastOne();
long ts = (long) result.get("ts");
float temperature = (float) result.get("temperature");
float humidity = (float) result.get("humidity");
String note = (String) result.get("note");
int groupId = (int) result.get("groupid");
String location = (String) result.get("location");
Weather weather = new Weather(new Timestamp(ts), temperature, humidity);
weather.setNote(note);
weather.setGroupId(groupId);
weather.setLocation(location);
return weather;
}
}
package com.taosdata.example.springbootdemo.util;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import java.sql.Timestamp;
import java.util.Map;
@Aspect
@Component
public class TaosAspect {
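    // Around-advice that normalizes Map results returned by the DAO layer:
    // byte[] values are decoded to String and Timestamp values are converted
    // to epoch milliseconds before the result is handed back to the caller.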
@Around("execution(java.util.Map<String,Object> com.taosdata.example.springbootdemo.dao.*.*(..))")
public Object handleType(ProceedingJoinPoint joinPoint) {
Map<String, Object> result = null;
try {
result = (Map<String, Object>) joinPoint.proceed();
for (String key : result.keySet()) {
Object obj = result.get(key);
if (obj instanceof byte[]) {
obj = new String((byte[]) obj);
result.put(key, obj);
}
if (obj instanceof Timestamp) {
obj = ((Timestamp) obj).getTime();
result.put(key, obj);
}
}
} catch (Throwable e) {
e.printStackTrace();
}
return result;
}
}
# datasource config - JDBC-JNI
#spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
#spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
#spring.datasource.url=jdbc:TAOS://localhost:6030/?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
#spring.datasource.username=root
#spring.datasource.password=taosdata
# datasource config - JDBC-RESTful
spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
spring.datasource.url=jdbc:TAOS-RS://master:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
spring.datasource.username=root
spring.datasource.password=taosdata
spring.datasource.druid.initial-size=5
spring.datasource.druid.min-idle=5
spring.datasource.druid.max-active=5
spring.datasource.druid.max-wait=30000
spring.datasource.druid.validation-query=select server_status();
spring.aop.auto=true
spring.aop.proxy-target-class=true
#mybatis
mybatis.mapper-locations=classpath:mapper/*.xml
logging.level.com.taosdata.jdbc.springbootdemo.dao=debug
(This diff is collapsed.)
......@@ -6,8 +6,8 @@ TARGET=exe
LFLAGS = '-Wl,-rpath,/usr/local/taos/driver/' -ltaos -lpthread -lm -lrt
CFLAGS = -O3 -g -Wall -Wno-deprecated -fPIC -Wno-unused-result -Wconversion \
-Wno-char-subscripts -D_REENTRANT -Wno-format -D_REENTRANT -DLINUX \
-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99
-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99 \
-I../../../deps/cJson/inc
all: $(TARGET)
exe:
......
......@@ -183,6 +183,9 @@ python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosub
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py
#
python3 ./test.py -f tsdb/tsdbComp.py
# update
python3 ./test.py -f update/allow_update.py
python3 ./test.py -f update/allow_update-0.py
......@@ -267,7 +270,7 @@ python3 ./test.py -f query/queryStateWindow.py
# python3 ./test.py -f query/nestedQuery/queryWithOrderLimit.py
python3 ./test.py -f query/nestquery_last_row.py
python3 ./test.py -f query/queryCnameDisplay.py
python3 ./test.py -f query/operator_cost.py
# python3 ./test.py -f query/operator_cost.py
# python3 ./test.py -f query/long_where_query.py
python3 test.py -f query/nestedQuery/queryWithSpread.py
......
###################################################################
# Copyright (c) 2021 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self._conn = conn
def run(self):
print("running {}".format(__file__))
tdSql.execute("drop database if exists test")
tdSql.execute("create database if not exists test precision 'us'")
tdSql.execute('use test')
### Default format ###
### metric value ###
print("============= step1 : test metric value types ================")
payload = '''
{
"metric": "stb0_0",
"timestamp": 1626006833610123,
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_0")
tdSql.checkData(1, 1, "FLOAT")
payload = '''
{
"metric": "stb0_1",
"timestamp": 1626006833610123,
"value": true,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_1")
tdSql.checkData(1, 1, "BOOL")
payload = '''
{
"metric": "stb0_2",
"timestamp": 1626006833610123,
"value": false,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_2")
tdSql.checkData(1, 1, "BOOL")
payload = '''
{
"metric": "stb0_3",
"timestamp": 1626006833610123,
"value": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>",
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_3")
tdSql.checkData(1, 1, "NCHAR")
### timestamp 0 ###
payload = '''
{
"metric": "stb0_4",
"timestamp": 0,
"value": 123,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
### ID ###
payload = '''
{
"metric": "stb0_5",
"timestamp": 0,
"value": 123,
"tags": {
"ID": "tb0_5",
"t1": true,
"iD": "tb000",
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>",
"id": "tb555"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("select tbname from stb0_5")
tdSql.checkData(0, 0, "tb0_5")
### Nested format ###
### timestamp ###
#seconds
payload = '''
{
"metric": "stb1_0",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("select ts from stb1_0")
tdSql.checkData(0, 0, "2021-07-11 20:33:53.000000")
#milliseconds
payload = '''
{
"metric": "stb1_1",
"timestamp": {
"value": 1626006833610,
"type": "ms"
},
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("select ts from stb1_1")
tdSql.checkData(0, 0, "2021-07-11 20:33:53.610000")
#microseconds
payload = '''
{
"metric": "stb1_2",
"timestamp": {
"value": 1626006833610123,
"type": "us"
},
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("select ts from stb1_2")
tdSql.checkData(0, 0, "2021-07-11 20:33:53.610123")
#nanoseconds
payload = '''
{
"metric": "stb1_3",
"timestamp": {
"value": 1.6260068336101233e+18,
"type": "ns"
},
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("select ts from stb1_3")
tdSql.checkData(0, 0, "2021-07-11 20:33:53.610123")
#now
tdSql.execute('use test')
payload = '''
{
"metric": "stb1_4",
"timestamp": {
"value": 0,
"type": "ns"
},
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
### metric value ###
payload = '''
{
"metric": "stb2_0",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": true,
"type": "bool"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_0")
tdSql.checkData(1, 1, "BOOL")
payload = '''
{
"metric": "stb2_1",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 127,
"type": "tinyint"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_1")
tdSql.checkData(1, 1, "TINYINT")
payload = '''
{
"metric": "stb2_2",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 32767,
"type": "smallint"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_2")
tdSql.checkData(1, 1, "SMALLINT")
payload = '''
{
"metric": "stb2_3",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 2147483647,
"type": "int"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_3")
tdSql.checkData(1, 1, "INT")
payload = '''
{
"metric": "stb2_4",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 9.2233720368547758e+18,
"type": "bigint"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_4")
tdSql.checkData(1, 1, "BIGINT")
payload = '''
{
"metric": "stb2_5",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 11.12345,
"type": "float"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_5")
tdSql.checkData(1, 1, "FLOAT")
payload = '''
{
"metric": "stb2_6",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": 22.123456789,
"type": "double"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_6")
tdSql.checkData(1, 1, "DOUBLE")
payload = '''
{
"metric": "stb2_7",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>",
"type": "binary"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_7")
tdSql.checkData(1, 1, "BINARY")
payload = '''
{
"metric": "stb2_8",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": "你好",
"type": "nchar"
},
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb2_8")
tdSql.checkData(1, 1, "NCHAR")
### tag value ###
payload = '''
{
"metric": "stb3_0",
"timestamp": {
"value": 1626006833,
"type": "s"
},
"value": {
"value": "hello",
"type": "nchar"
},
"tags": {
"t1": {
"value": true,
"type": "bool"
},
"t2": {
"value": 127,
"type": "tinyint"
},
"t3": {
"value": 32767,
"type": "smallint"
},
"t4": {
"value": 2147483647,
"type": "int"
},
"t5": {
"value": 9.2233720368547758e+18,
"type": "bigint"
},
"t6": {
"value": 11.12345,
"type": "float"
},
"t7": {
"value": 22.123456789,
"type": "double"
},
"t8": {
"value": "binary_val",
"type": "binary"
},
"t9": {
"value": "你好",
"type": "nchar"
}
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb3_0")
tdSql.checkData(2, 1, "BOOL")
tdSql.checkData(3, 1, "TINYINT")
tdSql.checkData(4, 1, "SMALLINT")
tdSql.checkData(5, 1, "INT")
tdSql.checkData(6, 1, "BIGINT")
tdSql.checkData(7, 1, "FLOAT")
tdSql.checkData(8, 1, "DOUBLE")
tdSql.checkData(9, 1, "BINARY")
tdSql.checkData(10, 1, "NCHAR")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
......@@ -42,7 +42,7 @@ class TwoClients:
tdSql.execute("drop database if exists db3")
# insert data with taosc
# insert data with c connector
for i in range(10):
os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
# # check data correct
......
......@@ -22,7 +22,7 @@
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"keep": 36500,
"minRows": 100,
"maxRows": 4096,
"comp":2,
......
......@@ -13,6 +13,7 @@
import sys
import os
import time
from util.log import *
from util.cases import *
from util.sql import *
......@@ -24,6 +25,9 @@ class TDTestCase:
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
now = time.time()
self.ts = int(round(now * 1000))
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
......@@ -50,6 +54,7 @@ class TDTestCase:
# insert: create one or mutiple tables per sql and insert multiple rows per sql
# test case for https://jira.taosdata.com:18080/browse/TD-4985
os.system("rm -rf tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql")
os.system("%staosdemo -f tools/taosdemoAllTest/TD-4985/query-limit-offset.json -y " % binPath)
tdSql.execute("use db")
tdSql.query("select count (tbname) from stb0")
......@@ -57,25 +62,25 @@ class TDTestCase:
for i in range(1000):
tdSql.execute('''insert into stb00_9999 values(%d, %d, %d,'test99.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_8888 values(%d, %d, %d,'test98.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_7777 values(%d, %d, %d,'test97.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_6666 values(%d, %d, %d,'test96.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_5555 values(%d, %d, %d,'test95.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_4444 values(%d, %d, %d,'test94.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_3333 values(%d, %d, %d,'test93.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_2222 values(%d, %d, %d,'test92.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_1111 values(%d, %d, %d,'test91.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_100 values(%d, %d, %d,'test90.%s')'''
% (1600000000000 + i, i, -10000+i, i))
% (self.ts + i, i, -10000+i, i))
tdSql.query("select * from stb0 where c2 like 'test99%' ")
tdSql.checkRows(1000)
tdSql.query("select * from stb0 where tbname like 'stb00_9999' limit 10" )
......@@ -176,7 +181,7 @@ class TDTestCase:
tdSql.checkData(0, 1, 5)
tdSql.checkData(1, 1, 6)
tdSql.checkData(2, 1, 7)
os.system("rm -rf tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql")
def stop(self):
tdSql.close()
......
......@@ -24,7 +24,7 @@ from random import choice
class TwoClients:
def initConnection(self):
self.host = "chenhaoran02"
self.host = "chenhaoran01"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
......@@ -116,8 +116,10 @@ class TwoClients:
sleep(3)
tdSql.execute(" drop dnode 'chenhaoran02:6030'; ")
sleep(20)
os.system("rm -rf /var/lib/taos/*")
# remove data file;
os.system("rm -rf /home/chr/data/data0/*")
print("clear dnode chenhaoran02'data files")
sleep(5)
os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
print("start taosd")
sleep(10)
......
......@@ -29,13 +29,22 @@ endi
sql select tbname from $st_name where tbname match '^ct[[:digit:]]'
if $rows != 2 then
return -1
endi
sql select tbname from $st_name where tbname nmatch '^ct[[:digit:]]'
if $rows != 1 then
return -1
endi
sql select tbname from $st_name where tbname match '.*'
if $rows !=3 then
if $rows != 3 then
return -1
endi
sql select tbname from $st_name where tbname nmatch '.*'
if $rows != 0 then
return -1
endi
......@@ -44,6 +53,11 @@ if $rows != 2 then
return -1
endi
sql select tbname from $st_name where t1b nmatch '[[:lower:]]+'
if $rows != 1 then
return -1
endi
sql insert into $ct1_name values(now, 'this is engine')
sql insert into $ct2_name values(now, 'this is app egnine')
......@@ -56,6 +70,52 @@ if $rows != 1 then
return -1
endi
sql select c1b from $st_name where c1b nmatch 'engine'
if $data00 != @this is app egnine@ then
return -1
endi
if $rows != 1 then
return -1
endi
sql_error select c1b from $st_name where c1b match e;
sql_error select c1b from $st_name where c1b nmatch e;
sql create table wrong_type(ts timestamp, c0 tinyint, c1 smallint, c2 int, c3 bigint, c4 float, c5 double, c6 bool, c7 nchar(20)) tags(t0 tinyint, t1 smallint, t2 int, t3 bigint, t4 float, t5 double, t6 bool, t7 nchar(10))
sql insert into wrong_type_1 using wrong_type tags(1, 2, 3, 4, 5, 6, true, 'notsupport') values(now, 1, 2, 3, 4, 5, 6, false, 'notsupport')
sql_error select * from wrong_type where ts match '.*'
sql_error select * from wrong_type where ts nmatch '.*'
sql_error select * from wrong_type where c0 match '.*'
sql_error select * from wrong_type where c0 nmatch '.*'
sql_error select * from wrong_type where c1 match '.*'
sql_error select * from wrong_type where c1 nmatch '.*'
sql_error select * from wrong_type where c2 match '.*'
sql_error select * from wrong_type where c2 nmatch '.*'
sql_error select * from wrong_type where c3 match '.*'
sql_error select * from wrong_type where c3 nmatch '.*'
sql_error select * from wrong_type where c4 match '.*'
sql_error select * from wrong_type where c4 nmatch '.*'
sql_error select * from wrong_type where c5 match '.*'
sql_error select * from wrong_type where c5 nmatch '.*'
sql_error select * from wrong_type where c6 match '.*'
sql_error select * from wrong_type where c6 nmatch '.*'
sql_error select * from wrong_type where c7 match '.*'
sql_error select * from wrong_type where c7 nmatch '.*'
sql_error select * from wrong_type where t1 match '.*'
sql_error select * from wrong_type where t1 nmatch '.*'
sql_error select * from wrong_type where t2 match '.*'
sql_error select * from wrong_type where t2 nmatch '.*'
sql_error select * from wrong_type where t3 match '.*'
sql_error select * from wrong_type where t3 nmatch '.*'
sql_error select * from wrong_type where t4 match '.*'
sql_error select * from wrong_type where t4 nmatch '.*'
sql_error select * from wrong_type where t5 match '.*'
sql_error select * from wrong_type where t5 nmatch '.*'
sql_error select * from wrong_type where t6 match '.*'
sql_error select * from wrong_type where t6 nmatch '.*'
sql_error select * from wrong_type where t7 match '.*'
sql_error select * from wrong_type where t7 nmatch '.*'
system sh/exec.sh -n dnode1 -s stop -x SIGINT
......