diff --git a/README.md b/README.md index c0cf01bf30ca3a25a341e577f909cf6f19576b73..f5bcf4a0c8db1b69cef2514b677e3663a793c43f 100644 --- a/README.md +++ b/README.md @@ -45,9 +45,9 @@ test/ Then fill the configuration file _test/cfg/taos.cfg_: ``` echo -e "dataDir $(pwd)/test/data\nlogDir $(pwd)/test/log" > test/cfg/taos.cfg -``` --> +​``` --> To start the TDengine server, run the command below in terminal: -```cmd +​```cmd ./build/bin/taosd -c test/cfg ``` In another terminal, use the TDengine shell to connect the server: @@ -88,7 +88,9 @@ drop database db; ``` # Developing with TDengine -TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors. +### Official Connectors + +TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation. - [Java](https://www.taosdata.com/en/documentation/connector/#Java-Connector) - [C/C++](https://www.taosdata.com/en/documentation/connector/#C/C++-Connector) @@ -97,6 +99,13 @@ TDengine provides abundant developing tools for users to develop on TDengine. Fo - [RESTful API](https://www.taosdata.com/en/documentation/connector/#RESTful-Connector) - [Node.js](https://www.taosdata.com/en/documentation/connector/#Node.js-Connector) +### Third Party Connectors + +The TDengine community has also kindly built some of their own connectors! Follow the links below to find the source code for them. + +- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust) +- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos) + # TDengine Roadmap - Support event-driven stream computing - Support user defined functions diff --git a/documentation/tdenginedocs-cn/super-table/index.html b/documentation/tdenginedocs-cn/super-table/index.html index c4f1d5b9136aa2cae2000fe01fd25ace7bfee582..13e5cdd17370a1bc3a2e6659c8f529bed23eaefa 100644 --- a/documentation/tdenginedocs-cn/super-table/index.html +++ b/documentation/tdenginedocs-cn/super-table/index.html @@ -3,15 +3,15 @@

What is a Super Table

A STable (Super Table) is an abstraction for data collection points of the same type: a set of collection instances of one kind, containing multiple sub-tables with identical table structure. The STable defines the table structure and a set of tags for its sub-tables: the structure is the data columns of the records and their data types, while the tag names and tag data types are defined by the STable, and the tag values record the static attributes of each sub-table and are used to group and filter sub-tables. A sub-table is essentially an ordinary table, made of a timestamp primary key and several data columns, with each row holding concrete data; queries on it work exactly as on an ordinary table. The difference is that every sub-table belongs to one super table and carries a set of tag values defined by the STable. One STable can be defined for each type of collection device. The data model defines the data type of every column in the table, such as temperature, pressure, voltage, current, or real-time GPS position, whereas the tag information is metadata, such as the collection device's serial number, model, and location: static attributes of the table. When creating a table (a data collection point), the user specifies its STable (the collection type) and may also assign the tag values, which can likewise be added or modified later.

TDengine extends standard SQL with syntax for defining a STable, using the keyword TAGS to specify the tag information. The syntax is:

CREATE TABLE <stable_name> (<field_name> TIMESTAMP, field_name1 field_type,…)   TAGS(tag_name tag_type, …) 

Here tag_name is the tag name and tag_type is the tag's data type. A tag may use any data type supported by TDengine except timestamp; at most 6 tags are allowed, and a tag name can collide neither with a system keyword nor with any other column name. For example:

create table thermometer (ts timestamp, degree float) 
 tags (location binary(20), type int)

The SQL above creates a STable named thermometer with the tags location and type.

When creating the table for a specific collection point, you can specify the STable it belongs to along with the tag values. The syntax is:

CREATE TABLE <tb_name> USING <stb_name> TAGS (tag_value1,...)

Continuing the thermometer example above, the statement that creates the data table for a single thermometer via the super table thermometer is:

create table t1 using thermometer tags ('beijing', 10)

The SQL above uses thermometer as a template to create a table named t1. The table's schema is exactly thermometer's schema, with the tag location set to 'beijing' and the tag type set to 10.

A user can create an unlimited number of tables with different tags from one STable; in this sense, a STable is the set of tables that share one data model but carry different tags. As with ordinary tables, STables can be created, dropped, and shown, and most query operations applicable to ordinary tables, including the aggregation and projection/selection functions, can be applied to a STable as well. In addition, tag filter conditions can restrict an aggregation query to only part of the tables in the STable, which greatly simplifies application development.

TDengine builds an index on a table's primary key (the timestamp) and, for now, provides no index on the other collected quantities in the data model (such as temperature or pressure values). Every data collection point produces many data records, but the tags of a collection point amount to just one record, so tag storage carries no redundancy and the overall volume is limited. TDengine stores tag data completely separately from the collected dynamic data and maintains a high-performance in-memory index structure on a STable's tags, giving tags full, fast operation support; users can create, retrieve, update, and delete (CRUD) them as needed.

@@ -19,69 +19,51 @@ tags (location binary(20), type int)

Managing STables

Auto-Creating Sub-Tables on Write

In some special scenarios, the writer does not know whether a device's table already exists when writing data. In that case the auto-create syntax can be used: on write, a missing sub-table is created automatically with the structure defined by the super table, and if the table already exists no new table is created. Note: the auto-create statement can only create sub-tables, never super tables, so the super table must have been defined beforehand. The syntax is almost identical to insert/import; the only difference is that the statement carries the super table and tag information. Specifically:

INSERT INTO <tb_name> USING <stb_name> TAGS (<tag1_value>, ...) VALUES (field_value, ...) (field_value, ...) ...;

This inserts one or more records into table tb_name. If table tb_name does not exist, a new table named tb_name is created with the structure defined by super table stb_name and the tag values the user supplies (tag1_value, ...), and the given values are then written into it. If tb_name already exists, table creation is skipped; the system does not check whether tb_name's tags match the supplied tag values, i.e. the tags of an existing table are not updated.

INSERT INTO <tb1_name> USING <stb1_name> TAGS (<tag1_value1>, ...) VALUES (<field1_value1>, ...) (<field1_value2>, ...) ... <tb_name2> USING <stb_name2> TAGS(<tag1_value2>, ...) VALUES (<field1_value1>, ...) ...;

This inserts one or more records into multiple tables tb1_name, tb2_name, and so on, each with its own super table specified for auto-creation.
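For instance, a sketch of auto-creation against the thermometer STable defined in the usage example below (the sub-table name therm9 and its values are illustrative):

INSERT INTO therm9 USING thermometer TAGS ('beijing', 2) VALUES ('2018-01-01 00:00:00.000', 22);

If therm9 does not exist yet, it is created with thermometer's schema and the tags ('beijing', 2) before the record is written.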

Tag Management in STables

Apart from updating a tag's value, which is performed on a sub-table, all other tag operations (adding a tag, dropping a tag, and so on) can only be applied to the STable, not to an individual sub-table. After a tag is added to a STable, all tables built on that STable automatically gain the new tag; for numeric tags, the added tag's default value is 0.
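For example, a sketch of adding a tag to the thermometer STable from the usage example below (the tag name owner is illustrative):

ALTER TABLE thermometer ADD TAG owner binary(20);

Every sub-table of thermometer then gains the owner tag automatically.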

Multi-Table Aggregation on a STable

An aggregation query can run over all sub-tables created from a STable, with filter conditions on any of the tag values, and the results can be aggregated by the values in TAGS; fuzzy-match filtering on binary tags is not yet supported. The syntax is:

SELECT function<field_name>,… 
  FROM <stable_name> 
  WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
  INTERVAL (<time range>)
@@ -90,41 +72,39 @@ tags (location binary(20), type int)
SLIMIT <group_limit> SOFFSET <group_offset> LIMIT <record_limit> OFFSET <record_offset>

Notes

For aggregation queries on a STable, TDengine currently supports the following aggregation/selection functions: sum, count, avg, first, last, min, max, top, bottom, plus projection on all or part of the columns, used the same way as in single-table queries. Other kinds of aggregation and arithmetic operations are not yet supported, and at present none of the functions or computations can be nested.

A query without GROUP BY aggregates over all tables under the super table that satisfy the filter conditions, and the output is ordered by monotonically increasing timestamp by default; ORDER BY _c0 ASC|DESC selects ascending or descending order of the result timestamps. A query with GROUP BY <tag_name> groups by tags and aggregates the data inside each group separately; the output is one aggregation result per group. The order between groups can be specified with ORDER BY <tag_name>, and within each group the time series increases monotonically.

SLIMIT/SOFFSET specify paging across groups: the maximum number of groups in the result set and the group to start from. LIMIT/OFFSET specify paging within a group: the maximum number of records output per group and the record to start from.
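As a sketch of the two paging levels combined (the limits chosen are illustrative), the query below returns at most 2 location groups and at most 100 records within each group:

SELECT COUNT(*) FROM thermometer
 GROUP BY location
 SLIMIT 2 SOFFSET 0 LIMIT 100 OFFSET 0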

STable Usage Example

We use temperature sensors collecting time-series data to demonstrate the use of STables. In this example, a table is created for every thermometer, named by the thermometer's ID; the time of each reading is recorded as ts and the reading itself as degree. Tags record each sensor's region and type so that we can query on them later. All thermometers collect the same quantities, so we define the table structure with a STable.

Defining the STable Schema and Creating Sub-Tables from It

The statement that creates the STable is:

CREATE TABLE thermometer (ts timestamp, degree double) 
 TAGS(location binary(20), type int)

Suppose there are four sensors across the three regions Beijing, Tianjin, and Shanghai, and three types of temperature sensors in total. We can create a table for each sensor as follows:

CREATE TABLE therm1 USING thermometer TAGS ('beijing', 1);
 CREATE TABLE therm2 USING thermometer TAGS ('beijing', 2);
 CREATE TABLE therm3 USING thermometer TAGS ('tianjin', 1);
 CREATE TABLE therm4 USING thermometer TAGS ('shanghai', 3);

Here therm1, therm2, therm3, and therm4 are four concrete sub-tables of the super table thermometer, i.e. ordinary tables. Taking therm1 as an example: it holds the data of sensor therm1, its table structure is fully defined by thermometer, and the tags location="beijing", type=1 mean that therm1 is located in Beijing and is a type-1 thermometer.

Writing Data

Note that data cannot be written by operating on the STable directly; writes are performed on each sub-table. The statements below write one record into each of the four tables therm1, therm2, therm3, and therm4:

INSERT INTO therm1 VALUES ('2018-01-01 00:00:00.000', 20);
 INSERT INTO therm2 VALUES ('2018-01-01 00:00:00.000', 21);
 INSERT INTO therm3 VALUES ('2018-01-01 00:00:00.000', 24);
 INSERT INTO therm4 VALUES ('2018-01-01 00:00:00.000', 23);

Aggregating by Tags

Query the number of samples count(*), the average temperature avg(degree), the maximum max(degree), and the minimum min(degree) of the sensors located in Beijing (beijing) and Tianjin (tianjing), aggregating the results by region (location) and sensor type (type):

SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
 FROM thermometer
 WHERE location='beijing' or location='tianjing'
 GROUP BY location, type 

Aggregating by Time Window

Query the number of samples count(*), the average avg(degree), the maximum max(degree), and the minimum min(degree) over the past 24 hours (24h) for the sensors located anywhere except Beijing, aggregate the samples in 10-minute windows, and group the results again by region (location) and sensor type (type):

SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
 FROM thermometer
 WHERE location<>'beijing' and ts>=now-1d
 INTERVAL(10M)
 GROUP BY location, type
diff --git a/documentation/tdenginedocs-cn/taos-sql/index.html index 7f143b7638181fdf79d48e6081412172c7f6cf20..ec3e42d0901cf4730f54cc33d7a60890a64c2136 100644 --- a/documentation/tdenginedocs-cn/taos-sql/index.html +++ b/documentation/tdenginedocs-cn/taos-sql/index.html @@ -2,11 +2,10 @@

TDengine provides a SQL-like language: users can manipulate the database with SQL statements in the TDengine shell, and execute SQL statements from programs in C/C++, Java (JDBC), Python, Go, and other languages.

The SQL syntax in this chapter follows these conventions:

• Content inside < > must be supplied by the user; do not type the < > themselves
• [ ] marks optional content; do not type the [ ] themselves
• | separates alternatives, exactly one of which may be chosen; do not type the | itself
• … means the preceding item can be repeated

Supported Data Types

The most important thing when working with TDengine is the timestamp: one must be specified when creating tables and inserting records, and when querying historical records. Timestamps obey the following rules:

@@ -16,318 +15,344 @@
• When inserting a record, if the timestamp is 0, the server's current time is used for the record
• Epoch time: a timestamp can also be a long integer counting milliseconds since 1970-01-01 08:00:00.000
• Times can be added and subtracted. For example, now-2h pushes the query time 2 hours back (the last 2 hours). The time units that may follow a number are: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years). For example, select * from t1 where ts > now-2w and ts <= now-1w queries exactly one full week of data from two weeks ago

TDengine's default timestamp precision is millisecond; microsecond precision can be enabled through the configuration parameter enableMicrosecond.

In TDengine, the following 10 data types can be used in the data model of an ordinary table.

| # | Type | Bytes | Description |
| --- | --- | --- | --- |
| 1 | TIMESTAMP | 8 | Timestamp, millisecond precision at minimum. Time is counted from 1970-01-01 08:00:00.000; records cannot be earlier than this time. |
| 2 | INT | 4 | Integer, range [-2^31+1, 2^31-1]; -2^31 is used as the Null value |
| 3 | BIGINT | 8 | Long integer, range [-2^59, 2^59] |
| 4 | FLOAT | 4 | Float, 6-7 significant digits, range [-3.4E38, 3.4E38] |
| 5 | DOUBLE | 8 | Double-precision float, 15-16 significant digits, range [-1.7E308, 1.7E308] |
| 6 | BINARY | user-defined | Strings of at most 504 bytes. binary accepts only string input, and strings must be enclosed in single quotes; otherwise English letters are automatically converted to lower case. A size must be declared: binary(20) defines a string of at most 20 characters, each taking 1 byte of storage; longer input is truncated automatically. A single quote inside the string is written with the escape sequence backslash plus single quote, i.e. \' |
| 7 | SMALLINT | 2 | Short integer, range [-32767, 32767] |
| 8 | TINYINT | 1 | Single-byte integer, range [-127, 127] |
| 9 | BOOL | 1 | Boolean, {true, false} |
| 10 | NCHAR | user-defined | Non-ASCII strings, e.g. Chinese characters. Each nchar character takes 4 bytes of storage. Strings are enclosed in single quotes, and single quotes inside the string are escaped as \'. A size must be declared: a column of type nchar(10) stores at most 10 nchar characters in a fixed 40 bytes. Longer input is truncated automatically. |

Tips: TDengine is case-insensitive to English characters in SQL statements and converts them to lower case for execution. Case-sensitive strings and passwords therefore need to be enclosed in single quotes to preserve their case.

Database Management

• Create a database

  CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep]

  Creates a database. KEEP is the number of days the database retains data; the default is 3650 days (10 years), and data older than the retention period is deleted automatically. For more storage-related configuration parameters, see Administration.

• Use a database

  USE db_name

  Uses/switches to a database.

• Drop a database

  DROP DATABASE [IF EXISTS] db_name

  Drops the database. All the tables it contains will be deleted; use with caution.

• Show all databases in the system

  SHOW DATABASES
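As a quick sketch of the KEEP option above (the database name demo and the 30-day retention are illustrative):

  CREATE DATABASE IF NOT EXISTS demo KEEP 30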

Table Management

• Create a table

  CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...])

  Notes: 1) the first field of a table must be of TIMESTAMP type, and the system automatically makes it the primary key; 2) a table row must not exceed 4096 bytes; 3) the binary and nchar types must declare their maximum byte length, e.g. binary(20) for 20 bytes.

• Drop a table

  DROP TABLE [IF EXISTS] tb_name

• Show all tables of the current database

  SHOW TABLES [LIKE tb_name_wildcard]

  Shows all tables in the current database. Wildcards may be used in LIKE for name matching: 1) '%' (percent) matches 0 or more characters; 2) '_' (underscore) matches exactly one character.

• Get a table's schema

  DESCRIBE tb_name

• Add a column

  ALTER TABLE tb_name ADD COLUMN field_name data_type

• Drop a column

  ALTER TABLE tb_name DROP COLUMN field_name 

  If the table was created from a super table, schema changes can only be applied to the super table, and a change to the super table's schema takes effect for all tables created from it. Tables that were not created from a super table can have their schema changed directly.

Tips: tables in the current database (selected with use db_name) need no database prefix in SQL statements. To operate on a table in another database, prefix it as "db_name"."table_name"; for example, demo.tb1 refers to table tb1 in database demo.

Data Insertion

• Insert a record

  INSERT INTO tb_name VALUES (field_value, ...);

  Inserts one record into table tb_name.

• Insert a record into specified columns

  INSERT INTO tb_name (field1_name, ...) VALUES(field1_value, ...)

  Inserts one record into the specified columns of table tb_name. Columns that do not appear in the statement are filled with NULL; the primary key (timestamp) cannot be NULL.

• Insert multiple records

  INSERT INTO tb_name VALUES (field1_value1, ...) (field1_value2, ...)...;

  Inserts multiple records into table tb_name.

• Insert multiple records into specified columns

  INSERT INTO tb_name (field1_name, ...) VALUES(field1_value1, ...) (field1_value2, ...)

  Inserts multiple records into the specified columns of table tb_name.

• Insert multiple records into multiple tables

  INSERT INTO tb1_name VALUES (field1_value1, ...)(field1_value2, ...)... 
              tb2_name VALUES (field1_value1, ...)(field1_value2, ...)...;

  Inserts multiple records into tables tb1_name and tb2_name in one statement.

• Insert multiple records into specified columns of multiple tables

  INSERT INTO tb1_name (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...)
              tb2_name (tb2_field1_name, ...) VALUES(field1_value1, ...) (field1_value2, ...)

  Inserts multiple records into the specified columns of tables tb1_name and tb2_name in one statement.

Note: for any one table, the timestamps of newly inserted records must be increasing; otherwise the record is skipped rather than inserted. If the timestamp is 0, the system automatically uses the server's current time as the record's timestamp.

IMPORT: to write records whose timestamps are earlier than the last record's time, use the IMPORT command instead of INSERT; the syntax of IMPORT is exactly the same as INSERT. When importing multiple records at once, the batch must be sorted by timestamp.
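For instance, a sketch of back-filling one earlier record into the therm1 table from the STable chapter (the values are illustrative; as noted above, IMPORT takes exactly the same form as INSERT):

IMPORT INTO therm1 VALUES ('2017-12-31 23:59:00.000', 19);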

Data Queries

The query syntax is:

SELECT {* | expr_list} FROM tb_name
     [WHERE where_condition]
     [ORDER BY _c0 { DESC | ASC }]
     [LIMIT limit [, OFFSET offset]]
     [>> export_file]

SELECT function_list FROM tb_name
     [WHERE where_condition]
     [LIMIT limit [, OFFSET offset]]
     [>> export_file]

• You can use * to return all columns, or name specific columns. Arithmetic can be applied to numeric columns, and output columns can be given aliases.
• The WHERE clause can filter numeric values with all kinds of logical predicates, and strings with wildcards.
• Output is sorted by the first column, the timestamp, in ascending order by default (_c0 refers to the first column, the timestamp), but descending order can be specified. Ordering by any other field is not allowed.
• The parameter LIMIT controls the number of output records and OFFSET specifies the record to start from. LIMIT/OFFSET is applied to the result set after ORDER BY.
• With ">>", the query result can be exported to the named file.

Supported Filter Operations

| Operation | Note | Applicable Data Types |
| --- | --- | --- |
| > | larger than | timestamp and all numeric types |
| < | smaller than | timestamp and all numeric types |
| >= | larger than or equal to | timestamp and all numeric types |
| <= | smaller than or equal to | timestamp and all numeric types |
| = | equal to | all types |
| <> | not equal to | all types |
| % | match with any char sequences | binary nchar |
| _ | match with a single char | binary nchar |

1. To filter on several fields at once, the conditions must be connected with the keyword AND; query conditions connected with OR are not yet supported.
2. For any single field, only a single-interval filter is supported. For example, value>20 and value<30 is a valid filter condition, while Value<20 AND value<>5 is not.

Some Examples

• For the examples below, table tb1 is created with:

  CREATE TABLE tb1 (ts timestamp, col1 int, col2 float, col3 binary(50))

• All records of tb1 from the past hour:

  SELECT * FROM tb1 WHERE ts >= NOW - 1h

• Records of tb1 in the time range from 2018-06-01 08:00:00.000 to 2018-06-02 08:00:00.000 whose col3 string ends with 'nny', ordered by timestamp descending:

  SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC

• The sum of col1 and col2, named complex, for times later than 2018-06-01 08:00:00.000 with col2 greater than 1.2, outputting only 10 records starting from the 5th:

  SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' and col2 > 1.2 LIMIT 10 OFFSET 5

• The number of records from the past 10 minutes whose col2 is greater than 3.14, exported to the file /home/testoutput.csv:

  SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv

SQL Functions

Aggregation Functions

TDengine supports aggregation queries over the data. The supported aggregation and extraction functions are listed below:

• COUNT

  SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]

  Function: counts the number of rows in a table/super table, or the number of non-NULL values of a column.
  Return type: 64-bit integer (INT64).
  Applicable fields: all fields.
  Applies to: tables, super tables.
  Notes: 1) a star (*) may replace a concrete field, in which case the total number of records is returned; 2) for the same table, the result is identical for any field that contains no NULL values; 3) if the target is a concrete column, the number of its non-NULL values is returned.

• AVG

  SELECT AVG(field_name) FROM tb_name [WHERE clause]

  Function: the average value of a column of a table/super table.
  Return type: double.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Applies to: tables, super tables.

• WAVG

  SELECT WAVG(field_name) FROM tb_name WHERE clause

  Function: the time-weighted average of a column of a table/super table over a period of time.
  Return type: double.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Applies to: tables, super tables.

• SUM

  SELECT SUM(field_name) FROM tb_name [WHERE clause]

  Function: the sum of a column of a table/super table.
  Return type: double or INT64.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Applies to: tables, super tables.

• STDDEV

  SELECT STDDEV(field_name) FROM tb_name [WHERE clause]

  Function: the standard deviation of a column of a table.
  Return type: double.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Applies to: tables.

• LEASTSQUARES

  SELECT LEASTSQUARES(field_name) FROM tb_name [WHERE clause]

  Function: the straight line fitted to a column's values as a function of the primary key (timestamp).
  Return type: a string expression (slope, intercept).
  Applicable fields: all except timestamp, binary, nchar, bool.
  Note: the independent variable is the timestamp; the dependent variable is the column's values.
  Applies to: tables.
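As a sketch combining two of these aggregations over the tb1 table created in the examples above:

SELECT COUNT(col1), AVG(col2) FROM tb1 WHERE ts >= NOW - 1h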

Selection Functions

• MIN

  SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]

  Function: the minimum value of a column of a table/super table.
  Return type: same as the field it is applied to.
  Applicable fields: all except timestamp, binary, nchar, bool.

• MAX

  SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: the maximum value of a column of a table/super table.
  Return type: same as the field it is applied to.
  Applicable fields: all except timestamp, binary, nchar, bool.

• FIRST

  SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: the first (earliest-written) non-NULL value of a column of a table/super table.
  Return type: same as the field it is applied to.
  Applicable fields: all fields.
  Notes: 1) FIRST(*) returns the first (smallest-timestamp) non-NULL value of every column; 2) if a column in the result set contains only NULL values, its result is NULL; 3) if all columns are entirely NULL, no result is returned.

• LAST

  SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: the last (latest-written) non-NULL value of a column of a table/super table.
  Return type: same as the field it is applied to.
  Applicable fields: all fields.
  Notes: 1) LAST(*) returns the last (largest-timestamp) non-NULL value of every column; 2) if a column in the result set contains only NULL values, its result is NULL; if all columns are entirely NULL, no result is returned.

• TOP

  SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]

  Function: the k largest non-NULL values of a column of a table/super table; among values tied at the maximum beyond k, those with smaller timestamps are returned.
  Return type: same as the field it is applied to.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Notes: 1) k is in the range 1 ≤ k ≤ 100; 2) the timestamp column associated with each record is returned as well.

• BOTTOM

  SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]

  Function: the k smallest non-NULL values of a column of a table/super table; among values tied at the minimum beyond k, those with smaller timestamps are returned.
  Return type: same as the field it is applied to.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Notes: 1) k is in the range 1 ≤ k ≤ 100; 2) the timestamp column associated with each record is returned as well.

• PERCENTILE

  SELECT PERCENTILE(field_name, P) FROM { tb_name | stb_name } [WHERE clause]

  Function: the percentile of a column's values in a table.
  Return type: double.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Note: P is in the range 0 ≤ P ≤ 100; P=0 is equivalent to MIN and P=100 to MAX.

• LAST_ROW

  SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }

  Function: returns the last record of a table (or super table).
  Return type: same as the field it is applied to.
  Applicable fields: all fields.
  Note: unlike the last function, last_row does not accept a time-range restriction; it always returns the last record.
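Similarly, a sketch of TOP over the same tb1 table (k=3 is illustrative); the associated timestamps are returned alongside the three largest values:

SELECT TOP(col1, 3) FROM tb1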

Computation Functions

• DIFF

  SELECT DIFF(field_name) FROM tb_name [WHERE clause]

  Function: the difference between each value of a column and the corresponding value in the previous row.
  Return type: same as the field it is applied to.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Note: the output has one row fewer than the total rows in the range; the first row produces no result.

• SPREAD

  SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: the difference between the maximum and minimum values of a column of a table/super table.
  Return type: double.
  Applicable fields: all except binary, nchar, bool.
  Note: it can be applied to a TIMESTAMP field, in which case it gives the time range covered by the records.

• Arithmetic operations

  SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name }  [WHERE clause]

  Function: addition, subtraction, multiplication, division, and modulo over one or more columns of a table/super table.
  Return type: double.
  Applicable fields: all except timestamp, binary, nchar, bool.
  Notes: 1) calculations may combine two or more columns, and parentheses control precedence; 2) NULL values do not participate in the calculation; if any operand in a row is NULL, that row's result is NULL.

Time-Dimension Aggregation

TDengine supports aggregation by time window: the data of a table is split into windows along the time axis and aggregated per window. For example, a temperature sensor may report once per second while the average temperature is wanted at 10-minute intervals. This kind of aggregation suits down-sampling. The syntax is:

SELECT function_list FROM tb_name 
   [WHERE where_condition]
   INTERVAL (interval)
   [FILL ({NONE | VALUE | PREV | NULL | LINEAR})]
    @@ -336,39 +361,28 @@ SELECT function_list FROM stb_name
   [WHERE where_condition]
   [GROUP BY tags]
   INTERVAL (interval)
   [FILL ({ VALUE | PREV | NULL | LINEAR})]

• The length of the aggregation window is set by the keyword INTERVAL; the shortest interval is 10 milliseconds (10a). Only aggregation and selection functions with a single output may be used in such a query: count, avg, sum, stddev, leastsquares, percentile, min, max, first, last. Functions with multi-row output (e.g. top, bottom, diff, and arithmetic operations) are not allowed.
• The WHERE clause can specify the start and end time of the query and other filter conditions.
• The FILL clause specifies how a time window with missing data is filled. The fill modes are:
  1. NONE: no filling (the default).
  2. VALUE: fill with a fixed value, which must be specified, e.g. fill(value, 1.23).
  3. NULL: fill with NULL, e.g. fill(null).
  4. PREV: fill with the previous non-NULL value, e.g. fill(prev).

Notes:

1. Using FILL can generate a large amount of filled output, so be sure to specify the time range of the query. For each query, the system returns no more than 10 million interpolated results.
2. The time series in the returned result increases strictly monotonically.
3. If the query target is a super table, the aggregation functions apply to the data of all tables under it that satisfy the value filter conditions. If the query uses no GROUP BY clause, the returned time series is strictly monotonically increasing; if the query groups with GROUP BY, the series inside each group is not guaranteed to be strictly monotonically increasing.

Example: the temperature table is created with:

create table sensor(ts timestamp, degree double, pm25 smallint) 

For the collected sensor data, compute the average, the maximum, the median temperature, and the fitted line of the temperature trend over the past 24 hours, in 10-minute windows, filling missing values with the previous non-NULL value:

SELECT AVG(degree), MAX(degree), LEASTSQUARES(degree), PERCENTILE(degree, 50) FROM sensor
   WHERE TS>=NOW-1d
   INTERVAL(10m)
   FILL(PREV);
diff --git a/documentation/tdenginedocs-en/advanced-features/index.html index 42d2bf394e505f412a1983c881e48a282ee594fa..0401fc37dc9ce5502409d162f240aeeb18bc3b71 100644 --- a/documentation/tdenginedocs-en/advanced-features/index.html +++ b/documentation/tdenginedocs-en/advanced-features/index.html @@ -2,44 +2,40 @@

    Continuous Query

Continuous Query is a query executed by TDengine periodically with a sliding window; it is simplified stream computing driven by timers rather than by events. A continuous query can be applied to a table or a STable, and the result set can be passed to the application directly via a callback function, or written into a new table in TDengine. The query is always executed on a specified time window (the window size is specified by the parameter interval), and this window slides forward as time flows (the sliding period is specified by the parameter sliding).

A continuous query is defined in TAOS SQL; there is nothing special about it. One of its best applications is downsampling. Once it is defined, at the end of each cycle the system will execute the query and pass the result to the application or write it to a database.

If historical data points are inserted into the stream, the query won't be re-executed, and the result set won't be updated. If the result set is passed to the application, the application needs to keep the status of the continuous query; the server won't maintain it. If the application restarts, it needs to decide the time from which the stream computing shall be restarted.

    How to use continuous query

    • Pass result set to application

      Application shall use API taos_stream (details in connector section) to start the stream computing. Inside the API, the SQL syntax is:

SELECT aggregation FROM [table_name | stable_name] 
       INTERVAL(window_size) SLIDING(period)

      where the new keyword INTERVAL specifies the window size, and SLIDING specifies the sliding period. If parameter sliding is not specified, the sliding period will be the same as window size. The minimum window size is 10ms. The sliding period shall not be larger than the window size. If you set a value larger than the window size, the system will adjust it to window size automatically.

      For example:

SELECT COUNT(*) FROM FOO_TABLE 
       INTERVAL(1M) SLIDING(30S)

The above SQL statement will count the number of records for the past 1-minute window every 30 seconds.

    • Save the result into a database

      If you want to save the result set of stream computing into a new table, the SQL shall be:

CREATE TABLE table_name AS 
       SELECT aggregation from [table_name | stable_name]  
       INTERVAL(window_size) SLIDING(period)

      Also, you can set the time range to execute the continuous query. If no range is specified, the continuous query will be executed forever. For example, the following continuous query will be executed from now and will stop in one hour.

CREATE TABLE QUERY_RES AS 
 SELECT COUNT(*) FROM FOO_TABLE 
 WHERE TS > NOW AND TS <= NOW + 1H 
 INTERVAL(1M) SLIDING(30S)

    Manage the Continuous Query


Inside the TDengine shell, you can use the command "show streams" to list the ongoing continuous queries, and the command "kill stream" to kill a specific continuous query.

    If you drop a table generated by the continuous query, the query will be removed too.

    Publisher/Subscriber

Time series data is a sequence of data points over time. Inside a table, the data points are stored in order of timestamp. Also, there is a data retention policy: data points are removed once their lifetime has passed. From another view, a table in TDengine is just a standard message queue.

To reduce the development complexity and improve data consistency, TDengine provides the pub/sub functionality. To publish a message, you simply insert a record into a table. Compared with the popular messaging tool Kafka, you subscribe to a table or a SQL query statement instead of a topic. Once new data points arrive, TDengine will notify the application. The process is just like Kafka.

The detailed API will be introduced in the connectors section.

    Caching

TDengine allocates a fixed-size buffer in memory, and newly arrived data is written into the buffer first. Every device or table gets one or more memory blocks. For typical IoT scenarios, the hot data is always the newly arrived data, which matters most for timely analysis. Based on this observation, TDengine manages the cache blocks with a First-In-First-Out strategy. If there is not enough space in the buffer, the oldest data is saved into hard disk first and then overwritten by newly arrived data. TDengine also guarantees that every device can keep at least one block of data in the buffer.

By this design, the application can retrieve the latest data from each device super-fast, since it is all available in memory. You can use the last or last_row function to return the last data record. If a super table is used, it can be used to return the last data records of all or a subset of devices. For example, to retrieve the latest temperature from thermometers located in Beijing, execute the following SQL

select last(*) from thermometers where location='beijing'

By this design, a caching tool like Redis is not needed in the system, which reduces the complexity of the system.

TDengine creates one or more virtual nodes (vnodes) in each data node. Each vnode contains data for multiple tables and has its own buffer. The buffer of a vnode is fully separated from the buffer of another vnode, not shared. But the tables in a vnode share the same buffer.

System configuration parameter cacheBlockSize configures the cache block size in bytes, and another parameter cacheNumOfBlocks configures the number of cache blocks. The total memory for the buffer of a vnode is cacheBlockSize × cacheNumOfBlocks. Another system parameter numOfBlocksPerMeter configures the maximum number of cache blocks a table can use. When you create a database, you can specify these parameters.
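For instance, a sketch of setting these parameters in taos.cfg, using the same name-value format the README uses for dataDir/logDir (the numbers are illustrative, not recommendations):

cacheBlockSize 16384
cacheNumOfBlocks 4
numOfBlocksPerMeter 2

With these illustrative values, the buffer of one vnode would total 16384 × 4 = 65536 bytes.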

diff --git a/documentation/tdenginedocs-en/connector/index.html index 512da3ffc1944d82c9d5d8c50bfbbad0271565c3..ce32c062ffa6b92c257be97831e4fb8c292ec64d 100644 --- a/documentation/tdenginedocs-en/connector/index.html +++ b/documentation/tdenginedocs-en/connector/index.html @@ -47,7 +47,7 @@
  • void taos_fetch_row_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row), void *param);

    The async API to fetch a result row. res is the result handle. fp is the callback function. param is a user-defined structure to pass to fp. The third parameter of the callback function is a single result row, which is different from that of taos_fetch_rows_a API. With this API, it is not necessary to call taos_fetch_row to retrieve each result row, which is handier than taos_fetch_rows_a but less efficient.
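For instance, a minimal sketch of fetching rows one by one by re-arming the callback (the names fetch_cb and ctx are illustrative, and treating a NULL row as end-of-result is an assumption, not something stated above):

void fetch_cb(void *param, TAOS_RES *res, TAOS_ROW row) {
  if (row == NULL) return;                 /* assumed: no more rows */
  /* consume one row here, e.g. print or buffer it */
  taos_fetch_row_a(res, fetch_cb, param);  /* request the next row */
}
/* after the query has completed, start the loop: */
taos_fetch_row_a(res, fetch_cb, &ctx);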

-Applications may apply operations on multiple tables. However, it is important to make sure the operations on the same table are serialized. That means after sending an insert request in a table to the server, no operations on the table are allowed before a request is received.
+Applications may apply operations on multiple tables. However, it is important to make sure the operations on the same table are serialized. That means after sending an insert request in a table to the server, no operations on the table are allowed before a response is received.

    C/C++ continuous query interface

    TDengine provides APIs for continuous query driven by time, which run queries periodically in the background. There are only two APIs:

      @@ -268,17 +268,26 @@ promise.then(function(result) { result.pretty(); //logs the results to the console as if you were in the taos shell });

You can also bind parameters to a query by filling in the question marks in a string, as shown below. The query will automatically parse what was bound and convert it to the proper format for use with TDengine.

-var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?').bind(new Date(), 5);
+var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?;').bind(new Date(), 5);
     query.execute().then(function(result) {
       result.pretty();
     })

    The TaosQuery object can also be immediately executed upon creation by passing true as the second argument, returning a promise instead of a TaosQuery.

-var promise = cursor.query('select * from meterinfo.meters where v1 = 30', true)
+var promise = cursor.query('select * from meterinfo.meters where v1 = 30;', true)
     promise.then(function(result) {
       result.pretty();
     })

    Async functionality

-Coming soon
+Async queries can be performed using the same functions such as cursor.execute and cursor.query, but now with _a appended to them.

+Say you want to execute two async queries on two separate tables. Using cursor.query_a, you can do that and get a TaosQuery object which, upon execution with the execute_a function, returns a promise that resolves with a TaosResult object.

    +
    var promise1 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter1;').execute_a()
    +var promise2 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter2;').execute_a();
    +promise1.then(function(result) {
    +  result.pretty();
    +})
    +promise2.then(function(result) {
    +  result.pretty();
    +})

    Example

    An example of using the NodeJS connector to create a table with weather data and create and execute queries can be found here (The preferred method for using the connector)

    An example of using the NodeJS connector to achieve the same things but without all the object wrappers that wrap around the data returned to achieve higher functionality can be found here

diff --git a/documentation/tdenginedocs-en/super-table/index.html index 8eeefb62529bc6faa4b073569066be5effecdec2..0c68df7d0048e2fb88606ce61f222f76ccee67d2 100644 --- a/documentation/tdenginedocs-en/super-table/index.html +++ b/documentation/tdenginedocs-en/super-table/index.html @@ -1,38 +1,37 @@ Documentation | Taos Data

    STable: Super Table

"One Table for One Device" design can improve the insert/query performance significantly for a single device. But it has a side effect: the aggregation of multiple tables becomes hard. To reduce the complexity and improve the efficiency, TDengine introduced a new concept: STable (Super Table).

    What is a Super Table

STable is an abstraction and a template for a type of device. A STable contains a set of devices (tables) that have the same schema or data structure. Besides the shared schema, a STable has a set of tags, like the model, serial number and so on. Tags are used to record the static attributes of the devices and to group a set of devices (tables) for aggregation. Tags are metadata of a table and can be added, deleted or changed.

    TDengine does not save tags as a part of the data points collected. Instead, tags are saved as metadata. Each table has a set of tags. To improve query performance, tags are all cached and indexed. One table can only belong to one STable, but one STable may contain many tables.

Like a table, you can create, show, delete and describe STables. Most query operations on tables can be applied to STables too, including the aggregation and selector functions. For queries on a STable, if there is no tag filter, the operations are applied to all the tables created via this STable. If there is a tag filter, the operations are applied only to the subset of tables that satisfy the tag filter conditions. It is very convenient to use tags to put devices into different groups for aggregation.

    Create a STable

Similar to creating a standard table, the syntax is:

CREATE TABLE <stable_name> (<field_name> TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …)

New keyword "tags" is introduced, where tag_name is the tag name, and tag_type is the associated data type.

    Note:

1. The bytes of all tags together shall be less than 512
2. A tag's data type cannot be timestamp or nchar
3. Tag names shall be different from the field names
4. Tag names shall not be the same as system keywords
5. The maximum number of tags is 6

    For example:

create table thermometer (ts timestamp, degree float) 
 tags (location binary(20), type int)

The above statement creates a STable thermometer with two tags, "location" and "type".

    Create a Table via STable

    To create a table for a device, you can use a STable as its template and assign the tag values. The syntax is:

CREATE TABLE <tb_name> USING <stb_name> TAGS (tag_value1,...)

    You can create any number of tables via a STable, and each table may have different tag values. For example, you create five tables via STable thermometer below:

 create table t1 using thermometer tags ('beijing', 10);
  create table t2 using thermometer tags ('beijing', 20);
  create table t3 using thermometer tags ('shanghai', 10);
  create table t4 using thermometer tags ('shanghai', 20);
  create table t5 using thermometer tags ('new york', 10);

    Aggregate Tables via STable

    You can group a set of tables together by specifying the tags filter condition, then apply the aggregation operations. The result set can be grouped and ordered based on tag value. Syntax is:

    -
    SELECT function<field_name>,… 
    +
    SELECT function<field_name>,… 
      FROM <stable_name> 
      WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
      INTERVAL (<time range>)
    @@ -44,54 +43,53 @@ tags (location binary(20), type int)
    OFFSET <record_offset>

For the time being, STable supports only the following aggregation/selection functions: sum, count, avg, first, last, min, max, top, bottom, and the projection operations, with the same syntax as a standard table. Arithmetic operations are not supported, and neither are embedded queries.

    INTERVAL is used for the aggregation over a time range.

If GROUP BY is not used, the aggregation is applied to all the selected tables, and the result set is output in ascending order of the timestamp, but you can use "ORDER BY _c0 ASC|DESC" to specify the order you like.

If GROUP BY <tag_name> is used, the aggregation is applied to groups based on tags. Each group is aggregated independently. The result set is a group of aggregation results. The group order is decided by ORDER BY <tag_name>. Inside each group, the result set is in the ascending order of the timestamp.

    SLIMIT/SOFFSET are used to limit the number of groups and starting group number.

    LIMIT/OFFSET are used to limit the number of records in a group and the starting rows.

    Example 1:

Check the number of records and the average, maximum, and minimum temperatures of Beijing and Tianjin, and group the result set by location and type. The SQL statement shall be:

SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
 FROM thermometer
 WHERE location='beijing' or location='tianjing'
 GROUP BY location, type 

    Example 2:

    List the number of records, average, maximum, and minimum temperature every 10 minutes for the past 24 hours for all the thermometers located in Beijing with type 10. The SQL statement shall be:

SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
 FROM thermometer
 WHERE location='beijing' and type=10 and ts>=now-1d
 INTERVAL(10M)

    Create Table Automatically

Insert operation will fail if the table is not created yet. But for STable, TDengine can create the table automatically if the application provides the STable name, table name and tags' values when inserting data points. The syntax is:

INSERT INTO <tb_name> USING <stb_name> TAGS (<tag1_value>, ...) VALUES (field_value, ...) (field_value, ...) ... <tb_name2> USING <stb_name2> TAGS(<tag1_value2>, ...) VALUES (<field1_value1>, ...) ...;

When inserting data points into table tb_name, the system will check if table tb_name is created or not. If it is already created, the data points will be inserted as usual. But if the table is not created yet, the system will create the table tb_name using STable stb_name as the template with the tags. Multiple tables can be specified in the SQL statement.

    Management of STables

Once you have created a STable, you can describe, delete, and change STables. This section lists all the supported operations.

    Show STables in current DB

show stables;

It lists all STables in the current DB, including the name, created time, number of fields, number of tags, and number of tables created via this STable.

    Describe a STable

DESCRIBE <stable_name>

It lists the STable's schema and tags

    Drop a STable

DROP TABLE <stable_name>

To delete a STable, all the tables created via this STable shall be deleted first; otherwise, it will fail.

    List the Associated Tables of a STable

SELECT TBNAME,[TAG_NAME,…] FROM <stable_name> WHERE <tag_name> <[=|=<|>=|<>] values..> ([AND|OR] …)

It will list all the tables which satisfy the tag filter conditions. The tables are all created from this specific STable. TBNAME is a newly introduced keyword; it is the table name associated with the STable.

SELECT COUNT(TBNAME) FROM <stable_name> WHERE <tag_name> <[=|=<|>=|<>] values..> ([AND|OR] …)

The above SQL statement will list the number of tables in a STable that satisfy the filter condition.

    Management of Tags

    You can add, delete and change the tags for a STable, and you can change the tag value of a table. The SQL commands are listed below.

    Add a Tag

ALTER TABLE <stable_name> ADD TAG <new_tag_name> <TYPE>

    It adds a new tag to the STable with a data type. The maximum number of tags is 6.

    Drop a Tag

ALTER TABLE <stable_name> DROP TAG <tag_name>

It drops a tag from a STable. The first tag cannot be deleted, and there must be at least one tag left.

Change a Tag's Name

ALTER TABLE <stable_name> CHANGE TAG <old_tag_name> <new_tag_name>

    It changes the name of a tag from old to new.

Change the Tag's Value

ALTER TABLE <table_name> SET TAG <tag_name>=<new_tag_value>

It changes a table's tag value to a new one.

diff --git a/documentation/tdenginedocs-en/taos-sql/index.html index e75e90703b2427cb2dd52601fc673670a5e770c4..6b73a70cd63d7abfac6706acda0f7b55630b6f0e 100644 --- a/documentation/tdenginedocs-en/taos-sql/index.html +++ b/documentation/tdenginedocs-en/taos-sql/index.html @@ -2,364 +2,367 @@

TDengine provides a SQL-like query language to insert or query data. You can execute the SQL statements through the TDengine shell, or through the C/C++, Java (JDBC), Python, RESTful, or Go APIs to interact with the taosd service.

    Before reading through, please have a look at the conventions used for syntax descriptions here in this documentation.

• Squared brackets ("[]") indicate optional arguments or clauses
• Curly braces ("{}") indicate that one member from a set of choices in the braces must be chosen
• A single vertical line ("|") works as a separator for multiple optional args or clauses
• Dots ("…") mean the preceding item may be repeated

    Data Types

    Timestamp

The timestamp is the most important data type in TDengine. The first column of each table must be of TIMESTAMP type, but other columns can also be of TIMESTAMP type. The following rules apply to timestamps:

• String format: 'YYYY-MM-DD HH:mm:ss.MS', which represents the year, month, day, hour, minute, second and milliseconds. For example, '2017-08-12 18:52:58.128' is a valid timestamp string. Note: a timestamp string must be quoted by either single quotes or double quotes.
• Epoch time: a timestamp value can also be a long integer representing milliseconds since the epoch. For example, the value in the above example can be represented as the epoch 1502535178128 in milliseconds. Please note that epoch time doesn't need any quotes.
• Internal function NOW: the current time of the server.
• If the timestamp is 0 when inserting a record, the timestamp will be set to the current time of the server.
• Arithmetic operations can be applied to timestamps. For example, now-2h represents a timestamp 2 hours before the current server time. Units include a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years). NOW can be used in either insertions or queries.

The default time precision is millisecond; you can change it to microsecond by setting the parameter enableMicrosecond in the system configuration. For epoch time, the long integer shall then be microseconds since the epoch, and for the above string format, MS shall be six digits.
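
A short sketch against a hypothetical table tb1 (ts TIMESTAMP, degree FLOAT): the first insert uses the quoted string format, the second uses unquoted epoch milliseconds, the third uses NOW, and the query uses timestamp arithmetic to look back two hours:

    INSERT INTO tb1 VALUES ('2017-08-12 18:52:58.128', 23.5)
    INSERT INTO tb1 VALUES (1502535179000, 23.9)
    INSERT INTO tb1 VALUES (NOW, 24.2)
    SELECT * FROM tb1 WHERE ts > NOW - 2h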

    Data Types

The full list of data types is listed below. For string data types, we use M to indicate the maximum length of that type.

    | #  | Data Type | Bytes | Note                                                                                                                    |
    |----|-----------|-------|-------------------------------------------------------------------------------------------------------------------------|
    | 1  | TINYINT   | 1     | A nullable integer type with a range of [-127, 127]                                                                     |
    | 2  | SMALLINT  | 2     | A nullable integer type with a range of [-32767, 32767]                                                                 |
    | 3  | INT       | 4     | A nullable integer type with a range of [-2^31+1, 2^31-1]                                                               |
    | 4  | BIGINT    | 8     | A nullable integer type with a range of [-2^59, 2^59]                                                                   |
    | 5  | FLOAT     | 4     | A standard nullable float type with 6-7 significant digits and a range of [-3.4E38, 3.4E38]                             |
    | 6  | DOUBLE    | 8     | A standard nullable double type with 15-16 significant digits and a range of [-1.7E308, 1.7E308]                        |
    | 7  | BOOL      | 1     | A nullable boolean type, [true, false]                                                                                  |
    | 8  | TIMESTAMP | 8     | A nullable timestamp type with the same usage as the primary-column timestamp                                           |
    | 9  | BINARY(M) | M     | A nullable string type of maximum length M; any exceeding chars are automatically truncated. Supports only ASCII chars. |
    | 10 | NCHAR(M)  | 4 * M | A nullable string type of maximum length M; any exceeding chars are truncated. Supports Unicode-encoded chars.          |

All the keywords in a SQL statement are case-insensitive, but string values are case-sensitive and must be quoted by a pair of ' or ". To quote a ' or a ", you can use the escape character \.
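
As a quick sketch, the definition below (table and column names are illustrative) combines several of these types in one table:

    CREATE TABLE sensor_data (ts TIMESTAMP, current FLOAT, voltage INT, ok BOOL, remark BINARY(30), note NCHAR(30))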

    Database Management

• Create a Database

      CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep]

  Option: KEEP is used for the data retention policy; data records are removed once they are older than keep days. There are more parameters related to DB storage; please check the system configuration.

• Use a Database

      USE db_name

  Use or switch the current database.

• Drop a Database

      DROP DATABASE [IF EXISTS] db_name

  Remove a database; all the tables inside the DB will be removed too, so be careful.

• List all Databases

      SHOW DATABASES
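
Putting these together, a minimal sketch (the database name and retention period are illustrative):

    CREATE DATABASE IF NOT EXISTS power KEEP 365
    USE power
    SHOW DATABASES
    DROP DATABASE IF EXISTS power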

    Table Management

• Create a Table

      CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...])

  Note: 1) the first column must be of TIMESTAMP type, and the system will set it as the primary key; 2) the record size is limited to 4096 bytes; 3) for the binary or nchar data types, the length must be specified; for example, binary(20) means 20 bytes.

• Drop a Table

      DROP TABLE [IF EXISTS] tb_name

• List all Tables

      SHOW TABLES [LIKE tb_name_wildcard]

  It shows all tables in the current DB. Note: wildcard characters can be used in the table name to filter tables. Wildcard characters: 1) '%' means 0 to any number of characters; 2) '_' (underscore) means exactly one character.

• Print Table Schema

      DESCRIBE tb_name

• Add a Column

      ALTER TABLE tb_name ADD COLUMN field_name data_type

• Drop a Column

      ALTER TABLE tb_name DROP COLUMN field_name

  If the table was created via a Super Table, the schema can only be changed via the STable. But for tables not created from a STable, you can change their schema directly.

Tips: You can apply an operation to a table not in the current DB by concatenating the DB name with the table name using the character '.'. For example, 'demo.tb1' means the operation is applied to table tb1 in DB demo, although demo is not the currently selected DB.
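
An end-to-end sketch of these statements, reusing the illustrative DB demo and table tb1 from the tip above:

    CREATE TABLE IF NOT EXISTS demo.tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50))
    ALTER TABLE demo.tb1 ADD COLUMN col4 BOOL
    ALTER TABLE demo.tb1 DROP COLUMN col4
    DESCRIBE demo.tb1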

    Inserting Records

• Insert a Record

      INSERT INTO tb_name VALUES (field_value, ...);

  Insert a data record into table tb_name.

• Insert a Record with Selected Columns

      INSERT INTO tb_name (field1_name, ...) VALUES (field1_value, ...)

  Insert a data record into table tb_name, with data in the selected columns. If a column is not selected, the system will put NULL there. The first column (timestamp) cannot be NULL; it must be provided.

• Insert a Batch of Records

      INSERT INTO tb_name VALUES (field1_value1, ...) (field1_value2, ...)...;

  Insert multiple data records into the table.

• Insert a Batch of Records with Selected Columns

      INSERT INTO tb_name (field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...)

• Insert Records into Multiple Tables

      INSERT INTO tb1_name VALUES (field1_value1, ...) (field1_value2, ...)...
                  tb2_name VALUES (field1_value1, ...) (field1_value2, ...)...;

  Insert data records into tables tb1_name and tb2_name.

• Insert Records into Multiple Tables with Selected Columns

      INSERT INTO tb1_name (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...)
                  tb2_name (tb2_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...)

Note: For a table, a new record must have a timestamp larger than that of the last stored record; otherwise it will be discarded. If the timestamp is 0, it will be set to the system time on the server.

IMPORT: If you do want to insert a historical data record into a table, use the IMPORT command instead of INSERT. IMPORT has the same syntax as INSERT. If you want to import a batch of historical records, the records must be ordered by timestamp; otherwise TDengine won't handle them correctly.
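
A sketch against the illustrative table tb1 above: the first statement inserts selected columns, the second appends a full record at the current server time, and the third uses IMPORT for a record older than the latest one:

    INSERT INTO tb1 (ts, col1) VALUES ('2018-06-01 08:00:00.000', 30)
    INSERT INTO tb1 VALUES (NOW, 32, 1.25, 'sunny')
    IMPORT INTO tb1 VALUES ('2018-01-01 00:00:00.000', 28, 1.10, 'rainy')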

    Data Query

    Query Syntax:

    SELECT {* | expr_list} FROM tb_name
        [WHERE where_condition]
        [ORDER BY _c0 { DESC | ASC }]
        [LIMIT limit [, OFFSET offset]]
        [>> export_file]

    SELECT function_list FROM tb_name
        [WHERE where_condition]
        [LIMIT limit [, OFFSET offset]]
        [>> export_file]

• To query a table, use * to select all data from the table, or a specified list of expressions expr_list over its columns. A SQL expression can contain aliases and arithmetic operations between numeric-typed columns.
• For the WHERE conditions, use logical operations to filter the timestamp column and all numeric columns, and wildcards to filter the two string-typed columns.
• Sort the result set by the first column timestamp _c0 (or directly use the timestamp column name) in either descending or ascending (default) order. ORDER BY cannot be applied to other columns.
• Use LIMIT and OFFSET to control the number of rows returned and the starting position of the retrieved rows. LIMIT/OFFSET is applied after the ORDER BY operation.
• Export the retrieved result set into a CSV file using >>. The target file's full path must be explicitly specified in the statement.

    Supported Operations of Data Filtering:

    | Operation | Note                     | Applicable Data Types           |
    |-----------|--------------------------|---------------------------------|
    | >         | larger than              | timestamp and all numeric types |
    | <         | smaller than             | timestamp and all numeric types |
    | >=        | larger than or equal to  | timestamp and all numeric types |
    | <=        | smaller than or equal to | timestamp and all numeric types |
    | =         | equal to                 | all types                       |
    | <>        | not equal to             | all types                       |
    | %         | match any char sequence  | binary, nchar                   |
    | _         | match a single char      | binary, nchar                   |

1. For two or more conditions, only AND is supported; OR is not supported yet.
2. For filtering, only a single range is supported. For example, value>20 AND value<30 is a valid condition, but value<20 AND value<>5 is an invalid one.

Some Examples

• For the examples below, table tb1 is created via the following statement:

      CREATE TABLE tb1 (ts timestamp, col1 int, col2 float, col3 binary(50))

• Query all the records in tb1 from the last hour:

      SELECT * FROM tb1 WHERE ts >= NOW - 1h

• Query all the records in tb1 between 2018-06-01 08:00:00.000 and 2018-06-02 08:00:00.000, keep only the records whose col3 value ends with 'nny', and sort the records by timestamp in descending order:

      SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC

• Query the sum of col1 and col2 under the alias complex_metric, and filter on the timestamp and col2 values. Limit the number of returned rows to 10, offset by 5:

      SELECT (col1 + col2) AS 'complex_metric' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5

• Query the number of records in tb1 in the last 10 minutes whose col2 value is larger than 3.14, and export the result to the file /home/testoutpu.csv:

      SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutpu.csv

    SQL Functions

    Aggregation Functions

TDengine supports aggregations over numerical values; they are listed below:

• COUNT

      SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]

  Function: return the number of rows.
  Return Data Type: integer.
  Applicable Data Types: all.
  Applied to: table/STable.
  Note: 1) * can be used for all columns: as long as a column has a non-NULL value, the row is counted; 2) if applied to a specific column, only rows with a non-NULL value in that column are counted.

• AVG

      SELECT AVG(field_name) FROM tb_name [WHERE clause]

  Function: return the average value of a specific column.
  Return Data Type: double.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.

• WAVG

      SELECT WAVG(field_name) FROM tb_name WHERE clause

  Function: return the time-weighted average value of a specific column.
  Return Data Type: double.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.

• SUM

      SELECT SUM(field_name) FROM tb_name [WHERE clause]

  Function: return the sum of a specific column.
  Return Data Type: long integer or double.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.

• STDDEV

      SELECT STDDEV(field_name) FROM tb_name [WHERE clause]

  Function: return the standard deviation of a specific column.
  Return Data Type: double.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table.

• LEASTSQUARES

      SELECT LEASTSQUARES(field_name) FROM tb_name [WHERE clause]

  Function: perform a linear fit of the specified column against the primary timestamp.
  Return Data Type: a string with the coefficient and the intercept of the fitted line.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table.
  Note: the timestamp is taken as the independent variable while the specified column value is taken as the dependent variable.
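
A few aggregation queries over the example table tb1 defined earlier (a sketch):

    SELECT COUNT(*), AVG(col1), SUM(col2) FROM tb1 WHERE ts > NOW - 1d
    SELECT STDDEV(col1) FROM tb1
    SELECT LEASTSQUARES(col2) FROM tb1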

    Selector Functions

• MIN

      SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]

  Function: return the minimum value of a specific column.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.

• MAX

      SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the maximum value of a specific column.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.

• FIRST

      SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the first non-NULL value.
  Return Data Type: the same data type.
  Applicable Data Types: all types.
  Applied to: table/STable.
  Note: to return all columns, use first(*).

• LAST

      SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the last non-NULL value.
  Return Data Type: the same data type.
  Applicable Data Types: all types.
  Applied to: table/STable.
  Note: to return all columns, use last(*).

• TOP

      SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the K largest values.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.
  Note: 1) valid range of K: 1 ≤ K ≤ 100; 2) the associated timestamps will be returned too.

• BOTTOM

      SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the K smallest values.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.
  Note: 1) valid range of K: 1 ≤ K ≤ 100; 2) the associated timestamps will be returned too.

• PERCENTILE

      SELECT PERCENTILE(field_name, P) FROM { tb_name | stb_name } [WHERE clause]

  Function: the value of the specified column below which P percent of the data points fall.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.
  Note: the range of P is [0, 100]. When P=0, PERCENTILE returns the same value as MIN; when P=100, it returns the same value as MAX.

• LAST_ROW

      SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }

  Function: return the last row.
  Return Data Type: the same data type.
  Applicable Data Types: all types.
  Applied to: table/STable.
  Note: unlike LAST, LAST_ROW returns the last row even if it contains NULL values.
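
For instance, on the same example table tb1 (a sketch):

    SELECT FIRST(*) FROM tb1
    SELECT TOP(col1, 3) FROM tb1 WHERE ts > NOW - 1d
    SELECT PERCENTILE(col2, 99) FROM tb1
    SELECT LAST_ROW(col1) FROM tb1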

    Transformation Functions

• DIFF

      SELECT DIFF(field_name) FROM tb_name [WHERE clause]

  Function: return the difference between successive values of the specified column.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table.

• SPREAD

      SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]

  Function: return the difference between the maximum and the minimum value.
  Return Data Type: the same data type.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.
  Note: SPREAD gives the range of data variation in a table/STable; it is equivalent to MAX() - MIN().

• Arithmetic Operations

      SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name }  [WHERE clause]

  Function: arithmetic operations on the selected columns.
  Return Data Type: double.
  Applicable Data Types: all types except timestamp, binary, nchar, bool.
  Applied to: table/STable.
  Note: 1) brackets can be used to control operation precedence; 2) if a column has a NULL value, the result is NULL.
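
And a transformation sketch on tb1:

    SELECT DIFF(col1) FROM tb1
    SELECT SPREAD(col2) FROM tb1 WHERE ts > NOW - 1d
    SELECT (col1 + col2) / 2 FROM tb1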

    Downsampling

Time-series data are usually sampled by sensors at a very high frequency, but more often we are only interested in the downsampled, aggregated data of each timeline. TDengine provides a convenient way to downsample the high-frequency data points and to fill missing data with a variety of interpolation choices.

    SELECT function_list FROM tb_name
       [WHERE where_condition]
       INTERVAL (interval)
       [FILL ({NONE | VALUE | PREV | NULL | LINEAR})]

    SELECT function_list FROM stb_name
       [WHERE where_condition]
       [GROUP BY tags]
       INTERVAL (interval)
       [FILL ({ VALUE | PREV | NULL | LINEAR})]

The downsampling time window is defined by interval, which is at least 10 milliseconds. The query returns a new series of downsampled data with fixed timestamps incremented by interval.

For the time being, only the functions count, avg, sum, stddev, leastsquares, percentile, min, max, first and last are supported. Functions that may return multiple rows are not supported.

You can also use FILL to interpolate the intervals that don't contain any data. FILL currently supports four different interpolation strategies, listed below:

    | Interpolation                   | Usage                                                                                |
    |---------------------------------|--------------------------------------------------------------------------------------|
    | FILL(VALUE, val1 [, val2, ...]) | Interpolate with the specified constants                                             |
    | FILL(PREV)                      | Interpolate with the value at the previous timestamp                                 |
    | FILL(LINEAR)                    | Linear interpolation between the non-null values at the previous and next timestamps |
    | FILL(NULL)                      | Interpolate with NULL                                                                |

A few downsampling examples:

• Find the number of data points, the maximum value of col1 and the minimum value of col2 in tb1 for every 10 minutes of the last 5 hours:

      SELECT COUNT(*), MAX(col1), MIN(col2) FROM tb1 WHERE ts > NOW - 5h INTERVAL (10m)

• Fill the above downsampling results using constant-value interpolation:

      SELECT COUNT(*), MAX(col1), MIN(col2) FROM tb1 WHERE ts > NOW - 5h INTERVAL(10m) FILL(VALUE, 0, 1, -1)

  Note that the number of constant values in FILL() should be equal to or fewer than the number of functions in the SELECT clause; extra fill constants will be ignored.

• Fill the above downsampling results using PREV interpolation:

      SELECT COUNT(*), MAX(col1), MIN(col2) FROM tb1 WHERE ts > NOW - 5h INTERVAL(10m) FILL(PREV)

  This will interpolate missing data points with the value at the previous timestamp.

• Fill the above downsampling results using NULL interpolation:

      SELECT COUNT(*), MAX(col1), MIN(col2) FROM tb1 WHERE ts > NOW - 5h INTERVAL(10m) FILL(NULL)

  Fill the interpolated data points with NULL.
Notes:

1. FILL can generate a huge number of interpolated data points if the interval is small and the queried time range is large, so always remember to specify a time range when using interpolation. For each query with interpolation, the result set cannot exceed 10,000,000 records.
2. The result set will always be sorted by time in ascending order.
3. If the query object is a supertable, the functions will be applied to all the tables that qualify the WHERE conditions. If the GROUP BY clause is also applied, the result set will be sorted by time in ascending order within each group; otherwise, the result set will be sorted by time in ascending order as a whole.

    -Back \ No newline at end of file +Back \ No newline at end of file diff --git a/src/connector/nodejs/nodetaos/cinterface.js b/src/connector/nodejs/nodetaos/cinterface.js index 51ad882fec8c2e56c1f04af6d555c0a489ac59c3..1462ff2b05d19f29344c523f54fee2bb01b69481 100644 --- a/src/connector/nodejs/nodetaos/cinterface.js +++ b/src/connector/nodejs/nodetaos/cinterface.js @@ -25,7 +25,7 @@ function convertTimestamp(data, num_of_rows, nbytes = 0, offset = 0, micro=false if (micro == true) { timestampConverter = convertMicrosecondsToDatetime; } - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -46,7 +46,7 @@ function convertTimestamp(data, num_of_rows, nbytes = 0, offset = 0, micro=false return res; } function convertBool(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = new Array(data.length); for (let i = 0; i < data.length; i++) { if (data[i] == 0) { @@ -59,7 +59,7 @@ function convertBool(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { return res; } function convertTinyint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -69,7 +69,7 @@ function convertTinyint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) return res; } function convertSmallint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -79,7 +79,7 @@ function convertSmallint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) return res; } function convertInt(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -98,7 +98,7 @@ function readBigInt64LE(buffer, offset = 0) { return ((BigInt(val) << 32n) + BigInt(first + buffer[++offset] * 2 ** 8 + buffer[++offset] * 2 ** 16 + buffer[++offset] * 2 ** 24)); } function convertBigint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -108,7 +108,7 @@ function convertBigint(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { return res; } function convertFloat(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -118,7 +118,7 @@ function convertFloat(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { return res; } function convertDouble(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = 
ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -128,7 +128,7 @@ function convertDouble(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { return res; } function convertBinary(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; while (currOffset < data.length) { @@ -139,7 +139,7 @@ function convertBinary(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { return res; } function convertNchar(data, num_of_rows, nbytes = 0, offset = 0, micro=false) { - data = ref.reinterpret(data.deref().deref(), nbytes * num_of_rows, offset); + data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset); let res = []; let currOffset = 0; //every 4; @@ -185,7 +185,9 @@ TaosField.defineProperty('type', ref.types.char); function CTaosInterface (config = null, pass = false) { ref.types.char_ptr = ref.refType(ref.types.char); ref.types.void_ptr = ref.refType(ref.types.void); + ref.types.void_ptr2 = ref.refType(ref.types.void_ptr); /*Declare a bunch of functions first*/ + /* Note, pointers to TAOS_RES, TAOS, are ref.types.void_ptr. The connection._conn buffer is supplied for pointers to TAOS */ this.libtaos = ffi.Library('libtaos', { 'taos_options': [ ref.types.int, [ ref.types.int , ref.types.void_ptr ] ], 'taos_init': [ ref.types.void, [ ] ], @@ -201,6 +203,11 @@ function CTaosInterface (config = null, pass = false) { 'taos_affected_rows': [ ref.types.int, [ ref.types.void_ptr] ], //int taos_fetch_block(TAOS_RES *res, TAOS_ROW *rows) 'taos_fetch_block': [ ref.types.int, [ ref.types.void_ptr, ref.types.void_ptr] ], + //int taos_num_fields(TAOS_RES *res); + 'taos_num_fields': [ ref.types.int, [ ref.types.void_ptr] ], + //TAOS_ROW taos_fetch_row(TAOS_RES *res) + //TAOS_ROW is void **, but we set the return type as a reference instead to get the row + 'taos_fetch_row': [ ref.refType(ref.types.void_ptr2), [ ref.types.void_ptr ] ], //int taos_result_precision(TAOS_RES *res) 'taos_result_precision': [ ref.types.int, [ ref.types.void_ptr ] ], //void taos_free_result(TAOS_RES *res) @@ -212,7 +219,13 @@ function CTaosInterface (config = null, pass = false) { //int taos_errno(TAOS *taos) 'taos_errno': [ ref.types.int, [ ref.types.void_ptr] ], //char *taos_errstr(TAOS *taos) - 'taos_errstr': [ ref.types.char, [ ref.types.void_ptr] ] + 'taos_errstr': [ ref.types.char, [ ref.types.void_ptr] ], + + // ASYNC + // void taos_query_a(TAOS *taos, char *sqlstr, void (*fp)(void *, TAOS_RES *, int), void *param) + 'taos_query_a': [ ref.types.void, [ ref.types.void_ptr, ref.types.char_ptr, ref.types.void_ptr, ref.types.void_ptr ] ], + // void taos_fetch_rows_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, int numOfRows), void *param); + 'taos_fetch_rows_a': [ ref.types.void, [ ref.types.void_ptr, ref.types.void_ptr, ref.types.void_ptr ]] }); if (pass == false) { if (config == null) { @@ -293,20 +306,20 @@ CTaosInterface.prototype.useResult = function useResult(connection) { let fields = []; let pfields = this.fetchFields(result); if (ref.isNull(pfields) == false) { - let fullpfields = ref.reinterpret(pfields, this.fieldsCount(connection) * 68, 0); - for (let i = 0; i < fullpfields.length; i += 68) { + pfields = ref.reinterpret(pfields, this.fieldsCount(connection) * 
68, 0); + for (let i = 0; i < pfields.length; i += 68) { //0 - 63 = name //64 - 65 = bytes, 66 - 67 = type fields.push( { - name: ref.readCString(ref.reinterpret(fullpfields,64,i)), - bytes: fullpfields[i + 64], - type: fullpfields[i + 66] + name: ref.readCString(ref.reinterpret(pfields,64,i)), + bytes: pfields[i + 64], + type: pfields[i + 66] }) } } return {result:result, fields:fields} } CTaosInterface.prototype.fetchBlock = function fetchBlock(result, fields) { - let pblock = ref.ref(ref.ref(ref.NULL)); + let pblock = ref.ref(ref.ref(ref.NULL)); // equal to our raw data let num_of_rows = this.libtaos.taos_fetch_block(result, pblock) if (num_of_rows == 0) { return {block:null, num_of_rows:0}; @@ -316,21 +329,30 @@ CTaosInterface.prototype.fetchBlock = function fetchBlock(result, fields) { blocks.fill(null); num_of_rows = Math.abs(num_of_rows); let offset = 0; + pblock = pblock.deref() for (let i = 0; i < fields.length; i++) { if (!convertFunctions[fields[i]['type']] ) { throw new errors.DatabaseError("Invalid data type returned from database"); } - let data = ref.reinterpret(pblock.deref().deref(), fields[i]['bytes'], offset); blocks[i] = convertFunctions[fields[i]['type']](pblock, num_of_rows, fields[i]['bytes'], offset, isMicro); offset += fields[i]['bytes'] * num_of_rows; } return {blocks: blocks, num_of_rows:Math.abs(num_of_rows)} } +CTaosInterface.prototype.fetchRow = function fetchRow(result, fields) { + let row = this.libtaos.taos_fetch_row(result); + return row; +} CTaosInterface.prototype.freeResult = function freeResult(result) { this.libtaos.taos_free_result(result); result = null; } +/** Number of fields returned in this result handle, must use with async */ +CTaosInterface.prototype.numFields = function numFields(result) { + return this.libtaos.taos_num_fields(result); +} +/** @deprecated */ CTaosInterface.prototype.fieldsCount = function fieldsCount(connection) { return this.libtaos.taos_field_count(connection); } @@ -341,5 +363,63 @@ CTaosInterface.prototype.errno = function errno(connection) { return this.libtaos.taos_errno(connection); } CTaosInterface.prototype.errStr = function errStr(connection) { - return (this.libtaos.taos_errstr(connection)); + return this.libtaos.taos_errstr(connection); +} +// Async +CTaosInterface.prototype.query_a = function query_a(connection, sql, callback, param = ref.ref(ref.NULL)) { + // void taos_query_a(TAOS *taos, char *sqlstr, void (*fp)(void *param, TAOS_RES *, int), void *param) + callback = ffi.Callback(ref.types.void, [ ref.types.void_ptr, ref.types.void_ptr, ref.types.int ], callback); + this.libtaos.taos_query_a(connection, ref.allocCString(sql), callback, param); + return param; +} +/** Asynchrnously fetches the next block of rows. Wraps callback and transfers a 4th argument to the cursor, the row data as blocks in javascript form + * Note: This isn't a recursive function, in order to fetch all data either use the TDengine cursor object, TaosQuery object, or implement a recrusive + * function yourself using the libtaos.taos_fetch_rows_a function + */ +CTaosInterface.prototype.fetch_rows_a = function fetch_rows_a(result, callback, param = ref.ref(ref.NULL)) { + // void taos_fetch_rows_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, int numOfRows), void *param); + var cti = this; + // wrap callback with a function so interface can access the numOfRows value, needed in order to properly process the binary data + let asyncCallbackWrapper = function (param2, result2, numOfRows2) { + // Data preparation to pass to cursor. 
Could be bottleneck in query execution callback times. + let row = cti.libtaos.taos_fetch_row(result2); + let fields = cti.fetchFields_a(result2); + let isMicro = (cti.libtaos.taos_result_precision(result) == FieldTypes.C_TIMESTAMP_MICRO); + let blocks = new Array(fields.length); + blocks.fill(null); + numOfRows2 = Math.abs(numOfRows2); + let offset = 0; + if (numOfRows2 > 0){ + for (let i = 0; i < fields.length; i++) { + if (!convertFunctions[fields[i]['type']] ) { + throw new errors.DatabaseError("Invalid data type returned from database"); + } + blocks[i] = convertFunctions[fields[i]['type']](row, numOfRows2, fields[i]['bytes'], offset, isMicro); + offset += fields[i]['bytes'] * numOfRows2; + } + } + callback(param2, result2, numOfRows2, blocks); + } + asyncCallbackWrapper = ffi.Callback(ref.types.void, [ ref.types.void_ptr, ref.types.void_ptr, ref.types.int], asyncCallbackWrapper); + this.libtaos.taos_fetch_rows_a(result, asyncCallbackWrapper, param); + return param; +} +// Fetch field meta data by result handle +CTaosInterface.prototype.fetchFields_a = function fetchFields_a (result) { + // + let pfields = this.fetchFields(result); + let pfieldscount = this.numFields(result); + let fields = []; + if (ref.isNull(pfields) == false) { + pfields = ref.reinterpret(pfields, 68 * pfieldscount , 0); + for (let i = 0; i < pfields.length; i += 68) { + //0 - 63 = name //64 - 65 = bytes, 66 - 67 = type + fields.push( { + name: ref.readCString(ref.reinterpret(pfields,64,i)), + bytes: pfields[i + 64], + type: pfields[i + 66] + }) + } + } + return fields; } diff --git a/src/connector/nodejs/nodetaos/cursor.js b/src/connector/nodejs/nodetaos/cursor.js index 8c25e00bd5b5d37eb615efdbf0ff1ad7539dadf9..d99996f44ee47483afc60a875bb7a76227ee4e15 100644 --- a/src/connector/nodejs/nodetaos/cursor.js +++ b/src/connector/nodejs/nodetaos/cursor.js @@ -1,3 +1,4 @@ +const ref = require('ref'); require('./globalfunc.js') const CTaosInterface = require('./cinterface') const errors = require ('./error') @@ -22,6 +23,7 @@ module.exports = TDengineCursor; * @since 1.0.0 */ function TDengineCursor(connection=null) { + //All parameters are store for sync queries only. 
this._description = null; this._rowcount = -1; this._connection = null; @@ -94,6 +96,7 @@ TDengineCursor.prototype.query = function query(operation, execute = false) { */ TDengineCursor.prototype.execute = function execute(operation, options, callback) { if (operation == undefined) { + throw new errors.ProgrammingError('No operation passed as argument'); return null; } @@ -115,9 +118,8 @@ TDengineCursor.prototype.execute = function execute(operation, options, callback }); obs.observe({ entryTypes: ['measure'] }); performance.mark('A'); - performance.mark('B'); res = this._chandle.query(this._connection._conn, stmt); - + performance.mark('B'); performance.measure('query', 'A', 'B'); if (res == 0) { @@ -180,7 +182,6 @@ TDengineCursor.prototype.fetchall = function fetchall(options, callback) { let data = []; this._rowcount = 0; - let k = 0; //let nodetime = 0; let time = 0; const obs = new PerformanceObserver((items) => { @@ -195,13 +196,12 @@ TDengineCursor.prototype.fetchall = function fetchall(options, callback) { obs2.observe({ entryTypes: ['measure'] }); performance.mark('nodea'); */ + obs.observe({ entryTypes: ['measure'] }); + performance.mark('A'); while(true) { - k+=1; - obs.observe({ entryTypes: ['measure'] }); - performance.mark('A'); + let blockAndRows = this._chandle.fetchBlock(this._result, this._fields); - performance.mark('B'); - performance.measure('query', 'A', 'B'); + let block = blockAndRows.blocks; let num_of_rows = blockAndRows.num_of_rows; @@ -217,7 +217,10 @@ TDengineCursor.prototype.fetchall = function fetchall(options, callback) { } data[data.length-1] = (rowBlock); } + } + performance.mark('B'); + performance.measure('query', 'A', 'B'); let response = this._createSetResponse(this._rowcount, time) console.log(response); @@ -226,13 +229,154 @@ TDengineCursor.prototype.fetchall = function fetchall(options, callback) { this._reset_result(); this.data = data; this.fields = fields; - //performance.mark('nodeb'); - //performance.measure('querynode', 'nodea', 'nodeb'); - //console.log('nodetime: ' + nodetime/1000); + wrapCB(callback, data); return data; } +/** + * Asynchrnously execute a query to TDengine. NOTE, insertion requests must be done in sync if on the same table. + * @param {string} operation - The query operation to execute in the taos shell + * @param {Object} options - Execution options object. quiet : true turns off logging from queries + * @param {boolean} options.quiet - True if you want to surpress logging such as "Query OK, 1 row(s) ..." 
+ * @param {function} callback - A callback function to execute after the query is made to TDengine + * @return {number | Buffer} Number of affected rows or a Buffer that points to the results of the query + * @since 1.0.0 + */ +TDengineCursor.prototype.execute_a = function execute_a (operation, options, callback, param) { + if (operation == undefined) { + throw new errors.ProgrammingError('No operation passed as argument'); + return null; + } + if (typeof options == 'function') { + //we expect the parameter after callback to be param + param = callback; + callback = options; + } + if (typeof options != 'object') options = {} + if (this._connection == null) { + throw new errors.ProgrammingError('Cursor is not connected'); + } + if (typeof callback != 'function') { + throw new errors.ProgrammingError("No callback function passed to execute_a function"); + } + // Async wrapper for callback; + var cr = this; + + let asyncCallbackWrapper = function (param2, res2, resCode) { + if (typeof callback == 'function') { + callback(param2, res2, resCode); + } + + if (resCode >= 0) { + let fieldCount = cr._chandle.numFields(res2); + if (fieldCount == 0) { + //get affect fields count + cr._chandle.freeResult(res2); //result will no longer be needed + } + else { + return res2; + } + + } + else { + //new errors.ProgrammingError(this._chandle.errStr(this._connection._conn)) + //how to get error by result handle? + throw new errors.ProgrammingError("Error occuring with use of execute_a async function. Status code was returned with failure"); + } + } + this._connection._clearResultSet(); + let stmt = operation; + let time = 0; + + // Use ref module to write to buffer in cursor.js instead of taosquery to maintain a difference in levels. Have taosquery stay high level + // through letting it pass an object as param + var buf = ref.alloc('Object'); + ref.writeObject(buf, 0, param); + const obs = new PerformanceObserver((items) => { + time = items.getEntries()[0].duration; + performance.clearMarks(); + }); + obs.observe({ entryTypes: ['measure'] }); + performance.mark('A'); + this._chandle.query_a(this._connection._conn, stmt, asyncCallbackWrapper, buf); + performance.mark('B'); + performance.measure('query', 'A', 'B'); + return param; + + +} +/** + * Fetches all results from an async query. It is preferable to use cursor.query_a() to create + * async queries and execute them instead of using the cursor object directly. + * @param {Object} options - An options object containing options for this function + * @param {function} callback - callback function that is callbacked on the COMPLETE fetched data (it is calledback only once!). + * Must be of form function (param, result, rowCount, rowData) + * @param {Object} param - A parameter that is also passed to the main callback function. Important! Param must be an object, and the key "data" cannot be used + * @return {{param:Object, result:buffer}} An object with the passed parameters object and the buffer instance that is a pointer to the result handle. 
+ * @since 1.2.0 + * @example + * cursor.execute('select * from db.table'); + * var data = cursor.fetchall(function(results) { + * results.forEach(row => console.log(row)); + * }) + */ +TDengineCursor.prototype.fetchall_a = function fetchall_a(result, options, callback, param = {}) { + if (typeof options == 'function') { + //we expect the parameter after callback to be param + param = callback; + callback = options; + } + if (typeof options != 'object') options = {} + if (this._connection == null) { + throw new errors.ProgrammingError('Cursor is not connected'); + } + if (typeof callback != 'function') { + throw new errors.ProgrammingError('No callback function passed to fetchall_a function') + } + if (param.data) { + throw new errors.ProgrammingError("You aren't allowed to set the key 'data' for the parameters object"); + } + let buf = ref.alloc('Object'); + param.data = []; + var cr = this; + + // This callback wrapper accumulates the data from the fetch_rows_a function from the cinterface. It is accumulated by passing the param2 + // object which holds accumulated data in the data key. + let asyncCallbackWrapper = function asyncCallbackWrapper(param2, result2, numOfRows2, rowData) { + param2 = ref.readObject(param2); //return the object back from the pointer + // Keep fetching until now rows left. + if (numOfRows2 > 0) { + let buf2 = ref.alloc('Object'); + param2.data.push(rowData); + ref.writeObject(buf2, 0, param2); + cr._chandle.fetch_rows_a(result2, asyncCallbackWrapper, buf2); + } + else { + + let finalData = param2.data; + let fields = cr._chandle.fetchFields_a(result2); + let data = []; + for (let i = 0; i < finalData.length; i++) { + let num_of_rows = finalData[i][0].length; //fetched block number i; + let block = finalData[i]; + for (let j = 0; j < num_of_rows; j++) { + data.push([]); + let rowBlock = new Array(fields.length); + for (let k = 0; k < fields.length; k++) { + rowBlock[k] = block[k][j]; + } + data[data.length-1] = rowBlock; + } + } + cr._chandle.freeResult(result2); // free result, avoid seg faults and mem leaks! + callback(param2, result2, numOfRows2, {data:data,fields:fields}); + } + } + ref.writeObject(buf, 0, param); + param = this._chandle.fetch_rows_a(result, asyncCallbackWrapper, buf); //returned param + return {param:param,result:result}; +} TDengineCursor.prototype.nextset = function nextset() { return; } diff --git a/src/connector/nodejs/nodetaos/globalfunc.js b/src/connector/nodejs/nodetaos/globalfunc.js index 6ae96cc168602c2b4b533ca6bd4b4495f7ff11f7..cf7344c868ee94831eba47ff55369a684e34b02f 100644 --- a/src/connector/nodejs/nodetaos/globalfunc.js +++ b/src/connector/nodejs/nodetaos/globalfunc.js @@ -1,5 +1,5 @@ /* Wrap a callback, reduce code amount */ -function wrapCB(callback,input) { +function wrapCB(callback, input) { if (typeof callback === 'function') { callback(input); } diff --git a/src/connector/nodejs/nodetaos/taosquery.js b/src/connector/nodejs/nodetaos/taosquery.js index d089de433e1394864c4b603a27d11fee282a46ac..bf6df473269eb976410022812bc56a623af5628f 100644 --- a/src/connector/nodejs/nodetaos/taosquery.js +++ b/src/connector/nodejs/nodetaos/taosquery.js @@ -10,12 +10,13 @@ module.exports = TaosQuery; * functionality and save time whilst also making it easier to debug and enter less problems with the use of promises. 
 * @param {string} query - Query to construct object from
 * @param {TDengineCursor} cursor - The cursor from which this query will execute from
- * @param {boolean} execute - Whether or not to immedietely execute the query and fetch all results. Default is false.
+ * @param {boolean} execute - Whether or not to immediately execute the query synchronously and fetch all results. Default is false.
+ * @property {string} query - The current query, in string format, that the TaosQuery object represents
 * @return {TaosQuery}
 * @since 1.0.6
 */
function TaosQuery(query = "", cursor = null, execute = false) {
-  this._query = query;
+  this.query = query;
   this._cursor = cursor;
   if (execute == true) {
     return this.execute();
@@ -36,7 +37,7 @@ TaosQuery.prototype.execute = async function execute() {
   let fields = [];
   let result;
   try {
-    taosQuery._cursor.execute(taosQuery._query);
+    taosQuery._cursor.execute(taosQuery.query);
     if (taosQuery._cursor._fields) fields = taosQuery._cursor._fields;
     if (taosQuery._cursor._result != null) data = taosQuery._cursor.fetchall();
     result = new TaosResult(data, fields)
@@ -50,6 +51,42 @@ TaosQuery.prototype.execute = async function execute() {
   return executionPromise;
 }
+/**
+ * Executes the query object asynchronously and returns a Promise that resolves once the query has run to completion and all results have been fetched.
+ * @memberof TaosQuery
+ * @param {Object} options - Execution options
+ * @return {Promise} A promise that resolves with a TaosResult object, or rejects with an error
+ * @since 1.2.0
+ */
+TaosQuery.prototype.execute_a = async function execute_a(options = {}) {
+  var fres;
+  var frej;
+  var fetchPromise = new Promise( (resolve, reject) => {
+    fres = resolve;
+    frej = reject;
+  });
+  let asyncCallbackFetchall = async function(param, res, numOfRows, blocks) {
+    // fetchall_a calls back exactly once, with the complete result set, so numOfRows should be 0 here
+    if (numOfRows > 0) {
+      frej("cursor.fetchall_a didn't fetch all data properly");
+    }
+    else {
+      fres(new TaosResult(blocks.data, blocks.fields));
+    }
+  }
+  let asyncCallback = async function(param, res, code) {
+    // once the query has executed, fetch all of its results
+    this._cursor.fetchall_a(res, options, asyncCallbackFetchall, {});
+  }
+  this._cursor.execute_a(this.query, asyncCallback.bind(this), {});
+  return fetchPromise;
+}
+
 /**
  * Bind arguments to the query and automatically parses them into the right format
  * @param {array | ...args} args - A number of arguments to bind to each ?
in the query @@ -71,7 +108,7 @@ TaosQuery.prototype.bind = function bind(f, ...args) { if (arg.constructor.name == 'TaosTimestamp') arg = "\"" + arg.toTaosString() + "\""; else if (arg.constructor.name == 'Date') arg = "\"" + toTaosTSString(arg) + "\""; else if (typeof arg == 'string') arg = "\"" + arg + "\""; - this._query = this._query.replace(/\?/,arg); + this.query = this.query.replace(/\?/,arg); }, this); return this; } diff --git a/src/connector/nodejs/nodetaos/taosresult.js b/src/connector/nodejs/nodetaos/taosresult.js index 8607e90bae2be4f7ca98a08fce02423a43731bca..bfab94d3591a1d6dbc77b306c2789dab7bc946b8 100644 --- a/src/connector/nodejs/nodetaos/taosresult.js +++ b/src/connector/nodejs/nodetaos/taosresult.js @@ -42,7 +42,7 @@ TaosResult.prototype.pretty = function pretty() { else { sizing.push(Math.max(field.name.length, suggestedMinWidths[field._field.type])); } - fieldsStr +=fillEmpty(Math.floor(sizing[i]/2 - field.name.length / 2)) + field.name + fillEmpty(Math.ceil(sizing[i]/2 - field.name.length / 2)) + " | "; + fieldsStr += fillEmpty(Math.floor(sizing[i]/2 - field.name.length / 2)) + field.name + fillEmpty(Math.ceil(sizing[i]/2 - field.name.length / 2)) + " | "; }); var sumLengths = sizing.reduce((a,b)=> a+=b,(0)) + sizing.length * 3; diff --git a/src/connector/nodejs/package-lock.json b/src/connector/nodejs/package-lock.json index 1afbb5a8c9ff6db33c6183f88c1c551d444e66ff..ea138dc092806f250d4b27b35c6c93e94e53d54c 100644 --- a/src/connector/nodejs/package-lock.json +++ b/src/connector/nodejs/package-lock.json @@ -1,6 +1,6 @@ { "name": "td-connector", - "version": "1.1.1", + "version": "1.2.0", "lockfileVersion": 1, "requires": true, "dependencies": { diff --git a/src/connector/nodejs/package.json b/src/connector/nodejs/package.json index 790a2e0b830326b61bfbc8c99d0709794964ba75..bfe37e2ab57e4c4e7224d27a83d0602118caf928 100644 --- a/src/connector/nodejs/package.json +++ b/src/connector/nodejs/package.json @@ -1,6 +1,6 @@ { "name": "td-connector", - "version": "1.1.1", + "version": "1.2.0", "description": "A Node.js connector for TDengine.", "main": "tdengine.js", "scripts": { diff --git a/src/connector/nodejs/readme.md b/src/connector/nodejs/readme.md index 110b00b4112d91acb9172614b75338e74fe9c433..685dd3cf30aad7d3b16d37a57a2f51608c1cdedf 100644 --- a/src/connector/nodejs/readme.md +++ b/src/connector/nodejs/readme.md @@ -106,7 +106,7 @@ promise.then(function(result) { You can also query by binding parameters to a query by filling in the question marks in a string as so. The query will automatically parse what was binded and convert it to the proper format for use with TDengine ```javascript -var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?').bind(new Date(), 5); +var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?;').bind(new Date(), 5); query.execute().then(function(result) { result.pretty(); }) @@ -114,7 +114,7 @@ query.execute().then(function(result) { The TaosQuery object can also be immediately executed upon creation by passing true as the second argument, returning a promise instead of a TaosQuery. 
```javascript
-var promise = cursor.query('select * from meterinfo.meters where v1 = 30', true)
+var promise = cursor.query('select * from meterinfo.meters where v1 = 30;', true)
 promise.then(function(result) {
   result.pretty();
 })
@@ -122,7 +122,7 @@ promise.then(function(result) {
 If you want to execute queries without objects being wrapped around the data, use ```cursor.execute()``` directly and ```cursor.fetchall()``` to retrieve data if there is any.
 ```javascript
-cursor.execute('select count(*), avg(v1), min(v2) from meterinfo.meters where ts >= \"2019-07-20 00:00:00.000\"');
+cursor.execute('select count(*), avg(v1), min(v2) from meterinfo.meters where ts >= \"2019-07-20 00:00:00.000\";');
 var data = cursor.fetchall();
 console.log(cursor.fields); // Latest query's Field metadata is stored in cursor.fields
 console.log(cursor.data); // Latest query's result data is stored in cursor.data, also returned by fetchall.
@@ -130,7 +130,20 @@ console.log(cursor.data); // Latest query's result data is stored in cursor.data
 
 ### Async functionality
 
-Coming soon
+Async queries can be performed using the same functions, such as `cursor.execute` and `cursor.query`, with `_a` appended to them.
+
+Say you want to execute two async queries on two separate tables. Using `cursor.query_a`, you get a TaosQuery object for each table; executing one with the `execute_a` function returns a promise that resolves with a TaosResult object.
+
+```javascript
+var promise1 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter1;').execute_a();
+var promise2 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter2;').execute_a();
+promise1.then(function(result) {
+  result.pretty();
+})
+promise2.then(function(result) {
+  result.pretty();
+})
+```
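+
+If you'd rather handle both result sets together, the two promises can be combined with standard JavaScript utilities such as `Promise.all`. A minimal sketch, reusing `promise1` and `promise2` from above:
+
+```javascript
+// Promise.all resolves once both async queries have fetched all of their results
+Promise.all([promise1, promise2]).then(function(results) {
+  results.forEach(function(result) {
+    result.pretty(); // print each TaosResult as a formatted table
+  });
+});
+```
 
 ## Example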