diff --git a/docs-cn/12-taos-sql/07-function.md b/docs-cn/12-taos-sql/07-function.md index b924aad042a7dd8d3a81c01030ee587f485e8da4..04d4adb7d452ace2575afbc8696f564742fcd1ab 100644 --- a/docs-cn/12-taos-sql/07-function.md +++ b/docs-cn/12-taos-sql/07-function.md @@ -1,933 +1,682 @@ --- -sidebar_label: SQL 函数 -title: SQL 函数 +sidebar_label: 函数 +title: 函数 +toc_max_heading_level: 4 --- -## 聚合函数 +## 单行函数 -TDengine 支持针对数据的聚合查询。提供支持的聚合和选择函数如下: +单行函数为查询结果中的每一行返回一个结果行。 -### COUNT +### 数学函数 -``` -SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]; -``` +#### ABS -**功能说明**:统计表/超级表中记录行数或某列的非空值个数。 +```sql + SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` -**返回数据类型**:长整型 INT64。 +**功能说明**:获得指定列的绝对值 -**应用字段**:应用全部字段。 +**返回结果类型**:如果输入值为整数,输出值是 UBIGINT 类型。如果输入值是 FLOAT/DOUBLE 数据类型,输出值是 DOUBLE 数据类型。 -**适用于**:表、超级表。 +**适用数据类型**:数值类型。 -**使用说明**: +**嵌套子查询支持**:适用于内层查询和外层查询。 -- 可以使用星号(\*)来替代具体的字段,使用星号(\*)返回全部记录数量。 -- 针对同一表的(不包含 NULL 值)字段查询结果均相同。 -- 如果统计对象是具体的列,则返回该列中非 NULL 值的记录数量。 +**适用于**: 表和超级表 -**示例**: +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -``` -taos> SELECT COUNT(*), COUNT(voltage) FROM meters; - count(*) | count(voltage) | -================================================ - 9 | 9 | -Query OK, 1 row(s) in set (0.004475s) +#### ACOS -taos> SELECT COUNT(*), COUNT(voltage) FROM d1001; - count(*) | count(voltage) | -================================================ - 3 | 3 | -Query OK, 1 row(s) in set (0.001075s) +```sql + SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -### AVG - -``` -SELECT AVG(field_name) FROM tb_name [WHERE clause]; -``` +**功能说明**:获得指定列的反余弦结果 -**功能说明**:统计表/超级表中某列的平均值。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**返回数据类型**:双精度浮点数 Double。 +**适用数据类型**:数值类型。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 字段。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**适用于**:表、超级表。 +**适用于**: 表和超级表 -**示例**: +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -``` -taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM meters; - avg(current) | avg(voltage) | avg(phase) | -==================================================================================== - 11.466666751 | 220.444444444 | 0.293333333 | -Query OK, 1 row(s) in set (0.004135s) +#### ASIN -taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM d1001; - avg(current) | avg(voltage) | avg(phase) | -==================================================================================== - 11.733333588 | 219.333333333 | 0.316666673 | -Query OK, 1 row(s) in set (0.000943s) +```sql + SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -### TWA - -``` -SELECT TWA(field_name) FROM tb_name WHERE clause; -``` +**功能说明**:获得指定列的反正弦结果 -**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**返回数据类型**:双精度浮点数 Double。 +**适用数据类型**:数值类型。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**适用于**:表、超级表。 +**适用于**: 表和超级表 -**使用说明**: +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 从 2.1.3.0 版本开始,TWA 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。 -### IRATE +#### ATAN -``` -SELECT IRATE(field_name) FROM tb_name WHERE clause; +```sql + SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。 +**功能说明**:获得指定列的反正切结果 -**返回数据类型**:双精度浮点数 Double。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:数值类型。 -**适用于**:表、超级表。 
+**嵌套子查询支持**:适用于内层查询和外层查询。 -**使用说明**: +**适用于**: 表和超级表 -- 从 2.1.3.0 版本开始此函数可用,IRATE 可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。 +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -### SUM + +#### CEIL ``` -SELECT SUM(field_name) FROM tb_name [WHERE clause]; +SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:统计表/超级表中某列的和。 +**功能说明**:获得指定列的向上取整数的结果。 -**返回数据类型**:双精度浮点数 Double 和长整型 INT64。 +**返回结果类型**:与指定列的原始数据类型一致。例如,如果指定列的原始数据类型为 Float,那么返回的数据类型也为 Float;如果指定列的原始数据类型为 Double,那么返回的数据类型也为 Double。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:数值类型。 -**适用于**:表、超级表。 +**适用于**: 普通表、超级表。 -**示例**: +**嵌套子查询支持**:适用于内层查询和外层查询。 -``` -taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM meters; - sum(current) | sum(voltage) | sum(phase) | -================================================================================ - 103.200000763 | 1984 | 2.640000001 | -Query OK, 1 row(s) in set (0.001702s) +**使用说明**: -taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM d1001; - sum(current) | sum(voltage) | sum(phase) | -================================================================================ - 35.200000763 | 658 | 0.950000018 | -Query OK, 1 row(s) in set (0.000980s) -``` +- 支持 +、-、\*、/ 运算,如 ceil(col1) + ceil(col2)。 +- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -### STDDEV +#### COS -``` -SELECT STDDEV(field_name) FROM tb_name [WHERE clause]; +```sql + SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:统计表中某列的均方差。 +**功能说明**:获得指定列的余弦结果 -**返回数据类型**:双精度浮点数 Double。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:数值类型。 -**适用于**:表、超级表(从 2.0.15.1 版本开始) +**嵌套子查询支持**:适用于内层查询和外层查询。 + +**适用于**: 表和超级表 -**示例**: +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 + +#### FLOOR ``` -taos> SELECT STDDEV(current) FROM d1001; - stddev(current) | -============================ - 1.020892909 | -Query OK, 1 row(s) in set (0.000915s) +SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -### LEASTSQUARES +**功能说明**:获得指定列的向下取整数的结果。 + 其他使用说明参见 CEIL 函数描述。 -``` -SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause]; +#### LOG + +```sql + SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:统计表中某列的值是主键(时间戳)的拟合直线方程。start_val 是自变量初始值,step_val 是自变量的步长值。 +**功能说明**:获得指定列对于底数 base 的对数 -**返回数据类型**:字符串表达式(斜率, 截距)。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:数值类型。 -**适用于**:表。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**示例**: +**适用于**: 表和超级表 -``` -taos> SELECT LEASTSQUARES(current, 1, 1) FROM d1001; - leastsquares(current, 1, 1) | -===================================================== -{slop:1.000000, intercept:9.733334} | -Query OK, 1 row(s) in set (0.000921s) -``` +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -### MODE -``` -SELECT MODE(field_name) FROM tb_name [WHERE clause]; -``` +#### POW -**功能说明**:返回出现频率最高的值,若存在多个频率相同的最高值,输出空。不能匹配标签、时间戳输出。 +```sql + SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] +``` -**返回数据类型**:同应用的字段。 +**功能说明**:获得指定列的指数为 power 的幂 -**应用字段**:适合于除时间主列外的任何类型字段。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**使用说明**:由于返回数据量未知,考虑到内存因素,为了函数可以正常返回结果,建议不重复的数据量在 10 万级别,否则会报错。 +**适用数据类型**:数值类型。 -**支持的版本**:2.6.0.0 及以后的版本。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**示例**: +**适用于**: 表和超级表 -``` -taos> select voltage from d002; - voltage | 
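+下面给出 LOG 与 POW 的一个最简使用示意,基于本文档示例中使用的电表超级表 meters 及其子表 d1001(假设其中的 voltage、current 为数值列),仅供参考:
+
+```sql
+-- 求电压列以 2 为底的对数
+SELECT LOG(voltage, 2) FROM meters;
+-- 求电流列的平方
+SELECT POW(current, 2) FROM d1001;
+```
+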
-======================== - 1 | - 1 | - 2 | - 19 | -Query OK, 4 row(s) in set (0.003545s) +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -taos> select mode(voltage) from d002; - mode(voltage) | -======================== - 1 | -Query OK, 1 row(s) in set (0.019393s) -``` -### HYPERLOGLOG +#### ROUND ``` -SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause]; +SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**: - - 采用 hyperloglog 算法,返回某列的基数。该算法在数据量很大的情况下,可以明显降低内存的占用,但是求出来的基数是个估算值,标准误差(标准误差是多次实验,每次的平均数的标准差,不是与真实结果的误差)为 0.81%。 - - 在数据量较少的时候该算法不是很准确,可以使用 select count(data) from (select unique(col) as data from table) 的方法。 +**功能说明**:获得指定列的四舍五入的结果。 + 其他使用说明参见 CEIL 函数描述。 -**返回结果类型**:整形。 -**应用字段**:适合于任何类型字段。 +#### SIN -**支持的版本**:2.6.0.0 及以后的版本。 +```sql + SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` -**示例**: +**功能说明**:获得指定列的正弦结果 -``` -taos> select dbig from shll; - dbig | -======================== - 1 | - 1 | - 1 | - NULL | - 2 | - 19 | - NULL | - 9 | -Query OK, 8 row(s) in set (0.003755s) +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -taos> select hyperloglog(dbig) from shll; - hyperloglog(dbig)| -======================== - 4 | -Query OK, 1 row(s) in set (0.008388s) -``` +**适用数据类型**:数值类型。 -### HISTOGRAM +**嵌套子查询支持**:适用于内层查询和外层查询。 -``` -SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_name [WHERE clause]; -``` +**适用于**: 表和超级表 -**功能说明**:统计数据按照用户指定区间的分布。 +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -**返回结果类型**:如归一化参数 normalized 设置为 1,返回结果为双精度浮点类型 DOUBLE,否则为长整形 INT64。 +#### SQRT -**应用字段**:数值型字段。 +```sql + SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` -**支持的版本**:2.6.0.0 及以后的版本。 +**功能说明**:获得指定列的平方根 -**适用于**: 表和超级表。 +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**说明**: -1. bin_type 用户指定的分桶类型, 有效输入类型为"user_input“, ”linear_bin", "log_bin"。 -2. bin_description 描述如何生成分桶区间,针对三种桶类型,分别为以下描述格式(均为 JSON 格式字符串): - - "user_input": "[1, 3, 5, 7]" - 用户指定 bin 的具体数值。 - - - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" - "start" 表示数据起始点,"width" 表示每次 bin 偏移量, "count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点跟终点, - 生成区间为[-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]。 - - - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" - "start" 表示数据起始点,"factor" 表示按指数递增的因子,"count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点跟终点, - 生成区间为[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]。 -3. 
normalized 是否将返回结果归一化到 0~1 之间 。有效输入为 0 和 1。 +**适用数据类型**:数值类型。 -**示例**: +**嵌套子查询支持**:适用于内层查询和外层查询。 -```mysql -taos> SELECT HISTOGRAM(voltage, "user_input", "[1,3,5,7]", 1) FROM meters; - histogram(voltage, "user_input", "[1,3,5,7]", 1) | - ======================================================= - {"lower_bin":1, "upper_bin":3, "count":0.333333} | - {"lower_bin":3, "upper_bin":5, "count":0.333333} | - {"lower_bin":5, "upper_bin":7, "count":0.333333} | - Query OK, 3 row(s) in set (0.004273s) - -taos> SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 1, "width": 3, "count": 3, "infinity": false}', 0) FROM meters; - histogram(voltage, 'linear_bin', '{"start": 1, "width": 3, " | - =================================================================== - {"lower_bin":1, "upper_bin":4, "count":3} | - {"lower_bin":4, "upper_bin":7, "count":3} | - {"lower_bin":7, "upper_bin":10, "count":3} | - Query OK, 3 row(s) in set (0.004887s) - -taos> SELECT HISTOGRAM(voltage, 'log_bin', '{"start": 1, "factor": 3, "count": 3, "infinity": true}', 0) FROM meters; - histogram(voltage, 'log_bin', '{"start": 1, "factor": 3, "count" | - =================================================================== - {"lower_bin":-inf, "upper_bin":1, "count":3} | - {"lower_bin":1, "upper_bin":3, "count":2} | - {"lower_bin":3, "upper_bin":9, "count":6} | - {"lower_bin":9, "upper_bin":27, "count":3} | - {"lower_bin":27, "upper_bin":inf, "count":1} | -``` +**适用于**: 表和超级表 -### ELAPSED +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -```mysql -SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]]; +#### TAN + +```sql + SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:elapsed函数表达了统计周期内连续的时间长度,和twa函数配合使用可以计算统计曲线下的面积。在通过INTERVAL子句指定窗口的情况下,统计在给定时间范围内的每个窗口内有数据覆盖的时间范围;如果没有INTERVAL子句,则返回整个给定时间范围内的有数据覆盖的时间范围。注意,ELAPSED返回的并不是时间范围的绝对值,而是绝对值除以time_unit所得到的单位个数。 +**功能说明**:获得指定列的正切结果 -**返回结果类型**:Double +**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL -**应用字段**:Timestamp类型 +**适用数据类型**:数值类型。 -**支持的版本**:2.6.0.0 及以后的版本。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**适用于**: 表,超级表,嵌套查询的外层查询 +**适用于**: 表和超级表 -**说明**: -- field_name参数只能是表的第一列,即timestamp主键列。 -- 按time_unit参数指定的时间单位返回,最小是数据库的时间分辨率。time_unit参数未指定时,以数据库的时间分辨率为时间单位。 -- 可以和interval组合使用,返回每个时间窗口的时间戳差值。需要特别注意的是,除第一个时间窗口和最后一个时间窗口外,中间窗口的时间戳差值均为窗口长度。 -- order by asc/desc不影响差值的计算结果。 -- 对于超级表,需要和group by tbname子句组合使用,不可以直接使用。 -- 对于普通表,不支持和group by子句组合使用。 -- 对于嵌套查询,仅当内层查询会输出隐式时间戳列时有效。例如select elapsed(ts) from (select diff(value) from sub1)语句,diff函数会让内层查询输出隐式时间戳列,此为主键列,可以用于elapsed函数的第一个参数。相反,例如select elapsed(ts) from (select * from sub1) 语句,ts列输出到外层时已经没有了主键列的含义,无法使用elapsed函数。此外,elapsed函数作为一个与时间线强依赖的函数,形如select elapsed(ts) from (select diff(value) from st group by tbname)尽管会返回一条计算结果,但并无实际意义,这种用法后续也将被限制。 -- 不支持与leastsquares、diff、derivative、top、bottom、last_row、interp等函数混合使用。 +**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -## 选择函数 +### 字符串函数 -在使用所有的选择函数的时候,可以同时指定输出 ts 列或标签列(包括 tbname),这样就可以方便地知道被选出的值是源于哪个数据行的。 +字符串函数的输入参数为字符串类型,返回结果为数值类型或字符串类型。 -### MIN +#### CHAR_LENGTH ``` -SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; + SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:统计表/超级表中某列的值最小值。 +**功能说明**:以字符计数的字符串长度。 -**返回数据类型**:同应用的字段。 +**返回结果类型**:INT。如果输入值为NULL,输出值为NULL。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:VARCHAR, NCHAR -**适用于**:表、超级表。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**示例**: +**适用于**: 表和超级表 
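+以下是一个最简示意,参数既可以是字符串常量,也可以是任意 VARCHAR/NCHAR 列(这里以文档示例中的超级表 meters 作为查询对象,仅供参考):
+
+```sql
+-- 按字符个数统计长度;对于多字节字符,其结果与按字节计数的 LENGTH(见下文)可能不同
+SELECT CHAR_LENGTH('TDengine 数据库') FROM meters;
+```
+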
-``` -taos> SELECT MIN(current), MIN(voltage) FROM meters; - min(current) | min(voltage) | -====================================== - 10.20000 | 218 | -Query OK, 1 row(s) in set (0.001765s) +#### CONCAT -taos> SELECT MIN(current), MIN(voltage) FROM d1001; - min(current) | min(voltage) | -====================================== - 10.30000 | 218 | -Query OK, 1 row(s) in set (0.000950s) +```sql + SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] ``` -### MAX +**功能说明**:字符串连接函数。 -``` -SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**返回结果类型**:如果所有参数均为 VARCHAR 类型,则结果类型为 VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果输入值为NULL,输出值为NULL。 -**功能说明**:统计表/超级表中某列的值最大值。 +**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为2个,最大参数个数为8个。 -**返回数据类型**:同应用的字段。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用于**: 表和超级表 -**适用于**:表、超级表。 -**示例**: +#### CONCAT_WS ``` -taos> SELECT MAX(current), MAX(voltage) FROM meters; - max(current) | max(voltage) | -====================================== - 13.40000 | 223 | -Query OK, 1 row(s) in set (0.001123s) - -taos> SELECT MAX(current), MAX(voltage) FROM d1001; - max(current) | max(voltage) | -====================================== - 12.60000 | 221 | -Query OK, 1 row(s) in set (0.000987s) + SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] ``` -### FIRST +**功能说明**:带分隔符的字符串连接函数。 -``` -SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**返回结果类型**:如果所有参数均为VARCHAR类型,则结果类型为VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果输入值为NULL,输出值为NULL。如果separator值不为NULL,其他输入为NULL,输出为空串。 -**功能说明**:统计表/超级表中某列的值最先写入的非 NULL 值。 +**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为3个,最大参数个数为9个。 -**返回数据类型**:同应用的字段。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**应用字段**:所有字段。 +**适用于**: 表和超级表 -**适用于**:表、超级表。 -**使用说明**: +#### LENGTH -- 如果要返回各个列的首个(时间戳最小)非 NULL 值,可以使用 FIRST(\*); -- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL; -- 如果结果集中所有列全部为 NULL 值,则不返回结果。 +``` + SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] +``` -**示例**: +**功能说明**:以字节计数的字符串长度。 -``` -taos> SELECT FIRST(*) FROM meters; - first(ts) | first(current) | first(voltage) | first(phase) | -========================================================================================= -2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | -Query OK, 1 row(s) in set (0.004767s) +**返回结果类型**:INT。 -taos> SELECT FIRST(current) FROM d1002; - first(current) | -======================= - 10.20000 | -Query OK, 1 row(s) in set (0.001023s) -``` +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。 -### LAST +**嵌套子查询支持**:适用于内层查询和外层查询。 + +**适用于**: 表和超级表 + + +#### LOWER ``` -SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]; + SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:统计表/超级表中某列的值最后写入的非 NULL 值。 - -**返回数据类型**:同应用的字段。 +**功能说明**:将字符串参数值转换为全小写字母。 -**应用字段**:所有字段。 +**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。 -**适用于**:表、超级表。 +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。 -**使用说明**: +**嵌套子查询支持**:适用于内层查询和外层查询。 -- 如果要返回各个列的最后(时间戳最大)一个非 NULL 值,可以使用 LAST(\*); -- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL;如果结果集中所有列全部为 NULL 值,则不返回结果。 -- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。 +**适用于**: 表和超级表 -**示例**: +#### LTRIM ``` -taos> SELECT LAST(*) FROM meters; - last(ts) | last(current) | last(voltage) | last(phase) | -======================================================================================== -2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | -Query OK, 1 row(s) in set (0.001452s) - -taos> SELECT 
LAST(current) FROM d1002; - last(current) | -======================= - 10.30000 | -Query OK, 1 row(s) in set (0.000843s) + SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -### TOP +**功能说明**:返回清除左边空格后的字符串。 -``` -SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; -``` +**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。 -**功能说明**: 统计表/超级表中某列的值最大 _k_ 个非 NULL 值。如果多条数据取值一样,全部取用又会超出 k 条限制时,系统会从相同值中随机选取符合要求的数量返回。 +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。 -**返回数据类型**:同应用的字段。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用于**: 表和超级表 -**适用于**:表、超级表。 -**使用说明**: +#### RTRIM -- *k*值取值范围 1≤*k*≤100; -- 系统同时返回该记录关联的时间戳列; -- 限制:TOP 函数不支持 FILL 子句。 +``` + SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] +``` -**示例**: +**功能说明**:返回清除右边空格后的字符串。 -``` -taos> SELECT TOP(current, 3) FROM meters; - ts | top(current, 3) | -================================================= -2018-10-03 14:38:15.000 | 12.60000 | -2018-10-03 14:38:16.600 | 13.40000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 3 row(s) in set (0.001548s) +**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。 -taos> SELECT TOP(current, 2) FROM d1001; - ts | top(current, 2) | -================================================= -2018-10-03 14:38:15.000 | 12.60000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 2 row(s) in set (0.000810s) -``` +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。 -### BOTTOM +**嵌套子查询支持**:适用于内层查询和外层查询。 + +**适用于**: 表和超级表 + + +#### SUBSTR ``` -SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; + SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:统计表/超级表中某列的值最小 _k_ 个非 NULL 值。如果多条数据取值一样,全部取用又会超出 k 条限制时,系统会从相同值中随机选取符合要求的数量返回。 +**功能说明**:从源字符串 str 中的指定位置 pos 开始取一个长度为 len 的子串并返回。 -**返回数据类型**:同应用的字段。 +**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。输入参数pos可以为正数,也可以为负数。如果pos是正数,表示开始位置从字符串开头正数计算。如果pos为负数,表示开始位置从字符串结尾倒数计算。如果输入参数len被忽略,返回的子串包含从pos开始的整个字串。 -**适用于**:表、超级表。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**使用说明**: +**适用于**: 表和超级表 -- *k*值取值范围 1≤*k*≤100; -- 系统同时返回该记录关联的时间戳列; -- 限制:BOTTOM 函数不支持 FILL 子句。 -**示例**: +#### UPPER ``` -taos> SELECT BOTTOM(voltage, 2) FROM meters; - ts | bottom(voltage, 2) | -=============================================== -2018-10-03 14:38:15.000 | 218 | -2018-10-03 14:38:16.650 | 218 | -Query OK, 2 row(s) in set (0.001332s) - -taos> SELECT BOTTOM(current, 2) FROM d1001; - ts | bottom(current, 2) | -================================================= -2018-10-03 14:38:05.000 | 10.30000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 2 row(s) in set (0.000793s) + SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -### PERCENTILE +**功能说明**:将字符串参数值转换为全大写字母。 -``` -SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause]; -``` +**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。 -**功能说明**:统计表中某列的值百分比分位数。 +**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。 -**返回数据类型**: 双精度浮点数 Double。 +**嵌套子查询支持**:适用于内层查询和外层查询。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用于**: 表和超级表 -**适用于**:表。 -**使用说明**:*P*值取值范围 0≤*P*≤100,为 0 的时候等同于 MIN,为 100 的时候等同于 MAX。 +### 转换函数 -**示例**: +转换函数将值从一种数据类型转换为另一种数据类型。 -``` -taos> SELECT PERCENTILE(current, 20) FROM d1001; -percentile(current, 20) | -============================ - 11.100000191 | -Query OK, 1 row(s) in set (0.000787s) +#### CAST + +```sql + SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause] ``` -### APERCENTILE +**功能说明**:数据类型转换函数,输入参数 
expression 支持普通列、常量、标量函数及它们之间的四则运算,只适用于 select 子句中。 -``` -SELECT APERCENTILE(field_name, P[, algo_type]) -FROM { tb_name | stb_name } [WHERE clause] +**返回结果类型**:CAST 中指定的类型(type_name),可以是 BIGINT、BIGINT UNSIGNED、BINARY、VARCHAR、NCHAR和TIMESTAMP。 + +**适用数据类型**:输入参数 expression 的类型可以是BLOB、MEDIUMBLOB和JSON外的所有类型 + +**使用说明**: + +- 对于不能支持的类型转换会直接报错。 +- 如果输入值为NULL则输出值也为NULL。 +- 对于类型支持但某些值无法正确转换的情况对应的转换后的值以转换函数输出为准。目前可能遇到的几种情况: + 1)字符串类型转换数值类型时可能出现的无效字符情况,例如"a"可能转为0,但不会报错。 + 2)转换到数值类型时,数值大于type_name可表示的范围时,则会溢出,但不会报错。 + 3)转换到字符串类型时,如果转换后长度超过type_name的长度,则会截断,但不会报错。 + +#### TO_ISO8601 + +```sql +SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:统计表/超级表中指定列的值百分比分位数,与 PERCENTILE 函数相似,但是返回近似结果。 +**功能说明**:将 UNIX 时间戳转换成为 ISO8601 标准的日期时间格式,并附加客户端时区信息。 -**返回数据类型**: 双精度浮点数 Double。 +**返回结果数据类型**:VARCHAR 类型。 -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +**适用数据类型**:UNIX 时间戳常量或是 TIMESTAMP 类型的列 **适用于**:表、超级表。 -**使用说明** +**使用说明**: -- **P**值有效取值范围 0≤P≤100,为 0 的时候等同于 MIN,为 100 的时候等同于 MAX; -- **algo_type**的有效输入:**default** 和 **t-digest** -- 用于指定计算近似分位数的算法。可不提供第三个参数的输入,此时将使用 default 的算法进行计算,即 apercentile(column_name, 50, "default") 与 apercentile(column_name, 50) 等价。 -- 当使用“t-digest”参数的时候,将使用 t-digest 方式采样计算近似分位数。但该参数指定计算算法的功能从 2.2.0.x 版本开始支持,2.2.0.0 之前的版本不支持指定使用算法的功能。 +- 如果输入是 UNIX 时间戳常量,返回格式精度由时间戳的位数决定; +- 如果输入是 TIMSTAMP 类型的列,返回格式的时间戳精度与当前 DATABASE 设置的时间精度一致。 -**嵌套子查询支持**:适用于内层查询和外层查询。 +#### TO_JSON + +```sql +SELECT TO_JSON(str_literal) FROM { tb_name | stb_name } [WHERE clause]; ``` -taos> SELECT APERCENTILE(current, 20) FROM d1001; -apercentile(current, 20) | -============================ - 10.300000191 | -Query OK, 1 row(s) in set (0.000645s) -taos> select apercentile (count, 80, 'default') from stb1; - apercentile (c0, 80, 'default') | -================================== - 601920857.210056424 | -Query OK, 1 row(s) in set (0.012363s) +**功能说明**: 将字符串常量转换为 JSON 类型。 -taos> select apercentile (count, 80, 't-digest') from stb1; - apercentile (c0, 80, 't-digest') | -=================================== - 605869120.966666579 | -Query OK, 1 row(s) in set (0.011639s) -``` +**返回结果数据类型**: JSON -### LAST_ROW +**适用数据类型**: JSON 字符串,形如 '{ "literal" : literal }'。'{}'表示空值。键必须为字符串字面量,值可以为数值字面量、字符串字面量、布尔字面量或空值字面量。str_literal中不支持转义符。 -``` -SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }; +**适用于**: 表和超级表 + +**嵌套子查询支持**:适用于内层查询和外层查询。 + + +#### TO_UNIXTIMESTAMP + +```sql +SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:返回表/超级表的最后一条记录。 +**功能说明**:将日期时间格式的字符串转换成为 UNIX 时间戳。 -**返回数据类型**:同应用的字段。 +**返回结果数据类型**:长整型 INT64。 -**应用字段**:所有字段。 +**应用字段**:字符串常量或是 VARCHAR/NCHAR 类型的列。 **适用于**:表、超级表。 **使用说明**: -- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。 -- 不能与 INTERVAL 一起使用。 - -**示例**: - -``` - taos> SELECT LAST_ROW(current) FROM meters; - last_row(current) | - ======================= - 12.30000 | - Query OK, 1 row(s) in set (0.001238s) +- 输入的日期时间字符串须符合 ISO8601/RFC3339 标准,无法转换的字符串格式将返回 0。 +- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 - taos> SELECT LAST_ROW(current) FROM d1002; - last_row(current) | - ======================= - 10.30000 | - Query OK, 1 row(s) in set (0.001042s) -``` -### INTERP [2.3.1 及之后的版本] +### 时间和日期函数 -``` -SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})]; -``` +时间和日期函数对时间戳类型进行操作。 -**功能说明**:返回表/超级表的指定时间截面指定列的记录值(插值)。 +所有返回当前时间的函数,如NOW、TODAY和TIMEZONE,在一条SQL语句中不论出现多少次都只会被计算一次。 
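+例如,下面的示意查询(以文档示例中的超级表 meters 为查询对象,仅供参考)在一条语句中两次引用 NOW(),由于只计算一次,两列返回的是同一个时间值:
+
+```sql
+SELECT NOW(), NOW() FROM meters LIMIT 1;
+```
+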
-**返回数据类型**:同字段类型。 +#### NOW -**应用字段**:数值型字段。 +```sql +SELECT NOW() FROM { tb_name | stb_name } [WHERE clause]; +SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior NOW(); +INSERT INTO tb_name VALUES (NOW(), ...); +``` -**适用于**:表、超级表、嵌套查询。 +**功能说明**:返回客户端当前系统时间。 +**返回结果数据类型**:TIMESTAMP 时间戳类型。 -**使用说明** +**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。 -- INTERP 用于在指定时间断面获取指定列的记录值,如果该时间断面不存在符合条件的行数据,那么会根据 FILL 参数的设定进行插值。 -- INTERP 的输入数据为指定列的数据,可以通过条件语句(where 子句)来对原始列数据进行过滤,如果没有指定过滤条件则输入为全部数据。 -- INTERP 的输出时间范围根据 RANGE(timestamp1,timestamp2)字段来指定,需满足 timestamp1<=timestamp2。其中 timestamp1(必选值)为输出时间范围的起始值,即如果 timestamp1 时刻符合插值条件则 timestamp1 为输出的第一条记录,timestamp2(必选值)为输出时间范围的结束值,即输出的最后一条记录的 timestamp 不能大于 timestamp2。如果没有指定 RANGE,那么满足过滤条件的输入数据中第一条记录的 timestamp 即为 timestamp1,最后一条记录的 timestamp 即为 timestamp2,同样也满足 timestamp1 <= timestamp2。 -- INTERP 根据 EVERY 字段来确定输出时间范围内的结果条数,即从 timestamp1 开始每隔固定长度的时间(EVERY 值)进行插值。如果没有指定 EVERY,则默认窗口大小为无穷大,即从 timestamp1 开始只有一个窗口。 -- INTERP 根据 FILL 字段来决定在每个符合输出条件的时刻如何进行插值,如果没有 FILL 字段则默认不插值,即输出为原始记录值或不输出(原始记录不存在)。 -- INTERP 只能在一个时间序列内进行插值,因此当作用于超级表时必须跟 group by tbname 一起使用,当作用嵌套查询外层时内层子查询不能含 GROUP BY 信息。 -- INTERP 的插值结果不受 ORDER BY timestamp 的影响,ORDER BY timestamp 只影响输出结果的排序。 +**适用于**:表、超级表。 -**SQL示例(基于文档中广泛使用的电表 schema )**: +**使用说明**: -- 单点线性插值 +- 支持时间加减操作,如 NOW() + 1s, 支持的时间单位如下: + b(纳秒)、u(微秒)、a(毫秒)、s(秒)、m(分)、h(小时)、d(天)、w(周)。 +- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 -``` - taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:40:00','2017-7-14 18:40:00') FILL(LINEAR); -``` -- 在2017-07-14 18:00:00到2017-07-14 19:00:00间每隔5秒钟进行取值(不插值) +#### TIMEDIFF -``` - taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s); +```sql +SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause]; ``` -- 在2017-07-14 18:00:00到2017-07-14 19:00:00间每隔5秒钟进行线性插值 +**功能说明**:计算两个时间戳之间的差值,并近似到时间单位 time_unit 指定的精度。 -``` - taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR); -``` +**返回结果数据类型**:长整型 INT64。 -- 在所有时间范围内每隔 5 秒钟进行向后插值 +**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。 -``` - taos> SELECT INTERP(current) FROM t1 EVERY(5s) FILL(NEXT); -``` +**适用于**:表、超级表。 -- 根据 2017-07-14 17:00:00 到 2017-07-14 20:00:00 间的数据进行从 2017-07-14 18:00:00 到 2017-07-14 19:00:00 间每隔 5 秒钟进行线性插值 +**使用说明**: +- 支持的时间单位 time_unit 如下: + 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。 +- 如果时间单位 time_unit 未指定, 返回的时间差值精度与当前 DATABASE 设置的时间精度一致。 -``` - taos> SELECT INTERP(current) FROM t1 where ts >= '2017-07-14 17:00:00' and ts <= '2017-07-14 20:00:00' RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR); -``` -### INTERP [2.3.1 之前的版本] +#### TIMETRUNCATE -``` -SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})]; +```sql +SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:返回表/超级表的指定时间截面、指定字段的记录。 +**功能说明**:将时间戳按照指定时间单位 time_unit 进行截断。 -**返回数据类型**:同字段类型。 +**返回结果数据类型**:TIMESTAMP 时间戳类型。 -**应用字段**:数值型字段。 +**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。 **适用于**:表、超级表。 -**使用说明**: +**使用说明**: +- 支持的时间单位 time_unit 如下: + 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。 +- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 -- 从 2.0.15.0 及以后版本可用 -- INTERP 必须指定时间断面,如果该时间断面不存在直接对应的数据,那么会根据 FILL 参数的设定进行插值。此外,条件语句里面可附带筛选条件,例如标签、tbname。 -- INTERP 查询要求查询的时间区间必须位于数据集合(表)的所有记录的时间范围之内。如果给定的时间戳位于时间范围之外,即使有插值指令,仍然不返回结果。 -- 
单个 INTERP 函数查询只能够针对一个时间点进行查询,如果需要返回等时间间隔的断面数据,可以通过 INTERP 配合 EVERY 的方式来进行查询处理(而不是使用 INTERVAL),其含义是每隔固定长度的时间进行插值 -**示例**: +#### TIMEZONE -``` - taos> SELECT INTERP(*) FROM meters WHERE ts='2017-7-14 18:40:00.004'; - interp(ts) | interp(current) | interp(voltage) | interp(phase) | - ========================================================================================== - 2017-07-14 18:40:00.004 | 9.84020 | 216 | 0.32222 | - Query OK, 1 row(s) in set (0.002652s) +```sql +SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause]; ``` -如果给定的时间戳无对应的数据,在不指定插值生成策略的情况下,不会返回结果,如果指定了插值策略,会根据插值策略返回结果。 +**功能说明**:返回客户端当前时区信息。 -``` - taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005'; - Query OK, 0 row(s) in set (0.004022s) +**返回结果数据类型**:VARCHAR 类型。 - taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005' FILL(PREV); - interp(ts) | interp(current) | interp(voltage) | interp(phase) | - ========================================================================================== - 2017-07-14 18:40:00.005 | 9.88150 | 217 | 0.32500 | - Query OK, 1 row(s) in set (0.003056s) -``` +**应用字段**:无 -如下所示代码表示在时间区间 `['2017-7-14 18:40:00', '2017-7-14 18:40:00.014']` 中每隔 5 毫秒 进行一次断面计算。 +**适用于**:表、超级表。 -``` - taos> SELECT INTERP(current) FROM d636 WHERE ts>='2017-7-14 18:40:00' AND ts<='2017-7-14 18:40:00.014' EVERY(5a); - ts | interp(current) | - ================================================= - 2017-07-14 18:40:00.000 | 10.04179 | - 2017-07-14 18:40:00.010 | 10.16123 | - Query OK, 2 row(s) in set (0.003487s) -``` -### TAIL +#### TODAY -``` -SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause]; +```sql +SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause]; +SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior TODAY()]; +INSERT INTO tb_name VALUES (TODAY(), ...); ``` -**功能说明**:返回跳过最后 offset_val 个,然后取连续 k 个记录,不忽略 NULL 值。offset_val 可以不输入。此时返回最后的 k 个记录。当有 offset_val 输入的情况下,该函数功能等效于 `order by ts desc LIMIT k OFFSET offset_val`。 +**功能说明**:返回客户端当日零时的系统时间。 -**参数范围**:k: [1,100] offset_val: [0,100]。 +**返回结果数据类型**:TIMESTAMP 时间戳类型。 -**返回结果数据类型**:同应用的字段。 +**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。 -**应用字段**:适合于除时间主列外的任何类型字段。 +**适用于**:表、超级表。 -**支持版本**:2.6.0.0 及之后的版本。 +**使用说明**: -**示例**: +- 支持时间加减操作,如 TODAY() + 1s, 支持的时间单位如下: + b(纳秒),u(微秒),a(毫秒),s(秒),m(分),h(小时),d(天),w(周)。 +- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 -``` -taos> select ts,dbig from tail2; - ts | dbig | -================================================== -2021-10-15 00:31:33.000 | 1 | -2021-10-17 00:31:31.000 | NULL | -2021-12-24 00:31:34.000 | 2 | -2022-01-01 08:00:05.000 | 19 | -2022-01-01 08:00:06.000 | NULL | -2022-01-01 08:00:07.000 | 9 | -Query OK, 6 row(s) in set (0.001952s) -taos> select tail(dbig,2,2) from tail2; -ts | tail(dbig,2,2) | -================================================== -2021-12-24 00:31:34.000 | 2 | -2022-01-01 08:00:05.000 | 19 | -Query OK, 2 row(s) in set (0.002307s) -``` +## 聚合函数 -### UNIQUE +聚合函数为查询结果集的每一个分组返回单个结果行。可以由 GROUP BY 或窗口切分子句指定分组,如果没有,则整个查询结果集视为一个分组。 + +TDengine 支持针对数据的聚合查询。提供如下聚合函数。 + +### AVG ``` -SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause]; +SELECT AVG(field_name) FROM tb_name [WHERE clause]; ``` -**功能说明**:返回该列的数值首次出现的值。该函数功能与 distinct 相似,但是可以匹配标签和时间戳信息。可以针对除时间列以外的字段进行查询,可以匹配标签和时间戳,其中的标签和时间戳是第一次出现时刻的标签和时间戳。 - -**返回结果数据类型**:同应用的字段。 +**功能说明**:统计表/超级表中某列的平均值。 -**应用字段**:适合于除时间类型以外的字段。 +**返回数据类型**:双精度浮点数 Double。 -**支持版本**:2.6.0.0 及之后的版本。 +**适用数据类型**:数值类型。 -**使用说明**: 
+**适用于**:表、超级表。 -- 该函数可以应用在普通表和超级表上。不能和窗口操作一起使用,例如 interval/state_window/session_window 。 -- 由于返回数据量未知,考虑到内存因素,为了函数可以正常返回结果,建议不重复的数据量在 10 万级别,否则会报错。 -**示例**: +### COUNT ``` -taos> select ts,voltage from unique1; - ts | voltage | -================================================== -2021-10-17 00:31:31.000 | 1 | -2022-01-24 00:31:31.000 | 1 | -2021-10-17 00:31:31.000 | 1 | -2021-12-24 00:31:31.000 | 2 | -2022-01-01 08:00:01.000 | 19 | -2021-10-17 00:31:31.000 | NULL | -2022-01-01 08:00:02.000 | NULL | -2022-01-01 08:00:03.000 | 9 | -Query OK, 8 row(s) in set (0.003018s) - -taos> select unique(voltage) from unique1; -ts | unique(voltage) | -================================================== -2021-10-17 00:31:31.000 | 1 | -2021-10-17 00:31:31.000 | NULL | -2021-12-24 00:31:31.000 | 2 | -2022-01-01 08:00:01.000 | 19 | -2022-01-01 08:00:03.000 | 9 | -Query OK, 5 row(s) in set (0.108458s) +SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]; ``` -## 计算函数 +**功能说明**:统计表/超级表中记录行数或某列的非空值个数。 -### DIFF +**返回数据类型**:长整型 INT64。 - ```sql - SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause]; - ``` +**适用数据类型**:应用全部字段。 -**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。 +**适用于**:表、超级表。 -**返回结果数据类型**:同应用字段。 +**使用说明**: -**应用字段**:不能应用在 timestamp、binary、nchar、bool 类型字段。 +- 可以使用星号(\*)来替代具体的字段,使用星号(\*)返回全部记录数量。 +- 针对同一表的(不包含 NULL 值)字段查询结果均相同。 +- 如果统计对象是具体的列,则返回该列中非 NULL 值的记录数量。 -**适用于**:表、超级表。 -**使用说明**: +### ELAPSED -- 输出结果行数是范围内总行数减一,第一行没有结果输出。 -- 从 2.1.3.0 版本开始,DIFF 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。 -- 从 2.6.0 开始,DIFF 函数支持 ignore_negative 参数 +```mysql +SELECT ELAPSED(ts_primary_key [, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]]; +``` -**示例**: +**功能说明**:elapsed函数表达了统计周期内连续的时间长度,和twa函数配合使用可以计算统计曲线下的面积。在通过INTERVAL子句指定窗口的情况下,统计在给定时间范围内的每个窗口内有数据覆盖的时间范围;如果没有INTERVAL子句,则返回整个给定时间范围内的有数据覆盖的时间范围。注意,ELAPSED返回的并不是时间范围的绝对值,而是绝对值除以time_unit所得到的单位个数。 - ```sql - taos> SELECT DIFF(current) FROM d1001; - ts | diff(current) | - ================================================= - 2018-10-03 14:38:15.000 | 2.30000 | - 2018-10-03 14:38:16.800 | -0.30000 | - Query OK, 2 row(s) in set (0.001162s) - ``` +**返回结果类型**:Double -### DERIVATIVE +**适用数据类型**:Timestamp类型 + +**支持的版本**:2.6.0.0 及以后的版本。 + +**适用于**: 表,超级表,嵌套查询的外层查询 + +**说明**: +- field_name参数只能是表的第一列,即timestamp主键列。 +- 按time_unit参数指定的时间单位返回,最小是数据库的时间分辨率。time_unit参数未指定时,以数据库的时间分辨率为时间单位。 +- 可以和interval组合使用,返回每个时间窗口的时间戳差值。需要特别注意的是,除第一个时间窗口和最后一个时间窗口外,中间窗口的时间戳差值均为窗口长度。 +- order by asc/desc不影响差值的计算结果。 +- 对于超级表,需要和group by tbname子句组合使用,不可以直接使用。 +- 对于普通表,不支持和group by子句组合使用。 +- 对于嵌套查询,仅当内层查询会输出隐式时间戳列时有效。例如select elapsed(ts) from (select diff(value) from sub1)语句,diff函数会让内层查询输出隐式时间戳列,此为主键列,可以用于elapsed函数的第一个参数。相反,例如select elapsed(ts) from (select * from sub1) 语句,ts列输出到外层时已经没有了主键列的含义,无法使用elapsed函数。此外,elapsed函数作为一个与时间线强依赖的函数,形如select elapsed(ts) from (select diff(value) from st group by tbname)尽管会返回一条计算结果,但并无实际意义,这种用法后续也将被限制。 +- 不支持与leastsquares、diff、derivative、top、bottom、last_row、interp等函数混合使用。 + +### LEASTSQUARES ``` -SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause]; +SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause]; ``` -**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。 - -**返回数据类型**:双精度浮点数。 +**功能说明**:统计表中某列的值是主键(时间戳)的拟合直线方程。start_val 是自变量初始值,step_val 是自变量的步长值。 -**应用字段**:不能应用在 
timestamp、binary、nchar、bool 类型字段。 +**返回数据类型**:字符串表达式(斜率, 截距)。 -**适用于**:表、超级表 +**适用数据类型**:field_name 必须是数值类型。 -**使用说明**: +**适用于**:表。 -- 从 2.1.3.0 及以后版本可用;输出结果行数是范围内总行数减一,第一行没有结果输出。 -- DERIVATIVE 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。 -**示例**: +### MODE ``` -taos> select derivative(current, 10m, 0) from t1; - ts | derivative(current, 10m, 0) | -======================================================== - 2021-08-20 10:11:22.790 | 0.500000000 | - 2021-08-20 11:11:22.791 | 0.166666620 | - 2021-08-20 12:11:22.791 | 0.000000000 | - 2021-08-20 13:11:22.792 | 0.166666620 | - 2021-08-20 14:11:22.792 | -0.666666667 | -Query OK, 5 row(s) in set (0.004883s) +SELECT MODE(field_name) FROM tb_name [WHERE clause]; ``` +**功能说明**:返回出现频率最高的值,若存在多个频率相同的最高值,输出空。不能匹配标签、时间戳输出。 + +**返回数据类型**:同应用的字段。 + +**适用数据类型**: 数值类型。 + +**适用于**:表和超级表。 + + ### SPREAD ``` @@ -938,916 +687,531 @@ SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]; **返回数据类型**:双精度浮点数。 -**应用字段**:不能应用在 binary、nchar、bool 类型字段。 +**适用数据类型**:数值类型或TIMESTAMP类型。 -**适用于**:表、超级表。 +**适用于**:表和超级表。 -**使用说明**:可用于 TIMESTAMP 字段,此时表示记录的时间覆盖范围。 -**示例**: +### STDDEV ``` -taos> SELECT SPREAD(voltage) FROM meters; - spread(voltage) | -============================ - 5.000000000 | -Query OK, 1 row(s) in set (0.001792s) - -taos> SELECT SPREAD(voltage) FROM d1001; - spread(voltage) | -============================ - 3.000000000 | -Query OK, 1 row(s) in set (0.000836s) +SELECT STDDEV(field_name) FROM tb_name [WHERE clause]; ``` -### CEIL +**功能说明**:统计表中某列的均方差。 -``` -SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` - -**功能说明**:获得指定列的向上取整数的结果。 - -**返回结果类型**:与指定列的原始数据类型一致。例如,如果指定列的原始数据类型为 Float,那么返回的数据类型也为 Float;如果指定列的原始数据类型为 Double,那么返回的数据类型也为 Double。 - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列,无论 tag 列的类型是什么类型。 - -**适用于**: 普通表、超级表。 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 支持 +、-、\*、/ 运算,如 ceil(col1) + ceil(col2)。 -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 - -### FLOOR - -``` -SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` - -**功能说明**:获得指定列的向下取整数的结果。 - 其他使用说明参见 CEIL 函数描述。 +**返回数据类型**:双精度浮点数 Double。 -### ROUND +**适用数据类型**:数值类型。 -``` -SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**适用于**:表和超级表。 -**功能说明**:获得指定列的四舍五入的结果。 - 其他使用说明参见 CEIL 函数描述。 -### CSUM +### SUM -```sql - SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` - - **功能说明**:累加和(Cumulative sum),输出行与输入行数相同。 - - **返回结果类型**: 输入列如果是整数类型返回值为长整型 (int64_t),浮点数返回值为双精度浮点数(Double)。无符号整数类型返回值为无符号长整型(uint64_t)。 返回结果中同时带有每行记录对应的时间戳。 - - **适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在标签之上。 - - **嵌套子查询支持**: 适用于内层查询和外层查询。 - - **使用说明**: - - - 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。 - - 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。 - - 使用在超级表上的时候,需要搭配 Group by tbname使用,将结果强制规约到单个时间线。 - -**支持版本**: 从2.3.0.x开始支持 - -### MAVG - -```sql - SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] +SELECT SUM(field_name) FROM tb_name [WHERE clause]; ``` - **功能说明**: 计算连续 k 个值的移动平均数(moving average)。如果输入行数小于 k,则无结果输出。参数 k 的合法输入范围是 1≤ k ≤ 1000。 - - **返回结果类型**: 返回双精度浮点数类型。 +**功能说明**:统计表/超级表中某列的和。 - **适用数据类型**: 不能应用在 timestamp、binary、nchar、bool 类型上;在超级表查询中使用时,不能应用在标签之上。 +**返回数据类型**:双精度浮点数 Double 和长整型 INT64。 - **嵌套子查询支持**: 适用于内层查询和外层查询。 +**适用数据类型**:数值类型。 - **使用说明**: - - - 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1); - - 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用; - - 该函数可以应用在普通表和超级表上;使用在超级表上的时候,需要搭配 
Group by tbname使用,将结果强制规约到单个时间线。 +**适用于**:表和超级表。 -**支持版本**: 从2.3.0.x开始支持 -### SAMPLE +### HYPERLOGLOG -```sql - SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ``` - - **功能说明**: 获取数据的 k 个采样值。参数 k 的合法输入范围是 1≤ k ≤ 1000。 - - **返回结果类型**: 同原始数据类型, 返回结果中带有该行记录的时间戳。 - - **适用数据类型**: 在超级表查询中使用时,不能应用在标签之上。 - - **嵌套子查询支持**: 适用于内层查询和外层查询。 - - **使用说明**: - - - 不能参与表达式计算;该函数可以应用在普通表和超级表上; - - 使用在超级表上的时候,需要搭配 Group by tbname 使用,将结果强制规约到单个时间线。 - -**支持版本**: 从2.3.0.x开始支持 - -### ASIN - -```sql - SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause] +SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:获得指定列的反正弦结果 +**功能说明**: + - 采用 hyperloglog 算法,返回某列的基数。该算法在数据量很大的情况下,可以明显降低内存的占用,但是求出来的基数是个估算值,标准误差(标准误差是多次实验,每次的平均数的标准差,不是与真实结果的误差)为 0.81%。 + - 在数据量较少的时候该算法不是很准确,可以使用 select count(data) from (select unique(col) as data from table) 的方法。 -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL +**返回结果类型**:整形。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 +**适用数据类型**:任何类型。 -**嵌套子查询支持**:适用于内层查询和外层查询。 +**适用于**:表和超级表。 -**使用说明**: -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### ACOS +### HISTOGRAM -```sql - SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` - -**功能说明**:获得指定列的反余弦结果 - -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### ATAN - -```sql - SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause] +SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_name [WHERE clause]; ``` -**功能说明**:获得指定列的反正切结果 - -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### SIN +**功能说明**:统计数据按照用户指定区间的分布。 -```sql - SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause] -``` +**返回结果类型**:如归一化参数 normalized 设置为 1,返回结果为双精度浮点类型 DOUBLE,否则为长整形 INT64。 -**功能说明**:获得指定列的正弦结果 +**适用数据类型**:数值型字段。 -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL +**适用于**: 表和超级表。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 +**详细说明**: +1. bin_type 用户指定的分桶类型, 有效输入类型为"user_input“, ”linear_bin", "log_bin"。 +2. bin_description 描述如何生成分桶区间,针对三种桶类型,分别为以下描述格式(均为 JSON 格式字符串): + - "user_input": "[1, 3, 5, 7]" + 用户指定 bin 的具体数值。 + + - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" + "start" 表示数据起始点,"width" 表示每次 bin 偏移量, "count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点跟终点, + 生成区间为[-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]。 + + - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" + "start" 表示数据起始点,"factor" 表示按指数递增的因子,"count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点跟终点, + 生成区间为[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]。 +3. 
normalized 是否将返回结果归一化到 0~1 之间 。有效输入为 0 和 1。 -**嵌套子查询支持**:适用于内层查询和外层查询。 -**使用说明**: +## 选择函数 -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 +选择函数根据语义在查询结果集中选择一行或多行结果返回。用户可以同时指定输出 ts 列或其他列(包括 tbname 和标签列),这样就可以方便地知道被选出的值是源于哪个数据行的。 -### COS +### APERCENTILE -```sql - SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` - -**功能说明**:获得指定列的余弦结果 - -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### TAN - -```sql - SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause] +SELECT APERCENTILE(field_name, P[, algo_type]) +FROM { tb_name | stb_name } [WHERE clause] ``` -**功能说明**:获得指定列的正切结果 +**功能说明**:统计表/超级表中指定列的值的近似百分比分位数,与 PERCENTILE 函数相似,但是返回近似结果。 -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 +**返回数据类型**: 双精度浮点数 Double。 -**使用说明**: +**适用数据类型**:数值类型。P值范围是[0,100],当为0时等同于MIN,为100时等同于MAX。如果不指定 algo_type 则使用默认算法 。 -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 +**适用于**:表、超级表。 -### POW +### BOTTOM -```sql - SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] ``` - -**功能说明**:获得指定列的指数为 power 的幂 - -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### LOG - -```sql - SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause] +SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:获得指定列对于底数 base 的对数 +**功能说明**:统计表/超级表中某列的值最小 _k_ 个非 NULL 值。如果多条数据取值一样,全部取用又会超出 k 条限制时,系统会从相同值中随机选取符合要求的数量返回。 -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL +**返回数据类型**:同应用的字段。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 +**适用数据类型**:数值类型。 -**嵌套子查询支持**:适用于内层查询和外层查询。 +**适用于**:表和超级表。 -**使用说明**: +**使用说明**: -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 +- *k*值取值范围 1≤*k*≤100; +- 系统同时返回该记录关联的时间戳列; +- 限制:BOTTOM 函数不支持 FILL 子句。 -### ABS +### FIRST -```sql - SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` - -**功能说明**:获得指定列的绝对值 - -**返回结果类型**:如果输入值为整数,输出值是 UBIGINT 类型。如果输入值是 FLOAT/DOUBLE 数据类型,输出值是 DOUBLE 数据类型。 - -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 - -**嵌套子查询支持**:适用于内层查询和外层查询。 - -**使用说明**: - -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 - -### SQRT - -```sql - SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause] +SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:获得指定列的平方根 +**功能说明**:统计表/超级表中某列的值最先写入的非 NULL 值。 -**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL +**返回数据类型**:同应用的字段。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上;在超级表查询中使用时,不能应用在 tag 列 +**适用数据类型**:所有字段。 -**嵌套子查询支持**:适用于内层查询和外层查询。 +**适用于**:表和超级表。 -**使用说明**: +**使用说明**: -- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。 -- 该函数可以应用在普通表和超级表上。 -- 版本2.6.0.x后支持 +- 如果要返回各个列的首个(时间戳最小)非 NULL 值,可以使用 FIRST(\*); +- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 NULL; +- 如果结果集中所有列全部为 NULL 值,则不返回结果。 
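+一个简单的使用示意(沿用文档示例中的电表超级表 meters 及其子表 d1001,仅供参考):
+
+```sql
+-- 返回各列首个(时间戳最小)的非 NULL 值
+SELECT FIRST(*) FROM meters;
+-- 也可以只取指定列的首个非 NULL 值
+SELECT FIRST(current) FROM d1001;
+```
+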
-### CAST +### INTERP -```sql - SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause] ``` - -**功能说明**:数据类型转换函数,输入参数 expression 支持普通列、常量、标量函数及它们之间的四则运算,不支持 tag 列,只适用于 select 子句中。 - -**返回结果类型**:CAST 中指定的类型(type_name)。 - -**适用数据类型**: - -- 输入参数 expression 的类型可以是除 JSON 外目前所有类型字段(BOOL/TINYINT/SMALLINT/INT/BIGINT/FLOAT/DOUBLE/BINARY(M)/TIMESTAMP/NCHAR(M)/TINYINT UNSIGNED/SMALLINT UNSIGNED/INT UNSIGNED/BIGINT UNSIGNED); -- 输出目标类型只支持 BIGINT/BINARY(N)/TIMESTAMP/NCHAR(N)/BIGINT UNSIGNED。 - -**使用说明**: - -- 对于不能支持的类型转换会直接报错。 -- 如果输入值为NULL则输出值也为NULL。 -- 对于类型支持但某些值无法正确转换的情况对应的转换后的值以转换函数输出为准。目前可能遇到的几种情况: - 1)BINARY/NCHAR转BIGINT/BIGINT UNSIGNED时可能出现的无效字符情况,例如"a"可能转为0。 - 2)有符号数或TIMESTAMP转BIGINT UNSIGNED可能遇到的溢出问题。 - 3)BIGINT UNSIGNED转BIGINT可能遇到的溢出问题。 - 4)FLOAT/DOUBLE转BIGINT/BIGINT UNSIGNED可能遇到的溢出问题。 -- 版本2.6.0.x后支持 - -### CONCAT - -```sql - SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] +SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})]; ``` -**功能说明**:字符串连接函数。 - -**返回结果类型**:同输入参数类型,BINARY 或者 NCHAR。 +**功能说明**:返回指定时间截面指定列的记录值或插值。 -**适用数据类型**:输入参数或者全部是 BINARY 格式的字符串或者列,或者全部是 NCHAR 格式的字符串或者列。不能应用在 TAG 列。 - -**使用说明**: - -- 如果输入值为NULL,输出值为NULL。 -- 该函数最小参数个数为2个,最大参数个数为8个。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 - -### CONCAT_WS - -``` - SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] -``` - -**功能说明**:带分隔符的字符串连接函数。 +**返回数据类型**:同字段类型。 -**返回结果类型**:同输入参数类型,BINARY 或者 NCHAR。 +**适用数据类型**:数值类型。 -**适用数据类型**:输入参数或者全部是 BINARY 格式的字符串或者列,或者全部是 NCHAR 格式的字符串或者列。不能应用在 TAG 列。 +**适用于**:表、超级表。 -**使用说明**: +**使用说明** -- 如果separator值为NULL,输出值为NULL。如果separator值不为NULL,其他输入为NULL,输出为空串 -- 该函数最小参数个数为3个,最大参数个数为9个。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 +- INTERP 用于在指定时间断面获取指定列的记录值,如果该时间断面不存在符合条件的行数据,那么会根据 FILL 参数的设定进行插值。 +- INTERP 的输入数据为指定列的数据,可以通过条件语句(where 子句)来对原始列数据进行过滤,如果没有指定过滤条件则输入为全部数据。 +- INTERP 的输出时间范围根据 RANGE(timestamp1,timestamp2)字段来指定,需满足 timestamp1<=timestamp2。其中 timestamp1(必选值)为输出时间范围的起始值,即如果 timestamp1 时刻符合插值条件则 timestamp1 为输出的第一条记录,timestamp2(必选值)为输出时间范围的结束值,即输出的最后一条记录的 timestamp 不能大于 timestamp2。如果没有指定 RANGE,那么满足过滤条件的输入数据中第一条记录的 timestamp 即为 timestamp1,最后一条记录的 timestamp 即为 timestamp2,同样也满足 timestamp1 <= timestamp2。 +- INTERP 根据 EVERY 字段来确定输出时间范围内的结果条数,即从 timestamp1 开始每隔固定长度的时间(EVERY 值)进行插值。如果没有指定 EVERY,则默认窗口大小为无穷大,即从 timestamp1 开始只有一个窗口。 +- INTERP 根据 FILL 字段来决定在每个符合输出条件的时刻如何进行插值,如果没有 FILL 字段则默认不插值,即输出为原始记录值或不输出(原始记录不存在)。 +- INTERP 只能在一个时间序列内进行插值,因此当作用于超级表时必须跟 group by tbname 一起使用,当作用嵌套查询外层时内层子查询不能含 GROUP BY 信息。 +- INTERP 的插值结果不受 ORDER BY timestamp 的影响,ORDER BY timestamp 只影响输出结果的排序。 -### LENGTH +### LAST ``` - SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:以字节计数的字符串长度。 - -**返回结果类型**:INT。 - -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 - -**使用说明** - -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 - -### CHAR_LENGTH +**功能说明**:统计表/超级表中某列的值最后写入的非 NULL 值。 -``` - SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] -``` +**返回数据类型**:同应用的字段。 -**功能说明**:以字符计数的字符串长度。 +**适用数据类型**:所有字段。 -**返回结果类型**:INT。 +**适用于**:表和超级表。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**使用说明**: -**使用说明** +- 如果要返回各个列的最后(时间戳最大)一个非 NULL 值,可以使用 LAST(\*); +- 如果结果集中的某列全部为 NULL 值,则该列的返回结果也是 
NULL;如果结果集中所有列全部为 NULL 值,则不返回结果。 +- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。 -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 -### LOWER +### LAST_ROW ``` - SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }; ``` -**功能说明**:将字符串参数值转换为全小写字母。 +**功能说明**:返回表/超级表的最后一条记录。 + +**返回数据类型**:同应用的字段。 -**返回结果类型**:同输入类型。 +**适用数据类型**:所有字段。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**适用于**:表和超级表。 **使用说明**: -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 +- 在用于超级表时,时间戳完全一样且同为最大的数据行可能有多个,那么会从中随机返回一条,而并不保证多次运行所挑选的数据行必然一致。 +- 不能与 INTERVAL 一起使用。 -### UPPER +### MAX ``` - SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:将字符串参数值转换为全大写字母。 +**功能说明**:统计表/超级表中某列的值最大值。 -**返回结果类型**:同输入类型。 +**返回数据类型**:同应用的字段。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**适用数据类型**:数值类型。 -**使用说明**: +**适用于**:表和超级表。 -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 -### LTRIM +### MIN ``` - SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; ``` -**功能说明**:返回清除左边空格后的字符串。 +**功能说明**:统计表/超级表中某列的值最小值。 -**返回结果类型**:同输入类型。 +**返回数据类型**:同应用的字段。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**适用数据类型**:数值类型。 -**使用说明**: +**适用于**:表和超级表。 -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 -### RTRIM +### PERCENTILE ``` - SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause]; ``` -**功能说明**:返回清除右边空格后的字符串。 +**功能说明**:统计表中某列的值百分比分位数。 + +**返回数据类型**: 双精度浮点数 Double。 -**返回结果类型**:同输入类型。 +**应用字段**:数值类型。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**适用于**:表。 -**使用说明**: +**使用说明**:*P*值取值范围 0≤*P*≤100,为 0 的时候等同于 MIN,为 100 的时候等同于 MAX。 -- 如果输入值为NULL,输出值为NULL。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 -### SUBSTR +### TAIL ``` - SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause] +SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause]; ``` -**功能说明**:从源字符串 str 中的指定位置 pos 开始取一个长度为 len 的子串并返回。 +**功能说明**:返回跳过最后 offset_val 个,然后取连续 k 个记录,不忽略 NULL 值。offset_val 可以不输入。此时返回最后的 k 个记录。当有 offset_val 输入的情况下,该函数功能等效于 `order by ts desc LIMIT k OFFSET offset_val`。 -**返回结果类型**:同输入类型。 +**参数范围**:k: [1,100] offset_val: [0,100]。 -**适用数据类型**:输入参数是 BINARY 类型或者 NCHAR 类型的字符串或者列。不能应用在 TAG 列。 +**返回数据类型**:同应用的字段。 -**使用说明**: +**适用数据类型**:适合于除时间主列外的任何类型。 -- 如果输入值为NULL,输出值为NULL。 -- 输入参数pos可以为正数,也可以为负数。如果pos是正数,表示开始位置从字符串开头正数计算。如果pos为负数,表示开始位置从字符串结尾倒数计算。如果输入参数len被忽略,返回的子串包含从pos开始的整个字串。 -- 该函数可以应用在普通表和超级表上。 -- 该函数适用于内层查询和外层查询。 -- 版本2.6.0.x后支持 +**适用于**:表、超级表。 -### STATECOUNT + +### TOP ``` -SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clause]; +SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:返回满足某个条件的连续记录的个数,结果作为新的一列追加在每行后面。条件根据参数计算,如果条件为 true 则加 1,条件为 false 则重置为-1,如果数据为 NULL,跳过该条数据。 +**功能说明**: 统计表/超级表中某列的值最大 _k_ 个非 NULL 值。如果多条数据取值一样,全部取用又会超出 k 条限制时,系统会从相同值中随机选取符合要求的数量返回。 -**参数范围**: +**返回数据类型**:同应用的字段。 -- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。 -- val : 数值型 +**适用数据类型**:数值类型。 -**返回结果类型**:整形。 +**适用于**:表、超级表。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上。 +**使用说明**: -**嵌套子查询支持**:不支持应用在子查询上。 +- *k*值取值范围 1≤*k*≤100; +- 
系统同时返回该记录关联的时间戳列; +- 限制:TOP 函数不支持 FILL 子句。 -**支持的版本**:2.6 开始的版本。 +### UNIQUE -**使用说明**: +``` +SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause]; +``` -- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname) -- 不能和窗口操作一起使用,例如 interval/state_window/session_window。 +**功能说明**:返回该列的数值首次出现的值。该函数功能与 distinct 相似,但是可以匹配标签和时间戳信息。可以针对除时间列以外的字段进行查询,可以匹配标签和时间戳,其中的标签和时间戳是第一次出现时刻的标签和时间戳。 -**示例**: +**返回数据类型**:同应用的字段。 -``` -taos> select ts,dbig from statef2; - ts | dbig | -======================================================== -2021-10-15 00:31:33.000000000 | 1 | -2021-10-17 00:31:31.000000000 | NULL | -2021-12-24 00:31:34.000000000 | 2 | -2022-01-01 08:00:05.000000000 | 19 | -2022-01-01 08:00:06.000000000 | NULL | -2022-01-01 08:00:07.000000000 | 9 | -Query OK, 6 row(s) in set (0.002977s) +**适用数据类型**:适合于除时间类型以外的字段。 -taos> select stateCount(dbig,GT,2) from statef2; -ts | dbig | statecount(dbig,gt,2) | -================================================================================ -2021-10-15 00:31:33.000000000 | 1 | -1 | -2021-10-17 00:31:31.000000000 | NULL | NULL | -2021-12-24 00:31:34.000000000 | 2 | -1 | -2022-01-01 08:00:05.000000000 | 19 | 1 | -2022-01-01 08:00:06.000000000 | NULL | NULL | -2022-01-01 08:00:07.000000000 | 9 | 2 | -Query OK, 6 row(s) in set (0.002791s) -``` +**适用于**: 表和超级表。 -### STATEDURATION -```sql -SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [WHERE clause]; -``` +## 时序数据特有函数 -**功能说明**:返回满足某个条件的连续记录的时间长度,结果作为新的一列追加在每行后面。条件根据参数计算,如果条件为 true 则加上两个记录之间的时间长度(第一个满足条件的记录时间长度记为 0),条件为 false 则重置为-1,如果数据为 NULL,跳过该条数据。 +时序数据特有函数是 TDengine 为了满足时序数据的查询场景而量身定做出来的。在通用数据库中,实现类似功能通常需要复杂的查询语法,且效率很低。TDengine 以函数的方式内置了这些功能,最大程度的减轻了用户的使用成本。 -**参数范围**: +### CSUM -- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。 -- val : 数值型 -- unit : 时间长度的单位,范围[1s、1m、1h ],不足一个单位舍去。默认为 1s。 +```sql + SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` -**返回结果类型**:整形。 +**功能说明**:累加和(Cumulative sum),输出行与输入行数相同。 -**适用数据类型**:不能应用在 timestamp、binary、nchar、bool 类型字段上。 +**返回结果类型**: 输入列如果是整数类型返回值为长整型 (int64_t),浮点数返回值为双精度浮点数(Double)。无符号整数类型返回值为无符号长整型(uint64_t)。 返回结果中同时带有每行记录对应的时间戳。 -**嵌套子查询支持**:不支持应用在子查询上。 +**适用数据类型**:数值类型。 -**支持的版本**:2.6 开始的版本。 +**嵌套子查询支持**: 适用于内层查询和外层查询。 -**使用说明**: +**适用于**:表和超级表 -- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname) -- 不能和窗口操作一起使用,例如 interval/state_window/session_window。 +**使用说明**: + + - 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。 + - 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。 + - 使用在超级表上的时候,需要搭配 Group by tbname使用,将结果强制规约到单个时间线。 -**示例**: -``` -taos> select ts,dbig from statef2; - ts | dbig | -======================================================== -2021-10-15 00:31:33.000000000 | 1 | -2021-10-17 00:31:31.000000000 | NULL | -2021-12-24 00:31:34.000000000 | 2 | -2022-01-01 08:00:05.000000000 | 19 | -2022-01-01 08:00:06.000000000 | NULL | -2022-01-01 08:00:07.000000000 | 9 | -Query OK, 6 row(s) in set (0.002407s) +### DERIVATIVE -taos> select stateDuration(dbig,GT,2) from statef2; -ts | dbig | stateduration(dbig,gt,2) | -=================================================================================== -2021-10-15 00:31:33.000000000 | 1 | -1 | -2021-10-17 00:31:31.000000000 | NULL | NULL | -2021-12-24 00:31:34.000000000 | 2 | -1 | -2022-01-01 08:00:05.000000000 | 19 | 0 | -2022-01-01 08:00:06.000000000 | NULL | NULL | -2022-01-01 08:00:07.000000000 | 9 | 2 | -Query OK, 6 row(s) in set (0.002613s) +``` +SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE 
clause]; ``` -## 时间函数 - -从 2.6.0.0 版本开始,TDengine 查询引擎支持以下时间相关函数: +**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。 -### NOW +**返回数据类型**:双精度浮点数。 -```sql -SELECT NOW() FROM { tb_name | stb_name } [WHERE clause]; -SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior NOW(); -INSERT INTO tb_name VALUES (NOW(), ...); -``` +**适用数据类型**:数值类型。 -**功能说明**:返回客户端当前系统时间。 +**适用于**:表、超级表 -**返回结果数据类型**:TIMESTAMP 时间戳类型。 +**使用说明**: DERIVATIVE 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。 -**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。 -**适用于**:表、超级表。 +### DIFF -**使用说明**: + ```sql + SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause]; + ``` -- 支持时间加减操作,如 NOW() + 1s, 支持的时间单位如下: - b(纳秒)、u(微秒)、a(毫秒)、s(秒)、m(分)、h(小时)、d(天)、w(周)。 -- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 +**功能说明**:统计表中某列的值与前一行对应值的差。 ignore_negative 取值为 0|1 , 可以不填,默认值为 0. 不忽略负值。ignore_negative 为 1 时表示忽略负数。 -**示例**: +**返回数据类型**:同应用字段。 -```sql -taos> SELECT NOW() FROM meters; - now() | -========================== - 2022-02-02 02:02:02.456 | -Query OK, 1 row(s) in set (0.002093s) +**适用数据类型**:数值类型。 -taos> SELECT NOW() + 1h FROM meters; - now() + 1h | -========================== - 2022-02-02 03:02:02.456 | -Query OK, 1 row(s) in set (0.002093s) +**适用于**:表、超级表。 -taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < NOW(); - count(voltage) | -============================= - 5 | -Query OK, 5 row(s) in set (0.004475s) +**使用说明**: 输出结果行数是范围内总行数减一,第一行没有结果输出。 -taos> INSERT INTO d1001 VALUES (NOW(), 10.2, 219, 0.32); -Query OK, 1 of 1 row(s) in database (0.002210s) -``` -### TODAY +### IRATE -```sql -SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause]; -SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior TODAY()]; -INSERT INTO tb_name VALUES (TODAY(), ...); +``` +SELECT IRATE(field_name) FROM tb_name WHERE clause; ``` -**功能说明**:返回客户端当日零时的系统时间。 +**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。 -**返回结果数据类型**:TIMESTAMP 时间戳类型。 +**返回数据类型**:双精度浮点数 Double。 -**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。 +**适用数据类型**:数值类型。 **适用于**:表、超级表。 -**使用说明**: +### MAVG -- 支持时间加减操作,如 TODAY() + 1s, 支持的时间单位如下: - b(纳秒),u(微秒),a(毫秒),s(秒),m(分),h(小时),d(天),w(周)。 -- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 +```sql + SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] +``` -**示例**: + **功能说明**: 计算连续 k 个值的移动平均数(moving average)。如果输入行数小于 k,则无结果输出。参数 k 的合法输入范围是 1≤ k ≤ 1000。 -```sql -taos> SELECT TODAY() FROM meters; - today() | -========================== - 2022-02-02 00:00:00.000 | -Query OK, 1 row(s) in set (0.002093s) + **返回结果类型**: 返回双精度浮点数类型。 + + **适用数据类型**: 数值类型。 -taos> SELECT TODAY() + 1h FROM meters; - today() + 1h | -========================== - 2022-02-02 01:00:00.000 | -Query OK, 1 row(s) in set (0.002093s) + **嵌套子查询支持**: 适用于内层查询和外层查询。 -taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < TODAY(); - count(voltage) | -============================= - 5 | -Query OK, 5 row(s) in set (0.004475s) + **适用于**:表和超级表 -taos> INSERT INTO d1001 VALUES (TODAY(), 10.2, 219, 0.32); -Query OK, 1 of 1 row(s) in database (0.002210s) -``` + **使用说明**: + + - 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1); + - 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用; + - 使用在超级表上的时候,需要搭配 Group by tbname使用,将结果强制规约到单个时间线。 -### TIMEZONE +### SAMPLE ```sql -SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause]; + SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ``` 
-**功能说明**:返回客户端当前时区信息。 + **功能说明**: 获取数据的 k 个采样值。参数 k 的合法输入范围是 1≤ k ≤ 1000。 -**返回结果数据类型**:BINARY 类型。 + **返回结果类型**: 同原始数据类型, 返回结果中带有该行记录的时间戳。 -**应用字段**:无 + **适用数据类型**: 在超级表查询中使用时,不能应用在标签之上。 -**适用于**:表、超级表。 + **嵌套子查询支持**: 适用于内层查询和外层查询。 -**示例**: + **适用于**:表和超级表 -```sql -taos> SELECT TIMEZONE() FROM meters; - timezone() | -================================= - UTC (UTC, +0000) | -Query OK, 1 row(s) in set (0.002093s) -``` + **使用说明**: + + - 不能参与表达式计算;该函数可以应用在普通表和超级表上; + - 使用在超级表上的时候,需要搭配 Group by tbname 使用,将结果强制规约到单个时间线。 -### TO_ISO8601 +### STATECOUNT -```sql -SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause]; +``` +SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:将 UNIX 时间戳转换成为 ISO8601 标准的日期时间格式,并附加客户端时区信息。 +**功能说明**:返回满足某个条件的连续记录的个数,结果作为新的一列追加在每行后面。条件根据参数计算,如果条件为 true 则加 1,条件为 false 则重置为-1,如果数据为 NULL,跳过该条数据。 -**返回结果数据类型**:BINARY 类型。 +**参数范围**: -**应用字段**:UNIX 时间戳常量或是 TIMESTAMP 类型的列 +- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。 +- val : 数值型 -**适用于**:表、超级表。 +**返回结果类型**:整形。 -**使用说明**: +**适用数据类型**:数值类型。 -- 如果输入是 UNIX 时间戳常量,返回格式精度由时间戳的位数决定; -- 如果输入是 TIMSTAMP 类型的列,返回格式的时间戳精度与当前 DATABASE 设置的时间精度一致。 +**嵌套子查询支持**:不支持应用在子查询上。 -**示例**: +**适用于**:表和超级表。 -```sql -taos> SELECT TO_ISO8601(1643738400) FROM meters; - to_iso8601(1643738400) | -============================== - 2022-02-02T02:00:00+0800 | +**使用说明**: -taos> SELECT TO_ISO8601(ts) FROM meters; - to_iso8601(ts) | -============================== - 2022-02-02T02:00:00+0800 | - 2022-02-02T02:00:00+0800 | - 2022-02-02T02:00:00+0800 | -``` +- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname) +- 不能和窗口操作一起使用,例如 interval/state_window/session_window。 -### TO_UNIXTIMESTAMP + +### STATEDURATION ```sql -SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause]; +SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [WHERE clause]; ``` -**功能说明**:将日期时间格式的字符串转换成为 UNIX 时间戳。 +**功能说明**:返回满足某个条件的连续记录的时间长度,结果作为新的一列追加在每行后面。条件根据参数计算,如果条件为 true 则加上两个记录之间的时间长度(第一个满足条件的记录时间长度记为 0),条件为 false 则重置为-1,如果数据为 NULL,跳过该条数据。 -**返回结果数据类型**:长整型 INT64。 +**参数范围**: + +- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。 +- val : 数值型 +- unit : 时间长度的单位,范围[1s、1m、1h ],不足一个单位舍去。默认为 1s。 -**应用字段**:字符串常量或是 BINARY/NCHAR 类型的列。 +**返回结果类型**:整形。 -**适用于**:表、超级表。 +**适用数据类型**:数值类型。 -**使用说明**: +**嵌套子查询支持**:不支持应用在子查询上。 -- 输入的日期时间字符串须符合 ISO8601/RFC3339 标准,无法转换的字符串格式将返回 0。 -- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。 +**适用于**:表和超级表。 -**示例**: +**使用说明**: -```sql -taos> SELECT TO_UNIXTIMESTAMP("2022-02-02T02:00:00.000Z") FROM meters; -to_unixtimestamp("2022-02-02T02:00:00.000Z") | -============================================== - 1643767200000 | +- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname) +- 不能和窗口操作一起使用,例如 interval/state_window/session_window。 -taos> SELECT TO_UNIXTIMESTAMP(col_binary) FROM meters; - to_unixtimestamp(col_binary) | -======================================== - 1643767200000 | - 1643767200000 | - 1643767200000 | -``` -### TIMETRUNCATE +### TWA -```sql -SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause]; +``` +SELECT TWA(field_name) FROM tb_name WHERE clause; ``` -**功能说明**:将时间戳按照指定时间单位 time_unit 进行截断。 +**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。 -**返回结果数据类型**:TIMESTAMP 时间戳类型。 +**返回数据类型**:双精度浮点数 Double。 -**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。 +**适用数据类型**:数值类型。 **适用于**:表、超级表。 -**使用说明**: -- 支持的时间单位 time_unit 如下: - 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。 
-- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
+**使用说明**: TWA 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。

-**示例**:

-```sql
-taos> SELECT TIMETRUNCATE(1643738522000, 1h) FROM meters;
-   timetruncate(1643738522000, 1h) |
-===================================
- 2022-02-02 02:00:00.000           |
-Query OK, 1 row(s) in set (0.001499s)
+## 系统信息函数

-taos> SELECT TIMETRUNCATE("2022-02-02 02:02:02", 1h) FROM meters;
- timetruncate("2022-02-02 02:02:02", 1h) |
-===========================================
- 2022-02-02 02:00:00.000                 |
-Query OK, 1 row(s) in set (0.003903s)
+### DATABASE

-taos> SELECT TIMETRUNCATE(ts, 1h) FROM meters;
-  timetruncate(ts, 1h) |
-==========================
- 2022-02-02 02:00:00.000 |
- 2022-02-02 02:00:00.000 |
- 2022-02-02 02:00:00.000 |
-Query OK, 3 row(s) in set (0.003903s)
+```
+SELECT DATABASE();
 ```

-### TIMEDIFF
+**说明**:返回当前登录的数据库。如果登录的时候没有指定默认数据库,且没有使用 USE 命令切换数据库,则返回 NULL。

-```sql
-SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause];
-```

-**功能说明**:计算两个时间戳之间的差值,并近似到时间单位 time_unit 指定的精度。
+### CLIENT_VERSION

-**返回结果数据类型**:长整型 INT64。
+```
+SELECT CLIENT_VERSION();
+```

-**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。
+**说明**:返回客户端版本。

-**适用于**:表、超级表。
+### SERVER_VERSION

-**使用说明**:
-- 支持的时间单位 time_unit 如下:
-          1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。
-- 如果时间单位 time_unit 未指定, 返回的时间差值精度与当前 DATABASE 设置的时间精度一致。
+```
+SELECT SERVER_VERSION();
+```

-**支持的版本**:2.6.0.0 及以后的版本。
+**说明**:返回服务端版本。

-**示例**:
+### SERVER_STATUS

-```sql
-taos> SELECT TIMEDIFF(1643738400000, 1643742000000) FROM meters;
- timediff(1643738400000, 1643742000000) |
-=========================================
-                                 3600000 |
-Query OK, 1 row(s) in set (0.002553s)
-taos> SELECT TIMEDIFF(1643738400000, 1643742000000, 1h) FROM meters;
- timediff(1643738400000, 1643742000000, 1h) |
-=============================================
-                                           1 |
-Query OK, 1 row(s) in set (0.003726s)
-
-taos> SELECT TIMEDIFF("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) FROM meters;
- timediff("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) |
-=============================================================
-                                                           1 |
-Query OK, 1 row(s) in set (0.001937s)
-
-taos> SELECT TIMEDIFF(ts_col1, ts_col2, 1h) FROM meters;
-   timediff(ts_col1, ts_col2, 1h) |
-===================================
-                                 1 |
-Query OK, 1 row(s) in set (0.001937s)
```
+SELECT SERVER_STATUS();
+```
+
+**说明**:返回服务端当前的状态。
diff --git a/docs-cn/12-taos-sql/12-keywords.md b/docs-cn/12-taos-sql/12-keywords.md
index 0e8e1edfee4a4aa3f05ef7bfd99ca156e44afd2e..5c68e5da7e8c537e7514c5f9cfba43084d72189b 100644
--- a/docs-cn/12-taos-sql/12-keywords.md
+++ b/docs-cn/12-taos-sql/12-keywords.md
@@ -93,10 +93,13 @@ title: TDengine 参数限制与保留关键字

 `TBNAME` 可以视为超级表中一个特殊的标签,代表子表的表名。

 获取一个超级表所有的子表名及相关的标签信息:
+
 ```mysql
 SELECT TBNAME, location FROM meters;
+```

 统计超级表下辖子表数量:
+
 ```mysql
 SELECT COUNT(TBNAME) FROM meters;
 ```
diff --git a/docs-en/12-taos-sql/07-function.md b/docs-en/12-taos-sql/07-function.md
index 1a0dc28fa048c6c6d9a911a1e6719cf370592fdf..4eaf7c8a68b99db64e25468f79c1fbead290b614 100644
--- a/docs-en/12-taos-sql/07-function.md
+++ b/docs-en/12-taos-sql/07-function.md
@@ -1,1541 +1,1151 @@
 ---
 title: Functions
+toc_max_heading_level: 4
 ---

-## Aggregate Functions
-
-Aggregate queries are supported in TDengine by the following aggregate functions and selection functions.
- -### COUNT +## Single-Row Functions -``` -SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]; -``` +Single-Row functions return a result row for each row in the query result. -**Description**: Get the number of rows or the number of non-null values in a table or a super table. +### Numeric Functions -**Return value type**: Long integer INT64 +#### ABS -**Applicable column types**: All +```sql +SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` -**Applicable table types**: table, super table, sub table +**Description**: The absolute of a specific column. -**More explanation**: +**Return value type**: UBIGINT if the input value is integer; DOUBLE if the input value is FLOAT/DOUBLE. -- Wildcard (\*) is used to represent all columns. The `COUNT` function is used to get the total number of all rows. -- The number of non-NULL values will be returned if this function is used on a specific column. +**Applicable data types**: Numeric types. -**Examples**: +**Applicable table types**: table, STable. -``` -taos> SELECT COUNT(*), COUNT(voltage) FROM meters; - count(*) | count(voltage) | -================================================ - 9 | 9 | -Query OK, 1 row(s) in set (0.004475s) +**Applicable nested query**: Inner query and Outer query. -taos> SELECT COUNT(*), COUNT(voltage) FROM d1001; - count(*) | count(voltage) | -================================================ - 3 | 3 | -Query OK, 1 row(s) in set (0.001075s) -``` +**More explanations**: +- Can't be used with aggregate functions. -### AVG +#### ACOS -``` -SELECT AVG(field_name) FROM tb_name [WHERE clause]; +```sql +SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: Get the average value of a column in a table or STable +**Description**: The anti-cosine of a specific column -**Return value type**: Double precision floating number +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: Numeric types. **Applicable table types**: table, STable -**Examples**: - -``` -taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM meters; - avg(current) | avg(voltage) | avg(phase) | -==================================================================================== - 11.466666751 | 220.444444444 | 0.293333333 | -Query OK, 1 row(s) in set (0.004135s) +**Applicable nested query**: Inner query and Outer query -taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM d1001; - avg(current) | avg(voltage) | avg(phase) | -==================================================================================== - 11.733333588 | 219.333333333 | 0.316666673 | -Query OK, 1 row(s) in set (0.000943s) -``` +**More explanations**: +- Can't be used with aggregate functions -### TWA +#### ASIN -``` -SELECT TWA(field_name) FROM tb_name WHERE clause; +```sql +SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: Time weighted average on a specific column within a time range +**Description**: The anti-sine of a specific column -**Return value type**: Double precision floating number +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: Numeric types. 
**Applicable table types**: table, STable -**More explanations**: +**Applicable nested query**: Inner query and Outer query -- Since version 2.1.3.0, function TWA can be used on stable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable. +**More explanations**: +- Can't be used with aggregate functions -### IRATE +#### ATAN -``` -SELECT IRATE(field_name) FROM tb_name WHERE clause; +```sql +SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: instantaneous rate on a specific column. The last two samples in the specified time range are used to calculate instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values. +**Description**: anti-tangent of a specific column -**Return value type**: Double precision floating number +**Description**: The anti-cosine of a specific column + +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: Numeric types. **Applicable table types**: table, STable -**More explanations**: +**Applicable nested query**: Inner query and Outer query -- Since version 2.1.3.0, function IRATE can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable. +**More explanations**: +- Can't be used with aggregate functions -### SUM +#### CEIL ``` -SELECT SUM(field_name) FROM tb_name [WHERE clause]; +SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The sum of a specific column in a table or STable +**Description**: The rounded up value of a specific column -**Return value type**: Double precision floating number or long integer +**Return value type**: Same as the column being used -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: Numeric types. **Applicable table types**: table, STable -**Examples**: - -``` -taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM meters; - sum(current) | sum(voltage) | sum(phase) | -================================================================================ - 103.200000763 | 1984 | 2.640000001 | -Query OK, 1 row(s) in set (0.001702s) +**Applicable nested query**: Inner query and outer query -taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM d1001; - sum(current) | sum(voltage) | sum(phase) | -================================================================================ - 35.200000763 | 658 | 0.950000018 | -Query OK, 1 row(s) in set (0.000980s) -``` +**More explanations**: +- Arithmetic operation can be performed on the result of `ceil` function +- Can't be used with aggregate functions -### STDDEV +#### COS -``` -SELECT STDDEV(field_name) FROM tb_name [WHERE clause]; +```sql +SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: Standard deviation of a specific column in a table or STable +**Description**: The cosine of a specific column -**Return value type**: Double precision floating number +**Description**: The anti-cosine of a specific column + +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: Numeric types. 
-**Applicable table types**: table, STable (since version 2.0.15.1) +**Applicable table types**: table, STable -**Examples**: +**Applicable nested query**: Inner query and Outer query -``` -taos> SELECT STDDEV(current) FROM d1001; - stddev(current) | -============================ - 1.020892909 | -Query OK, 1 row(s) in set (0.000915s) -``` +**More explanations**: +- Can't be used with aggregate functions -### LEASTSQUARES +#### FLOOR ``` -SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause]; +SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The linear regression function of the specified column and the timestamp column (primary key), `start_val` is the initial value and `step_val` is the step value. - -**Return value type**: A string in the format of "(slope, intercept)" - -**Applicable column types**: Data types except for timestamp, binary, nchar and bool - -**Applicable table types**: table only - -**Examples**: +**Description**: The rounded down value of a specific column -``` -taos> SELECT LEASTSQUARES(current, 1, 1) FROM d1001; - leastsquares(current, 1, 1) | -===================================================== -{slop:1.000000, intercept:9.733334} | -Query OK, 1 row(s) in set (0.000921s) -``` +**More explanations**: The restrictions are same as those of the `CEIL` function. -### MODE +#### LOG -``` -SELECT MODE(field_name) FROM tb_name [WHERE clause]; +```sql +SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**:The value which has the highest frequency of occurrence. NULL is returned if there are multiple values which have highest frequency of occurrence. It can't be used on timestamp column or tags. +**Description**: The log of a specific with `base` as the radix -**Return value type**:Same as the data type of the column being operated upon +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable column types**:Data types except for timestamp +**Applicable data types**: Numeric types. -**More explanations**:Considering the number of returned result set is unpredictable, it's suggested to limit the number of unique values to 100,000, otherwise error will be returned. +**Applicable table types**: table, STable -**Applicable version**:Since version 2.6.0.0 +**Applicable nested query**: Inner query and Outer query -**Examples**: +**More explanations**: +- Can't be used with aggregate functions -``` -taos> select voltage from d002; - voltage | -======================== - 1 | - 1 | - 2 | - 19 | -Query OK, 4 row(s) in set (0.003545s) +#### POW -taos> select mode(voltage) from d002; - mode(voltage) | -======================== - 1 | -Query OK, 1 row(s) in set (0.019393s) +```sql +SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] ``` -### HYPERLOGLOG - -``` -SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**Description**: The power of a specific column with `power` as the index -**Description**:The cardinal number of a specific column is returned by using hyperloglog algorithm. +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Return value type**:Integer +**Applicable data types**: Numeric types. -**Applicable column types**:Any data type +**Applicable table types**: table, STable -**More explanations**: The benefit of using hyperloglog algorithm is that the memory usage is under control when the data volume is huge. 
However, when the data volume is very small, the result may be not accurate, it's recommented to use `select count(data) from (select unique(col) as data from table)` in this case. +**Applicable nested query**: Inner query and Outer query -**Applicable versions**:Since version 2.6.0.0 +**More explanations**: +- Can't be used with aggregate functions -**Examples**: +#### ROUND ``` -taos> select dbig from shll; - dbig | -======================== - 1 | - 1 | - 1 | - NULL | - 2 | - 19 | - NULL | - 9 | -Query OK, 8 row(s) in set (0.003755s) - -taos> select hyperloglog(dbig) from shll; - hyperloglog(dbig)| -======================== - 4 | -Query OK, 1 row(s) in set (0.008388s) +SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -### HISTOGRAM +**Description**: The rounded value of a specific column. -``` -SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_name [WHERE clause]; +**More explanations**: The restrictions are same as `CEIL` function. + +#### SIN + +```sql +SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**:Returns count of data points in user-specified ranges. +**Description**: The sine of a specific column -**Return value type**:Double or INT64, depends on normalized parameter settings. +**Description**: The anti-cosine of a specific column -**Applicable column type**:Numerical types. +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable versions**:Since version 2.6.0.0. +**Applicable data types**: Numeric types. **Applicable table types**: table, STable -**Explanations**: +**Applicable nested query**: Inner query and Outer query -1. bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin"。 -2. bin_description: parameter to describe how to generate buckets,can be in the following JSON formats for each bin_type respectively: +**More explanations**: +- Can't be used with aggregate functions - - "user_input": "[1, 3, 5, 7]": User specified bin values. +#### SQRT - - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" - "start" - bin starting point. - "width" - bin offset. - "count" - number of bins generated. - "infinity" - whether to add(-inf, inf)as start/end point in generated set of bins. - The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]. +```sql +SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` - - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" - "start" - bin starting point. - "factor" - exponential factor of bin offset. - "count" - number of bins generated. - "infinity" - whether to add(-inf, inf)as start/end point in generated range of bins. - The above "log_bin" descriptor generates a set of bins:[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]. +**Description**: The square root of a specific column -3. normalized: setting to 1/0 to turn on/off result normalization. +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Example**: +**Applicable data types**: Numeric types. 
-```mysql -taos> SELECT HISTOGRAM(voltage, "user_input", "[1,3,5,7]", 1) FROM meters; - histogram(voltage, "user_input", "[1,3,5,7]", 1) | - ======================================================= - {"lower_bin":1, "upper_bin":3, "count":0.333333} | - {"lower_bin":3, "upper_bin":5, "count":0.333333} | - {"lower_bin":5, "upper_bin":7, "count":0.333333} | - Query OK, 3 row(s) in set (0.004273s) - -taos> SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 1, "width": 3, "count": 3, "infinity": false}', 0) FROM meters; - histogram(voltage, 'linear_bin', '{"start": 1, "width": 3, " | - =================================================================== - {"lower_bin":1, "upper_bin":4, "count":3} | - {"lower_bin":4, "upper_bin":7, "count":3} | - {"lower_bin":7, "upper_bin":10, "count":3} | - Query OK, 3 row(s) in set (0.004887s) - -taos> SELECT HISTOGRAM(voltage, 'log_bin', '{"start": 1, "factor": 3, "count": 3, "infinity": true}', 0) FROM meters; - histogram(voltage, 'log_bin', '{"start": 1, "factor": 3, "count" | - =================================================================== - {"lower_bin":-inf, "upper_bin":1, "count":3} | - {"lower_bin":1, "upper_bin":3, "count":2} | - {"lower_bin":3, "upper_bin":9, "count":6} | - {"lower_bin":9, "upper_bin":27, "count":3} | - {"lower_bin":27, "upper_bin":inf, "count":1} | -``` +**Applicable table types**: table, STable -### ELAPSED +**Applicable nested query**: Inner query and Outer query -```mysql -SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]]; +**More explanations**: +- Can't be used with aggregate functions + +#### TAN + +```sql +SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**:`elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calcualted time length within each time window. If it's used without `INTERVAL` caluse, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length. +**Description**: The tangent of a specific column -**Return value type**:Double +**Description**: The anti-cosine of a specific column -**Applicable Column type**:Timestamp +**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL -**Applicable versions**:Sicne version 2.6.0.0 +**Applicable data types**: Numeric types. -**Applicable tables**: table, STable, outter in nested query +**Applicable table types**: table, STable -**Explanations**: -- `field_name` parameter can only be the first column of a table, i.e. timestamp primary key. -- The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default ime unit. -- It can be used with `INTERVAL` to get the time valid time length of each time window. Please be noted that the return value is same as the time window for all time windows except for the first and the last time window. -- `order by asc/desc` has no effect on the result. -- `group by tbname` must be used together when `elapsed` is used against a STable. -- `group by` must NOT be used together when `elapsed` is used against a table or sub table. -- When used in nested query, it's only applicable when the inner query outputs an implicit timestamp column as the primary key. 
For example, `select elapsed(ts) from (select diff(value) from sub1)` is legal usage while `select elapsed(ts) from (select * from sub1)` is not. -- It can't be used with `leastsquares`, `diff`, `derivative`, `top`, `bottom`, `last_row`, `interp`. +**Applicable nested query**: Inner query and Outer query + +**More explanations**: +- Can't be used with aggregate functions -## Selection Functions +### String Functions -When any select function is used, timestamp column or tag columns including `tbname` can be specified to show that the selected value are from which rows. +String functiosn take strings as input and output numbers or strings. -### MIN +#### CHAR_LENGTH ``` -SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; +SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The minimum value of a specific column in a table or STable +**Description**: The length in number of characters of a string -**Return value type**: Same as the data type of the column being operated upon +**Return value type**: Integer -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR or NCHAR **Applicable table types**: table, STable -**Examples**: +**Applicable nested query**: Inner query and Outer query -``` -taos> SELECT MIN(current), MIN(voltage) FROM meters; - min(current) | min(voltage) | -====================================== - 10.20000 | 218 | -Query OK, 1 row(s) in set (0.001765s) +**More explanations** -taos> SELECT MIN(current), MIN(voltage) FROM d1001; - min(current) | min(voltage) | -====================================== - 10.30000 | 218 | -Query OK, 1 row(s) in set (0.000950s) -``` +- If the input value is NULL, the output is NULL too -### MAX +#### CONCAT -``` -SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; +```sql +SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The maximum value of a specific column of a table or STable +**Description**: The concatenation result of two or more strings, the number of strings to be concatenated is at least 2 and at most 8 -**Return value type**: Same as the data type of the column being operated upon +**Return value type**: If all input strings are VARCHAR type, the result is VARCHAR type too. If any one of input strings is NCHAR type, then the result is NCHAR. -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR, NCHAR. Can't be used on tag columns. At least 2 input strings are requird, and at most 8 input strings are allowed. **Applicable table types**: table, STable -**Examples**: - -``` -taos> SELECT MAX(current), MAX(voltage) FROM meters; - max(current) | max(voltage) | -====================================== - 13.40000 | 223 | -Query OK, 1 row(s) in set (0.001123s) - -taos> SELECT MAX(current), MAX(voltage) FROM d1001; - max(current) | max(voltage) | -====================================== - 12.60000 | 221 | -Query OK, 1 row(s) in set (0.000987s) -``` +**Applicable nested query**: Inner query and Outer query -### FIRST +#### CONCAT_WS ``` -SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]; +SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) 
FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The first non-null value of a specific column in a table or STable +**Description**: The concatenation result of two or more strings with separator, the number of strings to be concatenated is at least 3 and at most 9 -**Return value type**: Same as the column being operated upon +**Return value type**: If all input strings are VARCHAR type, the result is VARCHAR type too. If any one of input strings is NCHAR type, then the result is NCHAR. -**Applicable column types**: Any data type +**Applicable data types**: VARCHAR, NCHAR. Can't be used on tag columns. At least 3 input strings are requird, and at most 9 input strings are allowed. **Applicable table types**: table, STable +**Applicable nested query**: Inner query and Outer query + **More explanations**: -- FIRST(\*) can be used to get the first non-null value of all columns -- NULL will be returned if all the values of the specified column are all NULL -- A result will NOT be returned if all the columns in the result set are all NULL +- If the value of `separator` is NULL, the output is NULL. If the value of `separator` is not NULL but other input are all NULL, the output is empty string. -**Examples**: +#### LENGTH ``` -taos> SELECT FIRST(*) FROM meters; - first(ts) | first(current) | first(voltage) | first(phase) | -========================================================================================= -2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | -Query OK, 1 row(s) in set (0.004767s) - -taos> SELECT FIRST(current) FROM d1002; - first(current) | -======================= - 10.20000 | -Query OK, 1 row(s) in set (0.001023s) +SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -### LAST +**Description**: The length in bytes of a string -``` -SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**Return value type**: Integer -**Description**: The last non-NULL value of a specific column in a table or STable +**Applicable data types**: VARCHAR or NCHAR +**Applicable table types**: table, STable -**Return value type**: Same as the column being operated upon +**Applicable nested query**: Inner query and Outer query -**Applicable column types**: Any data type +**More explanations** -**Applicable table types**: table, STable +- If the input value is NULL, the output is NULL too -**More explanations**: - -- LAST(\*) can be used to get the last non-NULL value of all columns -- If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are all NULL, no result will be returned. -- When it's used on a STable, if there are multiple values with the timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times. 
- -**Examples**: - -``` -taos> SELECT LAST(*) FROM meters; - last(ts) | last(current) | last(voltage) | last(phase) | -======================================================================================== -2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | -Query OK, 1 row(s) in set (0.001452s) +#### LOWER -taos> SELECT LAST(current) FROM d1002; - last(current) | -======================= - 10.30000 | -Query OK, 1 row(s) in set (0.000843s) ``` - -### TOP - -``` -SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; +SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The greatest _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. +**Description**: Convert the input string to lower case -**Return value type**: Same as the column being operated upon +**Return value type**: Same as input -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR or NCHAR **Applicable table types**: table, STable -**More explanations**: - -- _k_ must be in range [1,100] -- The timestamp associated with the selected values are returned too -- Can't be used with `FILL` - -**Examples**: +**Applicable nested query**: Inner query and Outer query -``` -taos> SELECT TOP(current, 3) FROM meters; - ts | top(current, 3) | -================================================= -2018-10-03 14:38:15.000 | 12.60000 | -2018-10-03 14:38:16.600 | 13.40000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 3 row(s) in set (0.001548s) +**More explanations** -taos> SELECT TOP(current, 2) FROM d1001; - ts | top(current, 2) | -================================================= -2018-10-03 14:38:15.000 | 12.60000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 2 row(s) in set (0.000810s) -``` +- If the input value is NULL, the output is NULL too -### BOTTOM +#### LTRIM ``` -SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; +SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The least _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. 
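+A minimal usage sketch (the table `tb1` and column `str_col` below are illustrative assumptions, not a fixed schema):
+
+```sql
+-- strips the leading blanks of every value in str_col
+SELECT LTRIM(str_col) FROM tb1;
+```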
+**Description**: Remove the left leading blanks of a string -**Return value type**: Same as the column being operated upon +**Return value type**: Same as input -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR or NCHAR **Applicable table types**: table, STable -**More explanations**: - -- _k_ must be in range [1,100] -- The timestamp associated with the selected values are returned too -- Can't be used with `FILL` - -**Examples**: +**Applicable nested query**: Inner query and Outer query -``` -taos> SELECT BOTTOM(voltage, 2) FROM meters; - ts | bottom(voltage, 2) | -=============================================== -2018-10-03 14:38:15.000 | 218 | -2018-10-03 14:38:16.650 | 218 | -Query OK, 2 row(s) in set (0.001332s) +**More explanations** -taos> SELECT BOTTOM(current, 2) FROM d1001; - ts | bottom(current, 2) | -================================================= -2018-10-03 14:38:05.000 | 10.30000 | -2018-10-03 14:38:16.800 | 12.30000 | -Query OK, 2 row(s) in set (0.000793s) -``` +- If the input value is NULL, the output is NULL too -### PERCENTILE +#### RTRIM ``` -SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause]; +SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned. +**Description**: Remove the right tailing blanks of a string -**Return value type**: Double precision floating point +**Return value type**: Same as input -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR or NCHAR -**Applicable table types**: table +**Applicable table types**: table, STable -**More explanations**: _P_ is in range [0,100], when _P_ is 0, the result is same as using function MIN; when _P_ is 100, the result is same as function MAX. +**Applicable nested query**: Inner query and Outer query -**Examples**: +**More explanations** -``` -taos> SELECT PERCENTILE(current, 20) FROM d1001; -percentile(current, 20) | -============================ - 11.100000191 | -Query OK, 1 row(s) in set (0.000787s) -``` +- If the input value is NULL, the output is NULL too -### APERCENTILE +#### SUBSTR ``` -SELECT APERCENTILE(field_name, P[, algo_type]) -FROM { tb_name | stb_name } [WHERE clause] +SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: Similar to `PERCENTILE`, but a simulated result is returned +**Description**: The sub-string starting from `pos` with length of `len` from the original string `str` -**Return value type**: Double precision floating point +**Return value type**: Same as input -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable data types**: VARCHAR or NCHAR **Applicable table types**: table, STable -**More explanations** - -- _P_ is in range [0,100], when _P_ is 0, the result is same as using function MIN; when _P_ is 100, the result is same as function MAX. -- **algo_type** can only be input as `default` or `t-digest`, if it's not specified `default` will be used, i.e. `apercentile(column_name, 50)` is same as `apercentile(column_name, 50, "default")`. -- When `t-digest` is used, `t-digest` sampling is used to calculate. It can be used from version 2.2.0.0. 
- -**Nested query**: It can be used in both the outer query and inner query in a nested query. - -``` -taos> SELECT APERCENTILE(current, 20) FROM d1001; -apercentile(current, 20) | -============================ - 10.300000191 | -Query OK, 1 row(s) in set (0.000645s) +**Applicable nested query**: Inner query and Outer query -taos> select apercentile (count, 80, 'default') from stb1; - apercentile (c0, 80, 'default') | -================================== - 601920857.210056424 | -Query OK, 1 row(s) in set (0.012363s) +**More explanations**: -taos> select apercentile (count, 80, 't-digest') from stb1; - apercentile (c0, 80, 't-digest') | -=================================== - 605869120.966666579 | -Query OK, 1 row(s) in set (0.011639s) -``` +- If the input is NULL, the output is NULL +- Parameter `pos` can be an positive or negative integer; If it's positive, the starting position will be counted from the beginning of the string; if it's negative, the starting position will be counted from the end of the string. +- If `len` is not specified, it means from `pos` to the end. -### LAST_ROW +#### UPPER ``` -SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }; +SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The last row of a table or STable +**Description**: Convert the input string to upper case -**Return value type**: Same as the column being operated upon +**Return value type**: Same as input -**Applicable column types**: Any data type +**Applicable data types**: VARCHAR or NCHAR **Applicable table types**: table, STable -**More explanations**: +**Applicable nested query**: Inner query and Outer query -- When it's used against a STable, multiple rows with the same and largest timestamp may exist, in this case one of them is returned randomly and it's not guaranteed that the result is same if the query is run multiple times. -- Can't be used with `INTERVAL`. +**More explanations** -**Examples**: +- If the input value is NULL, the output is NULL too -``` - taos> SELECT LAST_ROW(current) FROM meters; - last_row(current) | - ======================= - 12.30000 | - Query OK, 1 row(s) in set (0.001238s) +### Conversion Functions - taos> SELECT LAST_ROW(current) FROM d1002; - last_row(current) | - ======================= - 10.30000 | - Query OK, 1 row(s) in set (0.001042s) -``` +This kind of functions convert from one data type to another one. -### INTERP [Since version 2.3.1] +#### CAST -``` -SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})]; +```sql +SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The value that matches the specified timestamp range is returned, if existing; or an interpolation value is returned. - -**Return value type**: Same as the column being operated upon - -**Applicable column types**: Numeric data types - -**Applicable table types**: table, STable, nested query +**Description**: It's used for type casting. The input parameter `expression` can be data columns, constants, scalar functions or arithmetic between them. -**More explanations** +**Return value type**: The type specified by parameter `type_name` -- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter. 
-- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input. -- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. If `RANGE` is not specified, then the timestamp of the first row that matches the filter condition is treated as timestamp1, the timestamp of the last row that matches the filter condition is treated as timestamp2. -- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified `EVERY` parameter. If `EVERY` parameter is not used, the time windows will be considered as no ending timestamp, i.e. there is only one time window from timestamp1. -- Interpolation is performed based on `FILL` parameter. No interpolation is performed if `FILL` is not used, that means either the original data that matches is returned or nothing is returned. -- `INTERP` can only be used to interpolate in single timeline. So it must be used with `group by tbname` when it's used on a STable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query. -- The result of `INTERP` is not influenced by `ORDER BY TIMESTAMP`, which impacts the output order only.. +**Applicable data types**: -**Examples**: Based on the `meters` schema used throughout the documents +- Parameter `expression` can be any data type except for JSON +- The output data type specified by `type_name` can only be one of BIGINT/VARCHAR(N)/TIMESTAMP/NCHAR(N)/BIGINT UNSIGNED -- Single point linear interpolation between "2017-07-14 18:40:00" and "2017-07-14 18:40:00: +**More explanations**: -``` - taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:40:00','2017-7-14 18:40:00') FILL(LINEAR); -``` +- Error will be reported for unsupported type casting +- NULL will be returned if the input value is NULL +- Some values of some supported data types may not be casted, below are known issues: + 1)When casting VARCHAR/NCHAR to BIGINT/BIGINT UNSIGNED, some characters may be treated as illegal, for example "a" may be converted to 0. 
+ 2)There may be overflow when casting signed integer or TIMESTAMP to unsigned BIGINT
+ 3)There may be overflow when casting unsigned BIGINT to BIGINT
+ 4)There may be overflow when casting FLOAT/DOUBLE to BIGINT or UNSIGNED BIGINT

-- Get original data every 5 seconds, no interpolation, between "2017-07-14 18:00:00" and "2017-07-14 19:00:00:
+#### TO_ISO8601

-```
- taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s);
+```sql
+SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause];
 ```

-- Linear interpolation every 5 seconds between "2017-07-14 18:00:00" and "2017-07-14 19:00:00:
+**Description**: The ISO8601 date/time format converted from a UNIX timestamp, plus the timezone of the client side system

-```
- taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR);
-```
+**Return value type**: VARCHAR

-- Backward interpolation every 5 seconds
+**Applicable column types**: UNIX timestamp constant or a column of TIMESTAMP type

-```
- taos> SELECT INTERP(current) FROM t1 EVERY(5s) FILL(NEXT);
-```
+**Applicable table types**: table, STable

-- Linear interpolation every 5 seconds between "2017-07-14 17:00:00" and "2017-07-14 20:00:00"
+**More explanations**:

-```
- taos> SELECT INTERP(current) FROM t1 where ts >= '2017-07-14 17:00:00' and ts <= '2017-07-14 20:00:00' RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR);
-```
+- If the input is a UNIX timestamp constant, the precision of the returned value is determined by the digits of the input timestamp
+- If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use

-### INTERP [Since version 2.0.15.0]
+#### TO_JSON

-```
-SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})];
+```sql
+SELECT TO_JSON(str_literal) FROM { tb_name | stb_name } [WHERE clause];
 ```

-**Description**: The value of a specific column that matches the specified time slice
+**Description**: Convert a JSON string to a JSON body.

-**Return value type**: Same as the column being operated upon
+**Return value type**: JSON

-**Applicable column types**: Numeric data type
+**Applicable column types**: JSON string, in the format like '{ "literal" : literal }'. '{}' is the NULL value. Keys in the string must be string constants, values can be constants of numeric types, bool, string or NULL. Escaping characters are not allowed in the JSON string.

**Applicable table types**: table, STable

-**More explanations**:
+**Applicable nested query**: Inner query and Outer query.

-- Time slice must be specified. If there is no data matching the specified time slice, interpolation is performed based on `FILL` parameter. Conditions such as tags or `tbname` can be used `Where` clause can be used to filter data.
-- The timestamp specified must be within the time range of the data rows of the table or STable. If it is beyond the valid time range, nothing is returned even with `FILL` parameter.
-- `INTERP` can be used to query only single time point once. `INTERP` can be used with `EVERY` to get the interpolation value every time interval.
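+A brief usage sketch (the table name `t1` is an illustrative assumption; the string literal follows the '{ "literal" : literal }' format described above):
+
+```sql
+-- convert a JSON-formatted string constant into a JSON value
+SELECT TO_JSON('{"k1": 7, "k2": "str", "k3": true}') FROM t1;
+```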
-- **Examples**: +#### TO_UNIXTIMESTAMP -``` - taos> SELECT INTERP(*) FROM meters WHERE ts='2017-7-14 18:40:00.004'; - interp(ts) | interp(current) | interp(voltage) | interp(phase) | - ========================================================================================== - 2017-07-14 18:40:00.004 | 9.84020 | 216 | 0.32222 | - Query OK, 1 row(s) in set (0.002652s) +```sql +SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause]; ``` -If there is no data corresponding to the specified timestamp, an interpolation value is returned if interpolation policy is specified by `FILL` parameter; or nothing is returned. +**Description**: UNIX timestamp converted from a string of date/time format -``` - taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005'; - Query OK, 0 row(s) in set (0.004022s) +**Return value type**: Long integer - taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005' FILL(PREV); - interp(ts) | interp(current) | interp(voltage) | interp(phase) | - ========================================================================================== - 2017-07-14 18:40:00.005 | 9.88150 | 217 | 0.32500 | - Query OK, 1 row(s) in set (0.003056s) -``` +**Applicable column types**: Constant or column of VARCHAR/NCHAR -Interpolation is performed every 5 milliseconds between `['2017-7-14 18:40:00', '2017-7-14 18:40:00.014']` +**Applicable table types**: table, STable -``` - taos> SELECT INTERP(current) FROM d636 WHERE ts>='2017-7-14 18:40:00' AND ts<='2017-7-14 18:40:00.014' EVERY(5a); - ts | interp(current) | - ================================================= - 2017-07-14 18:40:00.000 | 10.04179 | - 2017-07-14 18:40:00.010 | 10.16123 | - Query OK, 2 row(s) in set (0.003487s) -``` +**More explanations**: -### TAIL +- The input string must be compatible with ISO8601/RFC3339 standard, 0 will be returned if the string can't be converted +- The precision of the returned timestamp is same as the precision set for the current data base in use -``` -SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause]; -``` +### DateTime Functions -**Description**: The next _k_ rows are returned after skipping the last `offset_val` rows, NULL values are not ignored. `offset_val` is optional parameter. When it's not specified, the last _k_ rows are returned. When `offset_val` is used, the effect is same as `order by ts desc LIMIT k OFFSET offset_val`. +This kind of functiosn oeprate on timestamp data. NOW(), TODAY() and TIMEZONE() are executed only once even though they may occurr multiple times in a single SQL statement. -**Parameter value range**: k: [1,100] offset_val: [0,100] +#### NOW -**Return value type**: Same as the column being operated upon +```sql +SELECT NOW() FROM { tb_name | stb_name } [WHERE clause]; +SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior NOW(); +INSERT INTO tb_name VALUES (NOW(), ...); +``` -**Applicable column types**: Any data type except form timestamp, i.e. 
the primary key +**Description**: The current time of the client side system -**Applicable versions**: Since version 2.6.0.0 +**Return value type**: TIMESTAMP -**Examples**: +**Applicable column types**: TIMESTAMP only -``` -taos> select ts,dbig from tail2; - ts | dbig | -================================================== -2021-10-15 00:31:33.000 | 1 | -2021-10-17 00:31:31.000 | NULL | -2021-12-24 00:31:34.000 | 2 | -2022-01-01 08:00:05.000 | 19 | -2022-01-01 08:00:06.000 | NULL | -2022-01-01 08:00:07.000 | 9 | -Query OK, 6 row(s) in set (0.001952s) +**Applicable table types**: table, STable -taos> select tail(dbig,2,2) from tail2; -ts | tail(dbig,2,2) | -================================================== -2021-12-24 00:31:34.000 | 2 | -2022-01-01 08:00:05.000 | 19 | -Query OK, 2 row(s) in set (0.002307s) -``` +**More explanations**: -### UNIQUE +- Add and Subtract operation can be performed, for example NOW() + 1s, the time unit can be: + b(nanosecond), u(microsecond), a(millisecond)), s(second), m(minute), h(hour), d(day), w(week) +- The precision of the returned timestamp is same as the precision set for the current data base in use -``` -SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause]; +#### TIMEDIFF + +```sql +SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The values that occur the first time in the specified column. The effect is similar to `distinct` keyword, but it can also be used to match tags or timestamp. +**Description**: The difference between two timestamps, and rounded to the time unit specified by `time_unit` -**Return value type**: Same as the column or tag being operated upon +**Return value type**: Long Integer -**Applicable column types**: Any data types except for timestamp +**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of TIMESTAMP type -**Applicable versions**: Since version 2.6.0.0 +**Applicable table types**: table, STable **More explanations**: -- It can be used against table or STable, but can't be used together with time window, like `interval`, `state_window` or `session_window` . -- Considering the number of result sets is unpredictable, it's suggested to limit the distinct values under 100,000 to control the memory usage, otherwise error will be returned. - -**Examples**: - -``` -taos> select ts,voltage from unique1; - ts | voltage | -================================================== -2021-10-17 00:31:31.000 | 1 | -2022-01-24 00:31:31.000 | 1 | -2021-10-17 00:31:31.000 | 1 | -2021-12-24 00:31:31.000 | 2 | -2022-01-01 08:00:01.000 | 19 | -2021-10-17 00:31:31.000 | NULL | -2022-01-01 08:00:02.000 | NULL | -2022-01-01 08:00:03.000 | 9 | -Query OK, 8 row(s) in set (0.003018s) - -taos> select unique(voltage) from unique1; -ts | unique(voltage) | -================================================== -2021-10-17 00:31:31.000 | 1 | -2021-10-17 00:31:31.000 | NULL | -2021-12-24 00:31:31.000 | 2 | -2022-01-01 08:00:01.000 | 19 | -2022-01-01 08:00:03.000 | 9 | -Query OK, 5 row(s) in set (0.108458s) -``` - -## Scalar functions +- Time unit specified by `time_unit` can be: + 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day). 
+- The precision of the returned timestamp is same as the precision set for the current data base in use -### DIFF +#### TIMETRUNCATE ```sql -SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause]; +SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The different of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored. +**Description**: Truncate the input timestamp with unit specified by `time_unit` -**Return value type**: Same as the column being operated upon +**Return value type**: TIMESTAMP -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of timestamp **Applicable table types**: table, STable **More explanations**: -- The number of result rows is the number of rows subtracted by one, no output for the first row -- Since version 2.1.30, `DIFF` can be used on STable with `GROUP by tbname` -- Since version 2.6.0, `ignore_negative` parameter is supported +- Time unit specified by `time_unit` can be: + 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day). +- The precision of the returned timestamp is same as the precision set for the current data base in use -**Examples**: +#### TIMEZONE ```sql -taos> SELECT DIFF(current) FROM d1001; - ts | diff(current) | -================================================= -2018-10-03 14:38:15.000 | 2.30000 | -2018-10-03 14:38:16.800 | -0.30000 | -Query OK, 2 row(s) in set (0.001162s) +SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause]; ``` -### DERIVATIVE +**Description**: The timezone of the client side system -``` -SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause]; +**Return value type**: VARCHAR + +**Applicable column types**: None + +**Applicable table types**: table, STable + +#### TODAY + +```sql +SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause]; +SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior TODAY()]; +INSERT INTO tb_name VALUES (TODAY(), ...); ``` -**Description**: The derivative of a specific column. The time rage can be specified by parameter `time_interval`, the minimum allowed time range is 1 second (1s); the value of `ignore_negative` can be 0 or 1, 1 means negative values are ignored. +**Description**: The timestamp of 00:00:00 of the client side system -**Return value type**: Double precision floating point +**Return value type**: TIMESTAMP -**Applicable column types**: Data types except for timestamp, binary, nchar and bool +**Applicable column types**: TIMESTAMP only **Applicable table types**: table, STable **More explanations**: -- It is available from version 2.1.3.0, the number of result rows is the number of total rows in the time range subtracted by one, no output for the first row. -- It can be used together with `GROUP BY tbname` against a STable. 
+- Add and Subtract operation can be performed, for example NOW() + 1s, the time unit can be: + b(nanosecond), u(microsecond), a(millisecond)), s(second), m(minute), h(hour), d(day), w(week) +- The precision of the returned timestamp is same as the precision set for the current data base in use -**Examples**: +## Aggregate Functions -``` -taos> select derivative(current, 10m, 0) from t1; - ts | derivative(current, 10m, 0) | -======================================================== - 2021-08-20 10:11:22.790 | 0.500000000 | - 2021-08-20 11:11:22.791 | 0.166666620 | - 2021-08-20 12:11:22.791 | 0.000000000 | - 2021-08-20 13:11:22.792 | 0.166666620 | - 2021-08-20 14:11:22.792 | -0.666666667 | -Query OK, 5 row(s) in set (0.004883s) -``` +Aggregate functions return single result row for each group in the query result set. Groups are determined by `GROUP BY` clause or time window clause if they are used; or the whole result is considered a group if neither of them is used. -### SPREAD +### AVG ``` -SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]; +SELECT AVG(field_name) FROM tb_name [WHERE clause]; ``` -**Description**: The difference between the max and the min of a specific column +**Description**: Get the average value of a column in a table or STable -**Return value type**: Double precision floating point +**Return value type**: Double precision floating number -**Applicable column types**: Data types except for binary, nchar, and bool +**Applicable column types**: Numeric type **Applicable table types**: table, STable -**More explanations**: Can be used on a column of TIMESTAMP type, the result is the time range size. - -**Examples**: +### COUNT ``` -taos> SELECT SPREAD(voltage) FROM meters; - spread(voltage) | -============================ - 5.000000000 | -Query OK, 1 row(s) in set (0.001792s) - -taos> SELECT SPREAD(voltage) FROM d1001; - spread(voltage) | -============================ - 3.000000000 | -Query OK, 1 row(s) in set (0.000836s) +SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause]; ``` -### CEIL +**Description**: Get the number of rows or the number of non-null values in a table or a super table. -``` -SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**Return value type**: Long integer INT64 -**Description**: The rounded up value of a specific column +**Applicable column types**: All -**Return value type**: Same as the column being used +**Applicable table types**: table, super table, sub table -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**More explanation**: -**Applicable table types**: table, STable +- Wildcard (\*) is used to represent all columns. The `COUNT` function is used to get the total number of all rows. +- The number of non-NULL values will be returned if this function is used on a specific column. -**Applicable nested query**: Inner query and outer query +### ELAPSED -**More explanations**: +```mysql +SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]]; +``` -- Can't be used on any tags of any type -- Arithmetic operation can be performed on the result of `ceil` function -- Can't be used with aggregate functions +**Description**:`elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calcualted time length within each time window. 
+### ELAPSED -**More explanations**: ```mysql -- Can't be used on any tags of any type +SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]]; -- Arithmetic operation can be performed on the result of `ceil` function -- Can't be used with aggregate functions +``` +**Description**: `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Note that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
-### FLOOR
+**Return value type**: Double - -``` -SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; -``` +**Applicable column types**: Timestamp -**Description**: The rounded down value of a specific column +**Applicable tables**: table, STable, outer query in nested query -**More explanations**: The restrictions are same as those of the `CEIL` function. +**Explanations**:
-### ROUND
+- `field_name` parameter can only be the first column of a table, i.e. the timestamp primary key. +- The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default time unit. +- It can be used with `INTERVAL` to get the valid time length within each time window. Note that the return value is the same as the time window length for all time windows except for the first and the last one. +- `order by asc/desc` has no effect on the result. +- `group by tbname` must be used together when `elapsed` is used against a STable. +- `group by` must NOT be used together when `elapsed` is used against a table or sub table. +- When used in nested query, it's only applicable when the inner query outputs an implicit timestamp column as the primary key. For example, `select elapsed(ts) from (select diff(value) from sub1)` is legal usage while `select elapsed(ts) from (select * from sub1)` is not. +- It can't be used with `leastsquares`, `diff`, `derivative`, `top`, `bottom`, `last_row`, `interp`. +
+### LEASTSQUARES ``` -SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; +SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause]; ``` -**Description**: The rounded value of a specific column. - -**More explanations**: The restrictions are same as `CEIL` function. -
-### CSUM
+**Description**: The linear regression function of the specified column and the timestamp column (primary key), `start_val` is the initial value and `step_val` is the step value. -```sql - SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] -``` +**Return value type**: A string in the format of "(slope, intercept)" -**Description**: The cumulative sum of each row for a specific column. The number of output rows is same as that of the input rows. +**Applicable column types**: Numeric types -**Return value type**: Long integer for integers; Double for floating points. Timestamp is returned for each row. +**Applicable table types**: table only -**Applicable data types**: Data types except for timestamp, binary, nchar, and bool
+### MODE
-**Applicable table types**: table, STable +``` +SELECT MODE(field_name) FROM tb_name [WHERE clause]; +``` -**Applicable nested query**: Inner query and Outer query +**Description**: The value which has the highest frequency of occurrence. NULL is returned if there are multiple values with the highest frequency of occurrence. It can't be used on the timestamp column.
-**More explanations**: +**Return value type**:Same as the data type of the column being operated upon -- Can't be used on tags when it's used on STable -- Arithmetic operation can't be performed on the result of `csum` function -- Can only be used with aggregate functions -- `Group by tbname` must be used together on a STable to force the result on a single timeline +**Applicable column types**:Data types except for timestamp -**Applicable versions**: Since 2.3.0.x +**More explanations**:Considering the number of returned result set is unpredictable, it's suggested to limit the number of unique values to 100,000, otherwise error will be returned. -### MAVG +### SPREAD -```sql - SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The moving average of continuous _k_ values of a specific column. If the number of input rows is less than _k_, nothing is returned. The applicable range is _k_ is [1,1000]. +**Description**: The difference between the max and the min of a specific column **Return value type**: Double precision floating point -**Applicable data types**: Data types except for timestamp, binary, nchar, and bool - -**Applicable nested query**: Inner query and Outer query +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**More explanations**: - -- Arithmetic operation can't be performed on the result of `MAVG`. -- Can only be used with data columns, can't be used with tags. -- Can't be used with aggregate functions. -- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline. - -**Applicable versions**: Since 2.3.0.x +**More explanations**: Can be used on a column of TIMESTAMP type, the result is the time range size. -### SAMPLE +### STDDEV -```sql - SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT STDDEV(field_name) FROM tb_name [WHERE clause]; ``` -**Description**: _k_ sampling values of a specific column. 
The applicable range of _k_ is [1,10000] -**Return value type**: Same as the column being operated plus the associated timestamp -**Applicable data types**: Any data type except for tags of STable **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**More explanations**: -- Arithmetic operation can't be operated on the result of `SAMPLE` function -- Must be used with `Group by tbname` when it's used on a STable to force the result on each single timeline -**Applicable versions**: Since 2.3.0.x
-### ASIN
+### SUM -```sql -SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT SUM(field_name) FROM tb_name [WHERE clause]; ``` -**Description**: The anti-sine of a specific column +**Description**: The sum of a specific column in a table or STable -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Double precision floating number or long integer -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 -**More explanations**: -- Can't be used with tags -- Can't be used with aggregate functions
-### ACOS
+### HYPERLOGLOG -```sql -SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The anti-cosine of a specific column -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Description**: The cardinality of a specific column is returned by using the hyperloglog algorithm. -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Return value type**: Integer -**Applicable table types**: table, STable +**Applicable column types**: Any data type -**Applicable nested query**: Inner query and Outer query +**More explanations**: The benefit of using the hyperloglog algorithm is that the memory usage is under control when the data volume is huge. However, when the data volume is very small, the result may not be accurate, so it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case. -**Applicable versions**: From 2.6.0.0
+### HISTOGRAM
-**More explanations**: +``` +SELECT HISTOGRAM(field_name, bin_type, bin_description, normalized) FROM tb_name [WHERE clause]; +``` -- Can't be used with tags -- Can't be used with aggregate functions +**Description**: Returns the count of data points in user-specified ranges.
-### ATAN
+**Return value type**: Double or INT64, depending on the normalized parameter setting. -```sql -SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause] -``` +**Applicable column types**: Numeric types. -**Description**: anti-tangent of a specific column +**Applicable table types**: table, STable -**Description**: The anti-cosine of a specific column **Explanations**: -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +1. bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin". +2. bin_description: parameter to describe how to generate buckets, can be in the following JSON formats for each bin_type respectively: -**Applicable data types**: Data types except for timestamp, binary, nchar, bool + - "user_input": "[1, 3, 5, 7]": User specified bin values. -**Applicable table types**: table, STable + - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" + "start" - bin starting point. + "width" - bin offset. + "count" - number of bins generated. + "infinity" - whether to add (-inf, inf) as start/end point in generated set of bins. + The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]. -**Applicable nested query**: Inner query and Outer query + - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" + "start" - bin starting point. + "factor" - exponential factor of bin offset. + "count" - number of bins generated. + "infinity" - whether to add (-inf, inf) as start/end point in generated range of bins. + The above "log_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]. -**Applicable versions**: From 2.6.0.0 +3. normalized: setting to 1/0 to turn on/off result normalization.
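+Two illustrative calls, given as a sketch only (assuming a numeric `voltage` column in the `meters` example schema; the bin descriptions follow the formats above):
+
+```sql
+SELECT HISTOGRAM(voltage, 'user_input', '[210, 220, 230, 240]', 0) FROM meters;
+SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 200.0, "width": 10.0, "count": 5, "infinity": true}', 1) FROM meters;
+```
+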
-**More explanations**:
## Selector Functions
-- Can't be used with tags -- Can't be used with aggregate functions +Selector functions choose one or more rows in the query result set to return according to the semantics. You can specify to output the ts column and other columns including tbname and tags so that you can easily know which rows the selected values belong to.
-### SIN
+### APERCENTILE -```sql -SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT APERCENTILE(field_name, P[, algo_type]) +FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The sine of a specific column - -**Description**: The anti-cosine of a specific column +**Description**: Similar to `PERCENTILE`, but a simulated result is returned -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Double precision floating point -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 +**More explanations** -**More explanations**: +- _P_ is in range [0,100], when _P_ is 0, the result is the same as using function MIN; when _P_ is 100, the result is the same as function MAX. -- Can't be used with tags +- **algo_type** can only be input as `default` or `t-digest`, if it's not specified `default` will be used, i.e. `apercentile(column_name, 50)` is the same as `apercentile(column_name, 50, "default")`. -- Can't be used with aggregate functions +- When `t-digest` is used, `t-digest` sampling is used for the calculation. +**Nested query**: It can be used in both the outer query and inner query in a nested query.
-### COS
+### BOTTOM -```sql -SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The cosine of a specific column - -**Description**: The anti-cosine of a specific column +**Description**: The least _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column and counting all of them would exceed the upper limit _k_, then a part of them will be returned randomly.
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Same as the column being operated upon -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query - -**Applicable versions**: From 2.6.0.0 - **More explanations**: -- Can't be used with tags -- Can't be used with aggregate functions +- _k_ must be in range [1,100] +- The timestamp associated with the selected values are returned too +- Can't be used with `FILL` -### TAN +### FIRST -```sql -SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The tangent of a specific column - -**Description**: The anti-cosine of a specific column +**Description**: The first non-null value of a specific column in a table or STable -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Same as the column being operated upon -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Any data type **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query - -**Applicable versions**: From 2.6.0.0 - **More explanations**: -- Can't be used with tags -- Can't be used with aggregate functions +- FIRST(\*) can be used to get the first non-null value of all columns +- NULL will be returned if all the values of the specified column are all NULL +- A result will NOT be returned if all the columns in the result set are all NULL -### POW +### INTERP -```sql -SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})]; ``` -**Description**: The power of a specific column with `power` as the index - -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL - -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Description**: The value that matches the specified timestamp range is returned, if existing; or an interpolation value is returned. -**Applicable table types**: table, STable +**Return value type**: Same as the column being operated upon -**Applicable nested query**: Inner query and Outer query +**Applicable column types**: Numeric data types -**Applicable versions**: From 2.6.0.0 +**Applicable table types**: table, STable, nested query -**More explanations**: +**More explanations** -- Can't be used with tags -- Can't be used with aggregate functions +- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter. +- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input. +- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. 
If `RANGE` is not specified, then the timestamp of the first row that matches the filter condition is treated as timestamp1, the timestamp of the last row that matches the filter condition is treated as timestamp2. +- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. If the `EVERY` parameter is not used, there is only one time window, starting from timestamp1 with no ending timestamp. +- Interpolation is performed based on the `FILL` parameter. No interpolation is performed if `FILL` is not used, that means either the original data that matches is returned or nothing is returned. +- `INTERP` can only be used to interpolate in a single timeline. So it must be used with `group by tbname` when it's used on a STable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query. +- The result of `INTERP` is not influenced by `ORDER BY TIMESTAMP`, which impacts the output order only.
-### LOG
+### LAST -```sql -SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The log of a specific with `base` as the radix +**Description**: The last non-NULL value of a specific column in a table or STable -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Same as the column being operated upon -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Any data type **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 +**More explanations**: -**More explanations**: +- LAST(\*) can be used to get the last non-NULL value of all columns -- Can't be used with tags +- If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are all NULL, no result will be returned. -- Can't be used with aggregate functions +- When it's used on a STable, if there are multiple values with the same timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times.
-### ABS
+### LAST_ROW -```sql -SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }; ``` -**Description**: The absolute of a specific column +**Description**: The last row of a table or STable -**Return value type**: UBIGINT if the input value is integer; DOUBLE if the input value is FLOAT/DOUBLE +**Return value type**: Same as the column being operated upon -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Any data type **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 +**More explanations**: -**More explanations**: +- When it's used against a STable, multiple rows with the same and largest timestamp may exist, in this case one of them is returned randomly and it's not guaranteed that the result is the same if the query is run multiple times. -- Can't be used with tags -- Can't be used with aggregate functions +- Can't be used with `INTERVAL`.
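+For example, a sketch of fetching the most recent row of one column per subtable (assuming the `meters` example schema and the `group by tbname` pattern used elsewhere in these docs):
+
+```sql
+SELECT LAST_ROW(voltage) FROM meters GROUP BY tbname;
+```
+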
-### SQRT +### MAX -```sql -SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause] +``` +SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The square root of a specific column +**Description**: The maximum value of a specific column of a table or STable -**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL +**Return value type**: Same as the data type of the column being operated upon -**Applicable data types**: Data types except for timestamp, binary, nchar, bool +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query +### MIN -**Applicable versions**: From 2.6.0.0 +``` +SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; +``` -**More explanations**: +**Description**: The minimum value of a specific column in a table or STable -- Can't be used with tags -- Can't be used with aggregate functions +**Return value type**: Same as the data type of the column being operated upon -### CAST +**Applicable column types**: Numeric types -```sql -SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause] -``` +**Applicable table types**: table, STable -**Description**: It's used for type casting. The input parameter `expression` can be data columns, constants, scalar functions or arithmetic between them. Can't be used with tags, and can only be used in `select` clause. +### PERCENTILE -**Return value type**: The type specified by parameter `type_name` +``` +SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause]; +``` -**Applicable data types**: +**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned. -- Parameter `expression` can be any data type except for JSON, more specifically it can be any of BOOL/TINYINT/SMALLINT/INT/BIGINT/FLOAT/DOUBLE/BINARY(M)/TIMESTAMP/NCHAR(M)/TINYINT UNSIGNED/SMALLINT UNSIGNED/INT UNSIGNED/BIGINT UNSIGNED -- The output data type specified by `type_name` can only be one of BIGINT/BINARY(N)/TIMESTAMP/NCHAR(N)/BIGINT UNSIGNED +**Return value type**: Double precision floating point -**Applicable versions**: From 2.6.0.0 +**Applicable column types**: Numeric types -**More explanations**: +**Applicable table types**: table -- Error will be reported for unsupported type casting -- NULL will be returned if the input value is NULL -- Some values of some supported data types may not be casted, below are known issues: - 1)When casting BINARY/NCHAR to BIGINT/BIGINT UNSIGNED, some characters may be treated as illegal, for example "a" may be converted to 0. - 2)There may be overflow when casting singed integer or TIMESTAMP to unsigned BIGINT - 3)There may be overflow when casting unsigned BIGINT to BIGINT - 4)There may be overflow when casting FLOAT/DOUBLE to BIGINT or UNSIGNED BIGINT +**More explanations**: _P_ is in range [0,100], when _P_ is 0, the result is same as using function MIN; when _P_ is 100, the result is same as function MAX. -### CONCAT +### TAIL -```sql -SELECT CONCAT(str1|column1, str2|column2, ...) 
FROM { tb_name | stb_name } [WHERE clause] ``` +``` +SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause]; ``` -**Description**: The concatenation result of two or more strings, the number of strings to be concatenated is at least 2 and at most 8 +**Description**: The next _k_ rows are returned after skipping the last `offset_val` rows, NULL values are not ignored. `offset_val` is an optional parameter. When it's not specified, the last _k_ rows are returned. When `offset_val` is used, the effect is the same as `order by ts desc LIMIT k OFFSET offset_val`. -**Return value type**: Same as the columns being operated, BINARY or NCHAR; or NULL if all the input are NULL +**Parameter value range**: k: [1,100] offset_val: [0,100] -**Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns +**Return value type**: Same as the column being operated upon -**Applicable table types**: table, STable +**Applicable column types**: Any data type except for timestamp, i.e. the primary key -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0
-### CONCAT_WS
+### TOP ``` -SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause] +SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The concatenation result of two or more strings with separator, the number of strings to be concatenated is at least 3 and at most 9 +**Description**: The greatest _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column and counting all of them would exceed the upper limit _k_, then a part of them will be returned randomly. -**Return value type**: Same as the columns being operated, BINARY or NCHAR; or NULL if all the input are NULL +**Return value type**: Same as the column being operated upon -**Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 +**More explanations**: -**More explanations**: +- _k_ must be in range [1,100] -- If the value of `separator` is NULL, the output is NULL. If the value of `separator` is not NULL but other input are all NULL, the output is empty string. +- The timestamp associated with the selected values are returned too +- Can't be used with `FILL`
-### LENGTH
+### UNIQUE ``` -SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause]; ``` -**Description**: The length in bytes of a string -**Return value type**: Integer +**Description**: The values that occur the first time in the specified column. The effect is similar to the `distinct` keyword, but it can also be used to match tags or timestamp. -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Return value type**: Same as the column or tag being operated upon **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query +**Applicable column types**: Any data types except for timestamp -**Applicable versions**: From 2.6.0.0 +**More explanations**: -**More explanations** +- It can be used against table or STable, but can't be used together with time window, like `interval`, `state_window` or `session_window`. +- Considering the number of result sets is unpredictable, it's suggested to limit the distinct values under 100,000 to control the memory usage, otherwise an error will be returned.
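+For example, listing each distinct recorded value of a column together with its first occurrence (assuming the `d1001` example table used elsewhere in these docs):
+
+```sql
+SELECT UNIQUE(voltage) FROM d1001;
+```
+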
-**More explanations**
## Time-Series Specific Functions
-- If the input value is NULL, the output is NULL too +TDengine provides a set of time-series specific functions to better meet the requirements in querying time-series data. In general databases, similar functionalities can only be achieved with much more complex syntax and much worse performance. TDengine provides these functionalities in built-in functions so that the burden on the user side is minimized.
-### CHAR_LENGTH
+### CSUM - ``` -SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] +```sql + SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: The length in number of characters of a string +**Description**: The cumulative sum of each row for a specific column. The number of output rows is the same as that of the input rows. -**Return value type**: Integer +**Return value type**: Long integer for integers; Double for floating points. Timestamp is returned for each row. -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable data types**: Numeric types **Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 - -**More explanations** - -- If the input value is NULL, the output is NULL too +**More explanations**: +- Arithmetic operation can't be performed on the result of `csum` function +- Can only be used with aggregate functions +- `Group by tbname` must be used together on a STable to force the result on a single timeline
-### LOWER
+### DERIVATIVE ``` -SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause]; ``` -**Description**: Convert the input string to lower case +**Description**: The derivative of a specific column. The time range can be specified by parameter `time_interval`, the minimum allowed time range is 1 second (1s); the value of `ignore_negative` can be 0 or 1, 1 means negative values are ignored. -**Return value type**: Same as input +**Return value type**: Double precision floating point -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query - -**Applicable versions**: From 2.6.0.0 - -**More explanations** +**More explanations**: -- If the input value is NULL, the output is NULL too +- The number of result rows is the number of total rows in the time range subtracted by one, no output for the first row. +- It can be used together with `GROUP BY tbname` against a STable.
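+For example, the per-second rate of change of `current`, ignoring negative results (assuming the `d1001` example table used elsewhere in these docs):
+
+```sql
+SELECT DERIVATIVE(current, 1s, 1) FROM d1001;
+```
+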
-### UPPER
+### DIFF - ``` -SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause] +```sql +SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause]; ``` -**Description**: Convert the input string to upper case +**Description**: The difference of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored. -**Return value type**: Same as input +**Return value type**: Same as the column being operated upon -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query - -**Applicable versions**: From 2.6.0.0 - -**More explanations** +**More explanations**: -- If the input value is NULL, the output is NULL too +- The number of result rows is the number of rows subtracted by one, no output for the first row +- It can be used on STable with `GROUP by tbname`
-### LTRIM
+### IRATE ``` -SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] +SELECT IRATE(field_name) FROM tb_name WHERE clause; ``` -**Description**: Remove the left leading blanks of a string +**Description**: The instantaneous rate of a specific column. The last two samples in the specified time range are used to calculate the instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values. -**Return value type**: Same as input +**Return value type**: Double precision floating number -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable column types**: Numeric types **Applicable table types**: table, STable -**Applicable nested query**: Inner query and Outer query - -**Applicable versions**: From 2.6.0.0 - -**More explanations** +**More explanations**: -- If the input value is NULL, the output is NULL too +- It can be used on a STable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable.
-### RTRIM
+### MAVG - ``` -SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] +```sql + SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ``` -**Description**: Remove the right tailing blanks of a string +**Description**: The moving average of continuous _k_ values of a specific column. If the number of input rows is less than _k_, nothing is returned. The applicable range of _k_ is [1,1000]. -**Return value type**: Same as input +**Return value type**: Double precision floating point -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable data types**: Numeric types **Applicable table types**: table, STable +**Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 +**Applicable table types**: table, STable -**More explanations** +**More explanations**: -- If the input value is NULL, the output is NULL too +- Arithmetic operation can't be performed on the result of `MAVG`.
-### SUBSTR
+- Can't be used with aggregate functions. - ``` -SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause] +- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline.
+### SAMPLE
-**Description**: The sub-string starting from `pos` with length of `len` from the original string `str` +```sql + SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ``` -**Return value type**: Same as input +**Description**: _k_ sampling values of a specific column.
The applicable range of _k_ is [1,10000] -**Return value type**: Same as input +**Return value type**: Same as the column being operated plus the associated timestamp -**Applicable data types**: BINARY or NCHAR, can't be used on tags +**Applicable data types**: Any data type except for tags of STable **Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query -**Applicable versions**: From 2.6.0.0 - **More explanations**: -- If the input is NULL, the output is NULL -- Parameter `pos` can be an positive or negative integer; If it's positive, the starting position will be counted from the beginning of the string; if it's negative, the starting position will be counted from the end of the string. -- If `len` is not specified, it means from `pos` to the end. +- Arithmetic operation can't be operated on the result of `SAMPLE` function +- Must be used with `Group by tbname` when it's used on a STable to force the result on each single timeline ### STATECOUNT @@ -1552,45 +1162,17 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau **Return value type**: Integer -**Applicable data types**: Data types excpet for timestamp, binary, nchar, bool +**Applicable data types**: Numeric types **Applicable table types**: table, STable **Applicable nested query**: Outer query only -**Applicable versions**: From 2.6.0.0 - **More explanations**: - Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline] - Can't be used with window operation, like interval/state_window/session_window -**Examples**: - -``` -taos> select ts,dbig from statef2; - ts | dbig | -======================================================== -2021-10-15 00:31:33.000000000 | 1 | -2021-10-17 00:31:31.000000000 | NULL | -2021-12-24 00:31:34.000000000 | 2 | -2022-01-01 08:00:05.000000000 | 19 | -2022-01-01 08:00:06.000000000 | NULL | -2022-01-01 08:00:07.000000000 | 9 | -Query OK, 6 row(s) in set (0.002977s) - -taos> select stateCount(dbig,GT,2) from statef2; -ts | dbig | statecount(dbig,gt,2) | -================================================================================ -2021-10-15 00:31:33.000000000 | 1 | -1 | -2021-10-17 00:31:31.000000000 | NULL | NULL | -2021-12-24 00:31:34.000000000 | 2 | -1 | -2022-01-01 08:00:05.000000000 | 19 | 1 | -2022-01-01 08:00:06.000000000 | NULL | NULL | -2022-01-01 08:00:07.000000000 | 9 | 2 | -Query OK, 6 row(s) in set (0.002791s) -``` - ### STATEDURATION ``` @@ -1607,326 +1189,65 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W **Return value type**: Integer -**Applicable data types**: Data types excpet for timestamp, binary, nchar, bool +**Applicable data types**: Numeric types **Applicable table types**: table, STable **Applicable nested query**: Outer query only -**Applicable versions**: From 2.6.0.0 - **More explanations**: - Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline] - Can't be used with window operation, like interval/state_window/session_window -**Examples**: - -``` -taos> select ts,dbig from statef2; - ts | dbig | -======================================================== -2021-10-15 00:31:33.000000000 | 1 | -2021-10-17 00:31:31.000000000 | NULL | -2021-12-24 00:31:34.000000000 | 2 | -2022-01-01 08:00:05.000000000 | 19 | -2022-01-01 08:00:06.000000000 | NULL | -2022-01-01 08:00:07.000000000 | 9 | -Query OK, 6 row(s) in set (0.002407s) - -taos> select 
stateDuration(dbig,GT,2) from statef2; -ts | dbig | stateduration(dbig,gt,2) | -=================================================================================== -2021-10-15 00:31:33.000000000 | 1 | -1 | -2021-10-17 00:31:31.000000000 | NULL | NULL | -2021-12-24 00:31:34.000000000 | 2 | -1 | -2022-01-01 08:00:05.000000000 | 19 | 0 | -2022-01-01 08:00:06.000000000 | NULL | NULL | -2022-01-01 08:00:07.000000000 | 9 | 2 | -Query OK, 6 row(s) in set (0.002613s) -``` - -## Time Functions - -Since version 2.6.0.0, below time related functions can be used in TDengine. - -### NOW - -```sql -SELECT NOW() FROM { tb_name | stb_name } [WHERE clause]; -SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior NOW(); -INSERT INTO tb_name VALUES (NOW(), ...); -``` - -**Description**: The current time of the client side system - -**Return value type**: TIMESTAMP - -**Applicable column types**: TIMESTAMP only - -**Applicable table types**: table, STable - -**More explanations**: - -- Add and Subtract operation can be performed, for example NOW() + 1s, the time unit can be: - b(nanosecond), u(microsecond), a(millisecond)), s(second), m(minute), h(hour), d(day), w(week) -- The precision of the returned timestamp is same as the precision set for the current data base in use - -**Examples**: - -```sql -taos> SELECT NOW() FROM meters; - now() | -========================== - 2022-02-02 02:02:02.456 | -Query OK, 1 row(s) in set (0.002093s) - -taos> SELECT NOW() + 1h FROM meters; - now() + 1h | -========================== - 2022-02-02 03:02:02.456 | -Query OK, 1 row(s) in set (0.002093s) - -taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < NOW(); - count(voltage) | -============================= - 5 | -Query OK, 5 row(s) in set (0.004475s) +### TWA -taos> INSERT INTO d1001 VALUES (NOW(), 10.2, 219, 0.32); -Query OK, 1 of 1 row(s) in database (0.002210s) ``` - -### TODAY - -```sql -SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause]; -SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operatior TODAY()]; -INSERT INTO tb_name VALUES (TODAY(), ...); +SELECT TWA(field_name) FROM tb_name WHERE clause; ``` -**Description**: The timestamp of 00:00:00 of the client side system +**Description**: Time weighted average on a specific column within a time range -**Return value type**: TIMESTAMP +**Return value type**: Double precision floating number -**Applicable column types**: TIMESTAMP only +**Applicable column types**: Numeric types **Applicable table types**: table, STable **More explanations**: -- Add and Subtract operation can be performed, for example NOW() + 1s, the time unit can be: - b(nanosecond), u(microsecond), a(millisecond)), s(second), m(minute), h(hour), d(day), w(week) -- The precision of the returned timestamp is same as the precision set for the current data base in use - -**Examples**: - -```sql -taos> SELECT TODAY() FROM meters; - today() | -========================== - 2022-02-02 00:00:00.000 | -Query OK, 1 row(s) in set (0.002093s) - -taos> SELECT TODAY() + 1h FROM meters; - today() + 1h | -========================== - 2022-02-02 01:00:00.000 | -Query OK, 1 row(s) in set (0.002093s) - -taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < TODAY(); - count(voltage) | -============================= - 5 | -Query OK, 5 row(s) in set (0.004475s) - -taos> INSERT INTO d1001 VALUES (TODAY(), 10.2, 219, 0.32); -Query OK, 1 of 1 row(s) in database (0.002210s) -``` - -### TIMEZONE - -```sql -SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause]; -``` - -**Description**: 
The timezone of the client side system - -**Return value type**: BINARY - -**Applicable column types**: None +- It can be used on a STable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable. -**Applicable table types**: table, STable
+## System Information Functions
-**Examples**:
+### DATABASE
-```sql -taos> SELECT TIMEZONE() FROM meters; - timezone() | -================================= - UTC (UTC, +0000) | -Query OK, 1 row(s) in set (0.002093s) ``` -
-### TO_ISO8601
- -```sql -SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause]; +SELECT DATABASE(); ``` -**Description**: The ISO8601 date/time format converted from a UNIX timestamp, plus the timezone of the client side system - -**Return value type**: BINARY - -**Applicable column types**: TIMESTAMP, constant or a column - -**Applicable table types**: table, STable - -**More explanations**: - -- If the input is UNIX timestamp constant, the precision of the returned value is determined by the digits of the input timestamp -- If the input is a column of TIMESTAMP type, The precision of the returned value is same as the precision set for the current data base in use - -**Examples**: +**Description**: Returns the current database being used. If the user didn't specify a database at login and hasn't used the `USE` SQL command to switch databases, this function returns NULL.
+### CLIENT_VERSION
-```sql -taos> SELECT TO_ISO8601(1643738400) FROM meters; - to_iso8601(1643738400) | -============================== - 2022-02-02T02:00:00+0800 | -taos> SELECT TO_ISO8601(ts) FROM meters; - to_iso8601(ts) | -============================== - 2022-02-02T02:00:00+0800 | - 2022-02-02T02:00:00+0800 | - 2022-02-02T02:00:00+0800 | ``` -
-### TO_UNIXTIMESTAMP
- -```sql -SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause]; +SELECT CLIENT_VERSION(); ``` -**Description**: UNIX timestamp converted from a string of date/time format - -**Return value type**: Long integer - -**Applicable column types**: Constant or column of BINARY/NCHAR - -**Applicable table types**: table, STable - -**More explanations**: - -- The input string must be compatible with ISO8601/RFC3339 standard, 0 will be returned if the string can't be converted -- The precision of the returned timestamp is same as the precision set for the current data base in use +**Description**: Returns the client version.
+### SERVER_VERSION
-**Examples**: - -```sql -taos> SELECT TO_UNIXTIMESTAMP("2022-02-02T02:00:00.000Z") FROM meters; -to_unixtimestamp("2022-02-02T02:00:00.000Z") | -============================================== - 1643767200000 | -taos> SELECT TO_UNIXTIMESTAMP(col_binary) FROM meters; - to_unixtimestamp(col_binary) | -======================================== - 1643767200000 | - 1643767200000 | - 1643767200000 | ``` -
-### TIMETRUNCATE
- -```sql -SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause]; +SELECT SERVER_VERSION(); ``` -**Description**: Truncate the input timestamp with unit specified by `time_unit`\ - -**Return value type**: TIMESTAMP\ - -**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of timestamp - -**Applicable table types**: table, STable - -**More explanations**: - -- Time unit specified by `time_unit` can be: - 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day).
-- The precision of the returned timestamp is same as the precision set for the current data base in use - -**Examples**: +**Description**: Returns the server version. -```sql -taos> SELECT TIMETRUNCATE(1643738522000, 1h) FROM meters; - timetruncate(1643738522000, 1h) | -=================================== - 2022-02-02 02:00:00.000 | -Query OK, 1 row(s) in set (0.001499s)
+### SERVER_STATUS
-taos> SELECT TIMETRUNCATE("2022-02-02 02:02:02", 1h) FROM meters; - timetruncate("2022-02-02 02:02:02", 1h) | -=========================================== - 2022-02-02 02:00:00.000 | -Query OK, 1 row(s) in set (0.003903s) ``` -taos> SELECT TIMETRUNCATE(ts, 1h) FROM meters; - timetruncate(ts, 1h) | -========================== - 2022-02-02 02:00:00.000 | - 2022-02-02 02:00:00.000 | - 2022-02-02 02:00:00.000 | -Query OK, 3 row(s) in set (0.003903s) ```
-### TIMEDIFF
-```sql -SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause]; +SELECT SERVER_STATUS(); ``` -**Description**: The difference between two timestamps, and rounded to the time unit specified by `time_unit` -**Return value type**: Long Integer -**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of TIMESTAMP type -**Applicable table types**: table, STable -**More explanations**: -- Time unit specified by `time_unit` can be: - 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day). -- The precision of the returned timestamp is same as the precision set for the current data base in use -**Applicable versions**:Since version 2.6.0.0 -**Examples**: -```sql -taos> SELECT TIMEDIFF(1643738400000, 1643742000000) FROM meters; - timediff(1643738400000, 1643742000000) | -========================================= - 3600000 | -Query OK, 1 row(s) in set (0.002553s) -taos> SELECT TIMEDIFF(1643738400000, 1643742000000, 1h) FROM meters; - timediff(1643738400000, 1643742000000, 1h) | -============================================= - 1 | -Query OK, 1 row(s) in set (0.003726s) -taos> SELECT TIMEDIFF("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) FROM meters; - timediff("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) | -============================================================= - 1 | -Query OK, 1 row(s) in set (0.001937s) -taos> SELECT TIMEDIFF(ts_col1, ts_col2, 1h) FROM meters; - timediff(ts_col1, ts_col2, 1h) | -=================================== - 1 | -Query OK, 1 row(s) in set (0.001937s) -``` +**Description**: Returns the server's status.
diff --git a/docs-en/12-taos-sql/12-keywords.md b/docs-en/12-taos-sql/12-keywords.md index 8f045f48019e419d21d3bd22f432a024551c585c..ed0c96b4e4d94dd70da1c3778f4129bd34daed62 100644 --- a/docs-en/12-taos-sql/12-keywords.md +++ b/docs-en/12-taos-sql/12-keywords.md @@ -56,6 +56,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam Get the table name and tag values of all subtables in a STable. ```mysql SELECT TBNAME, location FROM meters; +``` Count the number of subtables in a STable.
```mysql diff --git a/example/src/tmq.c b/example/src/tmq.c index 7e4de21f2eeeedde3b252bc9eae407fd3f1cc7d9..e61ad69e6bf36b58422524b668d24ba818700308 100644 --- a/example/src/tmq.c +++ b/example/src/tmq.c @@ -165,7 +165,6 @@ tmq_t* build_consumer() { tmq_conf_set(conf, "group.id", "tg2"); tmq_conf_set(conf, "td.connect.user", "root"); tmq_conf_set(conf, "td.connect.pass", "taosdata"); - /*tmq_conf_set(conf, "td.connect.db", "abc1");*/ tmq_conf_set(conf, "msg.with.table.name", "true"); tmq_conf_set(conf, "enable.auto.commit", "false"); tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL); @@ -191,20 +190,18 @@ void basic_consume_loop(tmq_t* tmq, tmq_list_t* topics) { return; } int32_t cnt = 0; - /*clock_t startTime = clock();*/ while (running) { TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 0); if (tmqmessage) { cnt++; + msg_process(tmqmessage); + if (cnt >= 2) break; /*printf("get data\n");*/ - /*msg_process(tmqmessage);*/ taos_free_result(tmqmessage); /*} else {*/ /*break;*/ } } - /*clock_t endTime = clock();*/ - /*printf("log cnt: %d %f s\n", cnt, (double)(endTime - startTime) / CLOCKS_PER_SEC);*/ err = tmq_consumer_close(tmq); if (err) @@ -253,39 +250,6 @@ void sync_consume_loop(tmq_t* tmq, tmq_list_t* topics) { fprintf(stderr, "%% Consumer closed\n"); } -void perf_loop(tmq_t* tmq, tmq_list_t* topics) { - tmq_resp_err_t err; - - if ((err = tmq_subscribe(tmq, topics))) { - fprintf(stderr, "%% Failed to start consuming topics: %s\n", tmq_err2str(err)); - printf("subscribe err\n"); - return; - } - int32_t batchCnt = 0; - int32_t skipLogNum = 0; - clock_t startTime = clock(); - while (running) { - TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 500); - if (tmqmessage) { - batchCnt++; - /*skipLogNum += tmqGetSkipLogNum(tmqmessage);*/ - /*msg_process(tmqmessage);*/ - taos_free_result(tmqmessage); - } else { - break; - } - } - clock_t endTime = clock(); - printf("log batch cnt: %d, skip log cnt: %d, time used:%f s\n", batchCnt, skipLogNum, - (double)(endTime - startTime) / CLOCKS_PER_SEC); - - err = tmq_consumer_close(tmq); - if (err) - fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(err)); - else - fprintf(stderr, "%% Consumer closed\n"); -} - int main(int argc, char* argv[]) { if (argc > 1) { printf("env init\n"); @@ -296,7 +260,6 @@ int main(int argc, char* argv[]) { } tmq_t* tmq = build_consumer(); tmq_list_t* topic_list = build_topic_list(); - /*perf_loop(tmq, topic_list);*/ - /*basic_consume_loop(tmq, topic_list);*/ - sync_consume_loop(tmq, topic_list); + basic_consume_loop(tmq, topic_list); + /*sync_consume_loop(tmq, topic_list);*/ } diff --git a/include/client/taos.h b/include/client/taos.h index 0b8c67aa794363ff851c69e5848978c78c6a4abc..bab0c18db17572a05c8a7d433876f48a404ded97 100644 --- a/include/client/taos.h +++ b/include/client/taos.h @@ -85,6 +85,14 @@ typedef struct taosField { int32_t bytes; } TAOS_FIELD; +typedef struct TAOS_FIELD_E { + char name[65]; + int8_t type; + uint8_t precision; + uint8_t scale; + int32_t bytes; +} TAOS_FIELD_E; + #ifdef WINDOWS #define DLL_EXPORT __declspec(dllexport) #else @@ -134,7 +142,10 @@ DLL_EXPORT TAOS_STMT *taos_stmt_init(TAOS *taos); DLL_EXPORT int taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length); DLL_EXPORT int taos_stmt_set_tbname_tags(TAOS_STMT *stmt, const char *name, TAOS_MULTI_BIND *tags); DLL_EXPORT int taos_stmt_set_tbname(TAOS_STMT *stmt, const char *name); +DLL_EXPORT int taos_stmt_set_tags(TAOS_STMT *stmt, TAOS_MULTI_BIND *tags); DLL_EXPORT int taos_stmt_set_sub_tbname(TAOS_STMT 
*stmt, const char *name); +DLL_EXPORT int taos_stmt_get_tag_fields(TAOS_STMT *stmt, int* fieldNum, TAOS_FIELD_E** fields); +DLL_EXPORT int taos_stmt_get_col_fields(TAOS_STMT *stmt, int* fieldNum, TAOS_FIELD_E** fields); DLL_EXPORT int taos_stmt_is_insert(TAOS_STMT *stmt, int *insert); DLL_EXPORT int taos_stmt_num_params(TAOS_STMT *stmt, int *nums); @@ -230,7 +241,7 @@ DLL_EXPORT const char *tmq_err2str(tmq_resp_err_t); DLL_EXPORT tmq_resp_err_t tmq_subscribe(tmq_t *tmq, const tmq_list_t *topic_list); DLL_EXPORT tmq_resp_err_t tmq_unsubscribe(tmq_t *tmq); DLL_EXPORT tmq_resp_err_t tmq_subscription(tmq_t *tmq, tmq_list_t **topics); -DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t wait_time); +DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout); DLL_EXPORT tmq_resp_err_t tmq_consumer_close(tmq_t *tmq); DLL_EXPORT tmq_resp_err_t tmq_commit_sync(tmq_t *tmq, const tmq_topic_vgroup_list_t *offsets); DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const tmq_topic_vgroup_list_t *offsets, tmq_commit_cb *cb, void *param); diff --git a/include/common/tmsg.h b/include/common/tmsg.h index 1eae1835b4d2f92cefb0f8bfad69d6327366803f..d2fe3e964af7c9c3d89440a2684b9ccb18cf21d2 100644 --- a/include/common/tmsg.h +++ b/include/common/tmsg.h @@ -2439,7 +2439,7 @@ typedef struct { int32_t epoch; uint64_t reqId; int64_t consumerId; - int64_t waitTime; + int64_t timeout; int64_t currentOffset; } SMqPollReq; diff --git a/include/libs/parser/parser.h b/include/libs/parser/parser.h index 06272b81514cec2a294da513ec2a57447ad74ef1..ca825b9e2fb460b6aa35110c89535071a50cac52 100644 --- a/include/libs/parser/parser.h +++ b/include/libs/parser/parser.h @@ -77,8 +77,8 @@ int32_t qStmtParseQuerySql(SParseContext* pCxt, SQuery* pQuery); int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, int32_t msgBufLen); int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, int32_t msgBufLen, int32_t colIdx, int32_t rowNum); -int32_t qBuildStmtColFields(void* pDataBlock, int32_t* fieldNum, TAOS_FIELD** fields); -int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TAOS_FIELD** fields); +int32_t qBuildStmtColFields(void* pDataBlock, int32_t* fieldNum, TAOS_FIELD_E** fields); +int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TAOS_FIELD_E** fields); int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, char* tName, TAOS_MULTI_BIND* bind, char* msgBuf, int32_t msgBufLen); void destroyBoundColumnInfo(void* pBoundInfo); diff --git a/include/os/osDir.h b/include/os/osDir.h index a4c686e2807ee3d1fb9a8a0e1e05066d1b616c0b..9019d4f80240b2335824cb5626488bf4d0957f06 100644 --- a/include/os/osDir.h +++ b/include/os/osDir.h @@ -33,8 +33,19 @@ extern "C" { #ifdef WINDOWS #define TD_TMP_DIR_PATH "C:\\Windows\\Temp\\" +#define TD_CFG_DIR_PATH "C:\\TDengine\\cfg\\" +#define TD_DATA_DIR_PATH "C:\\TDengine\\data\\" +#define TD_LOG_DIR_PATH "C:\\TDengine\\log\\" +#elif defined(_TD_DARWIN_64) +#define TD_TMP_DIR_PATH "/tmp/taosd/" +#define TD_CFG_DIR_PATH "/usr/local/etc/taos/" +#define TD_DATA_DIR_PATH "/usr/local/var/lib/taos/" +#define TD_LOG_DIR_PATH "/usr/local/var/log/taos/" #else #define TD_TMP_DIR_PATH "/tmp/" +#define TD_CFG_DIR_PATH "/etc/taos/" +#define TD_DATA_DIR_PATH "/var/lib/taos/" +#define TD_LOG_DIR_PATH "/var/log/taos/" #endif typedef struct TdDir *TdDirPtr; diff --git a/include/util/taoserror.h b/include/util/taoserror.h index 
412192e8a86ff7307f3bf13d2882bb822afb2d6b..c3d27888971732f8b6c8ccd732d0063d93aec487 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -183,7 +183,7 @@ int32_t* taosGetErrno(); #define TSDB_CODE_MND_BNODE_ALREADY_EXIST TAOS_DEF_ERROR_CODE(0, 0x0356) #define TSDB_CODE_MND_BNODE_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x0357) #define TSDB_CODE_MND_TOO_FEW_MNODES TAOS_DEF_ERROR_CODE(0, 0x0358) -#define TSDB_CODE_MND_MNODE_DEPLOYED TAOS_DEF_ERROR_CODE(0, 0x0359) +#define TSDB_CODE_MND_TOO_MANY_MNODES TAOS_DEF_ERROR_CODE(0, 0x0359) #define TSDB_CODE_MND_CANT_DROP_MASTER TAOS_DEF_ERROR_CODE(0, 0x035A) // mnode-acct diff --git a/include/util/tdef.h b/include/util/tdef.h index a9e196316d776cad9d32d6ec7ba45c308d110b6e..de139368c93c08a0fe3a9913a0a21e6925b95c7b 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -253,8 +253,7 @@ typedef enum ELogicConditionType { #define TSDB_TRANS_STAGE_LEN 12 #define TSDB_TRANS_TYPE_LEN 16 -#define TSDB_TRANS_ERROR_LEN 64 -#define TSDB_TRANS_DESC_LEN 128 +#define TSDB_TRANS_ERROR_LEN 512 #define TSDB_STEP_NAME_LEN 32 #define TSDB_STEP_DESC_LEN 128 diff --git a/source/client/inc/clientStmt.h b/source/client/inc/clientStmt.h index f0c9dcd67dd8e3b05775003221ddf86681da37ab..936fb92fc4019842485e7051abf161aee8a7d858 100644 --- a/source/client/inc/clientStmt.h +++ b/source/client/inc/clientStmt.h @@ -116,8 +116,11 @@ int stmtAffectedRowsOnce(TAOS_STMT *stmt); int stmtPrepare(TAOS_STMT *stmt, const char *sql, unsigned long length); int stmtSetTbName(TAOS_STMT *stmt, const char *tbName); int stmtSetTbTags(TAOS_STMT *stmt, TAOS_MULTI_BIND *tags); +int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields); +int stmtGetColFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields); int stmtIsInsert(TAOS_STMT *stmt, int *insert); int stmtGetParamNum(TAOS_STMT *stmt, int *nums); +int stmtGetParam(TAOS_STMT *stmt, int idx, int *type, int *bytes); int stmtAddBatch(TAOS_STMT *stmt); TAOS_RES *stmtUseResult(TAOS_STMT *stmt); int stmtBindBatch(TAOS_STMT *stmt, TAOS_MULTI_BIND *bind, int32_t colIdx); diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 53eb443b36b05393b22667a6f623892008f14ebb..e144885e9efc4b3eca7c806996b77ad416d70161 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -666,8 +666,39 @@ int taos_stmt_set_tbname(TAOS_STMT *stmt, const char *name) { return stmtSetTbName(stmt, name); } +int taos_stmt_set_tags(TAOS_STMT *stmt, TAOS_MULTI_BIND *tags) { + if (stmt == NULL || tags == NULL) { + tscError("NULL parameter for %s", __FUNCTION__); + terrno = TSDB_CODE_INVALID_PARA; + return terrno; + } + + return stmtSetTbTags(stmt, tags); +} + + int taos_stmt_set_sub_tbname(TAOS_STMT *stmt, const char *name) { return taos_stmt_set_tbname(stmt, name); } +int taos_stmt_get_tag_fields(TAOS_STMT *stmt, int* fieldNum, TAOS_FIELD_E** fields) { + if (stmt == NULL || NULL == fieldNum) { + tscError("NULL parameter for %s", __FUNCTION__); + terrno = TSDB_CODE_INVALID_PARA; + return terrno; + } + + return stmtGetTagFields(stmt, fieldNum, fields); +} + +int taos_stmt_get_col_fields(TAOS_STMT *stmt, int* fieldNum, TAOS_FIELD_E** fields) { + if (stmt == NULL || NULL == fieldNum) { + tscError("NULL parameter for %s", __FUNCTION__); + terrno = TSDB_CODE_INVALID_PARA; + return terrno; + } + + return stmtGetColFields(stmt, fieldNum, fields); +} + int taos_stmt_bind_param(TAOS_STMT *stmt, TAOS_MULTI_BIND *bind) { if (stmt == NULL || bind == NULL) { tscError("NULL parameter for %s", __FUNCTION__); @@ 
-772,6 +803,16 @@ int taos_stmt_num_params(TAOS_STMT *stmt, int *nums) { return stmtGetParamNum(stmt, nums); } +int taos_stmt_get_param(TAOS_STMT *stmt, int idx, int *type, int *bytes) { + if (stmt == NULL || type == NULL || NULL == bytes || idx < 0) { + tscError("invalid parameter for %s", __FUNCTION__); + terrno = TSDB_CODE_INVALID_PARA; + return terrno; + } + + return stmtGetParam(stmt, idx, type, bytes); +} + TAOS_RES *taos_stmt_use_result(TAOS_STMT *stmt) { if (stmt == NULL) { tscError("NULL parameter for %s", __FUNCTION__); diff --git a/source/client/src/clientStmt.c b/source/client/src/clientStmt.c index 01d785ef73107778c818437c18d98c778d1f8893..3adb3684da1164363a1ffda4c26130643efc5f78 100644 --- a/source/client/src/clientStmt.c +++ b/source/client/src/clientStmt.c @@ -17,7 +17,7 @@ int32_t stmtSwitchStatus(STscStmt* pStmt, STMT_STATUS newStatus) { } break; case STMT_SETTAGS: - if (STMT_STATUS_NE(SETTBNAME)) { + if (STMT_STATUS_NE(SETTBNAME) && STMT_STATUS_NE(FETCH_FIELDS)) { code = TSDB_CODE_TSC_STMT_API_ERROR; } break; @@ -540,6 +540,8 @@ int stmtSetTbName(TAOS_STMT* stmt, const char* tbName) { if (pStmt->bInfo.needParse) { strncpy(pStmt->bInfo.tbName, tbName, sizeof(pStmt->bInfo.tbName) - 1); pStmt->bInfo.tbName[sizeof(pStmt->bInfo.tbName) - 1] = 0; + + STMT_ERR_RET(stmtParseSql(pStmt)); } return TSDB_CODE_SUCCESS; @@ -550,10 +552,6 @@ int stmtSetTbTags(TAOS_STMT* stmt, TAOS_MULTI_BIND* tags) { STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_SETTAGS)); - if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); - } - if (pStmt->bInfo.inExecCache) { return TSDB_CODE_SUCCESS; } @@ -571,7 +569,7 @@ int stmtSetTbTags(TAOS_STMT* stmt, TAOS_MULTI_BIND* tags) { return TSDB_CODE_SUCCESS; } -int32_t stmtFetchTagFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD** fields) { +int stmtFetchTagFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD_E** fields) { if (STMT_TYPE_QUERY == pStmt->sql.type) { tscError("invalid operation to get query tag fileds"); STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR); @@ -589,7 +587,7 @@ int32_t stmtFetchTagFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD** fiel return TSDB_CODE_SUCCESS; } -int32_t stmtFetchColFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD** fields) { +int stmtFetchColFields(STscStmt* pStmt, int32_t* fieldNum, TAOS_FIELD_E** fields) { if (STMT_TYPE_QUERY == pStmt->sql.type) { tscError("invalid operation to get query column fileds"); STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR); @@ -852,6 +850,71 @@ int stmtIsInsert(TAOS_STMT* stmt, int* insert) { return TSDB_CODE_SUCCESS; } +int stmtGetTagFields(TAOS_STMT* stmt, int* nums, TAOS_FIELD_E** fields) { + STscStmt* pStmt = (STscStmt*)stmt; + + if (STMT_TYPE_QUERY == pStmt->sql.type) { + STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + } + + STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + + if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && + STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { + pStmt->bInfo.needParse = false; + } + + if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) { + taos_free_result(pStmt->exec.pRequest); + pStmt->exec.pRequest = NULL; + } + + if (NULL == pStmt->exec.pRequest) { + STMT_ERR_RET(buildRequest(pStmt->taos, pStmt->sql.sqlStr, pStmt->sql.sqlLen, &pStmt->exec.pRequest)); + } + + if (pStmt->bInfo.needParse) { + STMT_ERR_RET(stmtParseSql(pStmt)); + } + + STMT_ERR_RET(stmtFetchTagFields(stmt, nums, fields)); + + return TSDB_CODE_SUCCESS; +} + +int stmtGetColFields(TAOS_STMT* stmt, int* nums, 
TAOS_FIELD_E** fields) { + STscStmt* pStmt = (STscStmt*)stmt; + + if (STMT_TYPE_QUERY == pStmt->sql.type) { + STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + } + + STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + + if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && + STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { + pStmt->bInfo.needParse = false; + } + + if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) { + taos_free_result(pStmt->exec.pRequest); + pStmt->exec.pRequest = NULL; + } + + if (NULL == pStmt->exec.pRequest) { + STMT_ERR_RET(buildRequest(pStmt->taos, pStmt->sql.sqlStr, pStmt->sql.sqlLen, &pStmt->exec.pRequest)); + } + + if (pStmt->bInfo.needParse) { + STMT_ERR_RET(stmtParseSql(pStmt)); + } + + STMT_ERR_RET(stmtFetchColFields(stmt, nums, fields)); + + return TSDB_CODE_SUCCESS; +} + + int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { STscStmt* pStmt = (STscStmt*)stmt; @@ -884,6 +947,50 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { return TSDB_CODE_SUCCESS; } +int stmtGetParam(TAOS_STMT *stmt, int idx, int *type, int *bytes) { + STscStmt* pStmt = (STscStmt*)stmt; + + if (STMT_TYPE_QUERY == pStmt->sql.type) { + STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + } + + STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + + if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && + STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { + pStmt->bInfo.needParse = false; + } + + if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) { + taos_free_result(pStmt->exec.pRequest); + pStmt->exec.pRequest = NULL; + } + + if (NULL == pStmt->exec.pRequest) { + STMT_ERR_RET(buildRequest(pStmt->taos, pStmt->sql.sqlStr, pStmt->sql.sqlLen, &pStmt->exec.pRequest)); + } + + if (pStmt->bInfo.needParse) { + STMT_ERR_RET(stmtParseSql(pStmt)); + } + + int32_t nums = 0; + TAOS_FIELD_E *pField = NULL; + STMT_ERR_RET(stmtFetchColFields(stmt, &nums, &pField)); + if (idx >= nums) { + tscError("idx %d is too big", idx); + taosMemoryFree(pField); + STMT_ERR_RET(TSDB_CODE_INVALID_PARA); + } + + *type = pField[idx].type; + *bytes = pField[idx].bytes; + + taosMemoryFree(pField); + + return TSDB_CODE_SUCCESS; +} + TAOS_RES* stmtUseResult(TAOS_STMT* stmt) { STscStmt* pStmt = (STscStmt*)stmt; diff --git a/source/client/src/tmq.c b/source/client/src/tmq.c index dfa56f80c457783eb58255f3a0d494936b475bad..416d1a6f26f7d86506df1d2e0855da8e5bf71e3b 100644 --- a/source/client/src/tmq.c +++ b/source/client/src/tmq.c @@ -1243,7 +1243,7 @@ tmq_resp_err_t tmq_seek(tmq_t* tmq, const tmq_topic_vgroup_t* offset) { return TMQ_RESP_ERR__FAIL; } -SMqPollReq* tmqBuildConsumeReqImpl(tmq_t* tmq, int64_t waitTime, SMqClientTopic* pTopic, SMqClientVg* pVg) { +SMqPollReq* tmqBuildConsumeReqImpl(tmq_t* tmq, int64_t timeout, SMqClientTopic* pTopic, SMqClientVg* pVg) { int64_t reqOffset; if (pVg->currentOffset >= 0) { reqOffset = pVg->currentOffset; @@ -1269,7 +1269,7 @@ SMqPollReq* tmqBuildConsumeReqImpl(tmq_t* tmq, int64_t waitTime, SMqClientTopic* strcpy(pReq->subKey + tlen + 1, pTopic->topicName); pReq->withTbName = tmq->withTbName; - pReq->waitTime = waitTime; + pReq->timeout = timeout; pReq->consumerId = tmq->consumerId; pReq->epoch = tmq->epoch; pReq->currentOffset = reqOffset; @@ -1297,7 +1297,7 @@ SMqRspObj* tmqBuildRspFromWrapper(SMqPollRspWrapper* pWrapper) { return pRspObj; } -int32_t tmqPollImpl(tmq_t* tmq, int64_t waitTime) { +int32_t tmqPollImpl(tmq_t* tmq, int64_t timeout) { /*printf("call poll\n");*/ for (int i = 0; i < 
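`taos_stmt_get_param` complements `taos_stmt_num_params`: for a prepared INSERT it reports the data type and byte width behind the value placeholder at a given index, and, per the `stmtGetParam` implementation above, it rejects query-type statements and out-of-range indexes with `TSDB_CODE_INVALID_PARA`. A short sketch, assuming `stmt` has already been prepared and given a table name as in the previous example:

```c
/* Enumerate the value placeholders of a prepared INSERT statement. */
int nums = 0;
if (taos_stmt_num_params(stmt, &nums) == 0) {
  for (int idx = 0; idx < nums; ++idx) {
    int type = 0, bytes = 0;
    if (taos_stmt_get_param(stmt, idx, &type, &bytes) != 0) break;
    printf("param %d: type=%d bytes=%d\n", idx, type, bytes);
  }
}
```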
taosArrayGetSize(tmq->clientTopics); i++) { SMqClientTopic* pTopic = taosArrayGet(tmq->clientTopics, i); @@ -1318,7 +1318,7 @@ int32_t tmqPollImpl(tmq_t* tmq, int64_t waitTime) { #endif } atomic_store_32(&pVg->vgSkipCnt, 0); - SMqPollReq* pReq = tmqBuildConsumeReqImpl(tmq, waitTime, pTopic, pVg); + SMqPollReq* pReq = tmqBuildConsumeReqImpl(tmq, timeout, pTopic, pVg); if (pReq == NULL) { atomic_store_32(&pVg->vgStatus, TMQ_VG_STATUS__IDLE); tsem_post(&tmq->rspSem); @@ -1388,7 +1388,7 @@ int32_t tmqHandleNoPollRsp(tmq_t* tmq, SMqRspWrapper* rspWrapper, bool* pReset) return 0; } -SMqRspObj* tmqHandleAllRsp(tmq_t* tmq, int64_t waitTime, bool pollIfReset) { +SMqRspObj* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) { while (1) { SMqRspWrapper* rspWrapper = NULL; taosGetQitem(tmq->qall, (void**)&rspWrapper); @@ -1428,17 +1428,17 @@ SMqRspObj* tmqHandleAllRsp(tmq_t* tmq, int64_t waitTime, bool pollIfReset) { taosFreeQitem(rspWrapper); if (pollIfReset && reset) { tscDebug("consumer %ld reset and repoll", tmq->consumerId); - tmqPollImpl(tmq, waitTime); + tmqPollImpl(tmq, timeout); } } } } -TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t wait_time) { +TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t timeout) { SMqRspObj* rspObj; int64_t startTime = taosGetTimestampMs(); - rspObj = tmqHandleAllRsp(tmq, wait_time, false); + rspObj = tmqHandleAllRsp(tmq, timeout, false); if (rspObj) { return (TAOS_RES*)rspObj; } @@ -1450,16 +1450,16 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t wait_time) { while (1) { tmqHandleAllDelayedTask(tmq); - if (tmqPollImpl(tmq, wait_time) < 0) return NULL; + if (tmqPollImpl(tmq, timeout) < 0) return NULL; - rspObj = tmqHandleAllRsp(tmq, wait_time, false); + rspObj = tmqHandleAllRsp(tmq, timeout, false); if (rspObj) { return (TAOS_RES*)rspObj; } - if (wait_time != 0) { + if (timeout != 0) { int64_t endTime = taosGetTimestampMs(); int64_t leftTime = endTime - startTime; - if (leftTime > wait_time) { + if (leftTime > timeout) { tscDebug("consumer %ld (epoch %d) timeout, no rsp", tmq->consumerId, tmq->epoch); return NULL; } @@ -1474,10 +1474,7 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t wait_time) { tmq_resp_err_t tmq_consumer_close(tmq_t* tmq) { if (tmq->status == TMQ_CONSUMER_STATUS__READY) { tmq_resp_err_t rsp = tmq_commit_sync(tmq, NULL); - if (rsp == TMQ_RESP_ERR__SUCCESS) { - // TODO: free resources - return TMQ_RESP_ERR__SUCCESS; - } else { + if (rsp == TMQ_RESP_ERR__FAIL) { return TMQ_RESP_ERR__FAIL; } @@ -1485,10 +1482,7 @@ tmq_resp_err_t tmq_consumer_close(tmq_t* tmq) { rsp = tmq_subscribe(tmq, lst); tmq_list_destroy(lst); - if (rsp == TMQ_RESP_ERR__SUCCESS) { - // TODO: free resources - return TMQ_RESP_ERR__SUCCESS; - } else { + if (rsp == TMQ_RESP_ERR__FAIL) { return TMQ_RESP_ERR__FAIL; } } diff --git a/source/common/src/systable.c b/source/common/src/systable.c index 38a6bafe9a5ea2c795b82899e6a6ce91f3ad545d..8207ffb22f42d865c54898530154d317af0ea19d 100644 --- a/source/common/src/systable.c +++ b/source/common/src/systable.c @@ -215,7 +215,6 @@ static const SSysDbTableSchema transSchema[] = { {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, {.name = "stage", .bytes = TSDB_TRANS_STAGE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, {.name = "db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR}, - {.name = "type", .bytes = TSDB_TRANS_TYPE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, {.name = "failed_times", .bytes = 4, .type = TSDB_DATA_TYPE_INT}, {.name = "last_exec_time", 
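The `wait_time` to `timeout` rename in tmq.c also pins down the semantics: the value is a millisecond budget for `tmq_consumer_poll` (it is compared against `taosGetTimestampMs()` deltas), and a value of 0 appears to mean "wait until data arrives". `tmq_consumer_close` now only short-circuits on failure instead of returning early on a successful commit. A minimal consumer loop under those assumptions, with consumer creation and subscription omitted:

```c
/* Sketch: tmq is assumed to be created and subscribed already. */
static void poll_loop(tmq_t *tmq, volatile int *running) {
  while (*running) {
    TAOS_RES *msg = tmq_consumer_poll(tmq, 500 /* ms */);
    if (msg == NULL) continue;   /* nothing arrived within 500 ms */
    /* ... consume the rows carried by msg ... */
    taos_free_result(msg);
  }
  tmq_consumer_close(tmq);
}
```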
.bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP}, {.name = "last_error", .bytes = (TSDB_TRANS_ERROR_LEN - 1) + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, diff --git a/source/dnode/mnode/impl/inc/mndDef.h b/source/dnode/mnode/impl/inc/mndDef.h index a415d64170f6bdee1508d41260217ac211fcfb8a..d0c737ae5a7518cf864f55ab3f5c9702b7f60073 100644 --- a/source/dnode/mnode/impl/inc/mndDef.h +++ b/source/dnode/mnode/impl/inc/mndDef.h @@ -54,9 +54,11 @@ typedef enum { } EAuthOp; typedef enum { - TRN_STEP_LOG = 1, - TRN_STEP_ACTION = 2, -} ETrnStep; + TRN_CONFLICT_NOTHING = 0, + TRN_CONFLICT_GLOBAL = 1, + TRN_CONFLICT_DB = 2, + TRN_CONFLICT_DB_INSIDE = 3, +} ETrnConflct; typedef enum { TRN_STAGE_PREPARE = 0, @@ -68,69 +70,15 @@ typedef enum { TRN_STAGE_FINISHED = 6 } ETrnStage; -typedef enum { - TRN_TYPE_BASIC_SCOPE = 1000, - TRN_TYPE_CREATE_ACCT = 1001, - TRN_TYPE_CREATE_CLUSTER = 1002, - TRN_TYPE_CREATE_USER = 1003, - TRN_TYPE_ALTER_USER = 1004, - TRN_TYPE_DROP_USER = 1005, - TRN_TYPE_CREATE_FUNC = 1006, - TRN_TYPE_DROP_FUNC = 1007, - - TRN_TYPE_CREATE_SNODE = 1010, - TRN_TYPE_DROP_SNODE = 1011, - TRN_TYPE_CREATE_QNODE = 1012, - TRN_TYPE_DROP_QNODE = 10013, - TRN_TYPE_CREATE_BNODE = 1014, - TRN_TYPE_DROP_BNODE = 1015, - TRN_TYPE_CREATE_MNODE = 1016, - TRN_TYPE_DROP_MNODE = 1017, - - TRN_TYPE_CREATE_TOPIC = 1020, - TRN_TYPE_DROP_TOPIC = 1021, - TRN_TYPE_SUBSCRIBE = 1022, - TRN_TYPE_REBALANCE = 1023, - TRN_TYPE_COMMIT_OFFSET = 1024, - TRN_TYPE_CREATE_STREAM = 1025, - TRN_TYPE_DROP_STREAM = 1026, - TRN_TYPE_ALTER_STREAM = 1027, - TRN_TYPE_CONSUMER_LOST = 1028, - TRN_TYPE_CONSUMER_RECOVER = 1029, - TRN_TYPE_DROP_CGROUP = 1030, - TRN_TYPE_BASIC_SCOPE_END, - - TRN_TYPE_GLOBAL_SCOPE = 2000, - TRN_TYPE_CREATE_DNODE = 2001, - TRN_TYPE_DROP_DNODE = 2002, - TRN_TYPE_GLOBAL_SCOPE_END, - - TRN_TYPE_DB_SCOPE = 3000, - TRN_TYPE_CREATE_DB = 3001, - TRN_TYPE_ALTER_DB = 3002, - TRN_TYPE_DROP_DB = 3003, - TRN_TYPE_SPLIT_VGROUP = 3004, - TRN_TYPE_MERGE_VGROUP = 3015, - TRN_TYPE_DB_SCOPE_END, - - TRN_TYPE_STB_SCOPE = 4000, - TRN_TYPE_CREATE_STB = 4001, - TRN_TYPE_ALTER_STB = 4002, - TRN_TYPE_DROP_STB = 4003, - TRN_TYPE_CREATE_SMA = 4004, - TRN_TYPE_DROP_SMA = 4005, - TRN_TYPE_STB_SCOPE_END, -} ETrnType; - typedef enum { TRN_POLICY_ROLLBACK = 0, TRN_POLICY_RETRY = 1, } ETrnPolicy; typedef enum { - TRN_EXEC_PARALLEL = 0, - TRN_EXEC_NO_PARALLEL = 1, -} ETrnExecType; + TRN_EXEC_PRARLLEL = 0, + TRN_EXEC_SERIAL = 1, +} ETrnExec; typedef enum { DND_REASON_ONLINE = 0, @@ -159,8 +107,8 @@ typedef struct { int32_t id; ETrnStage stage; ETrnPolicy policy; - ETrnType type; - ETrnExecType parallel; + ETrnConflct conflict; + ETrnExec exec; int32_t code; int32_t failedTimes; SRpcHandleInfo rpcInfo; @@ -172,10 +120,11 @@ typedef struct { SArray* commitActions; int64_t createdTime; int64_t lastExecTime; - int64_t dbUid; + int32_t lastErrorAction; + int32_t lastErrorNo; + tmsg_t lastErrorMsgType; + SEpSet lastErrorEpset; char dbname[TSDB_DB_FNAME_LEN]; - char lastError[TSDB_TRANS_ERROR_LEN]; - char desc[TSDB_TRANS_DESC_LEN]; int32_t startFunc; int32_t stopFunc; int32_t paramLen; diff --git a/source/dnode/mnode/impl/inc/mndTrans.h b/source/dnode/mnode/impl/inc/mndTrans.h index ba6f5faf1ec74580576b18dd13e620368d0541c4..9b063fb44ff30d77a6c7e3b9c0d11ebb26b77150 100644 --- a/source/dnode/mnode/impl/inc/mndTrans.h +++ b/source/dnode/mnode/impl/inc/mndTrans.h @@ -34,7 +34,7 @@ typedef struct { int32_t errCode; int32_t acceptableCode; int8_t stage; - int8_t isRaw; + int8_t actionType; // 0-msg, 1-raw int8_t rawWritten; int8_t msgSent; int8_t 
msgReceived; @@ -52,7 +52,7 @@ void mndCleanupTrans(SMnode *pMnode); STrans *mndAcquireTrans(SMnode *pMnode, int32_t transId); void mndReleaseTrans(SMnode *pMnode, STrans *pTrans); -STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnType type, const SRpcMsg *pReq); +STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnConflct conflict, const SRpcMsg *pReq); void mndTransDrop(STrans *pTrans); int32_t mndTransAppendRedolog(STrans *pTrans, SSdbRaw *pRaw); int32_t mndTransAppendUndolog(STrans *pTrans, SSdbRaw *pRaw); @@ -62,7 +62,7 @@ int32_t mndTransAppendUndoAction(STrans *pTrans, STransAction *pAction); void mndTransSetRpcRsp(STrans *pTrans, void *pCont, int32_t contLen); void mndTransSetCb(STrans *pTrans, ETrnFunc startFunc, ETrnFunc stopFunc, void *param, int32_t paramLen); void mndTransSetDbInfo(STrans *pTrans, SDbObj *pDb); -void mndTransSetNoParallel(STrans *pTrans); +void mndTransSetSerial(STrans *pTrans); int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans); void mndTransProcessRsp(SRpcMsg *pRsp); diff --git a/source/dnode/mnode/impl/src/mndAcct.c b/source/dnode/mnode/impl/src/mndAcct.c index f3ec3a421b6290dbe00997ba13707d62459dccff..0ce4a8c76e72ce2f2513819139b00a01c67f5231 100644 --- a/source/dnode/mnode/impl/src/mndAcct.c +++ b/source/dnode/mnode/impl/src/mndAcct.c @@ -80,7 +80,7 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) { mDebug("acct:%s, will be created when deploying, raw:%p", acctObj.acct, pRaw); - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_ACCT, NULL); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL); if (pTrans == NULL) { mError("acct:%s, failed to create since %s", acctObj.acct, terrstr()); return -1; diff --git a/source/dnode/mnode/impl/src/mndBnode.c b/source/dnode/mnode/impl/src/mndBnode.c index 3316a09462ff1d5ff7c940e623941c7abe72a76c..801f335a8056757c2cbe2d7f1ca6d65a4501003f 100644 --- a/source/dnode/mnode/impl/src/mndBnode.c +++ b/source/dnode/mnode/impl/src/mndBnode.c @@ -246,7 +246,7 @@ static int32_t mndCreateBnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, bnodeObj.createdTime = taosGetTimestampMs(); bnodeObj.updateTime = bnodeObj.createdTime; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_BNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create bnode:%d", pTrans->id, pCreate->dnodeId); @@ -363,7 +363,7 @@ static int32_t mndSetDropBnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SBn static int32_t mndDropBnode(SMnode *pMnode, SRpcMsg *pReq, SBnodeObj *pObj) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_DROP_BNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop bnode:%d", pTrans->id, pObj->id); diff --git a/source/dnode/mnode/impl/src/mndCluster.c b/source/dnode/mnode/impl/src/mndCluster.c index 76c8acf407762cbb4d6d455f2bd552f055ecd0f4..bb3377d16ac815489ce0cfbec22307ebb02156d0 100644 --- a/source/dnode/mnode/impl/src/mndCluster.c +++ b/source/dnode/mnode/impl/src/mndCluster.c @@ -179,10 +179,8 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) { sdbSetRawStatus(pRaw, SDB_STATUS_READY); mDebug("cluster:%" PRId64 ", will be created when deploying, raw:%p", clusterObj.id, pRaw); -#if 0 - return sdbWrite(pMnode->pSdb, pRaw); -#else - STrans *pTrans = 
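With `ETrnType` gone, callers describe a transaction only by its rollback/retry policy and its conflict scope, optionally marking it serial, and append raw sdb writes or RPC actions explicitly. The pattern that the call sites below follow, pulled together into one hypothetical helper (the helper name and parameters are illustrative; the called functions are the mndTrans API declared in this patch):

```c
/* Hypothetical helper showing the post-refactor pattern: policy + conflict
 * scope at creation time, a dbname for DB-scoped conflict checking, raw sdb
 * writes and RPC actions appended explicitly, optional serial execution. */
static int32_t buildExampleTrans(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SSdbRaw *pCommitRaw,
                                 STransAction *pRedoAction) {
  int32_t code = -1;
  STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB, pReq);
  if (pTrans == NULL) return -1;

  mndTransSetDbInfo(pTrans, pDb);  /* mandatory for TRN_CONFLICT_DB / _DB_INSIDE */
  mndTransSetSerial(pTrans);       /* optional: run redo actions one at a time */

  if (mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) goto _OVER;    /* raw sdb write */
  if (mndTransAppendRedoAction(pTrans, pRedoAction) != 0) goto _OVER;  /* RPC to a dnode */
  if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;

  code = 0;

_OVER:
  mndTransDrop(pTrans);
  return code;
}
```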
mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_CLUSTER, NULL); + + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL); if (pTrans == NULL) { mError("cluster:%" PRId64 ", failed to create since %s", clusterObj.id, terrstr()); return -1; @@ -204,7 +202,6 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) { mndTransDrop(pTrans); return 0; -#endif } static int32_t mndRetrieveClusters(SRpcMsg *pMsg, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows) { diff --git a/source/dnode/mnode/impl/src/mndConsumer.c b/source/dnode/mnode/impl/src/mndConsumer.c index c3eaeb73b2e21a7d26c7b260a7ebf43c87d707d1..0314891d59f38d5f8fdc4f92ecaca3f8c09bf2cd 100644 --- a/source/dnode/mnode/impl/src/mndConsumer.c +++ b/source/dnode/mnode/impl/src/mndConsumer.c @@ -97,7 +97,7 @@ static int32_t mndProcessConsumerLostMsg(SRpcMsg *pMsg) { mndReleaseConsumer(pMnode, pConsumer); - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CONSUMER_LOST, pMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pMsg); if (pTrans == NULL) goto FAIL; if (mndSetConsumerCommitLogs(pMnode, pTrans, pConsumerNew) != 0) goto FAIL; if (mndTransPrepare(pMnode, pTrans) != 0) goto FAIL; @@ -121,7 +121,7 @@ static int32_t mndProcessConsumerRecoverMsg(SRpcMsg *pMsg) { mndReleaseConsumer(pMnode, pConsumer); - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CONSUMER_RECOVER, pMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pMsg); if (pTrans == NULL) goto FAIL; if (mndSetConsumerCommitLogs(pMnode, pTrans, pConsumerNew) != 0) goto FAIL; if (mndTransPrepare(pMnode, pTrans) != 0) goto FAIL; @@ -403,7 +403,7 @@ static int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) { int32_t newTopicNum = taosArrayGetSize(newSub); // check topic existance - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_SUBSCRIBE, pMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pMsg); if (pTrans == NULL) goto SUBSCRIBE_OVER; for (int32_t i = 0; i < newTopicNum; i++) { diff --git a/source/dnode/mnode/impl/src/mndDb.c b/source/dnode/mnode/impl/src/mndDb.c index a0d940c049384b2c19ad6f36e62f8f5460bd62ed..5d79708109fc6da808dbb686e9342caa312b11ea 100644 --- a/source/dnode/mnode/impl/src/mndDb.c +++ b/source/dnode/mnode/impl/src/mndDb.c @@ -545,7 +545,7 @@ static int32_t mndCreateDb(SMnode *pMnode, SRpcMsg *pReq, SCreateDbReq *pCreate, } int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_DB, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create db:%s", pTrans->id, pCreate->db); @@ -775,7 +775,7 @@ static int32_t mndSetAlterDbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj * static int32_t mndAlterDb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pOld, SDbObj *pNew) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_ALTER_DB, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to alter db:%s", pTrans->id, pOld->name); @@ -1036,7 +1036,7 @@ static int32_t mndBuildDropDbRsp(SDbObj *pDb, int32_t *pRspLen, void **ppRsp, bo static int32_t mndDropDb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_DROP_DB, pReq); + STrans 
*pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop db:%s", pTrans->id, pDb->name); diff --git a/source/dnode/mnode/impl/src/mndDnode.c b/source/dnode/mnode/impl/src/mndDnode.c index d2d97e14053288338ac4401efd8df2457b50063c..aeff018aa82da7216e21bb46270a6bbb8c3ead7a 100644 --- a/source/dnode/mnode/impl/src/mndDnode.c +++ b/source/dnode/mnode/impl/src/mndDnode.c @@ -101,10 +101,7 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) { mDebug("dnode:%d, will be created when deploying, raw:%p", dnodeObj.id, pRaw); -#if 0 - return sdbWrite(pMnode->pSdb, pRaw); -#else - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_DNODE, NULL); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, NULL); if (pTrans == NULL) { mError("dnode:%s, failed to create since %s", dnodeObj.ep, terrstr()); return -1; @@ -126,7 +123,6 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) { mndTransDrop(pTrans); return 0; -#endif } static SSdbRaw *mndDnodeActionEncode(SDnodeObj *pDnode) { @@ -260,7 +256,7 @@ int32_t mndGetDnodeSize(SMnode *pMnode) { bool mndIsDnodeOnline(SMnode *pMnode, SDnodeObj *pDnode, int64_t curMs) { int64_t interval = TABS(pDnode->lastAccessTime - curMs); - if (interval > 30000 * tsStatusInterval) { + if (interval > 5000 * tsStatusInterval) { if (pDnode->rebootTime > 0) { pDnode->offlineReason = DND_REASON_STATUS_MSG_TIMEOUT; } @@ -488,7 +484,7 @@ static int32_t mndCreateDnode(SMnode *pMnode, SRpcMsg *pReq, SCreateDnodeReq *pC memcpy(dnodeObj.fqdn, pCreate->fqdn, TSDB_FQDN_LEN); snprintf(dnodeObj.ep, TSDB_EP_LEN, "%s:%u", dnodeObj.fqdn, dnodeObj.port); - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_DNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_GLOBAL, pReq); if (pTrans == NULL) { mError("dnode:%s, failed to create since %s", dnodeObj.ep, terrstr()); return -1; @@ -564,7 +560,7 @@ CREATE_DNODE_OVER: } static int32_t mndDropDnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode) { - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_DNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_GLOBAL, pReq); if (pTrans == NULL) { mError("dnode:%d, failed to drop since %s", pDnode->id, terrstr()); return -1; @@ -617,7 +613,7 @@ static int32_t mndProcessDropDnodeReq(SRpcMsg *pReq) { pMObj = mndAcquireMnode(pMnode, dropReq.dnodeId); if (pMObj != NULL) { - terrno = TSDB_CODE_MND_MNODE_DEPLOYED; + terrno = TSDB_CODE_MND_MNODE_NOT_EXIST; goto DROP_DNODE_OVER; } diff --git a/source/dnode/mnode/impl/src/mndFunc.c b/source/dnode/mnode/impl/src/mndFunc.c index 9107dab693d4c9eb6adc6599d03126d5a59a5a69..bf4baebd8584bd8324f3e4e53836bbd8a2002fad 100644 --- a/source/dnode/mnode/impl/src/mndFunc.c +++ b/source/dnode/mnode/impl/src/mndFunc.c @@ -215,7 +215,7 @@ static int32_t mndCreateFunc(SMnode *pMnode, SRpcMsg *pReq, SCreateFuncReq *pCre } memcpy(func.pCode, pCreate->pCode, func.codeSize); - pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_FUNC, pReq); + pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create func:%s", pTrans->id, pCreate->name); @@ -245,7 +245,7 @@ _OVER: static int32_t mndDropFunc(SMnode *pMnode, SRpcMsg *pReq, SFuncObj *pFunc) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, 
TRN_TYPE_DROP_FUNC, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop user:%s", pTrans->id, pFunc->name); diff --git a/source/dnode/mnode/impl/src/mndMain.c b/source/dnode/mnode/impl/src/mndMain.c index 2a2a45a45d5759dfb8fbe8f0dc4662cfab8fcd14..3a3fd7ebdb5ac8f56a64ea5b0169dfeda8cd3b97 100644 --- a/source/dnode/mnode/impl/src/mndMain.c +++ b/source/dnode/mnode/impl/src/mndMain.c @@ -369,7 +369,7 @@ int32_t mndProcessSyncMsg(SRpcMsg *pMsg) { mError("failed to process sync msg:%p type:%s since %s", pMsg, TMSG_INFO(pMsg->msgType), terrstr()); return TAOS_SYNC_PROPOSE_OTHER_ERROR; } - + char logBuf[512] = {0}; char *syncNodeStr = sync2SimpleStr(pMgmt->sync); snprintf(logBuf, sizeof(logBuf), "==vnodeProcessSyncReq== msgType:%d, syncNode: %s", pMsg->msgType, syncNodeStr); @@ -472,7 +472,7 @@ int32_t mndProcessRpcMsg(SRpcMsg *pMsg) { } else if (code == 0) { mTrace("msg:%p, successfully processed and response", pMsg); } else { - mDebug("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle, + mError("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle, TMSG_INFO(pMsg->msgType)); } @@ -686,4 +686,4 @@ void mndReleaseSyncRef(SMnode *pMnode) { int32_t ref = atomic_sub_fetch_32(&pMnode->syncRef, 1); mTrace("mnode sync is released, ref:%d", ref); taosThreadRwlockUnlock(&pMnode->lock); -} \ No newline at end of file +} diff --git a/source/dnode/mnode/impl/src/mndMnode.c b/source/dnode/mnode/impl/src/mndMnode.c index 5b8ba6deaa2f768154d90af5b774c098f81c6434..4578d81efb3fbd4c21192447c0b068f871a95619 100644 --- a/source/dnode/mnode/impl/src/mndMnode.c +++ b/source/dnode/mnode/impl/src/mndMnode.c @@ -18,9 +18,9 @@ #include "mndAuth.h" #include "mndDnode.h" #include "mndShow.h" +#include "mndSync.h" #include "mndTrans.h" #include "mndUser.h" -#include "mndSync.h" #define MNODE_VER_NUMBER 1 #define MNODE_RESERVE_SIZE 64 @@ -92,10 +92,7 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) { mDebug("mnode:%d, will be created when deploying, raw:%p", mnodeObj.id, pRaw); -#if 0 - return sdbWrite(pMnode->pSdb, pRaw); -#else - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_DNODE, NULL); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, NULL); if (pTrans == NULL) { mError("mnode:%d, failed to create since %s", mnodeObj.id, terrstr()); return -1; @@ -117,7 +114,6 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) { mndTransDrop(pTrans); return 0; -#endif } static SSdbRaw *mndMnodeActionEncode(SMnodeObj *pObj) { @@ -363,11 +359,11 @@ static int32_t mndCreateMnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, mnodeObj.createdTime = taosGetTimestampMs(); mnodeObj.updateTime = mnodeObj.createdTime; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_MNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create mnode:%d", pTrans->id, pCreate->dnodeId); - mndTransSetNoParallel(pTrans); + mndTransSetSerial(pTrans); if (mndSetCreateMnodeRedoLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER; if (mndSetCreateMnodeCommitLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER; if (mndSetCreateMnodeRedoActions(pMnode, pTrans, pDnode, &mnodeObj) != 0) goto _OVER; @@ -396,6 +392,11 @@ static int32_t mndProcessCreateMnodeReq(SRpcMsg *pReq) { mDebug("mnode:%d, start 
to create", createReq.dnodeId); + if (sdbGetSize(pMnode->pSdb, SDB_MNODE) >= 3) { + terrno = TSDB_CODE_MND_TOO_MANY_MNODES; + goto _OVER; + } + pObj = mndAcquireMnode(pMnode, createReq.dnodeId); if (pObj != NULL) { terrno = TSDB_CODE_MND_MNODE_ALREADY_EXIST; @@ -535,11 +536,11 @@ static int32_t mndSetDropMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDnode static int32_t mndDropMnode(SMnode *pMnode, SRpcMsg *pReq, SMnodeObj *pObj) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_DROP_MNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop mnode:%d", pTrans->id, pObj->id); - mndTransSetNoParallel(pTrans); + mndTransSetSerial(pTrans); if (mndSetDropMnodeRedoLogs(pMnode, pTrans, pObj) != 0) goto _OVER; if (mndSetDropMnodeCommitLogs(pMnode, pTrans, pObj) != 0) goto _OVER; if (mndSetDropMnodeRedoActions(pMnode, pTrans, pObj->pDnode, pObj) != 0) goto _OVER; @@ -632,6 +633,7 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB int32_t cols = 0; SMnodeObj *pObj = NULL; char *pWrite; + int64_t curMs = taosGetTimestampMs(); while (numOfRows < rows) { pShow->pIter = sdbFetch(pSdb, SDB_MNODE, pShow->pIter, (void **)&pObj); @@ -647,11 +649,16 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); colDataAppend(pColInfo, numOfRows, b1, false); + bool online = mndIsDnodeOnline(pMnode, pObj->pDnode, curMs); const char *roles = NULL; if (pObj->id == pMnode->selfDnodeId) { roles = syncStr(TAOS_SYNC_STATE_LEADER); } else { - roles = syncStr(pObj->state); + if (!online) { + roles = "OFFLINE"; + } else { + roles = syncStr(pObj->state); + } } char *b2 = taosMemoryCalloc(1, 12 + VARSTR_HEADER_SIZE); STR_WITH_MAXSIZE_TO_VARSTR(b2, roles, pShow->pMeta->pSchemas[cols].bytes); diff --git a/source/dnode/mnode/impl/src/mndOffset.c b/source/dnode/mnode/impl/src/mndOffset.c index 6cbaca3c07818304417f05baed75fdeab70da5ca..00c8bb30d03d87545750b87d9eddab9efb8e821e 100644 --- a/source/dnode/mnode/impl/src/mndOffset.c +++ b/source/dnode/mnode/impl/src/mndOffset.c @@ -179,7 +179,7 @@ static int32_t mndProcessCommitOffsetReq(SRpcMsg *pMsg) { tDecodeSMqCMCommitOffsetReq(&decoder, &commitOffsetReq); - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_COMMIT_OFFSET, pMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pMsg); for (int32_t i = 0; i < commitOffsetReq.num; i++) { SMqOffset *pOffset = &commitOffsetReq.offsets[i]; diff --git a/source/dnode/mnode/impl/src/mndQnode.c b/source/dnode/mnode/impl/src/mndQnode.c index 7c7bdc2e3acd0af56d6487983623328d39b13c68..27881865af11913b4a04c4fc84df115e98823fd1 100644 --- a/source/dnode/mnode/impl/src/mndQnode.c +++ b/source/dnode/mnode/impl/src/mndQnode.c @@ -248,7 +248,7 @@ static int32_t mndCreateQnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, qnodeObj.createdTime = taosGetTimestampMs(); qnodeObj.updateTime = qnodeObj.createdTime; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_QNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create qnode:%d", pTrans->id, pCreate->dnodeId); @@ -365,7 +365,7 @@ static int32_t mndSetDropQnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SQn static int32_t mndDropQnode(SMnode 
*pMnode, SRpcMsg *pReq, SQnodeObj *pObj) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_DROP_QNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop qnode:%d", pTrans->id, pObj->id); diff --git a/source/dnode/mnode/impl/src/mndSma.c b/source/dnode/mnode/impl/src/mndSma.c index 2cb28dccad9d6750fe23057d18b328947a0d4c50..8b09674179062bb1e0f363aa8f1164f47a30ede7 100644 --- a/source/dnode/mnode/impl/src/mndSma.c +++ b/source/dnode/mnode/impl/src/mndSma.c @@ -508,12 +508,12 @@ static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCrea streamObj.fixedSinkVgId = smaObj.dstVgId; int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_SMA, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create sma:%s", pTrans->id, pCreate->name); mndTransSetDbInfo(pTrans, pDb); - mndTransSetNoParallel(pTrans); + mndTransSetSerial(pTrans); if (mndSetCreateSmaRedoLogs(pMnode, pTrans, &smaObj) != 0) goto _OVER; if (mndSetCreateSmaVgroupRedoLogs(pMnode, pTrans, &streamObj.fixedSinkVg) != 0) goto _OVER; @@ -753,7 +753,7 @@ static int32_t mndDropSma(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SSmaObj *p pVgroup = mndAcquireVgroup(pMnode, pSma->dstVgId); if (pVgroup == NULL) goto _OVER; - pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_SMA, pReq); + pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop sma:%s", pTrans->id, pSma->name); diff --git a/source/dnode/mnode/impl/src/mndSnode.c b/source/dnode/mnode/impl/src/mndSnode.c index 87b61f59ecb088692941a9f57ebf89db2cefa054..c6acb4fef4a09ef78c561178f11428cb3004b4f3 100644 --- a/source/dnode/mnode/impl/src/mndSnode.c +++ b/source/dnode/mnode/impl/src/mndSnode.c @@ -253,7 +253,7 @@ static int32_t mndCreateSnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, snodeObj.createdTime = taosGetTimestampMs(); snodeObj.updateTime = snodeObj.createdTime; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_SNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create snode:%d", pTrans->id, pCreate->dnodeId); @@ -372,7 +372,7 @@ static int32_t mndSetDropSnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SSn static int32_t mndDropSnode(SMnode *pMnode, SRpcMsg *pReq, SSnodeObj *pObj) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_DROP_SNODE, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop snode:%d", pTrans->id, pObj->id); diff --git a/source/dnode/mnode/impl/src/mndStb.c b/source/dnode/mnode/impl/src/mndStb.c index 53befd731c8b214c3b4feb87040b0aa5fd11d605..9ca76135199ec72e107028cc80ddeb65bb8dfd5f 100644 --- a/source/dnode/mnode/impl/src/mndStb.c +++ b/source/dnode/mnode/impl/src/mndStb.c @@ -735,7 +735,7 @@ static int32_t mndCreateStb(SMnode *pMnode, SRpcMsg *pReq, SMCreateStbReq *pCrea int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_STB, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB_INSIDE, pReq); if 
(pTrans == NULL) goto _OVER; mDebug("trans:%d, used to create stb:%s", pTrans->id, pCreate->name); @@ -1257,7 +1257,7 @@ static int32_t mndAlterStb(SMnode *pMnode, SRpcMsg *pReq, const SMAlterStbReq *p if (code != 0) goto _OVER; code = -1; - pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_ALTER_STB, pReq); + pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_DB_INSIDE, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to alter stb:%s", pTrans->id, pAlter->name); @@ -1403,7 +1403,7 @@ static int32_t mndSetDropStbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj * static int32_t mndDropStb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbObj *pStb) { int32_t code = -1; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_STB, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB_INSIDE, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop stb:%s", pTrans->id, pStb->name); diff --git a/source/dnode/mnode/impl/src/mndStream.c b/source/dnode/mnode/impl/src/mndStream.c index 13071b5c538a45f9339f4bc97fce9d9e3239a0f6..5ee5b06a578f7c31ab18f66f2de1cdef2aa85a04 100644 --- a/source/dnode/mnode/impl/src/mndStream.c +++ b/source/dnode/mnode/impl/src/mndStream.c @@ -402,7 +402,7 @@ static int32_t mndCreateStream(SMnode *pMnode, SRpcMsg *pReq, SCMCreateStreamReq tstrncpy(streamObj.targetDb, pDb->name, TSDB_DB_FNAME_LEN); } - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_STREAM, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("stream:%s, failed to create since %s", pCreate->name, terrstr()); return -1; diff --git a/source/dnode/mnode/impl/src/mndSubscribe.c b/source/dnode/mnode/impl/src/mndSubscribe.c index 2dbc0cbbc1fb149665a7717e33a4f0ed83896f36..fc736809fd16098388033553f4fec92fb2df6974 100644 --- a/source/dnode/mnode/impl/src/mndSubscribe.c +++ b/source/dnode/mnode/impl/src/mndSubscribe.c @@ -394,8 +394,8 @@ static int32_t mndDoRebalance(SMnode *pMnode, const SMqRebInputObj *pInput, SMqR mInfo("rebalance calculation completed, rebalanced vg:"); for (int32_t i = 0; i < taosArrayGetSize(pOutput->rebVgs); i++) { SMqRebOutputVg *pOutputRebVg = taosArrayGet(pOutput->rebVgs, i); - mInfo("vg: %d moved from consumer %ld to consumer %ld", pOutputRebVg->pVgEp->vgId, pOutputRebVg->oldConsumerId, - pOutputRebVg->newConsumerId); + mInfo("vgId:%d moved from consumer %" PRId64 " to consumer %" PRId64, pOutputRebVg->pVgEp->vgId, + pOutputRebVg->oldConsumerId, pOutputRebVg->newConsumerId); } // 9. clear @@ -405,10 +405,9 @@ static int32_t mndDoRebalance(SMnode *pMnode, const SMqRebInputObj *pInput, SMqR } static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOutputObj *pOutput) { - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_REBALANCE, pMsg); - if (pTrans == NULL) { - return -1; - } + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pMsg); + if (pTrans == NULL) return -1; + // make txn: // 1. 
redo action: action to all vg const SArray *rebVgs = pOutput->rebVgs; @@ -625,7 +624,7 @@ static int32_t mndProcessDropCgroupReq(SRpcMsg *pReq) { return -1; } - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_CGROUP, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("cgroup: %s on topic:%s, failed to drop since %s", dropReq.cgroup, dropReq.topic, terrstr()); mndReleaseSubscribe(pMnode, pSub); diff --git a/source/dnode/mnode/impl/src/mndTopic.c b/source/dnode/mnode/impl/src/mndTopic.c index 02f06a0de81a5f2ae332ec7c5f77faad1136edd8..446992a24588e6b0c2b5bbadfe00a899dbc6cdc8 100644 --- a/source/dnode/mnode/impl/src/mndTopic.c +++ b/source/dnode/mnode/impl/src/mndTopic.c @@ -383,7 +383,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq * /*topicObj.withSchema = 1;*/ } - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_TOPIC, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("topic:%s, failed to create since %s", pCreate->name, terrstr()); taosMemoryFreeClear(topicObj.ast); @@ -551,7 +551,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { } #endif - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_TOPIC, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("topic:%s, failed to drop since %s", pTopic->name, terrstr()); return -1; diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c index ad6388c585139b537832b0c4fe9d1f61d5569e27..bad513a89dc1f4303073f226ee763282d6119548 100644 --- a/source/dnode/mnode/impl/src/mndTrans.c +++ b/source/dnode/mnode/impl/src/mndTrans.c @@ -88,12 +88,12 @@ static int32_t mndTransGetActionsSize(SArray *pArray) { for (int32_t i = 0; i < actionNum; ++i) { STransAction *pAction = taosArrayGet(pArray, i); - if (pAction->isRaw) { + if (pAction->actionType) { rawDataLen += (sdbGetRawTotalSize(pAction->pRaw) + sizeof(int32_t)); } else { rawDataLen += (sizeof(STransAction) + pAction->contLen); } - rawDataLen += sizeof(pAction->isRaw); + rawDataLen += sizeof(pAction->actionType); } return rawDataLen; @@ -117,8 +117,8 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) { SDB_SET_INT32(pRaw, dataPos, pTrans->id, _OVER) SDB_SET_INT16(pRaw, dataPos, pTrans->stage, _OVER) SDB_SET_INT16(pRaw, dataPos, pTrans->policy, _OVER) - SDB_SET_INT16(pRaw, dataPos, pTrans->type, _OVER) - SDB_SET_INT16(pRaw, dataPos, pTrans->parallel, _OVER) + SDB_SET_INT16(pRaw, dataPos, pTrans->conflict, _OVER) + SDB_SET_INT16(pRaw, dataPos, pTrans->exec, _OVER) SDB_SET_INT64(pRaw, dataPos, pTrans->createdTime, _OVER) SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname, TSDB_DB_FNAME_LEN, _OVER) SDB_SET_INT32(pRaw, dataPos, pTrans->redoActionPos, _OVER) @@ -135,9 +135,9 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) { SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER) - SDB_SET_INT8(pRaw, dataPos, pAction->isRaw, _OVER) + SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER) SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER) - if (pAction->isRaw) { + if (pAction->actionType) { int32_t len = sdbGetRawTotalSize(pAction->pRaw); SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER) SDB_SET_INT32(pRaw, 
dataPos, len, _OVER) @@ -157,9 +157,9 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) { SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER) - SDB_SET_INT8(pRaw, dataPos, pAction->isRaw, _OVER) + SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER) SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER) - if (pAction->isRaw) { + if (pAction->actionType) { int32_t len = sdbGetRawTotalSize(pAction->pRaw); SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER) SDB_SET_INT32(pRaw, dataPos, len, _OVER) @@ -179,9 +179,9 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) { SDB_SET_INT32(pRaw, dataPos, pAction->id, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->errCode, _OVER) SDB_SET_INT32(pRaw, dataPos, pAction->acceptableCode, _OVER) - SDB_SET_INT8(pRaw, dataPos, pAction->isRaw, _OVER) + SDB_SET_INT8(pRaw, dataPos, pAction->actionType, _OVER) SDB_SET_INT8(pRaw, dataPos, pAction->stage, _OVER) - if (pAction->isRaw) { + if (pAction->actionType) { int32_t len = sdbGetRawTotalSize(pAction->pRaw); SDB_SET_INT8(pRaw, dataPos, pAction->rawWritten, _OVER) SDB_SET_INT32(pRaw, dataPos, len, _OVER) @@ -250,16 +250,16 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) { int16_t stage = 0; int16_t policy = 0; - int16_t type = 0; - int16_t parallel = 0; + int16_t conflict = 0; + int16_t exec = 0; SDB_GET_INT16(pRaw, dataPos, &stage, _OVER) SDB_GET_INT16(pRaw, dataPos, &policy, _OVER) - SDB_GET_INT16(pRaw, dataPos, &type, _OVER) - SDB_GET_INT16(pRaw, dataPos, ¶llel, _OVER) + SDB_GET_INT16(pRaw, dataPos, &conflict, _OVER) + SDB_GET_INT16(pRaw, dataPos, &exec, _OVER) pTrans->stage = stage; pTrans->policy = policy; - pTrans->type = type; - pTrans->parallel = parallel; + pTrans->conflict = conflict; + pTrans->exec = exec; SDB_GET_INT64(pRaw, dataPos, &pTrans->createdTime, _OVER) SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname, TSDB_DB_FNAME_LEN, _OVER) SDB_GET_INT32(pRaw, dataPos, &pTrans->redoActionPos, _OVER) @@ -279,9 +279,9 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) { SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER) - SDB_GET_INT8(pRaw, dataPos, &action.isRaw, _OVER) + SDB_GET_INT8(pRaw, dataPos, &action.actionType, _OVER) SDB_GET_INT8(pRaw, dataPos, &action.stage, _OVER) - if (action.isRaw) { + if (action.actionType) { SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER) SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER) action.pRaw = taosMemoryMalloc(dataLen); @@ -308,9 +308,9 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) { SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER) - SDB_GET_INT8(pRaw, dataPos, &action.isRaw, _OVER) + SDB_GET_INT8(pRaw, dataPos, &action.actionType, _OVER) SDB_GET_INT8(pRaw, dataPos, &action.stage, _OVER) - if (action.isRaw) { + if (action.actionType) { SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER) SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER) action.pRaw = taosMemoryMalloc(dataLen); @@ -337,9 +337,9 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) { SDB_GET_INT32(pRaw, dataPos, &action.id, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.errCode, _OVER) SDB_GET_INT32(pRaw, dataPos, &action.acceptableCode, _OVER) - SDB_GET_INT8(pRaw, dataPos, &action.isRaw, _OVER) + SDB_GET_INT8(pRaw, dataPos, &action.actionType, 
_OVER) SDB_GET_INT8(pRaw, dataPos, &action.stage, _OVER) - if (action.isRaw) { + if (action.actionType) { SDB_GET_INT8(pRaw, dataPos, &action.rawWritten, _OVER) SDB_GET_INT32(pRaw, dataPos, &dataLen, _OVER) action.pRaw = taosMemoryMalloc(dataLen); @@ -408,81 +408,6 @@ static const char *mndTransStr(ETrnStage stage) { } } -static const char *mndTransType(ETrnType type) { - switch (type) { - case TRN_TYPE_CREATE_USER: - return "create-user"; - case TRN_TYPE_ALTER_USER: - return "alter-user"; - case TRN_TYPE_DROP_USER: - return "drop-user"; - case TRN_TYPE_CREATE_FUNC: - return "create-func"; - case TRN_TYPE_DROP_FUNC: - return "drop-func"; - case TRN_TYPE_CREATE_SNODE: - return "create-snode"; - case TRN_TYPE_DROP_SNODE: - return "drop-snode"; - case TRN_TYPE_CREATE_QNODE: - return "create-qnode"; - case TRN_TYPE_DROP_QNODE: - return "drop-qnode"; - case TRN_TYPE_CREATE_BNODE: - return "create-bnode"; - case TRN_TYPE_DROP_BNODE: - return "drop-bnode"; - case TRN_TYPE_CREATE_MNODE: - return "create-mnode"; - case TRN_TYPE_DROP_MNODE: - return "drop-mnode"; - case TRN_TYPE_CREATE_TOPIC: - return "create-topic"; - case TRN_TYPE_DROP_TOPIC: - return "drop-topic"; - case TRN_TYPE_SUBSCRIBE: - return "subscribe"; - case TRN_TYPE_REBALANCE: - return "rebalance"; - case TRN_TYPE_COMMIT_OFFSET: - return "commit-offset"; - case TRN_TYPE_CREATE_STREAM: - return "create-stream"; - case TRN_TYPE_DROP_STREAM: - return "drop-stream"; - case TRN_TYPE_CONSUMER_LOST: - return "consumer-lost"; - case TRN_TYPE_CONSUMER_RECOVER: - return "consumer-recover"; - case TRN_TYPE_CREATE_DNODE: - return "create-qnode"; - case TRN_TYPE_DROP_DNODE: - return "drop-qnode"; - case TRN_TYPE_CREATE_DB: - return "create-db"; - case TRN_TYPE_ALTER_DB: - return "alter-db"; - case TRN_TYPE_DROP_DB: - return "drop-db"; - case TRN_TYPE_SPLIT_VGROUP: - return "split-vgroup"; - case TRN_TYPE_MERGE_VGROUP: - return "merge-vgroup"; - case TRN_TYPE_CREATE_STB: - return "create-stb"; - case TRN_TYPE_ALTER_STB: - return "alter-stb"; - case TRN_TYPE_DROP_STB: - return "drop-stb"; - case TRN_TYPE_CREATE_SMA: - return "create-sma"; - case TRN_TYPE_DROP_SMA: - return "drop-sma"; - default: - return "invalid"; - } -} - static void mndTransTestStartFunc(SMnode *pMnode, void *param, int32_t paramLen) { mInfo("test trans start, param:%s, len:%d", (char *)param, paramLen); } @@ -594,7 +519,7 @@ void mndReleaseTrans(SMnode *pMnode, STrans *pTrans) { sdbRelease(pSdb, pTrans); } -STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnType type, const SRpcMsg *pReq) { +STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnConflct conflict, const SRpcMsg *pReq) { STrans *pTrans = taosMemoryCalloc(1, sizeof(STrans)); if (pTrans == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; @@ -605,8 +530,8 @@ STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnType type, const S pTrans->id = sdbGetMaxId(pMnode->pSdb, SDB_TRANS); pTrans->stage = TRN_STAGE_PREPARE; pTrans->policy = policy; - pTrans->type = type; - pTrans->parallel = TRN_EXEC_PARALLEL; + pTrans->conflict = conflict; + pTrans->exec = TRN_EXEC_PRARLLEL; pTrans->createdTime = taosGetTimestampMs(); pTrans->redoActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction)); pTrans->undoActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction)); @@ -627,7 +552,7 @@ static void mndTransDropActions(SArray *pArray) { int32_t size = taosArrayGetSize(pArray); for (int32_t i = 0; i < size; ++i) { STransAction *pAction = taosArrayGet(pArray, i); - if (pAction->isRaw) { + if 
(pAction->actionType) { taosMemoryFreeClear(pAction->pRaw); } else { taosMemoryFreeClear(pAction->pCont); @@ -658,17 +583,17 @@ static int32_t mndTransAppendAction(SArray *pArray, STransAction *pAction) { } int32_t mndTransAppendRedolog(STrans *pTrans, SSdbRaw *pRaw) { - STransAction action = {.stage = TRN_STAGE_REDO_ACTION, .isRaw = true, .pRaw = pRaw}; + STransAction action = {.stage = TRN_STAGE_REDO_ACTION, .actionType = true, .pRaw = pRaw}; return mndTransAppendAction(pTrans->redoActions, &action); } int32_t mndTransAppendUndolog(STrans *pTrans, SSdbRaw *pRaw) { - STransAction action = {.stage = TRN_STAGE_UNDO_ACTION, .isRaw = true, .pRaw = pRaw}; + STransAction action = {.stage = TRN_STAGE_UNDO_ACTION, .actionType = true, .pRaw = pRaw}; return mndTransAppendAction(pTrans->undoActions, &action); } int32_t mndTransAppendCommitlog(STrans *pTrans, SSdbRaw *pRaw) { - STransAction action = {.stage = TRN_STAGE_COMMIT_ACTION, .isRaw = true, .pRaw = pRaw}; + STransAction action = {.stage = TRN_STAGE_COMMIT_ACTION, .actionType = true, .pRaw = pRaw}; return mndTransAppendAction(pTrans->commitActions, &action); } @@ -698,7 +623,7 @@ void mndTransSetDbInfo(STrans *pTrans, SDbObj *pDb) { memcpy(pTrans->dbname, pDb->name, TSDB_DB_FNAME_LEN); } -void mndTransSetNoParallel(STrans *pTrans) { pTrans->parallel = TRN_EXEC_NO_PARALLEL; } +void mndTransSetSerial(STrans *pTrans) { pTrans->exec = TRN_EXEC_SERIAL; } static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) { SSdbRaw *pRaw = mndTransActionEncode(pTrans); @@ -721,76 +646,43 @@ static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) { return 0; } -static bool mndIsBasicTrans(STrans *pTrans) { - return pTrans->type > TRN_TYPE_BASIC_SCOPE && pTrans->type < TRN_TYPE_BASIC_SCOPE_END; -} - -static bool mndIsGlobalTrans(STrans *pTrans) { - return pTrans->type > TRN_TYPE_GLOBAL_SCOPE && pTrans->type < TRN_TYPE_GLOBAL_SCOPE_END; -} - -static bool mndIsDbTrans(STrans *pTrans) { - return pTrans->type > TRN_TYPE_DB_SCOPE && pTrans->type < TRN_TYPE_DB_SCOPE_END; -} - -static bool mndIsStbTrans(STrans *pTrans) { - return pTrans->type > TRN_TYPE_STB_SCOPE && pTrans->type < TRN_TYPE_STB_SCOPE_END; -} - -static bool mndCheckTransConflict(SMnode *pMnode, STrans *pNewTrans) { +static bool mndCheckTransConflict(SMnode *pMnode, STrans *pNew) { STrans *pTrans = NULL; void *pIter = NULL; bool conflict = false; - if (mndIsBasicTrans(pNewTrans)) return conflict; + if (pNew->conflict == TRN_CONFLICT_NOTHING) return conflict; while (1) { pIter = sdbFetch(pMnode->pSdb, SDB_TRANS, pIter, (void **)&pTrans); if (pIter == NULL) break; - if (mndIsGlobalTrans(pNewTrans)) { - if (mndIsDbTrans(pTrans) || mndIsStbTrans(pTrans)) { - mError("trans:%d, can't execute since trans:%d in progress db:%s", pNewTrans->id, pTrans->id, pTrans->dbname); - conflict = true; - } else { - } + if (pNew->conflict == TRN_CONFLICT_GLOBAL) conflict = true; + if (pNew->conflict == TRN_CONFLICT_DB) { + if (pTrans->conflict == TRN_CONFLICT_GLOBAL) conflict = true; + if (pTrans->conflict == TRN_CONFLICT_DB && strcmp(pNew->dbname, pTrans->dbname) == 0) conflict = true; + if (pTrans->conflict == TRN_CONFLICT_DB_INSIDE && strcmp(pNew->dbname, pTrans->dbname) == 0) conflict = true; } - - else if (mndIsDbTrans(pNewTrans)) { - if (mndIsGlobalTrans(pTrans)) { - mError("trans:%d, can't execute since trans:%d in progress", pNewTrans->id, pTrans->id); - conflict = true; - } else if (mndIsDbTrans(pTrans) || mndIsStbTrans(pTrans)) { - if (strcmp(pNewTrans->dbname, pTrans->dbname) == 0) { - mError("trans:%d, can't 
execute since trans:%d in progress db:%s", pNewTrans->id, pTrans->id, pTrans->dbname); - conflict = true; - } - } else { - } + if (pNew->conflict == TRN_CONFLICT_DB_INSIDE) { + if (pTrans->conflict == TRN_CONFLICT_GLOBAL) conflict = true; + if (pTrans->conflict == TRN_CONFLICT_DB && strcmp(pNew->dbname, pTrans->dbname) == 0) conflict = true; } - - else if (mndIsStbTrans(pNewTrans)) { - if (mndIsGlobalTrans(pTrans)) { - mError("trans:%d, can't execute since trans:%d in progress", pNewTrans->id, pTrans->id); - conflict = true; - } else if (mndIsDbTrans(pTrans)) { - if (strcmp(pNewTrans->dbname, pTrans->dbname) == 0) { - mError("trans:%d, can't execute since trans:%d in progress db:%s", pNewTrans->id, pTrans->id, pTrans->dbname); - conflict = true; - } - } else { - } - } - + mError("trans:%d, can't execute since conflict with trans:%d, db:%s", pNew->id, pTrans->id, pTrans->dbname); sdbRelease(pMnode->pSdb, pTrans); } - sdbCancelFetch(pMnode->pSdb, pIter); - sdbRelease(pMnode->pSdb, pTrans); return conflict; } int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) { + if (pTrans->conflict == TRN_CONFLICT_DB || pTrans->conflict == TRN_CONFLICT_DB_INSIDE) { + if (strlen(pTrans->dbname) == 0) { + terrno = TSDB_CODE_MND_TRANS_CONFLICT; + mError("trans:%d, failed to prepare conflict db not set", pTrans->id); + return -1; + } + } + if (mndCheckTransConflict(pMnode, pTrans)) { terrno = TSDB_CODE_MND_TRANS_CONFLICT; mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr()); @@ -921,9 +813,6 @@ void mndTransProcessRsp(SRpcMsg *pRsp) { if (pAction != NULL) { pAction->msgReceived = 1; pAction->errCode = pRsp->code; - if (pAction->errCode != 0) { - tstrncpy(pTrans->lastError, tstrerror(pAction->errCode), TSDB_TRANS_ERROR_LEN); - } } mDebug("trans:%d, %s:%d response is received, code:0x%x, accept:0x%x", transId, mndTransStr(pAction->stage), action, @@ -1004,7 +893,7 @@ static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransActio } static int32_t mndTransExecSingleAction(SMnode *pMnode, STrans *pTrans, STransAction *pAction) { - if (pAction->isRaw) { + if (pAction->actionType) { return mndTransWriteSingleLog(pMnode, pTrans, pAction); } else { return mndTransSendSingleMsg(pMnode, pTrans, pAction); @@ -1032,24 +921,36 @@ static int32_t mndTransExecuteActions(SMnode *pMnode, STrans *pTrans, SArray *pA return -1; } - int32_t numOfExecuted = 0; - int32_t errCode = 0; + int32_t numOfExecuted = 0; + int32_t errCode = 0; + STransAction *pErrAction = NULL; for (int32_t action = 0; action < numOfActions; ++action) { STransAction *pAction = taosArrayGet(pArray, action); if (pAction->msgReceived || pAction->rawWritten) { numOfExecuted++; if (pAction->errCode != 0 && pAction->errCode != pAction->acceptableCode) { errCode = pAction->errCode; + pErrAction = pAction; } } } if (numOfExecuted == numOfActions) { if (errCode == 0) { + pTrans->lastErrorAction = 0; + pTrans->lastErrorNo = 0; + pTrans->lastErrorMsgType = 0; + memset(&pTrans->lastErrorEpset, 0, sizeof(pTrans->lastErrorEpset)); mDebug("trans:%d, all %d actions execute successfully", pTrans->id, numOfActions); return 0; } else { mError("trans:%d, all %d actions executed, code:0x%x", pTrans->id, numOfActions, errCode & 0XFFFF); + if (pErrAction != NULL) { + pTrans->lastErrorMsgType = pErrAction->msgType; + pTrans->lastErrorAction = pErrAction->id; + pTrans->lastErrorNo = pErrAction->errCode; + pTrans->lastErrorEpset = pErrAction->epSet; + } mndTransResetActions(pMnode, pTrans, pArray); terrno = errCode; return errCode; @@ -1084,7 
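`mndTransPrepare` now refuses DB-scoped transactions that never called `mndTransSetDbInfo`, and the conflict check reduces to a small matrix: `NOTHING` never conflicts, `GLOBAL` conflicts with any running transaction, `DB` conflicts with `GLOBAL` and with any `DB` or `DB_INSIDE` transaction on the same database, and `DB_INSIDE` conflicts only with `GLOBAL` and with a `DB` transaction on the same database, so two `DB_INSIDE` transactions on one database can run in parallel. Restated as a pairwise predicate (a reading of the loop above, assuming the `mndDef.h` types and the usual includes):

```c
/* Pairwise conflict rule between a new transaction and one already running. */
static bool transConflicts(const STrans *pNew, const STrans *pRunning) {
  if (pNew->conflict == TRN_CONFLICT_NOTHING) return false;
  if (pNew->conflict == TRN_CONFLICT_GLOBAL) return true;
  if (pRunning->conflict == TRN_CONFLICT_GLOBAL) return true;

  bool sameDb = (strcmp(pNew->dbname, pRunning->dbname) == 0);
  if (pNew->conflict == TRN_CONFLICT_DB) {
    return sameDb && (pRunning->conflict == TRN_CONFLICT_DB || pRunning->conflict == TRN_CONFLICT_DB_INSIDE);
  }
  /* pNew->conflict == TRN_CONFLICT_DB_INSIDE */
  return sameDb && pRunning->conflict == TRN_CONFLICT_DB;
}
```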
+985,7 @@ static int32_t mndTransExecuteCommitActions(SMnode *pMnode, STrans *pTrans) { return code; } -static int32_t mndTransExecuteRedoActionsNoParallel(SMnode *pMnode, STrans *pTrans) { +static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans) { int32_t code = 0; int32_t numOfActions = taosArrayGetSize(pTrans->redoActions); if (numOfActions == 0) return code; @@ -1111,6 +1012,18 @@ static int32_t mndTransExecuteRedoActionsNoParallel(SMnode *pMnode, STrans *pTra } } + if (code == 0) { + pTrans->lastErrorAction = 0; + pTrans->lastErrorNo = 0; + pTrans->lastErrorMsgType = 0; + memset(&pTrans->lastErrorEpset, 0, sizeof(pTrans->lastErrorEpset)); + } else { + pTrans->lastErrorMsgType = pAction->msgType; + pTrans->lastErrorAction = action; + pTrans->lastErrorNo = pAction->errCode; + pTrans->lastErrorEpset = pAction->epSet; + } + if (code == 0) { pTrans->redoActionPos++; mDebug("trans:%d, %s:%d is executed and need sync to other mnodes", pTrans->id, mndTransStr(pAction->stage), @@ -1144,8 +1057,8 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) { bool continueExec = true; int32_t code = 0; - if (pTrans->parallel == TRN_EXEC_NO_PARALLEL) { - code = mndTransExecuteRedoActionsNoParallel(pMnode, pTrans); + if (pTrans->exec == TRN_EXEC_SERIAL) { + code = mndTransExecuteRedoActionsSerial(pMnode, pTrans); } else { code = mndTransExecuteRedoActions(pMnode, pTrans); } @@ -1455,11 +1368,6 @@ static int32_t mndRetrieveTrans(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); colDataAppend(pColInfo, numOfRows, (const char *)dbname, false); - char type[TSDB_TRANS_TYPE_LEN + VARSTR_HEADER_SIZE] = {0}; - STR_WITH_MAXSIZE_TO_VARSTR(type, mndTransType(pTrans->type), pShow->pMeta->pSchemas[cols].bytes); - pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); - colDataAppend(pColInfo, numOfRows, (const char *)type, false); - pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); colDataAppend(pColInfo, numOfRows, (const char *)&pTrans->failedTimes, false); @@ -1467,7 +1375,20 @@ static int32_t mndRetrieveTrans(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl colDataAppend(pColInfo, numOfRows, (const char *)&pTrans->lastExecTime, false); char lastError[TSDB_TRANS_ERROR_LEN + VARSTR_HEADER_SIZE] = {0}; - STR_WITH_MAXSIZE_TO_VARSTR(lastError, pTrans->lastError, pShow->pMeta->pSchemas[cols].bytes); + char detail[TSDB_TRANS_ERROR_LEN] = {0}; + if (pTrans->lastErrorNo != 0) { + int32_t len = snprintf(detail, sizeof(detail), "action:%d errno:0x%x(%s) ", pTrans->lastErrorAction, + pTrans->lastErrorNo & 0xFFFF, tstrerror(pTrans->lastErrorNo)); + SEpSet epset = pTrans->lastErrorEpset; + if (epset.numOfEps > 0) { + len += snprintf(detail + len, sizeof(detail) - len, "msgType:%s numOfEps:%d inUse:%d ", + TMSG_INFO(pTrans->lastErrorMsgType), epset.numOfEps, epset.inUse); + } + for (int32_t i = 0; i < pTrans->lastErrorEpset.numOfEps; ++i) { + len += snprintf(detail + len, sizeof(detail) - len, "ep:%d-%s:%u ", i, epset.eps[i].fqdn, epset.eps[i].port); + } + } + STR_WITH_MAXSIZE_TO_VARSTR(lastError, detail, pShow->pMeta->pSchemas[cols].bytes); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); colDataAppend(pColInfo, numOfRows, (const char *)lastError, false); diff --git a/source/dnode/mnode/impl/src/mndUser.c b/source/dnode/mnode/impl/src/mndUser.c index 83d00c86e3eb8797f5b40fa58243df429e905d5a..345d756f4399a46b4d4abfa8db1ea74b2271b01e 100644 --- a/source/dnode/mnode/impl/src/mndUser.c +++ 
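The free-form `lastError`/`desc` buffers are gone; the `last_error` column of `show transactions` is now rendered on the fly from the structured failure fields (failing action id, error number, message type, and target epset). Based on the format strings above, a rendered cell looks schematically like the following, with every value a placeholder:

```
action:<action id> errno:0x<code>(<tstrerror text>) msgType:<message type> numOfEps:<n> inUse:<i> ep:0-<fqdn>:<port> ...
```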
b/source/dnode/mnode/impl/src/mndUser.c @@ -79,10 +79,7 @@ static int32_t mndCreateDefaultUser(SMnode *pMnode, char *acct, char *user, char mDebug("user:%s, will be created when deploying, raw:%p", userObj.user, pRaw); -#if 0 - return sdbWrite(pMnode->pSdb, pRaw); -#else - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_USER, NULL); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL); if (pTrans == NULL) { mError("user:%s, failed to create since %s", userObj.user, terrstr()); return -1; @@ -104,7 +101,6 @@ static int32_t mndCreateDefaultUser(SMnode *pMnode, char *acct, char *user, char mndTransDrop(pTrans); return 0; -#endif } static int32_t mndCreateDefaultUsers(SMnode *pMnode) { @@ -291,7 +287,7 @@ static int32_t mndCreateUser(SMnode *pMnode, char *acct, SCreateUserReq *pCreate userObj.updateTime = userObj.createdTime; userObj.superUser = pCreate->superUser; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_USER, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("user:%s, failed to create since %s", pCreate->user, terrstr()); return -1; @@ -371,7 +367,7 @@ _OVER: } static int32_t mndAlterUser(SMnode *pMnode, SUserObj *pOld, SUserObj *pNew, SRpcMsg *pReq) { - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_ALTER_USER, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("user:%s, failed to alter since %s", pOld->user, terrstr()); return -1; @@ -578,7 +574,7 @@ _OVER: } static int32_t mndDropUser(SMnode *pMnode, SRpcMsg *pReq, SUserObj *pUser) { - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_USER, pReq); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq); if (pTrans == NULL) { mError("user:%s, failed to drop since %s", pUser->user, terrstr()); return -1; diff --git a/source/dnode/mnode/impl/test/trans/trans2.cpp b/source/dnode/mnode/impl/test/trans/trans2.cpp index cfcfc2490e022092386b64f859befe4b1b922c80..d518db2d38f00dea07552b02b7342d9371454930 100644 --- a/source/dnode/mnode/impl/test/trans/trans2.cpp +++ b/source/dnode/mnode/impl/test/trans/trans2.cpp @@ -11,6 +11,8 @@ #include +#if 0 + #include "mndTrans.h" #include "mndUser.h" #include "tcache.h" @@ -103,7 +105,7 @@ class MndTestTrans2 : public ::testing::Test { void SetUp() override {} void TearDown() override {} - int32_t CreateUserLog(const char *acct, const char *user, ETrnType type, SDbObj *pDb) { + int32_t CreateUserLog(const char *acct, const char *user, ETrnConflct conflict, SDbObj *pDb) { SUserObj userObj = {0}; taosEncryptPass_c((uint8_t *)"taosdata", strlen("taosdata"), userObj.pass); tstrncpy(userObj.user, user, TSDB_USER_LEN); @@ -113,7 +115,7 @@ class MndTestTrans2 : public ::testing::Test { userObj.superUser = 1; SRpcMsg rpcMsg = {0}; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, type, &rpcMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, conflict, &rpcMsg); SSdbRaw *pRedoRaw = mndUserActionEncode(&userObj); mndTransAppendRedolog(pTrans, pRedoRaw); sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY); @@ -135,7 +137,7 @@ class MndTestTrans2 : public ::testing::Test { return code; } - int32_t CreateUserAction(const char *acct, const char *user, bool hasUndoAction, ETrnPolicy policy, ETrnType type, + int32_t CreateUserAction(const char *acct, const char *user, bool 
hasUndoAction, ETrnPolicy policy, ETrnConflct conflict, SDbObj *pDb) { SUserObj userObj = {0}; taosEncryptPass_c((uint8_t *)"taosdata", strlen("taosdata"), userObj.pass); @@ -146,7 +148,7 @@ class MndTestTrans2 : public ::testing::Test { userObj.superUser = 1; SRpcMsg rpcMsg = {0}; - STrans *pTrans = mndTransCreate(pMnode, policy, type, &rpcMsg); + STrans *pTrans = mndTransCreate(pMnode, policy, conflict, &rpcMsg); SSdbRaw *pRedoRaw = mndUserActionEncode(&userObj); mndTransAppendRedolog(pTrans, pRedoRaw); sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY); @@ -218,7 +220,7 @@ class MndTestTrans2 : public ::testing::Test { userObj.superUser = 1; SRpcMsg rpcMsg = {0}; - STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_CREATE_USER, &rpcMsg); + STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, &rpcMsg); SSdbRaw *pRedoRaw = mndUserActionEncode(&userObj); mndTransAppendRedolog(pTrans, pRedoRaw); sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY); @@ -528,3 +530,5 @@ TEST_F(MndTestTrans2, 04_Conflict) { mndReleaseUser(pMnode, pUser); } } + +#endif \ No newline at end of file diff --git a/source/dnode/mnode/impl/test/user/CMakeLists.txt b/source/dnode/mnode/impl/test/user/CMakeLists.txt index b39ea0e73f728cacc648f6eb0723328e028c05f4..ed4d96461742a77fd4a2ba3d0b9cd070c2f00c43 100644 --- a/source/dnode/mnode/impl/test/user/CMakeLists.txt +++ b/source/dnode/mnode/impl/test/user/CMakeLists.txt @@ -5,7 +5,9 @@ target_link_libraries( PUBLIC sut ) -add_test( - NAME userTest - COMMAND userTest -) +if(NOT TD_WINDOWS) + add_test( + NAME userTest + COMMAND userTest + ) +endif(NOT TD_WINDOWS) diff --git a/source/dnode/vnode/CMakeLists.txt b/source/dnode/vnode/CMakeLists.txt index 17445b7abe6872f038a5931d926cb9af6a95ce2d..ea2a256663cd0f9ec7657579938d9c036af10e6a 100644 --- a/source/dnode/vnode/CMakeLists.txt +++ b/source/dnode/vnode/CMakeLists.txt @@ -52,10 +52,11 @@ target_sources( # tq "src/tq/tq.c" "src/tq/tqExec.c" - "src/tq/tqCommit.c" - "src/tq/tqOffset.c" - "src/tq/tqPush.c" + "src/tq/tqMeta.c" "src/tq/tqRead.c" + "src/tq/tqOffset.c" + #"src/tq/tqPush.c" + #"src/tq/tqCommit.c" ) target_include_directories( vnode diff --git a/source/dnode/vnode/src/inc/tq.h b/source/dnode/vnode/src/inc/tq.h index 72138926aa2c73d1a4bf4ea780665df3ad39d9ed..89ea969d921891b4ff693d55dac63337349c59d3 100644 --- a/source/dnode/vnode/src/inc/tq.h +++ b/source/dnode/vnode/src/inc/tq.h @@ -66,12 +66,12 @@ struct STqReadHandle { // tqPush typedef struct { - int64_t consumerId; - int32_t epoch; - int32_t skipLogNum; - int64_t reqOffset; - SRWLatch lock; - SRpcMsg* handle; + int64_t consumerId; + int32_t epoch; + int32_t skipLogNum; + int64_t reqOffset; + SRpcHandleInfo info; + SRWLatch lock; } STqPushHandle; #if 0 @@ -168,6 +168,13 @@ int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalHead* int32_t tqDataExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataBlkRsp* pRsp, int32_t workerId); +// tqMeta + +int32_t tqMetaOpen(STQ* pTq); +int32_t tqMetaClose(STQ* pTq); +int32_t tqMetaSaveHandle(STQ* pTq, const char* key, const STqHandle* pHandle); +int32_t tqMetaDeleteHandle(STQ* pTq, const char* key); + // tqOffset STqOffsetStore* STqOffsetOpen(STqOffsetCfg*); void STqOffsetClose(STqOffsetStore*); diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c index b4747f2264abdfcd78b77cad4aa4c9c14731ee79..93f305ba77d7a9debe7c121249ae3ecafab34b6e 100644 --- a/source/dnode/vnode/src/tq/tq.c +++ b/source/dnode/vnode/src/tq/tq.c @@ -14,7 +14,6 @@ */ 
#include "tq.h" -#include "tdbInt.h" int32_t tqInit() { int8_t old; @@ -47,51 +46,6 @@ void tqCleanUp() { } } -int tqExecKeyCompare(const void* pKey1, int32_t kLen1, const void* pKey2, int32_t kLen2) { - return strcmp(pKey1, pKey2); -} - -int32_t tqStoreHandle(STQ* pTq, const char* key, const STqHandle* pHandle) { - int32_t code; - int32_t vlen; - tEncodeSize(tEncodeSTqHandle, pHandle, vlen, code); - ASSERT(code == 0); - - void* buf = taosMemoryCalloc(1, vlen); - if (buf == NULL) { - ASSERT(0); - } - - SEncoder encoder; - tEncoderInit(&encoder, buf, vlen); - - if (tEncodeSTqHandle(&encoder, pHandle) < 0) { - ASSERT(0); - } - - TXN txn; - - if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) { - ASSERT(0); - } - - if (tdbBegin(pTq->pMetaStore, &txn) < 0) { - ASSERT(0); - } - - if (tdbTbUpsert(pTq->pExecStore, key, (int)strlen(key), buf, vlen, &txn) < 0) { - ASSERT(0); - } - - if (tdbCommit(pTq->pMetaStore, &txn) < 0) { - ASSERT(0); - } - - tEncoderClear(&encoder); - taosMemoryFree(buf); - return 0; -} - STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) { STQ* pTq = taosMemoryMalloc(sizeof(STQ)); if (pTq == NULL) { @@ -108,60 +62,7 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) { pTq->pushMgr = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK); - if (tdbOpen(path, 16 * 1024, 1, &pTq->pMetaStore) < 0) { - ASSERT(0); - } - - if (tdbTbOpen("handles", -1, -1, tqExecKeyCompare, pTq->pMetaStore, &pTq->pExecStore) < 0) { - ASSERT(0); - } - - TXN txn; - - if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) { - ASSERT(0); - } - - TBC* pCur; - if (tdbTbcOpen(pTq->pExecStore, &pCur, &txn) < 0) { - ASSERT(0); - } - - void* pKey; - int kLen; - void* pVal; - int vLen; - - tdbTbcMoveToFirst(pCur); - SDecoder decoder; - - while (tdbTbcNext(pCur, &pKey, &kLen, &pVal, &vLen) == 0) { - STqHandle handle; - tDecoderInit(&decoder, (uint8_t*)pVal, vLen); - tDecodeSTqHandle(&decoder, &handle); - handle.pWalReader = walOpenReadHandle(pTq->pVnode->pWal); - for (int32_t i = 0; i < 5; i++) { - handle.execHandle.pExecReader[i] = tqInitSubmitMsgScanner(pTq->pVnode->pMeta); - } - if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) { - for (int32_t i = 0; i < 5; i++) { - SReadHandle reader = { - .reader = handle.execHandle.pExecReader[i], - .meta = pTq->pVnode->pMeta, - .pMsgCb = &pTq->pVnode->msgCb, - }; - handle.execHandle.exec.execCol.task[i] = - qCreateStreamExecTaskInfo(handle.execHandle.exec.execCol.qmsg, &reader); - ASSERT(handle.execHandle.exec.execCol.task[i]); - } - } else { - handle.execHandle.exec.execDb.pFilterOutTbUid = - taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK); - } - taosHashPut(pTq->handles, pKey, kLen, &handle, sizeof(STqHandle)); - } - - if (tdbTxnClose(&txn) < 0) { + if (tqMetaOpen(pTq) < 0) { ASSERT(0); } @@ -174,46 +75,12 @@ void tqClose(STQ* pTq) { taosHashCleanup(pTq->handles); taosHashCleanup(pTq->pStreamTasks); taosHashCleanup(pTq->pushMgr); - tdbClose(pTq->pMetaStore); + tqMetaClose(pTq); taosMemoryFree(pTq); } // TODO } -#if 0 -int32_t tEncodeSTqExec(SEncoder* pEncoder, const STqExec* pExec) { - if (tStartEncode(pEncoder) < 0) return -1; - if (tEncodeCStr(pEncoder, pExec->subKey) < 0) return -1; - if (tEncodeI64(pEncoder, pExec->consumerId) < 0) return -1; - if (tEncodeI32(pEncoder, pExec->epoch) < 0) return -1; - if (tEncodeI8(pEncoder, pExec->subType) < 0) return -1; - /*if (tEncodeI8(pEncoder, 
pExec->withTbName) < 0) return -1;*/ - /*if (tEncodeI8(pEncoder, pExec->withSchema) < 0) return -1;*/ - /*if (tEncodeI8(pEncoder, pExec->withTag) < 0) return -1;*/ - if (pExec->subType == TOPIC_SUB_TYPE__COLUMN) { - if (tEncodeCStr(pEncoder, pExec->qmsg) < 0) return -1; - } - tEndEncode(pEncoder); - return pEncoder->pos; -} - -int32_t tDecodeSTqExec(SDecoder* pDecoder, STqExec* pExec) { - if (tStartDecode(pDecoder) < 0) return -1; - if (tDecodeCStrTo(pDecoder, pExec->subKey) < 0) return -1; - if (tDecodeI64(pDecoder, &pExec->consumerId) < 0) return -1; - if (tDecodeI32(pDecoder, &pExec->epoch) < 0) return -1; - if (tDecodeI8(pDecoder, &pExec->subType) < 0) return -1; - /*if (tDecodeI8(pDecoder, &pExec->withTbName) < 0) return -1;*/ - /*if (tDecodeI8(pDecoder, &pExec->withSchema) < 0) return -1;*/ - /*if (tDecodeI8(pDecoder, &pExec->withTag) < 0) return -1;*/ - if (pExec->subType == TOPIC_SUB_TYPE__COLUMN) { - if (tDecodeCStrAlloc(pDecoder, &pExec->qmsg) < 0) return -1; - } - tEndDecode(pDecoder); - return 0; -} -#endif - int32_t tEncodeSTqHandle(SEncoder* pEncoder, const STqHandle* pHandle) { if (tStartEncode(pEncoder) < 0) return -1; if (tEncodeCStr(pEncoder, pHandle->subKey) < 0) return -1; @@ -290,9 +157,6 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_ taosWLockLatch(&pHandle->pushHandle.lock); - SRpcMsg* pMsg = atomic_load_ptr(&pHandle->pushHandle.handle); - ASSERT(pMsg); - SMqDataBlkRsp rsp = {0}; rsp.reqOffset = pHandle->pushHandle.reqOffset; rsp.blockData = taosArrayInit(0, sizeof(void*)); @@ -318,7 +182,7 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_ int32_t tlen = sizeof(SMqRspHead) + tEncodeSMqDataBlkRsp(NULL, &rsp); void* buf = rpcMallocCont(tlen); if (buf == NULL) { - pMsg->code = -1; + // todo free return -1; } @@ -329,10 +193,15 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_ void* abuf = POINTER_SHIFT(buf, sizeof(SMqRspHead)); tEncodeSMqDataBlkRsp(&abuf, &rsp); - SRpcMsg resp = {.info = handleInfo, .pCont = buf, .contLen = tlen, .code = 0}; + SRpcMsg resp = { + .info = pHandle->pushHandle.info, + .pCont = buf, + .contLen = tlen, + .code = 0, + }; tmsgSendRsp(&resp); - atomic_store_ptr(&pHandle->pushHandle.handle, NULL); + memset(&pHandle->pushHandle.info, 0, sizeof(SRpcHandleInfo)); taosWUnLockLatch(&pHandle->pushHandle.lock); tqDebug("vg %d offset %ld from consumer %ld (epoch %d) send rsp, block num: %d, reqOffset: %ld, rspOffset: %ld", @@ -374,7 +243,7 @@ int tqCommit(STQ* pTq) { int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) { SMqPollReq* pReq = pMsg->pCont; int64_t consumerId = pReq->consumerId; - int64_t waitTime = pReq->waitTime; + int64_t waitTime = pReq->timeout; int32_t reqEpoch = pReq->epoch; int64_t fetchOffset; @@ -410,24 +279,22 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) { rsp.blockData = taosArrayInit(0, sizeof(void*)); rsp.blockDataLen = taosArrayInit(0, sizeof(int32_t)); - rsp.blockSchema = taosArrayInit(0, sizeof(void*)); - rsp.blockTbName = taosArrayInit(0, sizeof(void*)); rsp.withTbName = pReq->withTbName; + if (rsp.withTbName) { + rsp.blockTbName = taosArrayInit(0, sizeof(void*)); + } if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) { rsp.withSchema = false; + rsp.withTag = false; } else { rsp.withSchema = true; + rsp.blockSchema = taosArrayInit(0, sizeof(void*)); + rsp.withTag = false; } - /*int8_t withTbName = pExec->withTbName;*/ - /*if (pReq->withTbName != -1) {*/ - /*withTbName = 
pReq->withTbName;*/ - /*}*/ - /*rsp.withTbName = withTbName;*/ - while (1) { consumerEpoch = atomic_load_32(&pHandle->epoch); if (consumerEpoch > reqEpoch) { @@ -443,15 +310,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) { SWalReadHead* pHead = &pHeadWithCkSum->head; -#if 0 - SWalReadHead* pHead; - if (walReadWithHandle_s(pExec->pWalReader, fetchOffset, &pHead) < 0) { - // TODO: no more log, set timer to wait blocking time - // if data inserted during waiting, launch query and - // response to user - tqDebug("tmq poll: consumer %ld (epoch %d) vg %d offset %ld, no more log to return", consumerId, pReq->epoch, - TD_VID(pTq->pVnode), fetchOffset); - #if 0 // add to pushMgr taosWLockLatch(&pExec->pushHandle.lock); @@ -473,10 +331,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) { return 0; #endif - break; - } -#endif - tqDebug("tmq poll: consumer %ld (epoch %d) iter log, vg %d offset %ld msgType %d", consumerId, pReq->epoch, TD_VID(pTq->pVnode), fetchOffset, pHead->msgType); @@ -533,8 +387,14 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) { // TODO wrap in destroy func taosArrayDestroy(rsp.blockData); taosArrayDestroy(rsp.blockDataLen); - taosArrayDestroyP(rsp.blockSchema, (FDelete)tDeleteSSchemaWrapper); - taosArrayDestroyP(rsp.blockTbName, (FDelete)taosMemoryFree); + + if (rsp.withSchema) { + taosArrayDestroyP(rsp.blockSchema, (FDelete)tDeleteSSchemaWrapper); + } + + if (rsp.withTbName) { + taosArrayDestroyP(rsp.blockTbName, (FDelete)taosMemoryFree); + } return 0; } @@ -545,24 +405,9 @@ int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen) { int32_t code = taosHashRemove(pTq->handles, pReq->subKey, strlen(pReq->subKey)); ASSERT(code == 0); - TXN txn; - - if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) { - ASSERT(0); - } - - if (tdbBegin(pTq->pMetaStore, &txn) < 0) { + if (tqMetaDeleteHandle(pTq, pReq->subKey) < 0) { ASSERT(0); } - - if (tdbTbDelete(pTq->pExecStore, pReq->subKey, (int)strlen(pReq->subKey), &txn) < 0) { - /*ASSERT(0);*/ - } - - if (tdbCommit(pTq->pMetaStore, &txn) < 0) { - ASSERT(0); - } - return 0; } @@ -620,7 +465,7 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) { atomic_add_fetch_32(&pHandle->epoch, 1); } - if (tqStoreHandle(pTq, req.subKey, pHandle) < 0) { + if (tqMetaSaveHandle(pTq, req.subKey, pHandle) < 0) { // TODO } return 0; diff --git a/source/dnode/vnode/src/tq/tqMeta.c b/source/dnode/vnode/src/tq/tqMeta.c index f2f48bbc8a69a022d0fc6b8a88c5a9a55d0b4ad6..74162a9f49c577df23a817c291cdedb8ca953f60 100644 --- a/source/dnode/vnode/src/tq/tqMeta.c +++ b/source/dnode/vnode/src/tq/tqMeta.c @@ -12,3 +12,137 @@ * You should have received a copy of the GNU Affero General Public License * along with this program. If not, see . 
*/ +#include "tdbInt.h" +#include "tq.h" + +int tqExecKeyCompare(const void* pKey1, int32_t kLen1, const void* pKey2, int32_t kLen2) { + return strcmp(pKey1, pKey2); +} + +int32_t tqMetaOpen(STQ* pTq) { + if (tdbOpen(pTq->path, 16 * 1024, 1, &pTq->pMetaStore) < 0) { + ASSERT(0); + } + + if (tdbTbOpen("handles", -1, -1, tqExecKeyCompare, pTq->pMetaStore, &pTq->pExecStore) < 0) { + ASSERT(0); + } + + TXN txn; + + if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) { + ASSERT(0); + } + + TBC* pCur; + if (tdbTbcOpen(pTq->pExecStore, &pCur, &txn) < 0) { + ASSERT(0); + } + + void* pKey; + int kLen; + void* pVal; + int vLen; + + tdbTbcMoveToFirst(pCur); + SDecoder decoder; + + while (tdbTbcNext(pCur, &pKey, &kLen, &pVal, &vLen) == 0) { + STqHandle handle; + tDecoderInit(&decoder, (uint8_t*)pVal, vLen); + tDecodeSTqHandle(&decoder, &handle); + handle.pWalReader = walOpenReadHandle(pTq->pVnode->pWal); + for (int32_t i = 0; i < 5; i++) { + handle.execHandle.pExecReader[i] = tqInitSubmitMsgScanner(pTq->pVnode->pMeta); + } + if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) { + for (int32_t i = 0; i < 5; i++) { + SReadHandle reader = { + .reader = handle.execHandle.pExecReader[i], + .meta = pTq->pVnode->pMeta, + .pMsgCb = &pTq->pVnode->msgCb, + }; + handle.execHandle.exec.execCol.task[i] = + qCreateStreamExecTaskInfo(handle.execHandle.exec.execCol.qmsg, &reader); + ASSERT(handle.execHandle.exec.execCol.task[i]); + } + } else { + handle.execHandle.exec.execDb.pFilterOutTbUid = + taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK); + } + taosHashPut(pTq->handles, pKey, kLen, &handle, sizeof(STqHandle)); + } + + if (tdbTxnClose(&txn) < 0) { + ASSERT(0); + } + return 0; +} + +int32_t tqMetaClose(STQ* pTq) { + tdbClose(pTq->pMetaStore); + return 0; +} + +int32_t tqMetaSaveHandle(STQ* pTq, const char* key, const STqHandle* pHandle) { + int32_t code; + int32_t vlen; + tEncodeSize(tEncodeSTqHandle, pHandle, vlen, code); + ASSERT(code == 0); + + void* buf = taosMemoryCalloc(1, vlen); + if (buf == NULL) { + ASSERT(0); + } + + SEncoder encoder; + tEncoderInit(&encoder, buf, vlen); + + if (tEncodeSTqHandle(&encoder, pHandle) < 0) { + ASSERT(0); + } + + TXN txn; + + if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) { + ASSERT(0); + } + + if (tdbBegin(pTq->pMetaStore, &txn) < 0) { + ASSERT(0); + } + + if (tdbTbUpsert(pTq->pExecStore, key, (int)strlen(key), buf, vlen, &txn) < 0) { + ASSERT(0); + } + + if (tdbCommit(pTq->pMetaStore, &txn) < 0) { + ASSERT(0); + } + + tEncoderClear(&encoder); + taosMemoryFree(buf); + return 0; +} + +int32_t tqMetaDeleteHandle(STQ* pTq, const char* key) { + TXN txn; + + if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) { + ASSERT(0); + } + + if (tdbBegin(pTq->pMetaStore, &txn) < 0) { + ASSERT(0); + } + + if (tdbTbDelete(pTq->pExecStore, key, (int)strlen(key), &txn) < 0) { + /*ASSERT(0);*/ + } + + if (tdbCommit(pTq->pMetaStore, &txn) < 0) { + ASSERT(0); + } + + return 0; +} diff --git a/source/dnode/vnode/src/tq/tqPush.c b/source/dnode/vnode/src/tq/tqPush.c index f2f48bbc8a69a022d0fc6b8a88c5a9a55d0b4ad6..e31566f3faca14b0955b851f654247355f500630 100644 --- a/source/dnode/vnode/src/tq/tqPush.c +++ b/source/dnode/vnode/src/tq/tqPush.c @@ -12,3 +12,5 @@ * You should have received a copy of the GNU Affero General Public License * along with this program. If not, see . 
*/ + +#include "tq.h" diff --git a/source/dnode/vnode/test/tsdbSmaTest.cpp b/source/dnode/vnode/test/tsdbSmaTest.cpp index ab617cb18660bc6663b500d7ef9da60a5c2d9fa5..4d2741f751066b62ebb463b4ff8d2930f057a318 100644 --- a/source/dnode/vnode/test/tsdbSmaTest.cpp +++ b/source/dnode/vnode/test/tsdbSmaTest.cpp @@ -368,7 +368,7 @@ TEST(testCase, tSma_Data_Insert_Query_Test) { SDiskCfg pDisks = {0}; pDisks.level = 0; pDisks.primary = 1; - strncpy(pDisks.dir, "/var/lib/taos", TSDB_FILENAME_LEN); + strncpy(pDisks.dir, TD_DATA_DIR_PATH, TSDB_FILENAME_LEN); int32_t numOfDisks = 1; pTsdb->pTfs = tfsOpen(&pDisks, numOfDisks); EXPECT_NE(pTsdb->pTfs, nullptr); diff --git a/source/libs/catalog/test/catalogTests.cpp b/source/libs/catalog/test/catalogTests.cpp index 81d206a0f3fee7f33f24b9740c973ab8d89b10d1..19c5bb6dcdd8e6d879c349633ae54d7b542af303 100644 --- a/source/libs/catalog/test/catalogTests.cpp +++ b/source/libs/catalog/test/catalogTests.cpp @@ -137,7 +137,7 @@ void ctgTestInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - strcpy(tsLogDir, "/var/log/taos"); + strcpy(tsLogDir, TD_LOG_DIR_PATH); ctgdEnableDebug("api"); ctgdEnableDebug("meta"); diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index 3f6f13a572dd309b6348241f9533eb3a88594f12..a7e93246b765199369eda8e7aba0da94a49fce3b 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -1646,8 +1646,8 @@ bool leastSQRFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResultInf pInfo->startVal = IS_FLOAT_TYPE(pCtx->param[1].param.nType) ? pCtx->param[1].param.d : (double)pCtx->param[1].param.i; - pInfo->stepVal = IS_FLOAT_TYPE(pCtx->param[1].param.nType) ? pCtx->param[2].param.d : - (double)pCtx->param[1].param.i; + pInfo->stepVal = IS_FLOAT_TYPE(pCtx->param[2].param.nType) ? 
pCtx->param[2].param.d : + (double)pCtx->param[2].param.i; return true; } diff --git a/source/libs/parser/src/parInsert.c b/source/libs/parser/src/parInsert.c index 047c2d15045f667d41319b6d7c14c475cd6273a1..8a843c2c1ac4bcce8555889b08a8082ca3502f84 100644 --- a/source/libs/parser/src/parInsert.c +++ b/source/libs/parser/src/parInsert.c @@ -1525,14 +1525,14 @@ int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, char* tN SKvParam param = {.builder = &tagBuilder}; for (int c = 0; c < tags->numOfBound; ++c) { + SSchema* pTagSchema = &pSchema[tags->boundColumns[c]]; + param.schema = pTagSchema; + if (bind[c].is_null && bind[c].is_null[0]) { KvRowAppend(&pBuf, NULL, 0, ¶m); continue; } - SSchema* pTagSchema = &pSchema[tags->boundColumns[c]]; - param.schema = pTagSchema; - int32_t colLen = pTagSchema->bytes; if (IS_VAR_DATA_TYPE(pTagSchema->type)) { colLen = bind[c].length[0]; @@ -1724,18 +1724,23 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu return TSDB_CODE_SUCCESS; } -int32_t buildBoundFields(SParsedDataColInfo* boundInfo, SSchema* pSchema, int32_t* fieldNum, TAOS_FIELD** fields) { +int32_t buildBoundFields(SParsedDataColInfo* boundInfo, SSchema* pSchema, int32_t* fieldNum, TAOS_FIELD_E** fields, uint8_t timePrec) { if (fields) { *fields = taosMemoryCalloc(boundInfo->numOfBound, sizeof(TAOS_FIELD)); if (NULL == *fields) { return TSDB_CODE_OUT_OF_MEMORY; } + SSchema* schema = &pSchema[boundInfo->boundColumns[0]]; + if (TSDB_DATA_TYPE_TIMESTAMP == schema->type) { + (*fields)[0].precision = timePrec; + } + for (int32_t i = 0; i < boundInfo->numOfBound; ++i) { - SSchema* pTagSchema = &pSchema[boundInfo->boundColumns[i]]; - strcpy((*fields)[i].name, pTagSchema->name); - (*fields)[i].type = pTagSchema->type; - (*fields)[i].bytes = pTagSchema->bytes; + schema = &pSchema[boundInfo->boundColumns[i]]; + strcpy((*fields)[i].name, schema->name); + (*fields)[i].type = schema->type; + (*fields)[i].bytes = schema->bytes; } } @@ -1744,7 +1749,7 @@ int32_t buildBoundFields(SParsedDataColInfo* boundInfo, SSchema* pSchema, int32_ return TSDB_CODE_SUCCESS; } -int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TAOS_FIELD** fields) { +int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TAOS_FIELD_E** fields) { STableDataBlocks* pDataBlock = (STableDataBlocks*)pBlock; SParsedDataColInfo* tags = (SParsedDataColInfo*)boundTags; if (NULL == tags) { @@ -1759,12 +1764,12 @@ int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TA return TSDB_CODE_SUCCESS; } - CHECK_CODE(buildBoundFields(tags, pSchema, fieldNum, fields)); + CHECK_CODE(buildBoundFields(tags, pSchema, fieldNum, fields, 0)); return TSDB_CODE_SUCCESS; } -int32_t qBuildStmtColFields(void* pBlock, int32_t* fieldNum, TAOS_FIELD** fields) { +int32_t qBuildStmtColFields(void* pBlock, int32_t* fieldNum, TAOS_FIELD_E** fields) { STableDataBlocks* pDataBlock = (STableDataBlocks*)pBlock; SSchema* pSchema = getTableColumnSchema(pDataBlock->pTableMeta); if (pDataBlock->boundColumnInfo.numOfBound <= 0) { @@ -1776,7 +1781,7 @@ int32_t qBuildStmtColFields(void* pBlock, int32_t* fieldNum, TAOS_FIELD** fields return TSDB_CODE_SUCCESS; } - CHECK_CODE(buildBoundFields(&pDataBlock->boundColumnInfo, pSchema, fieldNum, fields)); + CHECK_CODE(buildBoundFields(&pDataBlock->boundColumnInfo, pSchema, fieldNum, fields, pDataBlock->pTableMeta->tableInfo.precision)); return TSDB_CODE_SUCCESS; } diff --git a/source/libs/parser/src/parTranslater.c 
b/source/libs/parser/src/parTranslater.c index 1b46b0f7144e14b15527f1b693633b926211d379..acbedff0c2bc1df1723d9acd834ee595315772e5 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -830,7 +830,8 @@ static EDealRes translateComparisonOperator(STranslateContext* pCxt, SOperatorNo if (!IS_VAR_DATA_TYPE(((SExprNode*)(pOp->pLeft))->resType.type)) { return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_WRONG_VALUE_TYPE, ((SExprNode*)(pOp->pLeft))->aliasName); } - if (QUERY_NODE_VALUE != nodeType(pOp->pRight) || !IS_STR_DATA_TYPE(((SExprNode*)(pOp->pRight))->resType.type)) { + if (QUERY_NODE_VALUE != nodeType(pOp->pRight) || + ((!IS_STR_DATA_TYPE(((SExprNode*)(pOp->pRight))->resType.type)) && (((SExprNode*)(pOp->pRight))->resType.type != TSDB_DATA_TYPE_NULL))) { return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_WRONG_VALUE_TYPE, ((SExprNode*)(pOp->pRight))->aliasName); } } diff --git a/source/libs/qworker/test/qworkerTests.cpp b/source/libs/qworker/test/qworkerTests.cpp index 1b959fbe633e0c50ddc7b80af321ee0420a9616d..16dcd7b6e025dd5761202308d00c20435d9a55f0 100644 --- a/source/libs/qworker/test/qworkerTests.cpp +++ b/source/libs/qworker/test/qworkerTests.cpp @@ -108,7 +108,7 @@ void qwtInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - strcpy(tsLogDir, "/var/log/taos"); + strcpy(tsLogDir, TD_LOG_DIR_PATH); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) { printf("failed to open log file in directory:%s\n", tsLogDir); diff --git a/source/libs/scalar/test/filter/filterTests.cpp b/source/libs/scalar/test/filter/filterTests.cpp index 59c3104e96c0320804ba4f17dd0a013146b27a2d..7fb1ffbd64aecb2fee9d7c862f295070dbea8e09 100644 --- a/source/libs/scalar/test/filter/filterTests.cpp +++ b/source/libs/scalar/test/filter/filterTests.cpp @@ -60,7 +60,7 @@ void flttInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - strcpy(tsLogDir, "/var/log/taos"); + strcpy(tsLogDir, TD_LOG_DIR_PATH); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) { printf("failed to open log file in directory:%s\n", tsLogDir); diff --git a/source/libs/scalar/test/scalar/CMakeLists.txt b/source/libs/scalar/test/scalar/CMakeLists.txt index 86b936d93ae950e27069835cffcb0e8a99768ac9..672cb5a3de39bfed51c9d399ac3d0431614f50ab 100644 --- a/source/libs/scalar/test/scalar/CMakeLists.txt +++ b/source/libs/scalar/test/scalar/CMakeLists.txt @@ -17,7 +17,9 @@ TARGET_INCLUDE_DIRECTORIES( PUBLIC "${TD_SOURCE_DIR}/source/libs/parser/inc" PRIVATE "${TD_SOURCE_DIR}/source/libs/scalar/inc" ) -add_test( - NAME scalarTest - COMMAND scalarTest -) +if(NOT TD_WINDOWS) + add_test( + NAME scalarTest + COMMAND scalarTest + ) +endif(NOT TD_WINDOWS) diff --git a/source/libs/scalar/test/scalar/scalarTests.cpp b/source/libs/scalar/test/scalar/scalarTests.cpp index 6a32c6577532c7c7bbb3f07e0f012706dd7163df..169f01c0536a929b846aacd3c70e1b6c565720b7 100644 --- a/source/libs/scalar/test/scalar/scalarTests.cpp +++ b/source/libs/scalar/test/scalar/scalarTests.cpp @@ -74,7 +74,7 @@ void scltInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - strcpy(tsLogDir, "/var/log/taos"); + strcpy(tsLogDir, TD_LOG_DIR_PATH); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) { printf("failed to open log file in directory:%s\n", tsLogDir); diff --git a/source/libs/scheduler/test/schedulerTests.cpp b/source/libs/scheduler/test/schedulerTests.cpp index d5c834e5cf47484875d3613f461c2ae611f2d12b..4bf114ad8febb30c4fac89a391f8e0bc3389a60c 100644 --- a/source/libs/scheduler/test/schedulerTests.cpp +++ 
b/source/libs/scheduler/test/schedulerTests.cpp @@ -79,7 +79,7 @@ void schtInitLogFile() { tsAsyncLog = 0; qDebugFlag = 159; - strcpy(tsLogDir, "/var/log/taos"); + strcpy(tsLogDir, TD_LOG_DIR_PATH); if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) { printf("failed to open log file in directory:%s\n", tsLogDir); diff --git a/source/os/CMakeLists.txt b/source/os/CMakeLists.txt index b6e131d4ccc670f0d3b35e00483f33f072a314e2..e15627fe6682bb7a94f96d4e7e341a3b3b4c0637 100644 --- a/source/os/CMakeLists.txt +++ b/source/os/CMakeLists.txt @@ -10,7 +10,11 @@ target_include_directories( PUBLIC "${TD_SOURCE_DIR}/contrib/msvcregex" ) # iconv -find_path(IconvApiIncludes iconv.h PATHS) +if(TD_WINDOWS) + find_path(IconvApiIncludes iconv.h "${TD_SOURCE_DIR}/contrib/iconv") +else() + find_path(IconvApiIncludes iconv.h PATHS) +endif(TD_WINDOWS) if(NOT IconvApiIncludes) add_definitions(-DDISALLOW_NCHAR_WITHOUT_ICONV) endif () diff --git a/source/os/src/osEnv.c b/source/os/src/osEnv.c index 6746025f78be619868e53267588f8f4defe1d5cb..6ae3d8a0c0d655ae6be8bf1a23b36309962b7a65 100644 --- a/source/os/src/osEnv.c +++ b/source/os/src/osEnv.c @@ -70,11 +70,11 @@ void osDefaultInit() { #elif defined(_TD_DARWIN_64) if (configDir[0] == 0) { - strcpy(configDir, "/tmp/taosd"); + strcpy(configDir, "/usr/local/etc/taos"); } strcpy(tsDataDir, "/usr/local/var/lib/taos"); strcpy(tsLogDir, "/usr/local/var/log/taos"); - strcpy(tsTempDir, "/usr/local/etc/taos"); + strcpy(tsTempDir, "/tmp/taosd"); strcpy(tsOsName, "Darwin"); #else diff --git a/source/util/src/terror.c b/source/util/src/terror.c index a57f942f74ae9ea4f7f0ebd1ae2c542bc42f5f26..b81d81c736b177952c10cd722cacdca59c3e37bc 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -187,9 +187,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_SNODE_ALREADY_EXIST, "Snode already exists" TAOS_DEFINE_ERROR(TSDB_CODE_MND_SNODE_NOT_EXIST, "Snode not there") TAOS_DEFINE_ERROR(TSDB_CODE_MND_BNODE_ALREADY_EXIST, "Bnode already exists") TAOS_DEFINE_ERROR(TSDB_CODE_MND_BNODE_NOT_EXIST, "Bnode not there") -TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOO_FEW_MNODES, "Too few mnodes") -TAOS_DEFINE_ERROR(TSDB_CODE_MND_MNODE_DEPLOYED, "Mnode deployed in this dnode") -TAOS_DEFINE_ERROR(TSDB_CODE_MND_CANT_DROP_MASTER, "Can't drop mnode which is master") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOO_FEW_MNODES, "The replicas of mnode cannot less than 1") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOO_MANY_MNODES, "The replicas of mnode cannot exceed 3") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_CANT_DROP_MASTER, "Can't drop mnode which is LEADER") // mnode-acct TAOS_DEFINE_ERROR(TSDB_CODE_MND_ACCT_ALREADY_EXIST, "Account already exists") diff --git a/tests/script/api/batchprepare.c b/tests/script/api/batchprepare.c index 2ded58a979ad16e06f03ab8d4f828f1c10731df3..7dd7621d0b429caeb2e54c0215b29c4a0b396124 100644 --- a/tests/script/api/batchprepare.c +++ b/tests/script/api/batchprepare.c @@ -10,6 +10,9 @@ #include "../../../include/client/taos.h" #define FUNCTION_TEST_IDX 1 +#define TIME_PRECISION_MILLI 0 +#define TIME_PRECISION_MICRO 1 +#define TIME_PRECISION_NANO 2 int32_t shortColList[] = {TSDB_DATA_TYPE_TIMESTAMP, TSDB_DATA_TYPE_INT}; int32_t fullColList[] = {TSDB_DATA_TYPE_TIMESTAMP, TSDB_DATA_TYPE_BOOL, TSDB_DATA_TYPE_TINYINT, TSDB_DATA_TYPE_UTINYINT, TSDB_DATA_TYPE_SMALLINT, TSDB_DATA_TYPE_USMALLINT, TSDB_DATA_TYPE_INT, TSDB_DATA_TYPE_UINT, TSDB_DATA_TYPE_BIGINT, TSDB_DATA_TYPE_UBIGINT, TSDB_DATA_TYPE_FLOAT, TSDB_DATA_TYPE_DOUBLE, TSDB_DATA_TYPE_BINARY, TSDB_DATA_TYPE_NCHAR}; @@ -32,6 +35,8 @@ typedef enum { 
BP_BIND_COL, } BP_BIND_TYPE; +#define BP_BIND_TYPE_STR(t) (((t) == BP_BIND_COL) ? "column" : "tag") + OperInfo operInfo[] = { {">", 2, false}, {">=", 2, false}, @@ -57,11 +62,12 @@ FuncInfo funcInfo[] = { {"min", 1}, }; +#define BP_STARTUP_TS 1591060628000 + char *bpStbPrefix = "st"; char *bpTbPrefix = "t"; int32_t bpDefaultStbId = 1; - - +int64_t bpTs; //char *operatorList[] = {">", ">=", "<", "<=", "=", "<>", "in", "not in"}; //char *varoperatorList[] = {">", ">=", "<", "<=", "=", "<>", "in", "not in", "like", "not like", "match", "nmatch"}; @@ -188,8 +194,10 @@ typedef struct { bool printCreateTblSql; bool printQuerySql; bool printStmtSql; + bool printVerbose; bool autoCreateTbl; bool numericParam; + uint8_t precision; int32_t rowNum; //row num for one table int32_t bindColNum; int32_t bindTagNum; @@ -209,12 +217,15 @@ typedef struct { int32_t caseRunNum; // total run case num } CaseCtrl; -#if 1 +#if 0 CaseCtrl gCaseCtrl = { // default + .precision = TIME_PRECISION_MICRO, .bindNullNum = 0, .printCreateTblSql = false, .printQuerySql = true, .printStmtSql = true, + .printVerbose = false, + .printRes = false, .autoCreateTbl = false, .numericParam = false, .rowNum = 0, @@ -230,7 +241,6 @@ CaseCtrl gCaseCtrl = { // default .funcIdxListNum = 0, .funcIdxList = NULL, .checkParamNum = false, - .printRes = false, .runTimes = 0, .caseIdx = -1, .caseNum = -1, @@ -240,26 +250,35 @@ CaseCtrl gCaseCtrl = { // default #endif -#if 0 +#if 1 CaseCtrl gCaseCtrl = { + .precision = TIME_PRECISION_MILLI, .bindNullNum = 0, - .printCreateTblSql = true, + .printCreateTblSql = false, .printQuerySql = true, .printStmtSql = true, + .printVerbose = false, + .printRes = true, .autoCreateTbl = false, + .numericParam = false, .rowNum = 0, .bindColNum = 0, .bindTagNum = 0, .bindRowNum = 0, + .bindColTypeNum = 0, + .bindColTypeList = NULL, .bindTagTypeNum = 0, .bindTagTypeList = NULL, + .optrIdxListNum = 0, + .optrIdxList = NULL, + .funcIdxListNum = 0, + .funcIdxList = NULL, .checkParamNum = false, - .printRes = false, .runTimes = 0, - .caseIdx = 1, - .caseNum = 1, + .caseIdx = -1, + .caseNum = -1, .caseRunIdx = -1, - .caseRunNum = 1, + .caseRunNum = -1, }; #endif @@ -891,7 +910,6 @@ int32_t prepareColData(BP_BIND_TYPE bType, BindData *data, int32_t bindIdx, int3 int32_t prepareInsertData(BindData *data) { - static int64_t tsData = 1591060628000; uint64_t allRowNum = gCurCase->rowNum * gCurCase->tblNum; data->colNum = 0; @@ -918,7 +936,7 @@ int32_t prepareInsertData(BindData *data) { } for (int32_t i = 0; i < allRowNum; ++i) { - data->tsData[i] = tsData++; + data->tsData[i] = bpTs++; data->boolData[i] = (bool)(i % 2); data->tinyData[i] = (int8_t)i; data->utinyData[i] = (uint8_t)(i+1); @@ -956,7 +974,6 @@ int32_t prepareInsertData(BindData *data) { } int32_t prepareQueryCondData(BindData *data, int32_t tblIdx) { - static int64_t tsData = 1591060628000; uint64_t bindNum = gCurCase->rowNum / gCurCase->bindRowNum; data->colNum = 0; @@ -982,7 +999,7 @@ int32_t prepareQueryCondData(BindData *data, int32_t tblIdx) { } for (int32_t i = 0; i < bindNum; ++i) { - data->tsData[i] = tsData + tblIdx*gCurCase->rowNum + rand()%gCurCase->rowNum; + data->tsData[i] = bpTs + tblIdx*gCurCase->rowNum + rand()%gCurCase->rowNum; data->boolData[i] = (bool)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); data->tinyData[i] = (int8_t)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); data->utinyData[i] = (uint8_t)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); @@ -1014,7 +1031,6 @@ int32_t prepareQueryCondData(BindData *data, 
int32_t tblIdx) { int32_t prepareQueryMiscData(BindData *data, int32_t tblIdx) { - static int64_t tsData = 1591060628000; uint64_t bindNum = gCurCase->rowNum / gCurCase->bindRowNum; data->colNum = 0; @@ -1040,7 +1056,7 @@ int32_t prepareQueryMiscData(BindData *data, int32_t tblIdx) { } for (int32_t i = 0; i < bindNum; ++i) { - data->tsData[i] = tsData + tblIdx*gCurCase->rowNum + rand()%gCurCase->rowNum; + data->tsData[i] = bpTs + tblIdx*gCurCase->rowNum + rand()%gCurCase->rowNum; data->boolData[i] = (bool)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); data->tinyData[i] = (int8_t)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); data->utinyData[i] = (uint8_t)(tblIdx*gCurCase->rowNum + rand() % gCurCase->rowNum); @@ -1202,39 +1218,7 @@ int32_t bpAppendValueString(char *buf, int type, void *value, int32_t valueLen, } -int32_t bpBindParam(TAOS_STMT *stmt, TAOS_MULTI_BIND *bind) { - static int32_t n = 0; - - if (gCurCase->bindRowNum > 1) { - if (0 == (n++%2)) { - if (taos_stmt_bind_param_batch(stmt, bind)) { - printf("!!!taos_stmt_bind_param_batch error:%s\n", taos_stmt_errstr(stmt)); - exit(1); - } - } else { - for (int32_t i = 0; i < gCurCase->bindColNum; ++i) { - if (taos_stmt_bind_single_param_batch(stmt, bind++, i)) { - printf("!!!taos_stmt_bind_single_param_batch error:%s\n", taos_stmt_errstr(stmt)); - exit(1); - } - } - } - } else { - if (0 == (n++%2)) { - if (taos_stmt_bind_param_batch(stmt, bind)) { - printf("!!!taos_stmt_bind_param_batch error:%s\n", taos_stmt_errstr(stmt)); - exit(1); - } - } else { - if (taos_stmt_bind_param(stmt, bind)) { - printf("!!!taos_stmt_bind_param error:%s\n", taos_stmt_errstr(stmt)); - exit(1); - } - } - } - return 0; -} void bpCheckIsInsert(TAOS_STMT *stmt, int32_t insert) { int32_t isInsert = 0; @@ -1280,15 +1264,12 @@ void bpCheckAffectedRowsOnce(TAOS_STMT *stmt, int32_t expectedNum) { } void bpCheckQueryResult(TAOS_STMT *stmt, TAOS *taos, char *stmtSql, TAOS_MULTI_BIND* bind) { - TAOS_RES* res = taos_stmt_use_result(stmt); - int32_t sqlResNum = 0; - int32_t stmtResNum = 0; - bpFetchRows(res, gCaseCtrl.printRes, &stmtResNum); - + // query using sql char sql[1024]; int32_t len = 0; char* p = stmtSql; char* s = NULL; + int32_t sqlResNum = 0; for (int32_t i = 0; true; ++i, p=s+1) { s = strchr(p, '?'); @@ -1313,6 +1294,12 @@ void bpCheckQueryResult(TAOS_STMT *stmt, TAOS *taos, char *stmtSql, TAOS_MULTI_B } bpExecQuery(taos, sql, gCaseCtrl.printRes, &sqlResNum); + + // query using stmt + TAOS_RES* res = taos_stmt_use_result(stmt); + int32_t stmtResNum = 0; + bpFetchRows(res, gCaseCtrl.printRes, &stmtResNum); + if (sqlResNum != stmtResNum) { printf("!!!sql res num %d mis-match stmt res num %d\n", sqlResNum, stmtResNum); exit(1); @@ -1321,9 +1308,165 @@ void bpCheckQueryResult(TAOS_STMT *stmt, TAOS *taos, char *stmtSql, TAOS_MULTI_B printf("***sql res num match stmt res num %d\n", stmtResNum); } +void bpCheckColTagFields(TAOS_STMT *stmt, int32_t fieldNum, TAOS_FIELD_E* pFields, int32_t expecteNum, TAOS_MULTI_BIND* pBind, BP_BIND_TYPE type) { + int32_t code = 0; + + if (fieldNum != expecteNum) { + printf("!!!%s field num %d mis-match expect num %d\n", BP_BIND_TYPE_STR(type), fieldNum, expecteNum); + exit(1); + } + + if (type == BP_BIND_COL) { + if (pFields[0].precision != gCaseCtrl.precision) { + printf("!!!db precision %d mis-match expect %d\n", pFields[0].precision, gCaseCtrl.precision); + exit(1); + } + } + + for (int32_t i = 0; i < fieldNum; ++i) { + if (pFields[i].type != pBind[i].buffer_type) { + printf("!!!%s %dth field type %d mis-match 
expect type %d\n", BP_BIND_TYPE_STR(type), i, pFields[i].type, pBind[i].buffer_type); + exit(1); + } + + if (pFields[i].type == TSDB_DATA_TYPE_BINARY) { + if (pFields[i].bytes != (pBind[i].buffer_length + 2)) { + printf("!!!%s %dth field len %d mis-match expect len %d\n", BP_BIND_TYPE_STR(type), i, pFields[i].bytes, (pBind[i].buffer_length + 2)); + exit(1); + } + } else if (pFields[i].type == TSDB_DATA_TYPE_NCHAR) { + if (pFields[i].bytes != (pBind[i].buffer_length * 4 + 2)) { + printf("!!!%s %dth field len %d mis-match expect len %d\n", BP_BIND_TYPE_STR(type), i, pFields[i].bytes, (pBind[i].buffer_length + 2)); + exit(1); + } + } else if (pFields[i].bytes != pBind[i].buffer_length) { + printf("!!!%s %dth field len %d mis-match expect len %d\n", BP_BIND_TYPE_STR(type), i, pFields[i].bytes, pBind[i].buffer_length); + exit(1); + } + } + + if (type == BP_BIND_COL) { + int fieldType = 0; + int fieldBytes = 0; + for (int32_t i = 0; i < fieldNum; ++i) { + code = taos_stmt_get_param(stmt, i, &fieldType, &fieldBytes); + if (code) { + printf("!!!taos_stmt_get_param error:%s\n", taos_stmt_errstr(stmt)); + exit(1); + } + + if (pFields[i].type != fieldType) { + printf("!!!%s %dth field type %d mis-match expect type %d\n", BP_BIND_TYPE_STR(type), i, fieldType, pFields[i].type); + exit(1); + } + + if (pFields[i].bytes != fieldBytes) { + printf("!!!%s %dth field len %d mis-match expect len %d\n", BP_BIND_TYPE_STR(type), i, fieldBytes, pFields[i].bytes); + exit(1); + } + } + } + + if (gCaseCtrl.printVerbose) { + printf("%s fields check passed\n", BP_BIND_TYPE_STR(type)); + } +} + + +void bpCheckTagFields(TAOS_STMT *stmt, TAOS_MULTI_BIND* pBind) { + int32_t code = 0; + int fieldNum = 0; + TAOS_FIELD_E* pFields = NULL; + code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields); + if (code != 0){ + printf("!!!taos_stmt_get_tag_fields error:%s\n", taos_stmt_errstr(stmt)); + exit(1); + } + + bpCheckColTagFields(stmt, fieldNum, pFields, gCurCase->bindTagNum, pBind, BP_BIND_TAG); +} + +void bpCheckColFields(TAOS_STMT *stmt, TAOS_MULTI_BIND* pBind) { + if (gCurCase->testType == TTYPE_QUERY) { + return; + } + + int32_t code = 0; + int fieldNum = 0; + TAOS_FIELD_E* pFields = NULL; + code = taos_stmt_get_col_fields(stmt, &fieldNum, &pFields); + if (code != 0){ + printf("!!!taos_stmt_get_col_fields error:%s\n", taos_stmt_errstr(stmt)); + exit(1); + } + + bpCheckColTagFields(stmt, fieldNum, pFields, gCurCase->bindColNum, pBind, BP_BIND_COL); +} + +void bpShowBindParam(TAOS_MULTI_BIND *bind, int32_t num) { + for (int32_t i = 0; i < num; ++i) { + TAOS_MULTI_BIND* b = &bind[i]; + printf("Bind %d: type[%d],buf[%p],buflen[%d],len[%],null[%d],num[%d]\n", + i, b->buffer_type, b->buffer, b->buffer_length, b->length ? *b->length : 0, b->is_null ? 
*b->is_null : 0, b->num); + } +} + +int32_t bpBindParam(TAOS_STMT *stmt, TAOS_MULTI_BIND *bind) { + static int32_t n = 0; + + bpCheckColFields(stmt, bind); + + if (gCurCase->bindRowNum > 1) { + if (0 == (n++%2)) { + if (taos_stmt_bind_param_batch(stmt, bind)) { + printf("!!!taos_stmt_bind_param_batch error:%s\n", taos_stmt_errstr(stmt)); + bpShowBindParam(bind, gCurCase->bindColNum); + exit(1); + } + } else { + for (int32_t i = 0; i < gCurCase->bindColNum; ++i) { + if (taos_stmt_bind_single_param_batch(stmt, bind+i, i)) { + printf("!!!taos_stmt_bind_single_param_batch %d error:%s\n", taos_stmt_errstr(stmt), i); + bpShowBindParam(bind, gCurCase->bindColNum); + exit(1); + } + } + } + } else { + if (0 == (n++%2)) { + if (taos_stmt_bind_param_batch(stmt, bind)) { + printf("!!!taos_stmt_bind_param_batch error:%s\n", taos_stmt_errstr(stmt)); + bpShowBindParam(bind, gCurCase->bindColNum); + exit(1); + } + } else { + if (taos_stmt_bind_param(stmt, bind)) { + printf("!!!taos_stmt_bind_param error:%s\n", taos_stmt_errstr(stmt)); + bpShowBindParam(bind, gCurCase->bindColNum); + exit(1); + } + } + } + + return 0; +} + int32_t bpSetTableNameTags(BindData *data, int32_t tblIdx, char *tblName, TAOS_STMT *stmt) { + int32_t code = 0; if (gCurCase->bindTagNum > 0) { - return taos_stmt_set_tbname_tags(stmt, tblName, data->pTags + tblIdx * gCurCase->bindTagNum); + if ((rand() % 2) == 0) { + code = taos_stmt_set_tbname(stmt, tblName); + if (code != 0){ + printf("!!!taos_stmt_set_tbname error:%s\n", taos_stmt_errstr(stmt)); + exit(1); + } + + bpCheckTagFields(stmt, data->pTags + tblIdx * gCurCase->bindTagNum); + + return taos_stmt_set_tags(stmt, data->pTags + tblIdx * gCurCase->bindTagNum); + } else { + return taos_stmt_set_tbname_tags(stmt, tblName, data->pTags + tblIdx * gCurCase->bindTagNum); + } } else { return taos_stmt_set_tbname(stmt, tblName); } @@ -1755,7 +1898,7 @@ int insertAUTOTest1(TAOS_STMT *stmt, TAOS *taos) { if (gCurCase->tblNum > 1) { char buf[32]; sprintf(buf, "t%d", t); - code = taos_stmt_set_tbname_tags(stmt, buf, data.pTags + t * gCurCase->bindTagNum); + code = bpSetTableNameTags(&data, t, buf, stmt); if (code != 0){ printf("!!!taos_stmt_set_tbname_tags error:%s\n", taos_stmt_errstr(stmt)); exit(1); @@ -2223,14 +2366,48 @@ void generateCreateTableSQL(char *buf, int32_t tblIdx, int32_t colNum, int32_t * } } +char *bpPrecisionStr(uint8_t precision) { + switch (precision) { + case TIME_PRECISION_MILLI: + return "ms"; + case TIME_PRECISION_MICRO: + return "us"; + case TIME_PRECISION_NANO: + return "ns"; + default: + return "unknwon"; + } +} + +void bpSetStartupTs() { + switch (gCaseCtrl.precision) { + case TIME_PRECISION_MILLI: + bpTs = BP_STARTUP_TS; + break; + case TIME_PRECISION_MICRO: + bpTs = BP_STARTUP_TS * 1000; + break; + case TIME_PRECISION_NANO: + bpTs = BP_STARTUP_TS * 1000000; + break; + default: + bpTs = BP_STARTUP_TS; + break; + } +} + void prepare(TAOS *taos, int32_t colNum, int32_t *colList, int prepareStb) { TAOS_RES *result; int code; + char createDbSql[128] = {0}; result = taos_query(taos, "drop database demo"); taos_free_result(result); - result = taos_query(taos, "create database demo keep 36500"); + sprintf(createDbSql, "create database demo keep 36500 precision \"%s\"", bpPrecisionStr(gCaseCtrl.precision)); + printf("\tCreate Database SQL:%s\n", createDbSql); + + result = taos_query(taos, createDbSql); code = taos_errno(result); if (code != 0) { printf("!!!failed to create database, reason:%s\n", taos_errstr(result)); @@ -2278,6 +2455,8 @@ int32_t runCase(TAOS *taos, 
int32_t caseIdx, int32_t caseRunIdx, bool silent) { CaseCfg cfg = gCase[caseIdx]; CaseCfg cfgBk; gCurCase = &cfg; + + bpSetStartupTs(); if ((gCaseCtrl.bindColTypeNum || gCaseCtrl.bindColNum) && (gCurCase->colNum != gFullColNum)) { return 1; @@ -2413,22 +2592,28 @@ void* runCaseList(TAOS *taos) { } void runAll(TAOS *taos) { -#if 1 - - strcpy(gCaseCtrl.caseCatalog, "Normal Test"); + strcpy(gCaseCtrl.caseCatalog, "Default Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); runCaseList(taos); + strcpy(gCaseCtrl.caseCatalog, "Micro DB precision Test"); + printf("%s Begin\n", gCaseCtrl.caseCatalog); + gCaseCtrl.precision = TIME_PRECISION_MICRO; + runCaseList(taos); + gCaseCtrl.precision = TIME_PRECISION_MILLI; + strcpy(gCaseCtrl.caseCatalog, "Nano DB precision Test"); + printf("%s Begin\n", gCaseCtrl.caseCatalog); + gCaseCtrl.precision = TIME_PRECISION_NANO; + runCaseList(taos); + gCaseCtrl.precision = TIME_PRECISION_MILLI; + strcpy(gCaseCtrl.caseCatalog, "Auto Create Table Test"); gCaseCtrl.autoCreateTbl = true; printf("%s Begin\n", gCaseCtrl.caseCatalog); runCaseList(taos); gCaseCtrl.autoCreateTbl = false; - -#endif -/* strcpy(gCaseCtrl.caseCatalog, "Null Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); gCaseCtrl.bindNullNum = 1; @@ -2441,6 +2626,7 @@ void runAll(TAOS *taos) { runCaseList(taos); gCaseCtrl.bindRowNum = 0; +#if 0 strcpy(gCaseCtrl.caseCatalog, "Row Num Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); gCaseCtrl.rowNum = 1000; @@ -2448,23 +2634,21 @@ void runAll(TAOS *taos) { runCaseList(taos); gCaseCtrl.rowNum = 0; gCaseCtrl.printRes = true; -*/ strcpy(gCaseCtrl.caseCatalog, "Runtimes Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); gCaseCtrl.runTimes = 2; runCaseList(taos); gCaseCtrl.runTimes = 0; +#endif -#if 1 strcpy(gCaseCtrl.caseCatalog, "Check Param Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); gCaseCtrl.checkParamNum = true; runCaseList(taos); gCaseCtrl.checkParamNum = false; -#endif -/* +#if 0 strcpy(gCaseCtrl.caseCatalog, "Bind Col Num Test"); printf("%s Begin\n", gCaseCtrl.caseCatalog); gCaseCtrl.bindColNum = 6; @@ -2476,7 +2660,7 @@ void runAll(TAOS *taos) { gCaseCtrl.bindColTypeNum = tListLen(bindColTypeList); gCaseCtrl.bindColTypeList = bindColTypeList; runCaseList(taos); -*/ +#endif printf("All Test End\n"); } diff --git a/tests/script/jenkins/basic.txt b/tests/script/jenkins/basic.txt index ca0db9e32a0143517a8f964eb69df91abb8a6d2b..12b678eeaef8b3d9c86b25add6549d2fd4b59794 100644 --- a/tests/script/jenkins/basic.txt +++ b/tests/script/jenkins/basic.txt @@ -57,7 +57,6 @@ # ---- mnode ./test.sh -f tsim/mnode/basic1.sim ./test.sh -f tsim/mnode/basic2.sim -./test.sh -f tsim/mnode/basic3.sim # ---- show ./test.sh -f tsim/show/basic.sim diff --git a/tests/script/tsim/mnode/basic3.sim b/tests/script/tsim/mnode/basic3.sim index b0ee23cd8c15e95d26a12659d77fad0ebc0770dc..3c69e6ed51de4335a082f7a6f5cb20d81858e7f6 100644 --- a/tests/script/tsim/mnode/basic3.sim +++ b/tests/script/tsim/mnode/basic3.sim @@ -2,14 +2,17 @@ system sh/stop_dnodes.sh system sh/deploy.sh -n dnode1 -i 1 system sh/deploy.sh -n dnode2 -i 2 system sh/deploy.sh -n dnode3 -i 3 +system sh/deploy.sh -n dnode4 -i 4 system sh/exec.sh -n dnode1 -s start system sh/exec.sh -n dnode2 -s start system sh/exec.sh -n dnode3 -s start +system sh/exec.sh -n dnode4 -s start sql connect print =============== step1: create dnodes sql create dnode $hostname port 7200 sql create dnode $hostname port 7300 +sql create dnode $hostname port 7400 $x = 0 step1: @@ -32,6 +35,7 @@ endi print =============== step2: 
create mnode 2 sql create mnode on dnode 2 sql create mnode on dnode 3 +sql_error create mnode on dnode 4 $x = 0 step2: @@ -106,6 +110,10 @@ print $data(1)[0] $data(1)[1] $data(1)[2] print $data(2)[0] $data(2)[1] $data(2)[2] print $data(3)[0] $data(3)[1] $data(3)[2] +if $data(2)[2] != OFFLINE then + goto step5 +endi + sql show users if $rows != 2 then return -1 @@ -134,4 +142,5 @@ endi system sh/exec.sh -n dnode1 -s stop system sh/exec.sh -n dnode2 -s stop -system sh/exec.sh -n dnode3 -s stop \ No newline at end of file +system sh/exec.sh -n dnode3 -s stop +system sh/exec.sh -n dnode4 -s stop \ No newline at end of file diff --git a/tests/script/tsim/trans/create_db.sim b/tests/script/tsim/trans/create_db.sim index ae6b7eab160f788db5a1d7fa8f47ed4ffda6e8c8..158a6b9f920e8e194f5336f8985bf609d9c7f2a1 100644 --- a/tests/script/tsim/trans/create_db.sim +++ b/tests/script/tsim/trans/create_db.sim @@ -76,14 +76,6 @@ if $data[0][3] != d1 then return -1 endi -if $data[0][4] != create-db then - return -1 -endi - -if $data[0][7] != @Unable to establish connection@ then - return -1 -endi - sql_error create database d1 vgroups 2; print =============== start dnode2 @@ -125,15 +117,7 @@ endi if $data[0][3] != d2 then return -1 endi - -if $data[0][4] != create-db then - return -1 -endi - -if $data[0][7] != @Unable to establish connection@ then - return -1 -endi - +return sql_error create database d2 vgroups 2; print =============== kill transaction diff --git a/tests/test/c/sdbDump.c b/tests/test/c/sdbDump.c index 3b3a9fc85ec7c7e20e6b91574034bb4a196e9876..1781dd31836376cb3e502de0bd05bde50e66288a 100644 --- a/tests/test/c/sdbDump.c +++ b/tests/test/c/sdbDump.c @@ -279,9 +279,9 @@ void dumpTrans(SSdb *pSdb, SJson *json) { tjsonAddIntegerToObject(item, "id", pObj->id); tjsonAddIntegerToObject(item, "stage", pObj->stage); tjsonAddIntegerToObject(item, "policy", pObj->policy); - tjsonAddIntegerToObject(item, "type", pObj->type); + tjsonAddIntegerToObject(item, "conflict", pObj->conflict); + tjsonAddIntegerToObject(item, "exec", pObj->exec); tjsonAddStringToObject(item, "createdTime", i642str(pObj->createdTime)); - tjsonAddStringToObject(item, "dbUid", i642str(pObj->dbUid)); tjsonAddStringToObject(item, "dbname", pObj->dbname); tjsonAddIntegerToObject(item, "commitLogNum", taosArrayGetSize(pObj->commitActions)); tjsonAddIntegerToObject(item, "redoActionNum", taosArrayGetSize(pObj->redoActions)); diff --git a/tools/taos-tools b/tools/taos-tools index 4d83d8c62973506f760bcaa3a33f4665ed9046d0..717f5aaa5f0a1b4d92bb2ae68858fec554fb5eda 160000 --- a/tools/taos-tools +++ b/tools/taos-tools @@ -1 +1 @@ -Subproject commit 4d83d8c62973506f760bcaa3a33f4665ed9046d0 +Subproject commit 717f5aaa5f0a1b4d92bb2ae68858fec554fb5eda
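
Illustrative sketch (not part of the patch above): the stmt-API portion of this change (parInsert.c and tests/script/api/batchprepare.c) moves field metadata to TAOS_FIELD_E and threads the database timestamp precision into the first bound column via buildBoundFields(). A minimal client-side sketch of the calls the updated test exercises is shown below; it assumes the client header taos.h declares taos_stmt_get_col_fields()/taos_stmt_get_tag_fields() as used in batchprepare.c, and it omits error handling and memory management on purpose.

/* Sketch only: inspect bound-column metadata before binding parameters.
 * taos_stmt_get_tag_fields() follows the same pattern for tag columns. */
#include <stdio.h>
#include "taos.h"

static void showColFields(TAOS_STMT *stmt) {
  int           fieldNum = 0;
  TAOS_FIELD_E *pFields  = NULL;

  if (taos_stmt_get_col_fields(stmt, &fieldNum, &pFields) != 0) {
    printf("taos_stmt_get_col_fields error:%s\n", taos_stmt_errstr(stmt));
    return;
  }

  for (int i = 0; i < fieldNum; ++i) {
    /* precision is only meaningful when the column is the timestamp column */
    printf("col %d: name=%s type=%d bytes=%d precision=%d\n",
           i, pFields[i].name, pFields[i].type, pFields[i].bytes, pFields[i].precision);
  }
}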