diff --git a/cmake/taostools_CMakeLists.txt.in b/cmake/taostools_CMakeLists.txt.in index e71598ae5a1255c68c3945ac27c9158390e6e022..620faa636b8caef4de240a3a54415eb937658436 100644 --- a/cmake/taostools_CMakeLists.txt.in +++ b/cmake/taostools_CMakeLists.txt.in @@ -2,7 +2,7 @@ # taos-tools ExternalProject_Add(taos-tools GIT_REPOSITORY https://github.com/taosdata/taos-tools.git - GIT_TAG c9cc20f + GIT_TAG 8a5e336 SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools" BINARY_DIR "" #BUILD_IN_SOURCE TRUE diff --git a/docs/en/12-taos-sql/12-distinguished.md b/docs/en/12-taos-sql/12-distinguished.md index 2dad49ece942d0530c12afa145c2e11682c23fe3..d2f7cf66b63521d157a6e05f1dd8d93658d65549 100644 --- a/docs/en/12-taos-sql/12-distinguished.md +++ b/docs/en/12-taos-sql/12-distinguished.md @@ -1,142 +1,109 @@ --- -sidebar_label: 时序数据特色查询 -title: 时序数据特色查询 +sidebar_label: Distinguished +title: Distinguished Query for Time Series Database --- -TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特定进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大的方便时序场景的应用开发。 +Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. +Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. -TDengine 提供的特色查询包括标签切分查询和窗口切分查询。 +## Time Window -## 标签切分查询 +The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure blow, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window. -超级表查询中,当需要针对标签进行数据切分然后在切分出的数据空间内再进行一系列的计算时使用标签切分子句,标签切分的语句如下: +![TDengine Database Time Window](./timewindow-1.webp) -```sql -PARTITION BY part_list -``` - -part_list 可以是任意的标量表达式,包括列、常量、标量函数和它们的组合。 - -当 PARTITION BY 和标签一起使用时,TDengine 按如下方式处理标签切分子句: +`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`. 
-- 标签切分子句位于 WHERE 子句之后,且不能和 JOIN 子句一起使用。 -- 标签切分子句将超级表数据按指定的标签组合进行切分,每个切分的分片进行指定的计算。计算由之后的子句定义(窗口子句、GROUP BY 子句或 SELECT 子句)。 -- 标签切分子句可以和窗口切分子句(或 GROUP BY 子句)一起使用,此时后面的子句作用在每个切分的分片上。例如,将数据按标签 location 进行分组,并对每个组按 10 分钟进行降采样,取其最大值。 - -```sql -select max(current) from meters partition by location interval(10m) ``` - -## 窗口切分查询 - -TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下: - -```sql -SELECT function_list FROM tb_name - [WHERE where_condition] - [SESSION(ts_col, tol_val)] - [STATE_WINDOW(col)] - [INTERVAL(interval [, offset]) [SLIDING sliding]] - [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})] +SELECT * FROM temp_tb_1 INTERVAL(1m); ``` -在上述语法中的具体限制如下 - -### 窗口切分查询中使用函数的限制 - -- 在聚合查询中,function_list 位置允许使用聚合和选择函数,并要求每个函数仅输出单个结果(例如:COUNT、AVG、SUM、STDDEV、LEASTSQUARES、PERCENTILE、MIN、MAX、FIRST、LAST),而不能使用具有多行输出结果的函数(例如:DIFF 以及四则运算)。 -- 此外 LAST_ROW 查询也不能与窗口聚合同时出现。 -- 标量函数(如:CEIL/FLOOR 等)也不能使用在窗口聚合查询中。 - -### 窗口子句的规则 - -- 窗口子句位于标签切分子句之后,GROUP BY 子句之前,且不可以和 GROUP BY 子句一起使用。 -- 窗口子句将数据按窗口进行切分,对每个窗口进行 SELECT 列表中的表达式的计算,SELECT 列表中的表达式只能包含: - - 常量。 - - 聚集函数。 - - 包含上面表达式的表达式。 -- 窗口子句不可以和 GROUP BY 子句一起使用。 -- WHERE 语句可以指定查询的起止时间和其他过滤条件。 - -### FILL 子句 +The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`. -FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填充模式包括以下几种: - -1. 不进行填充:NONE(默认填充模式)。 -2. VALUE 填充:固定值填充,此时需要指定填充的数值。例如:FILL(VALUE, 1.23)。这里需要注意,最终填充的值受由相应列的类型决定,如 FILL(VALUE, 1.23),相应列为 INT 类型,则填充值为 1。 -3. PREV 填充:使用前一个非 NULL 值填充数据。例如:FILL(PREV)。 -4. NULL 填充:使用 NULL 填充数据。例如:FILL(NULL)。 -5. LINEAR 填充:根据前后距离最近的非 NULL 值做线性插值填充。例如:FILL(LINEAR)。 -6. NEXT 填充:使用下一个非 NULL 值填充数据。例如:FILL(NEXT)。 - -:::info - -1. 使用 FILL 语句的时候可能生成大量的填充输出,务必指定查询的时间区间。针对每次查询,系统可返回不超过 1 千万条具有插值的结果。 -2. 在时间维度聚合中,返回的结果中时间序列严格单调递增。 -3. 如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用 GROUP BY 语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了 GROUP BY 语句分组,则返回结果中每个 GROUP 内不按照时间序列严格单调递增。 - -::: +``` +SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m); +``` -### 时间窗口 +When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. Since version 2.1.5.0, the minimum time range by `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side. -时间窗口又可分为滑动时间窗口和翻转时间窗口。 +## Status Window -INTERVAL 子句用于产生相等时间周期的窗口,SLIDING 用以指定窗口向前滑动的时间。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window )大小和每次前向增量时间(forward sliding times)。如图,[t0s, t0e] ,[t1s , t1e], [t2s, t2e] 是分别是执行三次连续查询的时间窗口范围,窗口的前向滑动的时间范围 sliding time 标识 。查询过滤、聚合等操作按照每个时间窗口为独立的单位执行。当 SLIDING 与 INTERVAL 相等的时候,滑动窗口即为翻转窗口。 +In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. 
As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now. -![TDengine Database 时间窗口示意图](./timewindow-1.webp) +![TDengine Database Status Window](./timewindow-3.webp) -INTERVAL 和 SLIDING 子句需要配合聚合和选择函数来使用。以下 SQL 语句非法: +`STATE_WINDOW` is used to specify the column on which the status window will be based. For example: ``` -SELECT * FROM temp_tb_1 INTERVAL(1m); +SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); ``` -SLIDING 的向前滑动的时间不能超过一个窗口的时间范围。以下语句非法: +## Session Window -``` -SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m); +```sql +SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val); ``` -使用时间窗口需要注意: +The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds. -- 聚合时间段的窗口宽度由关键词 INTERVAL 指定,最短时间间隔 10 毫秒(10a);并且支持偏移 offset(偏移必须小于间隔),也即时间窗口划分与“UTC 时刻 0”相比的偏移量。SLIDING 语句用于指定聚合时间段的前向增量,也即每次窗口向前滑动的时长。 -- 使用 INTERVAL 语句时,除非极特殊的情况,都要求把客户端和服务端的 taos.cfg 配置文件中的 timezone 参数配置为相同的取值,以避免时间处理函数频繁进行跨时区转换而导致的严重性能影响。 -- 返回的结果中时间序列严格单调递增。 +![TDengine Database Session Window](./timewindow-2.webp) -### 状态窗口 +If the time interval between two continuous rows are within the time interval specified by `tol_value` they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now. -使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。(状态窗口暂不支持对超级表使用) +## More On Window Aggregate -![TDengine Database 时间窗口示意图](./timewindow-3.webp) +### Syntax -使用 STATE_WINDOW 来确定状态窗口划分的列。例如: +The full syntax of aggregate by window is as follows: -``` -SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); +```sql +SELECT function_list FROM tb_name + [WHERE where_condition] + [SESSION(ts_col, tol_val)] + [STATE_WINDOW(col)] + [INTERVAL(interval [, offset]) [SLIDING sliding]] + [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})] + +SELECT function_list FROM stb_name + [WHERE where_condition] + [INTERVAL(interval [, offset]) [SLIDING sliding]] + [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})] + [GROUP BY tags] ``` -### 会话窗口 +### Restrictions -会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。 +- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used. 
+- `LAST_ROW` can't be used together with window aggregate. +- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate. +- `WHERE` clause can be used to specify the starting and ending time and other filter conditions +- `FILL` clause is used to specify how to fill when there is data missing in any window, including: + 1. NONE: No fill (the default fill mode) + 2. VALUE:Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` + 3. PREV:Fill with the previous non-NULL value, `FILL(PREV)` + 4. NULL:Fill with NULL, `FILL(NULL)` + 5. LINEAR:Fill with the closest non-NULL value, `FILL(LINEAR)` + 6. NEXT:Fill with the next non-NULL value, `FILL(NEXT)` -![TDengine Database 时间窗口示意图](./timewindow-2.webp) +:::info -在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用) +1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000. +2. The result set is in ascending order of timestamp when you aggregate by time window. +3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group. -``` +::: -SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val); -``` +Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query). -### 示例 +## Examples -智能电表的建表语句如下: +A table of intelligent meters can be created by the SQL statement below: -``` +```sql CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT); ``` -针对智能电表采集的数据,以 10 分钟为一个阶段,计算过去 24 小时的电流数据的平均值、最大值、电流的中位数。如果没有计算值,用前一个非 NULL 值填充。使用的查询语句如下: +The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values. ``` SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters diff --git a/docs/examples/csharp/AsyncQueryExample.cs b/docs/examples/csharp/AsyncQueryExample.cs index 3dabbebd1630a207af2e1b1b11cc4ba15bdd94a9..0d47325932e2f01fec8d55cfdb64c636258f4a03 100644 --- a/docs/examples/csharp/AsyncQueryExample.cs +++ b/docs/examples/csharp/AsyncQueryExample.cs @@ -1,4 +1,7 @@ +using System; +using System.Collections.Generic; using TDengineDriver; +using TDengineDriver.Impl; using System.Runtime.InteropServices; namespace TDengineExample @@ -19,8 +22,8 @@ namespace TDengineExample { if (code == 0 && taosRes != IntPtr.Zero) { - FetchRowAsyncCallback fetchRowAsyncCallback = new FetchRowAsyncCallback(FetchRowCallback); - TDengine.FetchRowAsync(taosRes, fetchRowAsyncCallback, param); + FetchRawBlockAsyncCallback fetchRowAsyncCallback = new FetchRawBlockAsyncCallback(FetchRawBlockCallback); + TDengine.FetchRawBlockAsync(taosRes, fetchRowAsyncCallback, param); } else { @@ -28,179 +31,44 @@ namespace TDengineExample } } - static void FetchRowCallback(IntPtr param, IntPtr taosRes, int numOfRows) + // Iteratively call this interface until "numOfRows" is no greater than 0. 
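+        // numOfRows > 0: a raw block containing that many rows is ready and is read below via GetRawBlock;
+        // numOfRows == 0: all rows have been retrieved; numOfRows < 0: it carries an error code.
+        // In both of the latter cases the result handle is released with TDengine.FreeResult.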
+ static void FetchRawBlockCallback(IntPtr param, IntPtr taosRes, int numOfRows) { if (numOfRows > 0) { Console.WriteLine($"{numOfRows} rows async retrieved"); - DisplayRes(taosRes); - TDengine.FetchRowAsync(taosRes, FetchRowCallback, param); + IntPtr pdata = TDengine.GetRawBlock(taosRes); + List metaList = TDengine.FetchFields(taosRes); + List dataList = LibTaos.ReadRawBlock(pdata, metaList, numOfRows); + + for (int i = 0; i < dataList.Count; i++) + { + if (i != 0 && (i+1) % metaList.Count == 0) + { + Console.WriteLine("{0}\t|", dataList[i]); + } + else + { + Console.Write("{0}\t|", dataList[i]); + } + } + Console.WriteLine(""); + TDengine.FetchRawBlockAsync(taosRes, FetchRawBlockCallback, param); } else { if (numOfRows == 0) { Console.WriteLine("async retrieve complete."); - } else { - Console.WriteLine($"FetchRowAsync callback error, error code {numOfRows}"); + Console.WriteLine($"FetchRawBlockCallback callback error, error code {numOfRows}"); } TDengine.FreeResult(taosRes); } } - public static void DisplayRes(IntPtr res) - { - if (!IsValidResult(res)) - { - TDengine.Cleanup(); - System.Environment.Exit(1); - } - - List metaList = TDengine.FetchFields(res); - int fieldCount = metaList.Count; - // metaList.ForEach((item) => { Console.Write("{0} ({1}) \t|\t", item.name, item.size); }); - - List dataList = QueryRes(res, metaList); - for (int index = 0; index < dataList.Count; index++) - { - if (index % fieldCount == 0 && index != 0) - { - Console.WriteLine(""); - } - Console.Write("{0} \t|\t", dataList[index].ToString()); - - } - Console.WriteLine(""); - } - - public static bool IsValidResult(IntPtr res) - { - if ((res == IntPtr.Zero) || (TDengine.ErrorNo(res) != 0)) - { - if (res != IntPtr.Zero) - { - Console.Write("reason: " + TDengine.Error(res)); - return false; - } - Console.WriteLine(""); - return false; - } - return true; - } - - private static List QueryRes(IntPtr res, List meta) - { - IntPtr taosRow; - List dataRaw = new(); - while ((taosRow = TDengine.FetchRows(res)) != IntPtr.Zero) - { - dataRaw.AddRange(FetchRow(taosRow, res)); - } - if (TDengine.ErrorNo(res) != 0) - { - Console.Write("Query is not complete, Error {0} {1}", TDengine.ErrorNo(res), TDengine.Error(res)); - } - TDengine.FreeResult(res); - Console.WriteLine(""); - return dataRaw; - } - - public static List FetchRow(IntPtr taosRow, IntPtr taosRes)//, List metaList, int numOfFiled - { - List metaList = TDengine.FetchFields(taosRes); - int numOfFiled = TDengine.FieldCount(taosRes); - - - List dataRaw = new(); - - IntPtr colLengthPrt = TDengine.FetchLengths(taosRes); - int[] colLengthArr = new int[numOfFiled]; - Marshal.Copy(colLengthPrt, colLengthArr, 0, numOfFiled); - - for (int i = 0; i < numOfFiled; i++) - { - TDengineMeta meta = metaList[i]; - IntPtr data = Marshal.ReadIntPtr(taosRow, IntPtr.Size * i); - - if (data == IntPtr.Zero) - { - dataRaw.Add("NULL"); - continue; - } - switch ((TDengineDataType)meta.type) - { - case TDengineDataType.TSDB_DATA_TYPE_BOOL: - bool v1 = Marshal.ReadByte(data) != 0; - dataRaw.Add(v1); - break; - case TDengineDataType.TSDB_DATA_TYPE_TINYINT: - sbyte v2 = (sbyte)Marshal.ReadByte(data); - dataRaw.Add(v2); - break; - case TDengineDataType.TSDB_DATA_TYPE_SMALLINT: - short v3 = Marshal.ReadInt16(data); - dataRaw.Add(v3); - break; - case TDengineDataType.TSDB_DATA_TYPE_INT: - int v4 = Marshal.ReadInt32(data); - dataRaw.Add(v4); - break; - case TDengineDataType.TSDB_DATA_TYPE_BIGINT: - long v5 = Marshal.ReadInt64(data); - dataRaw.Add(v5); - break; - case 
TDengineDataType.TSDB_DATA_TYPE_FLOAT: - float v6 = (float)Marshal.PtrToStructure(data, typeof(float)); - dataRaw.Add(v6); - break; - case TDengineDataType.TSDB_DATA_TYPE_DOUBLE: - double v7 = (double)Marshal.PtrToStructure(data, typeof(double)); - dataRaw.Add(v7); - break; - case TDengineDataType.TSDB_DATA_TYPE_BINARY: - string v8 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - dataRaw.Add(v8); - break; - case TDengineDataType.TSDB_DATA_TYPE_TIMESTAMP: - long v9 = Marshal.ReadInt64(data); - dataRaw.Add(v9); - break; - case TDengineDataType.TSDB_DATA_TYPE_NCHAR: - string v10 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - dataRaw.Add(v10); - break; - case TDengineDataType.TSDB_DATA_TYPE_UTINYINT: - byte v12 = Marshal.ReadByte(data); - dataRaw.Add(v12.ToString()); - break; - case TDengineDataType.TSDB_DATA_TYPE_USMALLINT: - ushort v13 = (ushort)Marshal.ReadInt16(data); - dataRaw.Add(v13); - break; - case TDengineDataType.TSDB_DATA_TYPE_UINT: - uint v14 = (uint)Marshal.ReadInt32(data); - dataRaw.Add(v14); - break; - case TDengineDataType.TSDB_DATA_TYPE_UBIGINT: - ulong v15 = (ulong)Marshal.ReadInt64(data); - dataRaw.Add(v15); - break; - case TDengineDataType.TSDB_DATA_TYPE_JSONTAG: - string v16 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - dataRaw.Add(v16); - break; - default: - dataRaw.Add("nonsupport data type"); - break; - } - - } - return dataRaw; - } - static IntPtr GetConnection() { string host = "localhost"; @@ -223,16 +91,16 @@ namespace TDengineExample } } -//output: -// Connect to TDengine success -// 8 rows async retrieved - -// 1538548685500 | 11.8 | 221 | 0.28 | california.losangeles | 2 | -// 1538548696600 | 13.4 | 223 | 0.29 | california.losangeles | 2 | -// 1538548685000 | 10.8 | 223 | 0.29 | california.losangeles | 3 | -// 1538548686500 | 11.5 | 221 | 0.35 | california.losangeles | 3 | -// 1538548685000 | 10.3 | 219 | 0.31 | california.sanfrancisco | 2 | -// 1538548695000 | 12.6 | 218 | 0.33 | california.sanfrancisco | 2 | -// 1538548696800 | 12.3 | 221 | 0.31 | california.sanfrancisco | 2 | -// 1538548696650 | 10.3 | 218 | 0.25 | california.sanfrancisco | 3 | -// async retrieve complete. \ No newline at end of file +// //output: +// // Connect to TDengine success +// // 8 rows async retrieved + +// // 1538548685500 | 11.8 | 221 | 0.28 | california.losangeles | 2 | +// // 1538548696600 | 13.4 | 223 | 0.29 | california.losangeles | 2 | +// // 1538548685000 | 10.8 | 223 | 0.29 | california.losangeles | 3 | +// // 1538548686500 | 11.5 | 221 | 0.35 | california.losangeles | 3 | +// // 1538548685000 | 10.3 | 219 | 0.31 | california.sanfrancisco | 2 | +// // 1538548695000 | 12.6 | 218 | 0.33 | california.sanfrancisco | 2 | +// // 1538548696800 | 12.3 | 221 | 0.31 | california.sanfrancisco | 2 | +// // 1538548696650 | 10.3 | 218 | 0.25 | california.sanfrancisco | 3 | +// // async retrieve complete. 
\ No newline at end of file diff --git a/docs/examples/csharp/QueryExample.cs b/docs/examples/csharp/QueryExample.cs index 97f0c456d412e2ed608c345ba87469d3f5ccfc15..c90a8cd0b7a3fd3e4a797188aec9fa50ba704717 100644 --- a/docs/examples/csharp/QueryExample.cs +++ b/docs/examples/csharp/QueryExample.cs @@ -1,4 +1,5 @@ using TDengineDriver; +using TDengineDriver.Impl; using System.Runtime.InteropServices; namespace TDengineExample @@ -23,7 +24,7 @@ namespace TDengineExample Console.WriteLine("fieldCount=" + fieldCount); // print column names - List metas = TDengine.FetchFields(res); + List metas = LibTaos.GetMeta(res); for (int i = 0; i < metas.Count; i++) { Console.Write(metas[i].name + "\t"); @@ -31,98 +32,17 @@ namespace TDengineExample Console.WriteLine(); // print values - IntPtr row; - while ((row = TDengine.FetchRows(res)) != IntPtr.Zero) + List resData = LibTaos.GetData(res); + for (int i = 0; i < resData.Count; i++) { - List metaList = TDengine.FetchFields(res); - int numOfFiled = TDengine.FieldCount(res); - - List dataRaw = new List(); - - IntPtr colLengthPrt = TDengine.FetchLengths(res); - int[] colLengthArr = new int[numOfFiled]; - Marshal.Copy(colLengthPrt, colLengthArr, 0, numOfFiled); - - for (int i = 0; i < numOfFiled; i++) + Console.Write($"|{resData[i].ToString()} \t"); + if (((i + 1) % metas.Count == 0)) { - TDengineMeta meta = metaList[i]; - IntPtr data = Marshal.ReadIntPtr(row, IntPtr.Size * i); - - if (data == IntPtr.Zero) - { - Console.Write("NULL\t"); - continue; - } - switch ((TDengineDataType)meta.type) - { - case TDengineDataType.TSDB_DATA_TYPE_BOOL: - bool v1 = Marshal.ReadByte(data) == 0 ? false : true; - Console.Write(v1.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_TINYINT: - sbyte v2 = (sbyte)Marshal.ReadByte(data); - Console.Write(v2.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_SMALLINT: - short v3 = Marshal.ReadInt16(data); - Console.Write(v3.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_INT: - int v4 = Marshal.ReadInt32(data); - Console.Write(v4.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_BIGINT: - long v5 = Marshal.ReadInt64(data); - Console.Write(v5.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_FLOAT: - float v6 = (float)Marshal.PtrToStructure(data, typeof(float)); - Console.Write(v6.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_DOUBLE: - double v7 = (double)Marshal.PtrToStructure(data, typeof(double)); - Console.Write(v7.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_BINARY: - string v8 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - Console.Write(v8 + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_TIMESTAMP: - long v9 = Marshal.ReadInt64(data); - Console.Write(v9.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_NCHAR: - string v10 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - Console.Write(v10 + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_UTINYINT: - byte v12 = Marshal.ReadByte(data); - Console.Write(v12.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_USMALLINT: - ushort v13 = (ushort)Marshal.ReadInt16(data); - Console.Write(v13.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_UINT: - uint v14 = (uint)Marshal.ReadInt32(data); - Console.Write(v14.ToString() + "\t"); - break; - case TDengineDataType.TSDB_DATA_TYPE_UBIGINT: - ulong v15 = (ulong)Marshal.ReadInt64(data); - Console.Write(v15.ToString() + "\t"); - break; - 
case TDengineDataType.TSDB_DATA_TYPE_JSONTAG: - string v16 = Marshal.PtrToStringUTF8(data, colLengthArr[i]); - Console.Write(v16 + "\t"); - break; - default: - Console.Write("nonsupport data type value"); - break; - } - + Console.WriteLine(""); } - Console.WriteLine(); } + Console.WriteLine(); + if (TDengine.ErrorNo(res) != 0) { Console.WriteLine($"Query is not complete, Error {TDengine.ErrorNo(res)} {TDengine.Error(res)}"); diff --git a/docs/examples/csharp/SQLInsertExample.cs b/docs/examples/csharp/SQLInsertExample.cs index d5462c1062e01fd5c93bac983696d0350117ad92..192ea96d5713bbf7f37f2208687c41e3e66d473b 100644 --- a/docs/examples/csharp/SQLInsertExample.cs +++ b/docs/examples/csharp/SQLInsertExample.cs @@ -15,10 +15,10 @@ namespace TDengineExample CheckRes(conn, res, "failed to change database"); res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"); CheckRes(conn, res, "failed to create stable"); - var sql = "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " + - "d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " + - "d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " + - "d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)"; + var sql = "INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " + + "d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " + + "d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " + + "d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)"; res = TDengine.Query(conn, sql); CheckRes(conn, res, "failed to insert data"); int affectedRows = TDengine.AffectRows(res); diff --git a/docs/examples/csharp/StmtInsertExample.cs b/docs/examples/csharp/StmtInsertExample.cs index 6ade424b95d64529b7a40a782de13e3106d0c78a..0a4098091f6371a674eee6f158e1c57bff2b6862 100644 --- a/docs/examples/csharp/StmtInsertExample.cs +++ b/docs/examples/csharp/StmtInsertExample.cs @@ -21,7 +21,7 @@ namespace TDengineExample CheckStmtRes(res, "failed to prepare stmt"); // 2. bind table name and tags - TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("California.SanFrancisco"), TaosBind.BindInt(2) }; + TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { TaosMultiBind.MultiBindBinary(new string[]{"California.SanFrancisco"}), TaosMultiBind.MultiBindInt(new int?[] {2}) }; res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags); CheckStmtRes(res, "failed to bind table name and tags"); @@ -44,7 +44,7 @@ namespace TDengineExample CheckStmtRes(res, "faild to execute"); // 6. 
free - TaosBind.FreeTaosBind(tags); + TaosMultiBind.FreeTaosBind(tags); TaosMultiBind.FreeTaosBind(values); TDengine.Close(conn); TDengine.Cleanup(); diff --git a/docs/examples/csharp/SubscribeDemo.cs b/docs/examples/csharp/SubscribeDemo.cs index 34509215da73ea6369eb95f458d622cd95a97932..b62ff12e5ea38eb27ae5de8e8027aa41b1873d23 100644 --- a/docs/examples/csharp/SubscribeDemo.cs +++ b/docs/examples/csharp/SubscribeDemo.cs @@ -1,12 +1,100 @@ using System; -using System.Collections.Generic; -using System.Linq; -using System.Text; -using System.Threading.Tasks; +using TDengineTMQ; +using TDengineDriver; +using System.Runtime.InteropServices; -namespace csharp +namespace TMQExample { internal class SubscribeDemo { + static void Main(string[] args) + { + IntPtr conn = GetConnection(); + string topic = "topic_example"; + Console.WriteLine($"create topic if not exist {topic} as select * from meters"); + //create topic + IntPtr res = TDengine.Query(conn, $"create topic if not exists {topic} as select * from meters"); + + if (res == IntPtr.Zero) + { + throw new Exception($"create topic failed, reason:{TDengine.Error(res)}"); + } + + var cfg = new ConsumerConfig + { + GourpId = "group_1", + TDConnectUser = "root", + TDConnectPasswd = "taosdata", + MsgWithTableName = "true", + TDConnectIp = "127.0.0.1", + }; + + // create consumer + var consumer = new ConsumerBuilder(cfg) + .Build(); + + // subscribe + consumer.Subscribe(topic); + + // consume + for (int i = 0; i < 5; i++) + { + var consumeRes = consumer.Consume(300); + // print consumeResult + foreach (KeyValuePair kv in consumeRes.Message) + { + Console.WriteLine("topic partitions:\n{0}", kv.Key.ToString()); + + kv.Value.Metas.ForEach(meta => + { + Console.Write("{0} {1}({2}) \t|", meta.name, meta.TypeName(), meta.size); + }); + Console.WriteLine(""); + kv.Value.Datas.ForEach(data => + { + Console.WriteLine(data.ToString()); + }); + } + + consumer.Commit(consumeRes); + Console.WriteLine("\n================ {0} done ", i); + + } + + // retrieve topic list + List topics = consumer.Subscription(); + topics.ForEach(t => Console.WriteLine("topic name:{0}", t)); + + + // unsubscribe + consumer.Unsubscribe(); + + // close consumer after use.Otherwise will lead memory leak. 
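+            // Close() releases the native resources held by the consumer; the database connection is closed separately below.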
+ consumer.Close(); + TDengine.Close(conn); + + + } + + static IntPtr GetConnection() + { + string host = "localhost"; + short port = 6030; + string username = "root"; + string password = "taosdata"; + string dbname = "power"; + var conn = TDengine.Connect(host, username, password, dbname, port); + if (conn == IntPtr.Zero) + { + Console.WriteLine("Connect to TDengine failed"); + System.Environment.Exit(0); + } + else + { + Console.WriteLine("Connect to TDengine success"); + } + return conn; + } } + } diff --git a/docs/examples/csharp/asyncquery.csproj b/docs/examples/csharp/asyncquery.csproj index 7a952fe7abd15798bc704de52aa72624e59061eb..045969edd7febbd11cc6577c8ba958669a5a7e3b 100644 --- a/docs/examples/csharp/asyncquery.csproj +++ b/docs/examples/csharp/asyncquery.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/connect.csproj b/docs/examples/csharp/connect.csproj index 27cffa30ae5a98277b74c63cf3d2c749ec67ce8d..3a912f8987ace6ae540726886d901c8d32a7b81b 100644 --- a/docs/examples/csharp/connect.csproj +++ b/docs/examples/csharp/connect.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/influxdbline.csproj b/docs/examples/csharp/influxdbline.csproj index a8b197dc7167f229b903bfd010a9ff7282c9173b..58bca485088e409fe1d387c6020418bbc2bf871b 100644 --- a/docs/examples/csharp/influxdbline.csproj +++ b/docs/examples/csharp/influxdbline.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/optsjson.csproj b/docs/examples/csharp/optsjson.csproj index b1bd83405efb1086b23134a9c875912a9c09e3f8..da16025dcd45f8e5c4ba6e242524c2e56191e93c 100644 --- a/docs/examples/csharp/optsjson.csproj +++ b/docs/examples/csharp/optsjson.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/optstelnet.csproj b/docs/examples/csharp/optstelnet.csproj index 1ab41067715f8cb070e78aec2be59d686bac0e86..194de21bcc74653a2267b29681ece6243fd401fc 100644 --- a/docs/examples/csharp/optstelnet.csproj +++ b/docs/examples/csharp/optstelnet.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/query.csproj b/docs/examples/csharp/query.csproj index 63f13c3ddbf13be9ffe5c39054cc6e4cae6001e7..39fc135d5ab9f5a8397b412e2307a2306abd4f2a 100644 --- a/docs/examples/csharp/query.csproj +++ b/docs/examples/csharp/query.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/sqlinsert.csproj b/docs/examples/csharp/sqlinsert.csproj index 0380395a5ad00d96418410b2f7974b294bf2bb07..ab0e5e717a78faad07c949b434b0d0b8a26c7211 100644 --- a/docs/examples/csharp/sqlinsert.csproj +++ b/docs/examples/csharp/sqlinsert.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/stmtinsert.csproj b/docs/examples/csharp/stmtinsert.csproj index 8defb895eb2894c680fe1b9dd7346223f0f92df1..3d459fbeda02ab03dc40dac2ecae290724cccbcc 100644 --- a/docs/examples/csharp/stmtinsert.csproj +++ b/docs/examples/csharp/stmtinsert.csproj @@ -9,7 +9,7 @@ - + diff --git a/docs/examples/csharp/subscribe.csproj b/docs/examples/csharp/subscribe.csproj index 8286922c6f926b166f14983525e604ce9e13d74a..eff29b3bf42bde521aae70bfd1ed555ac72bfce9 100644 --- a/docs/examples/csharp/subscribe.csproj +++ b/docs/examples/csharp/subscribe.csproj @@ -5,11 +5,11 @@ net6.0 enable enable - TDengineExample.SubscribeDemo + TMQExample.SubscribeDemo - + diff --git a/docs/zh/07-develop/07-tmq.md b/docs/zh/07-develop/07-tmq.md index 0f531e07c9dce7dbb03bacebf8e5cbefae82671f..358c824ffa0330b469938a6cee75cd125ddb25c2 100644 --- a/docs/zh/07-develop/07-tmq.md +++ b/docs/zh/07-develop/07-tmq.md @@ -1,254 +1,241 @@ ---- -sidebar_label: 数据订阅 -description: 
"轻量级的数据订阅与推送服务。连续写入到 TDengine 中的时序数据能够被自动推送到订阅客户端。" -title: 数据订阅 ---- - -import Tabs from "@theme/Tabs"; -import TabItem from "@theme/TabItem"; -import Java from "./_sub_java.mdx"; -import Python from "./_sub_python.mdx"; -import Go from "./_sub_go.mdx"; -import Rust from "./_sub_rust.mdx"; -import Node from "./_sub_node.mdx"; -import CSharp from "./_sub_cs.mdx"; -import CDemo from "./_sub_c.mdx"; - -基于数据天然的时间序列特性,TDengine 的数据写入(insert)与消息系统的数据发布(pub)逻辑上一致,均可视为系统中插入一条带时间戳的新记录。同时,TDengine 在内部严格按照数据时间序列单调递增的方式保存数据。本质上来说,TDengine 中每一张表均可视为一个标准的消息队列。 - -TDengine 内嵌支持轻量级的消息订阅与推送服务。使用系统提供的 API,用户可使用普通查询语句订阅数据库中的一张或多张表。订阅的逻辑和操作状态的维护均是由客户端完成,客户端定时轮询服务器是否有新的记录到达,有新的记录到达就会将结果反馈到客户。 - -TDengine 的订阅与推送服务的状态是由客户端维持,TDengine 服务端并不维持。因此如果应用重启,从哪个时间点开始获取最新数据,由应用决定。 - -TDengine 的 API 中,与订阅相关的主要有以下三个: - -```c -taos_subscribe -taos_consume -taos_unsubscribe -``` - -这些 API 的文档请见 [C/C++ Connector](/reference/connector/cpp),下面仍以智能电表场景为例介绍一下它们的具体用法(超级表和子表结构请参考上一节“连续查询”),完整的示例代码可以在 [这里](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c) 找到。 - -如果我们希望当某个电表的电流超过一定限制(比如 10A)后能得到通知并进行一些处理, 有两种方法:一是分别对每张子表进行查询,每次查询后记录最后一条数据的时间戳,后续只查询这个时间戳之后的数据: - -```sql -select * from D1001 where ts > {last_timestamp1} and current > 10; -select * from D1002 where ts > {last_timestamp2} and current > 10; -... -``` - -这确实可行,但随着电表数量的增加,查询数量也会增加,客户端和服务端的性能都会受到影响,当电表数增长到一定的程度,系统就无法承受了。 - -另一种方法是对超级表进行查询。这样,无论有多少电表,都只需一次查询: - -```sql -select * from meters where ts > {last_timestamp} and current > 10; -``` - -但是,如何选择 `last_timestamp` 就成了一个新的问题。因为,一方面数据的产生时间(也就是数据时间戳)和数据入库的时间一般并不相同,有时偏差还很大;另一方面,不同电表的数据到达 TDengine 的时间也会有差异。所以,如果我们在查询中使用最慢的那台电表的数据的时间戳作为 `last_timestamp`,就可能重复读入其它电表的数据;如果使用最快的电表的时间戳,其它电表的数据就可能被漏掉。 - -TDengine 的订阅功能为上面这个问题提供了一个彻底的解决方案。 - -首先是使用 `taos_subscribe` 创建订阅: - -```c -TAOS_SUB* tsub = NULL; -if (async) { -  // create an asynchronized subscription, the callback function will be called every 1s -  tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000); -} else { -  // create an synchronized subscription, need to call 'taos_consume' manually -  tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0); -} -``` - -TDengine 中的订阅既可以是同步的,也可以是异步的,上面的代码会根据从命令行获取的参数 `async` 的值来决定使用哪种方式。这里,同步的意思是用户程序要直接调用 `taos_consume` 来拉取数据,而异步则由 API 在内部的另一个线程中调用 `taos_consume`,然后把拉取到的数据交给回调函数 `subscribe_callback`去处理。(注意,`subscribe_callback` 中不宜做较为耗时的操作,否则有可能导致客户端阻塞等不可控的问题。) - -参数 `taos` 是一个已经建立好的数据库连接,在同步模式下无特殊要求。但在异步模式下,需要注意它不会被其它线程使用,否则可能导致不可预计的错误,因为回调函数在 API 的内部线程中被调用,而 TDengine 的部分 API 不是线程安全的。 - -参数 `sql` 是查询语句,可以在其中使用 where 子句指定过滤条件。在我们的例子中,如果只想订阅电流超过 10A 时的数据,可以这样写: - -```sql -select * from meters where current > 10; -``` - -注意,这里没有指定起始时间,所以会读到所有时间的数据。如果只想从一天前的数据开始订阅,而不需要更早的历史数据,可以再加上一个时间条件: - -```sql -select * from meters where ts > now - 1d and current > 10; -``` - -订阅的 `topic` 实际上是它的名字,因为订阅功能是在客户端 API 中实现的,所以没必要保证它全局唯一,但需要它在一台客户端机器上唯一。 - -如果名为 `topic` 的订阅不存在,参数 `restart` 没有意义;但如果用户程序创建这个订阅后退出,当它再次启动并重新使用这个 `topic` 时,`restart` 就会被用于决定是从头开始读取数据,还是接续上次的位置进行读取。本例中,如果 `restart` 是 **true**(非零值),用户程序肯定会读到所有数据。但如果这个订阅之前就存在了,并且已经读取了一部分数据,且 `restart` 是 **false**(**0**),用户程序就不会读到之前已经读取的数据了。 - -`taos_subscribe`的最后一个参数是以毫秒为单位的轮询周期。在同步模式下,如果前后两次调用 `taos_consume` 的时间间隔小于此时间,`taos_consume` 会阻塞,直到间隔超过此时间。异步模式下,这个时间是两次调用回调函数的最小时间间隔。 - -`taos_subscribe` 的倒数第二个参数用于用户程序向回调函数传递附加参数,订阅 API 不对其做任何处理,只原样传递给回调函数。此参数在同步模式下无意义。 - -订阅创建以后,就可以消费其数据了,同步模式下,示例代码是下面的 else 部分: - -```c -if (async) { -  getchar(); -} else while(1) { -  TAOS_RES* res = taos_consume(tsub); -  if (res == NULL) { -    
printf("failed to consume data."); -    break; -  } else { -    print_result(res, blockFetch); -    getchar(); -  } -} -``` - -这里是一个 **while** 循环,用户每按一次回车键就调用一次 `taos_consume`,而 `taos_consume` 的返回值是查询到的结果集,与 `taos_use_result` 完全相同,例子中使用这个结果集的代码是函数 `print_result`: - -```c -void print_result(TAOS_RES* res, int blockFetch) { -  TAOS_ROW row = NULL; -  int num_fields = taos_num_fields(res); -  TAOS_FIELD* fields = taos_fetch_fields(res); -  int nRows = 0; -  if (blockFetch) { -    nRows = taos_fetch_block(res, &row); -    for (int i = 0; i < nRows; i++) { -      char temp[256]; -      taos_print_row(temp, row + i, fields, num_fields); -      puts(temp); -    } -  } else { -    while ((row = taos_fetch_row(res))) { -      char temp[256]; -      taos_print_row(temp, row, fields, num_fields); -      puts(temp); -      nRows++; -    } -  } -  printf("%d rows consumed.\n", nRows); -} -``` - -其中的 `taos_print_row` 用于处理订阅到数据,在我们的例子中,它会打印出所有符合条件的记录。而异步模式下,消费订阅到的数据则显得更为简单: - -```c -void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) { -  print_result(res, *(int*)param); -} -``` - -当要结束一次数据订阅时,需要调用 `taos_unsubscribe`: - -```c -taos_unsubscribe(tsub, keep); -``` - -其第二个参数,用于决定是否在客户端保留订阅的进度信息。如果这个参数是**false**(**0**),那无论下次调用 `taos_subscribe` 时的 `restart` 参数是什么,订阅都只能重新开始。另外,进度信息的保存位置是 _{DataDir}/subscribe/_ 这个目录下(注:`taos.cfg` 配置文件中 `DataDir` 参数值默认为 **/var/lib/taos/**,但是 Windows 服务器上本身不存在该目录,所以需要在 Windows 的配置文件中修改 `DataDir` 参数值为相应的已存在目录"),每个订阅有一个与其 `topic` 同名的文件,删掉某个文件,同样会导致下次创建其对应的订阅时只能重新开始。 - -代码介绍完毕,我们来看一下实际的运行效果。假设: - -- 示例代码已经下载到本地 -- TDengine 也已经在同一台机器上安装好 -- 示例所需的数据库、超级表、子表已经全部创建好 - -则可以在示例代码所在目录执行以下命令来编译并启动示例程序: - -```bash -make -./subscribe -sql='select * from meters where current > 10;' -``` - -示例程序启动后,打开另一个终端窗口,启动 TDengine CLI 向 **D1001** 插入一条电流为 12A 的数据: - -```sql -$ taos -> use test; -> insert into D1001 values(now, 12, 220, 1); -``` - -这时,因为电流超过了 10A,您应该可以看到示例程序将它输出到了屏幕上。您可以继续插入一些数据观察示例程序的输出。 - -## 示例程序 - -下面的示例程序展示是如何使用连接器订阅所有电流超过 10A 的记录。 - -### 准备数据 - -``` -# create database "power" -taos> create database power; -# use "power" as the database in following operations -taos> use power; -# create super table "meters" -taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int); -# create tabes using the schema defined by super table "meters" -taos> create table d1001 using meters tags ("California.SanFrancisco", 2); -taos> create table d1002 using meters tags ("California.LosAngeles", 2); -# insert some rows -taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1); -taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1); -# filter out the rows in which current is bigger than 10A -taos> select * from meters where current > 10; - ts | current | voltage | phase | location | groupid | -=========================================================================================================== - 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 | - 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 | - 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 | - 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 | - 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 | -Query OK, 5 row(s) in set (0.004896s) -``` - 
-### 示例代码 - - - - - - - - - {/* - - */} - - - - {/* - - - - - */} - - - - - -### 运行示例程序 - -示例程序会先消费符合查询条件的所有历史数据: - -```bash -ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2 -ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2 -ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2 -ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2 -ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2 -``` - -接着,使用 TDengine CLI 向表中新增一条数据: - -``` -# taos -taos> use power; -taos> insert into d1001 values(now, 12.4, 220, 1); -``` - -因为这条数据的电流大于 10A,示例程序会将其消费: - -``` -ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2 -``` +--- +sidebar_label: 消息队列 +description: "数据订阅与推送服务。连续写入到 TDengine 中的时序数据能够被自动推送到订阅客户端。" +title: 消息队列 +--- + +基于数据天然的时间序列特性,TDengine 的数据写入(insert)与消息系统的数据发布(pub)逻辑上一致,均可视为系统中插入一条带时间戳的新记录。同时,TDengine 在内部严格按照数据时间序列单调递增的方式保存数据。本质上来说,TDengine 中每一张表均可视为一个标准的消息队列。 + +TDengine 内嵌支持消息订阅与推送服务(下文都简称TMQ)。使用系统提供的 API,用户可使用普通查询语句订阅数据库中的一张或多张表,或整个库。客户端启动订阅后,定时或按需轮询服务器是否有新的记录到达,有新的记录到达就会将结果反馈到客户。 + +TMQ提供了提交机制来保证消息队列的可靠性和正确性。在调用方法上,支持自动提交和手动提交。 + +TMQ 的 API 中,与订阅相关的主要数据结构和API如下: + +```c +typedef struct tmq_t tmq_t; +typedef struct tmq_conf_t tmq_conf_t; +typedef struct tmq_list_t tmq_list_t; + +typedef void(tmq_commit_cb(tmq_t *, int32_t code, void *param)); + +DLL_EXPORT tmq_list_t *tmq_list_new(); +DLL_EXPORT int32_t tmq_list_append(tmq_list_t *, const char *); +DLL_EXPORT void tmq_list_destroy(tmq_list_t *); +DLL_EXPORT tmq_t *tmq_consumer_new(tmq_conf_t *conf, char *errstr, int32_t errstrLen); +DLL_EXPORT const char *tmq_err2str(int32_t code); + +DLL_EXPORT int32_t tmq_subscribe(tmq_t *tmq, const tmq_list_t *topic_list); +DLL_EXPORT int32_t tmq_unsubscribe(tmq_t *tmq); +DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout); +DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq); +DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg); +DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param); + +enum tmq_conf_res_t { + TMQ_CONF_UNKNOWN = -2, + TMQ_CONF_INVALID = -1, + TMQ_CONF_OK = 0, +}; +typedef enum tmq_conf_res_t tmq_conf_res_t; + +DLL_EXPORT tmq_conf_t *tmq_conf_new(); +DLL_EXPORT tmq_conf_res_t tmq_conf_set(tmq_conf_t *conf, const char *key, const char *value); +DLL_EXPORT void tmq_conf_destroy(tmq_conf_t *conf); +DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_commit_cb *cb, void *param); +``` + +这些 API 的文档请见 [C/C++ Connector](/reference/connector/cpp),下面介绍一下它们的具体用法(超级表和子表结构请参考“数据建模”一节),完整的示例代码可以在 [tmq.c](https://github.com/taosdata/TDengine/blob/3.0/examples/c/tmq.c) 看到。 + +一、首先完成建库、建一张超级表和多张子表,并每个子表插入若干条数据记录: + +```sql +drop database if exists tmqdb; +create database tmqdb; +create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16) tags(t1 int, t3 varchar(16)); +create table tmqdb.ctb0 using tmqdb.stb tags(0, "subtable0"); +create table tmqdb.ctb1 using tmqdb.stb tags(1, "subtable1"); +create table tmqdb.ctb2 using tmqdb.stb tags(2, "subtable2"); +create table tmqdb.ctb3 using tmqdb.stb tags(3, "subtable3"); +insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00'); +insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11'); +insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22'); 
+insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33'); +``` + +二、创建topic: + +```sql +create topic topicName as select ts, c1, c2, c3 from tmqdb.stb where c1 > 1; +``` + +注:TMQ支持多种订阅类型: +1、列订阅 + +语法:CREATE TOPIC topic_name as subquery +通过select语句订阅(包括select *,或select ts, c1等指定列描述订阅,可以带条件过滤、标量函数计算,但不支持聚合函数、不支持时间窗口聚合) + +- TOPIC一旦创建则schema确定 +- 被订阅或用于计算的column和tag不可被删除、修改 +- 若发生schema变更,新增的column不出现在结果中 + +2、超级表订阅 +语法:CREATE TOPIC topic_name AS STABLE stbName + +- 订阅某超级表的全部数据,schema变更不受限,schema变更后写入的数据将以最新schema返回 +- 在tmq的返回消息中schema是块级别的,每块的schema可能不一样 +- 列变更后写入的数据若未落盘,将以写入时的schema返回 +- 列变更后写入的数据若已落盘,将以落盘时的schema返回 + +3、db订阅 +语法:CREATE TOPIC topic_name AS DATABASE db_name + +- 订阅某一db的全部数据,schema变更不受限 +- 在tmq的返回消息中schema是块级别的,每块的schema可能不一样 +- 列变更后写入的数据若未落盘,将以写入时的schema返回 +- 列变更后写入的数据若已落盘,将以落盘时的schema返回 + +三、创建consumer + +目前支持的config: + +| 参数名称 | 参数值 | 备注 | +| ---------------------------- | ------------------------------ | ------------------------------------------------------ | +| group.id | 最大长度:192 | | +| enable.auto.commit | 合法值:true, false | | +| auto.commit.interval.ms | | | +| auto.offset.reset | 合法值:earliest, latest, none | | +| td.connect.ip | 用于连接,同taos_connect的参数 | | +| td.connect.user | 用于连接,同taos_connect的参数 | | +| td.connect.pass | 用于连接,同taos_connect的参数 | | +| td.connect.port | 用于连接,同taos_connect的参数 | | +| enable.heartbeat.background | 合法值:true, false | 开启后台心跳,即consumer不会因为长时间不poll而认为离线 | +| experimental.snapshot.enable | 合法值:true, false | 从wal开始消费,还是从tsbs开始消费 | +| msg.with.table.name | 合法值:true, false | 从消息中能否解析表名 | + +```sql +/* 根据需要,设置消费组(group.id)、自动提交(enable.auto.commit)、自动提交时间间隔(auto.commit.interval.ms)、用户名(td.connect.user)、密码(td.connect.pass)等参数 */ + tmq_conf_t* conf = tmq_conf_new(); + tmq_conf_set(conf, "enable.auto.commit", "true"); + tmq_conf_set(conf, "auto.commit.interval.ms", "1000"); + tmq_conf_set(conf, "group.id", "cgrpName"); + tmq_conf_set(conf, "td.connect.user", "root"); + tmq_conf_set(conf, "td.connect.pass", "taosdata"); + tmq_conf_set(conf, "auto.offset.reset", "earliest"); + tmq_conf_set(conf, "experimental.snapshot.enable", "true"); + tmq_conf_set(conf, "msg.with.table.name", "true"); + tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL); + + tmq_t* tmq = tmq_consumer_new(conf, NULL, 0); + tmq_conf_destroy(conf); + return tmq; +``` + +四、创建订阅主题列表 + +```sql + tmq_list_t* topicList = tmq_list_new(); + tmq_list_append(topicList, "topicName"); + return topicList; +``` + +单个consumer支持同时订阅多个topic。 + +五、启动订阅并开始消费 + +```sql + /* 启动订阅 */ + tmq_subscribe(tmq, topicList); + tmq_list_destroy(topicList); + + /* 循环poll消息 */ + int32_t totalRows = 0; + int32_t msgCnt = 0; + int32_t consumeDelay = 5000; + while (running) { + TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, consumeDelay); + if (tmqmsg) { + msgCnt++; + totalRows += msg_process(tmqmsg); + taos_free_result(tmqmsg); + } else { + break; + } + } + + fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows); +``` + +这里是一个 **while** 循环,每调用一次tmq_consumer_poll(),获取一个消息,该消息与普通查询返回的结果集完全相同,可以使用相同的解析API完成消息内容的解析: + +```sql + static int32_t msg_process(TAOS_RES* msg) { + char buf[1024]; + int32_t rows = 0; + + const char* topicName = tmq_get_topic_name(msg); + const char* dbName = tmq_get_db_name(msg); + int32_t vgroupId = tmq_get_vgroup_id(msg); + + printf("topic: %s\n", topicName); + printf("db: %s\n", dbName); + printf("vgroup id: %d\n", vgroupId); + + while (1) { + TAOS_ROW row = taos_fetch_row(msg); + if (row == NULL) break; + + TAOS_FIELD* fields = taos_fetch_fields(msg); + 
int32_t numOfFields = taos_field_count(msg); + int32_t* length = taos_fetch_lengths(msg); + int32_t precision = taos_result_precision(msg); + const char* tbName = tmq_get_table_name(msg); + rows++; + taos_print_row(buf, row, fields, numOfFields); + printf("row content from %s: %s\n", (tbName != NULL ? tbName : "null table"), buf); + } + + return rows; +} +``` + +五、结束消费 + +```sql + /* 取消订阅 */ + tmq_unsubscribe(tmq); + + /* 关闭消费 */ + tmq_consumer_close(tmq); +``` + +六、删除topic + +如果不再需要,可以删除创建topic,但注意:只有没有被订阅的topic才能别删除。 + +```sql + /* 删除topic */ + drop topic topicName; +``` + +七、状态查看 + +1、topics:查询已经创建的topic + +```sql + show topics; +``` + +2、consumers:查询consumer的状态及其订阅的topic + +```sql + show consumers; +``` + +3、subscriptions:查询consumer与vgroup之间的分配关系 + +```sql + show subscriptions; +``` + + diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 6071275b551d68aab51b5434a7ac07498bd27c62..b8ae61810584dd8ffc3016c0ce026ddb5b1a5ccf 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -124,52 +124,49 @@ gcc -g -O0 -fPIC -shared add_one.c -o add_one.so 用户可以通过 SQL 指令在系统中加载客户端所在主机上的 UDF 函数库(不能通过 RESTful 接口或 HTTP 管理界面来进行这一过程)。一旦创建成功,则当前 TDengine 集群的所有用户都可以在 SQL 指令中使用这些函数。UDF 存储在系统的 MNode 节点上,因此即使重启 TDengine 系统,已经创建的 UDF 也仍然可用。 -在创建 UDF 时,需要区分标量函数和聚合函数。如果创建时声明了错误的函数类别,则可能导致通过 SQL 指令调用函数时出错。此外, UDF 支持输入与输出类型不一致,用户需要保证输入数据类型与 UDF 程序匹配,UDF 输出数据类型与 OUTPUTTYPE 匹配。 +在创建 UDF 时,需要区分标量函数和聚合函数。如果创建时声明了错误的函数类别,则可能导致通过 SQL 指令调用函数时出错。此外,用户需要保证输入数据类型与 UDF 程序匹配,UDF 输出数据类型与 OUTPUTTYPE 匹配。 - 创建标量函数 ```sql -CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ]; +CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type; ``` - - ids(X):标量函数未来在 SQL 指令中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致; - - ids(Y):包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; - - typename(Z):此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可; - - B:中间计算结果的缓冲区大小,单位是字节,最小 0,最大 512,如果不使用可以不设置。 + - function_name:标量函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udf 的实际名称一致; + - library_path:包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; + - output_type:此函数计算结果的数据类型名称; - 例如,如下语句可以把 add_one.so 创建为系统中可用的 UDF: + 例如,如下语句可以把 libbitand.so 创建为系统中可用的 UDF: ```sql - CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT; + CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT; ``` - 创建聚合函数: ```sql -CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ]; +CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ]; ``` - - ids(X):聚合函数未来在 SQL 指令中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致; - - ids(Y):包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; - - typename(Z):此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可; - - B:中间计算结果的缓冲区大小,单位是字节,最小 0,最大 512,如果不使用可以不设置。 + - function_name:聚合函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致; + - library_path:包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; + - output_type:此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可; + - buffer_size:中间计算结果的缓冲区大小,单位是字节。如果不使用可以不设置。 - 关于中间计算结果的使用,可以参考示例程序[demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c) - - 例如,如下语句可以把 demo.so 创建为系统中可用的 UDF: + 例如,如下语句可以把 libsqrsum.so 创建为系统中可用的 UDF: ```sql - CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE 
bufsize 14; + CREATE AGGREGATE FUNCTION sqr_sum AS "/home/taos/udf_example/libsqrsum.so" OUTPUTTYPE DOUBLE bufsize 8; ``` ### 管理 UDF - 删除指定名称的用户定义函数: ``` -DROP FUNCTION ids(X); +DROP FUNCTION function_name; ``` -- ids(X):此参数的含义与 CREATE 指令中的 ids(X) 参数一致,也即要删除的函数的名字,例如 +- function_name:此参数的含义与 CREATE 指令中的 function_name 参数一致,也即要删除的函数的名字,例如 ```sql -DROP FUNCTION add_one; +DROP FUNCTION bit_and; ``` - 显示系统中当前可用的所有 UDF: ```sql @@ -180,53 +177,32 @@ SHOW FUNCTIONS; 在 SQL 指令中,可以直接以在系统中创建 UDF 时赋予的函数名来调用用户定义函数。例如: ```sql -SELECT X(c) FROM table/stable; +SELECT X(c1,c2) FROM table/stable; ``` -表示对名为 c 的数据列调用名为 X 的用户定义函数。SQL 指令中用户定义函数可以配合 WHERE 等查询特性来使用。 - -## UDF 的一些使用限制 - -在当前版本下,使用 UDF 存在如下这些限制: +表示对名为 c1, c2 的数据列调用名为 X 的用户定义函数。SQL 指令中用户定义函数可以配合 WHERE 等查询特性来使用。 -1. 在创建和调用 UDF 时,服务端和客户端都只支持 Linux 操作系统; -2. UDF 不能与系统内建的 SQL 函数混合使用,暂不支持在一条 SQL 语句中使用多个不同名的 UDF ; -3. UDF 只支持以单个数据列作为输入; -4. UDF 只要创建成功,就会被持久化存储到 MNode 节点中; -5. 无法通过 RESTful 接口来创建 UDF; -6. UDF 在 SQL 中定义的函数名,必须与 .so 库文件实现中的接口函数名前缀保持一致,也即必须是 udfNormalFunc 的名称,而且不可与 TDengine 中已有的内建 SQL 函数重名。 ## 示例代码 -### 标量函数示例 [add_one](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) - -
-add_one.c - -```c -{{#include tests/script/sh/add_one.c}} -``` - -
- -### 向量函数示例 [abs_max](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) +### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c)
-abs_max.c +bit_and.c ```c -{{#include tests/script/sh/abs_max.c}} +{{#include tests/script/sh/bit_and.c}} ```
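+
+创建完成后,bit_and 可以像内置标量函数一样在 SQL 中调用(下面仅为示意,表名 demo 和 INT 类型列 c1、c2 均为假设):
+
+```sql
+SELECT bit_and(c1, c2) FROM demo;
+```
+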
-### 使用中间计算结果示例 [demo](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c) +### 聚合函数示例 [sqr_sum](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sqr_sum.c)
-demo.c +sqr_sum.c ```c -{{#include tests/script/sh/demo.c}} +{{#include tests/script/sh/sqr_sum.c}} ```
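+
+同样,sqr_sum 创建成功后即可在聚合查询中直接使用(仅为示意,表名 demo 和数值类型列 c1 为假设):
+
+```sql
+SELECT sqr_sum(c1) FROM demo;
+```
+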
diff --git a/docs/zh/12-taos-sql/26-udf.md b/docs/zh/12-taos-sql/26-udf.md index bd8d61a5844241efae9eee99a73c65afd3d0926f..12922063113f990f171347fcdb03633abef21b8e 100644 --- a/docs/zh/12-taos-sql/26-udf.md +++ b/docs/zh/12-taos-sql/26-udf.md @@ -8,21 +8,30 @@ title: 用户自定义函数 ## 创建函数 ```sql -CREATE [AGGREGATE] FUNCTION func_name AS library_path OUTPUTTYPE type_name [BUFSIZE value] +CREATE [AGGREGATE] FUNCTION func_name AS library_path OUTPUTTYPE type_name [BUFSIZE buffer_size] ``` 语法说明: AGGREGATE:标识此函数是标量函数还是聚集函数。 -func_name:函数名,必须与函数实现中udfNormalFunc的实际名称一致。 +func_name:函数名,必须与函数实现中 udf 的实际名称一致。 library_path:包含UDF函数实现的动态链接库的绝对路径,是在客户端侧主机上的绝对路径。 -OUTPUTTYPE:标识此函数的返回类型。 -BUFSIZE:中间结果的缓冲区大小,单位是字节。不设置则默认为0。最大不可超过512字节。 +type_name:标识此函数的返回类型。 +buffer_size:中间结果的缓冲区大小,单位是字节。不设置则默认为0。 关于如何开发自定义函数,请参考 [UDF使用说明](../../develop/udf)。 ## 删除自定义函数 +``` +DROP FUNCTION function_name; +``` + +- function_name:此参数的含义与 CREATE 指令中的 function_name 参数一致,也即要删除的函数的名字,例如 + + +## 显示 UDF + ```sql -DROP FUNCTION func_name -``` \ No newline at end of file +SHOW FUNCTION; +``` diff --git a/docs/zh/14-reference/03-connector/csharp.mdx b/docs/zh/14-reference/03-connector/csharp.mdx index 1e23df9286bf0cb3bf1db95e334301c04d01ad04..723c12932b410e9f85a0f35cd0c0b8273f4f7723 100644 --- a/docs/zh/14-reference/03-connector/csharp.mdx +++ b/docs/zh/14-reference/03-connector/csharp.mdx @@ -22,7 +22,9 @@ import CSAsyncQuery from "../../07-develop/04-query-data/_cs_async.mdx" 本文介绍如何在 Linux 或 Windows 环境中安装 `TDengine.Connector`,并通过 `TDengine.Connector` 连接 TDengine 集群,进行数据写入、查询等基本操作。 -`TDengine.Connector` 的源码托管在 [GitHub](https://github.com/taosdata/taos-connector-dotnet)。 +注意:`TDengine.Connector` 3.x 不兼容 TDengine 2.x,如果在运行 TDengine 2.x 版本的环境下需要使用 C# 连接器请使用 TDengine.Connector 的 1.x 版本 。 + +`TDengine.Connector` 的源码托管在 [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0)。 ## 支持的平台 @@ -63,15 +65,15 @@ dotnet add package TDengine.Connector -可以下载 TDengine 的源码,直接引用最新版本的 TDengine.Connector 库 +也可以[下载源码](https://github.com/taosdata/taos-connector-dotnet/tree/3.0),直接引用 TDengine.Connector 库 ```bash -git clone https://github.com/taosdata/TDengine.git -cd TDengine/src/connector/C#/src/ -cp -r TDengineDriver/ myProject +git clone -b 3.0 https://github.com/taosdata/taos-connector-dotnet.git +cd taos-connector-dotnet +cp -r src/ myProject cd myProject -dotnet add TDengineDriver/TDengineDriver.csproj +dotnet add exmaple.csproj reference src/TDengine.csproj ``` @@ -145,20 +147,19 @@ namespace TDengineExample |示例程序 | 示例程序描述 | |--------------------------------------------------------------------------------------------------------------------|--------------------------------------------| -| [C#checker](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/C%23checker) | 使用 TDengine.Connector 可以通过 help 命令中提供的参数,测试C# Driver的同步写入和查询 | -| [TDengineTest](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/TDengineTest) | 使用 TDengine.Connector 实现的简单写入和查询的示例 | -| [insertCn](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/insertCn) | 使用 TDengine.Connector 实现的写入和查询中文字符的示例 | -| [jsonTag](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/jsonTag) | 使用 TDengine.Connector 实现的写入和查询 json tag 类型数据的示例 | -| [stmt](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/stmt) | 使用 TDengine.Connector 实现的参数绑定的示例 | -| [schemaless](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/schemaless) | 使用 TDengine.Connector 实现的使用 schemaless 写入的示例 | -| 
[benchmark](https://github.com/taosdata/TDengine/tree/develop/examples/C%23/taosdemo) | 使用 TDengine.Connector 实现的简易 Benchmark | -| [async query](https://github.com/taosdata/taos-connector-dotnet/blob/develop/examples/QueryAsyncSample.cs) | 使用 TDengine.Connector 实现的异步查询的示例 | -| [subscribe](https://github.com/taosdata/taos-connector-dotnet/blob/develop/examples/SubscribeSample.cs) | 使用 TDengine.Connector 实现的订阅数据的示例 | +| [CURD](https://github.com/taosdata/taos-connector-dotnet/blob/3.0/examples/Query/Query.cs) | 使用 TDengine.Connector 实现的建表、插入、查询示例 | +| [JSON Tag](https://github.com/taosdata/taos-connector-dotnet/blob/3.0/examples/JSONTag) | 使用 TDengine.Connector 实现的写入和查询 JSON tag 类型数据的示例 | +| [stmt](https://github.com/taosdata/taos-connector-dotnet/tree/3.0/examples/Stmt) | 使用 TDengine.Connector 实现的参数绑定插入和查询的示例 | +| [schemaless](https://github.com/taosdata/taos-connector-dotnet/blob/3.0/examples/schemaless) | 使用 TDengine.Connector 实现的使用 schemaless 写入的示例 | +| [async query](https://github.com/taosdata/taos-connector-dotnet/blob/3.0/examples/AsyncQuery/QueryAsync.cs) | 使用 TDengine.Connector 实现的异步查询的示例 | +| [TMQ](https://github.com/taosdata/taos-connector-dotnet/blob/3.0/examples/TMQ/TMQ.cs) | 使用 TDengine.Connector 实现的订阅数据的示例 | ## 重要更新记录 | TDengine.Connector | 说明 | |--------------------|--------------------------------| +| 3.0.0 | 支持 TDengine 3.0.0.0,不兼容 2.x。新增接口TDengine.Impl.GetData(),解析查询结果。 | +| 1.0.7 | 修复 TDengine.Query()内存泄露。 | | 1.0.6 | 修复 schemaless 在 1.0.4 和 1.0.5 中失效 bug。 | | 1.0.5 | 修复 Windows 同步查询中文报错 bug。 | | 1.0.4 | 新增异步查询,订阅等功能。修复绑定参数 bug。 | diff --git a/docs/zh/14-reference/05-taosbenchmark.md b/docs/zh/14-reference/05-taosbenchmark.md index 6b694543b1db435f507b5e2fb325cebe76261b48..f84ec65b4c8574c0812567a65213d7605b306c99 100644 --- a/docs/zh/14-reference/05-taosbenchmark.md +++ b/docs/zh/14-reference/05-taosbenchmark.md @@ -227,45 +227,34 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\) #### 数据库相关配置参数 -创建数据库时的相关参数在 json 配置文件中的 `dbinfo` 中配置,具体参数如下。这些参数与 TDengine 中 `create database` 时所指定的数据库参数相对应。 +创建数据库时的相关参数在 json 配置文件中的 `dbinfo` 中配置,个别具体参数如下。其余参数均与 TDengine 中 `create database` 时所指定的数据库参数相对应,详见[../../taos-sql/database] - **name** : 数据库名。 - **drop** : 插入前是否删除数据库,默认为 true。 -- **replica** : 创建数据库时指定的副本数。 +#### 流式计算相关配置参数 -- **days** : 单个数据文件中存储数据的时间跨度,默认值为 10。 +创建流式计算的相关参数在 json 配置文件中的 `stream` 中配置,具体参数如下。 -- **cache** : 缓存块的大小,单位是 MB,默认值是 16。 +- **stream_name** : 流式计算的名称,必填项。 -- **blocks** : 每个 vnode 中缓存块的数量,默认为 6。 +- **stream_stb** : 流式计算对应的超级表名称,必填项。 -- **precision** : 数据库时间精度,默认值为 "ms"。 +- **stream_sql** : 流式计算的sql语句,必填项。 -- **keep** : 保留数据的天数,默认值为 3650。 +- **trigger_mode** : 流式计算的触发模式,可选项。 -- **minRows** : 文件块中的最小记录数,默认值为 100。 +- **watermark** : 流式计算的水印,可选项。 -- **maxRows** : 文件块中的最大记录数,默认值为 4096。 - -- **comp** : 文件压缩标志,默认值为 2。 - -- **walLevel** : WAL 级别,默认为 1。 - -- **cacheLast** : 是否允许将每个表的最后一条记录保留在内存中,默认值为 0,可选值为 0,1,2,3。 - -- **quorum** : 多副本模式下的写确认数量,默认值为 1。 - -- **fsync** : 当 wal 设置为 2 时,fsync 的间隔时间,单位为 ms,默认值为 3000。 - -- **update** : 是否支持数据更新,默认值为 0, 可选值为 0, 1, 2。 +- **drop** : 是否创建流式计算,可选项为 "yes" 或者 "no", 为 "no" 时不创建。 #### 超级表相关配置参数 -创建超级表时的相关参数在 json 配置文件中的 `super_tables` 中配置,具体参数如下表。 +创建超级表时的相关参数在 json 配置文件中的 `super_tables` 中配置,具体参数如下。 - **name**: 超级表名,必须配置,没有默认值。 + - **child_table_exists** : 子表是否已经存在,默认值为 "no",可选值为 "yes" 或 "no"。 - **child_table_count** : 子表的数量,默认值为 10。 @@ -316,6 +305,22 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\) - **tags_file** : 仅当 insert_mode 为 taosc, rest 的模式下生效。 最终的 tag 的数值与 childtable_count 有关,如果 csv 文件内的 tag 数据行小于给定的子表数量,那么会循环读取 csv 
文件数据直到生成 childtable_count 指定的子表数量;否则则只会读取 childtable_count 行 tag 数据。也即最终生成的子表数量为二者取小。 +#### tsma配置参数 + +指定tsma的配置参数在 `super_tables` 中的 `tsmas` 中,具体参数如下。 + +- **name** : 指定 tsma 的名字,必选项。 + +- **function** : 指定 tsma 的函数,必选项。 + +- **interval** : 指定 tsma 的时间间隔,必选项。 + +- **sliding** : 指定 tsma 的窗口时间位移,必选项。 + +- **custom** : 指定 tsma 的创建语句结尾追加的自定义配置,可选项。 + +- **start_when_inserted** : 指定当插入多少行时创建 tsma,可选项,默认为 0。 + #### 标签列与数据列配置参数 指定超级表标签列与数据列的配置参数分别在 `super_tables` 中的 `columns` 和 `tag` 中。 @@ -335,6 +340,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\) - **values** : nchar/binary 列/标签的值域,将从值中随机选择。 +- **sma**: 将该列加入bsma中,值为 "yes" 或者 "no",默认为 "no"。 + #### 插入行为配置参数 - **thread_count** : 插入数据的线程数量,默认为 8。 diff --git a/docs/zh/17-operation/17-diagnose.md b/docs/zh/17-operation/17-diagnose.md index e2a2ef035a33a295b206c77ec08edf8f7842671f..e6e9be7153dee855867c4ba4fcd1d3258c9d788f 100644 --- a/docs/zh/17-operation/17-diagnose.md +++ b/docs/zh/17-operation/17-diagnose.md @@ -1,131 +1,71 @@ ---- -title: 诊断及其他 ---- - -## 网络连接诊断 - -当出现客户端应用无法访问服务端时,需要确认客户端与服务端之间网络的各端口连通情况,以便有针对性地排除故障。 - -目前网络连接诊断支持在:Linux 与 Linux,Linux 与 Windows 之间进行诊断测试。 - -诊断步骤: - -1. 如拟诊断的端口范围与服务器 taosd 实例的端口范围相同,须先停掉 taosd 实例 -2. 服务端命令行输入:`taos -n server -P -l ` 以服务端身份启动对端口 port 为基准端口的监听 -3. 客户端命令行输入:`taos -n client -h -P -l ` 以客户端身份启动对指定的服务器、指定的端口发送测试包 - --l : 测试网络包的大小(单位:字节)。最小值是 11、最大值是 64000,默认值为 1000。 -注:两端命令行中指定的测试包长度必须一致,否则测试显示失败。 - -服务端运行正常的话会输出以下信息: - -```bash -# taos -n server -P 6000 -12/21 14:50:13.522509 0x7f536f455200 UTL work as server, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000 - -12/21 14:50:13.522659 0x7f5352242700 UTL TCP server at port:6000 is listening -12/21 14:50:13.522727 0x7f5351240700 UTL TCP server at port:6001 is listening -... -... -... -12/21 14:50:13.523954 0x7f5342fed700 UTL TCP server at port:6011 is listening -12/21 14:50:13.523989 0x7f53437ee700 UTL UDP server at port:6010 is listening -12/21 14:50:13.524019 0x7f53427ec700 UTL UDP server at port:6011 is listening -12/21 14:50:22.192849 0x7f5352242700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6000 -12/21 14:50:22.192993 0x7f5352242700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6000 -12/21 14:50:22.237082 0x7f5351a41700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6000 -12/21 14:50:22.237203 0x7f5351a41700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6000 -12/21 14:50:22.237450 0x7f5351240700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6001 -12/21 14:50:22.237576 0x7f5351240700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6001 -12/21 14:50:22.281038 0x7f5350a3f700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6001 -12/21 14:50:22.281141 0x7f5350a3f700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6001 -... -... -... -12/21 14:50:22.677443 0x7f5342fed700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6011 -12/21 14:50:22.677576 0x7f5342fed700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6011 -12/21 14:50:22.721144 0x7f53427ec700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6011 -12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011 -``` - -客户端运行正常会输出以下信息: - -```bash -# taos -n client -h 172.27.0.7 -P 6000 -12/21 14:50:22.192434 0x7fc95d859200 UTL work as client, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000 - -12/21 14:50:22.192472 0x7fc95d859200 UTL server ip:172.27.0.7 is resolved from host:172.27.0.7 -12/21 14:50:22.236869 0x7fc95d859200 UTL successed to test TCP port:6000 -12/21 14:50:22.237215 0x7fc95d859200 UTL successed to test UDP port:6000 -... -... -... 
-12/21 14:50:22.676891 0x7fc95d859200 UTL successed to test TCP port:6010 -12/21 14:50:22.677240 0x7fc95d859200 UTL successed to test UDP port:6010 -12/21 14:50:22.720893 0x7fc95d859200 UTL successed to test TCP port:6011 -12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011 -``` - -仔细阅读打印出来的错误信息,可以帮助管理员找到原因,以解决问题。 - -## 启动状态及 RPC 诊断 - -`taos -n startup -h ` - -判断 taosd 服务端是否成功启动,是数据库管理员经常遇到的一种情形。特别当若干台服务器组成集群时,判断每个服务端实例是否成功启动就会是一个重要问题。除检索 taosd 服务端日志文件进行问题定位、分析外,还可以通过 `taos -n startup -h ` 来诊断一个 taosd 进程的启动状态。 - -针对多台服务器组成的集群,当服务启动过程耗时较长时,可通过该命令行来诊断每台服务器的 taosd 实例的启动状态,以准确定位问题。 - -`taos -n rpc -h ` - -该命令用来诊断已经启动的 taosd 实例的端口是否可正常访问。如果 taosd 程序异常或者失去响应,可以通过 `taos -n rpc -h ` 来发起一个与指定 fqdn 的 rpc 通信,看看 taosd 是否能收到,以此来判定是网络问题还是 taosd 程序异常问题。 - -## sync 及 arbitrator 诊断 - -``` -taos -n sync -P 6040 -h -taos -n sync -P 6042 -h -``` - -用来诊断 sync 端口是否工作正常,判断服务端 sync 模块是否成功工作。另外,-P 6042 用来诊断 arbitrator 是否配置正常,判断指定服务器的 arbitrator 是否能正常工作。 - -## 网络速度诊断 - -`taos -n speed -h -P 6030 -N 10 -l 10000000 -S TCP` - -从 2.2.0.0 版本开始,taos 工具新提供了一个网络速度诊断的模式,可以对一个正在运行中的 taosd 实例或者 `taos -n server` 方式模拟的一个服务端实例,以非压缩传输的方式进行网络测速。这个模式下可供调整的参数如下: - --n:设为“speed”时,表示对网络速度进行诊断。 --h:所要连接的服务端的 FQDN 或 ip 地址。如果不设置这一项,会使用本机 taos.cfg 文件中 FQDN 参数的设置作为默认值。 --P:所连接服务端的网络端口。默认值为 6030。 --N:诊断过程中使用的网络包总数。最小值是 1、最大值是 10000,默认值为 100。 --l:单个网络包的大小(单位:字节)。最小值是 1024、最大值是 1024 `*` 1024 `*` 1024,默认值为 1024。 --S:网络封包的类型。可以是 TCP 或 UDP,默认值为 TCP。 - -## FQDN 解析速度诊断 - -`taos -n fqdn -h ` - -从 2.2.0.0 版本开始,taos 工具新提供了一个 FQDN 解析速度的诊断模式,可以对一个目标 FQDN 地址尝试解析,并记录解析过程中所消耗的时间。这个模式下可供调整的参数如下: - --n:设为“fqdn”时,表示对 FQDN 解析进行诊断。 --h:所要解析的目标 FQDN 地址。如果不设置这一项,会使用本机 taos.cfg 文件中 FQDN 参数的设置作为默认值。 - -## 服务端日志 - -taosd 服务端日志文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。 - -一旦设定为 135 或 143,日志文件增长很快,特别是写入、查询请求量较大时,增长速度惊人。如合并保存日志,很容易把日志内的关键信息(如配置信息、错误信息等)冲掉。为此,服务端将重要信息日志与其他日志分开存放: - -- taosinfo 存放重要信息日志, 包括:INFO/ERROR/WARNING 级别的日志信息。不记录 DEBUG、TRACE 级别的日志。 -- taosdlog 服务器端生成的日志,记录 taosinfo 中全部信息外,还根据设置的日志输出级别,记录 DEBUG(日志级别 135)、TRACE(日志级别是 143)。 - -## 客户端日志 - -每个独立运行的客户端(一个进程)生成一个独立的客户端日志,其命名方式采用 taoslog+<序号> 的方式命名。文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。 - -- taoslog 客户端(driver)生成的日志,默认记录客户端 INFO/ERROR/WARNING 级别日志,还根据设置的日志输出级别,记录 DEBUG(日志级别 135)、TRACE(日志级别是 143)。 - -其中,日志文件最大长度由 numOfLogLines 来进行配置,一个 taosd 实例最多保留两个文件。 - -taosd 服务端日志采用异步落盘写入机制,优点是可以避免硬盘写入压力太大,对性能造成很大影响。缺点是,在极端情况下,存在少量日志行数丢失的可能。 +--- +title: 诊断及其他 +--- + +## 网络连接诊断 + +当出现客户端应用无法访问服务端时,需要确认客户端与服务端之间网络的各端口连通情况,以便有针对性地排除故障。 + +目前网络连接诊断支持在:Linux 与 Linux,Linux 与 Windows 之间进行诊断测试。 + +诊断步骤: + +1. 如拟诊断的端口范围与服务器 taosd 实例的端口范围相同,须先停掉 taosd 实例 +2. 服务端命令行输入:`taos -n server -P -l ` 以服务端身份启动对端口 port 为基准端口的监听 +3. 客户端命令行输入:`taos -n client -h -P -l ` 以客户端身份启动对指定的服务器、指定的端口发送测试包 + +-l : 测试网络包的大小(单位:字节)。最小值是 11、最大值是 64000,默认值为 1000。 +注:两端命令行中指定的测试包长度必须一致,否则测试显示失败。 + +服务端运行正常的话会输出以下信息: + +```bash +# taos -n server -P 6030 -l 1000 +network test server is initialized, port:6030 +request is received, size:1000 +request is received, size:1000 +... +... +... +request is received, size:1000 +request is received, size:1000 +``` + +客户端运行正常会输出以下信息: + +```bash +# taos -n client -h 172.27.0.7 -P 6000 +taos -n client -h v3s2 -P 6030 -l 1000 +network test client is initialized, the server is v3s2:6030 +request is sent, size:1000 +response is received, size:1000 +request is sent, size:1000 +response is received, size:1000 +... +... +... 
+request is sent, size:1000 +response is received, size:1000 +request is sent, size:1000 +response is received, size:1000 + +total succ: 100/100 cost: 16.23 ms speed: 5.87 MB/s +``` + +仔细阅读打印出来的错误信息,可以帮助管理员找到原因,以解决问题。 + +## 服务端日志 + +taosd 服务端日志文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。 + +一旦设定为 135 或 143,日志文件增长很快,特别是写入、查询请求量较大时,增长速度惊人。请注意日志文件目录所在磁盘的空间大小。 + +## 客户端日志 + +每个独立运行的客户端(一个进程)生成一个独立的客户端日志,其命名方式采用 taoslog+<序号> 的方式命名。文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。 + +- taoslog 客户端(driver)生成的日志,默认记录客户端 INFO/ERROR/WARNING 级别日志,还根据设置的日志输出级别,记录 DEBUG(日志级别 135)、TRACE(日志级别是 143)。 + +其中,日志文件最大长度由 numOfLogLines 来进行配置,一个 taosd 实例最多保留两个文件。 + +taosd 服务端日志采用异步落盘写入机制,优点是可以避免硬盘写入压力太大,对性能造成很大影响。缺点是,在极端情况下,存在少量日志行数丢失的可能。当问题分析需要的时候,可以考虑将 参数 asynclog 设置成 0,修改为同步落盘写入机制,保证日志不会丢失。 diff --git a/examples/c/tmq.c b/examples/c/tmq.c index 3686251b4b17dcef0553e912a7babb04404461e9..1cdd4c02daf0e1158745ff0d51a0a35d9934041c 100644 --- a/examples/c/tmq.c +++ b/examples/c/tmq.c @@ -1,473 +1,287 @@ -/* - * Copyright (c) 2019 TAOS Data, Inc. - * - * This program is free software: you can use, redistribute, and/or modify - * it under the terms of the GNU Affero General Public License, version 3 - * or later ("AGPL"), as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. - * - * You should have received a copy of the GNU Affero General Public License - * along with this program. If not, see . - */ - -#include -#include -#include -#include -#include -#include "taos.h" - -static int running = 1; -static void msg_process(TAOS_RES* msg) { - char buf[1024]; - /*memset(buf, 0, 1024);*/ - printf("topic: %s\n", tmq_get_topic_name(msg)); - printf("db: %s\n", tmq_get_db_name(msg)); - printf("vg: %d\n", tmq_get_vgroup_id(msg)); - if (tmq_get_res_type(msg) == TMQ_RES_TABLE_META) { - tmq_raw_data raw = {0}; - int32_t code = tmq_get_raw(msg, &raw); - if (code == 0) { - TAOS* pConn = taos_connect("192.168.1.86", "root", "taosdata", NULL, 0); - if (pConn == NULL) { - return; - } - - TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1 vgroups 5"); - if (taos_errno(pRes) != 0) { - printf("error in create db, reason:%s\n", taos_errstr(pRes)); - return; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "use abc1"); - if (taos_errno(pRes) != 0) { - printf("error in use db, reason:%s\n", taos_errstr(pRes)); - return; - } - taos_free_result(pRes); - - int32_t ret = tmq_write_raw(pConn, raw); - printf("write raw data: %s\n", tmq_err2str(ret)); - taos_close(pConn); - } - char* result = tmq_get_json_meta(msg); - if (result) { - printf("meta result: %s\n", result); - } - tmq_free_json_meta(result); - return; - } - while (1) { - TAOS_ROW row = taos_fetch_row(msg); - if (row == NULL) break; - TAOS_FIELD* fields = taos_fetch_fields(msg); - int32_t numOfFields = taos_field_count(msg); - taos_print_row(buf, row, fields, numOfFields); - printf("%s\n", buf); - - const char* tbName = tmq_get_table_name(msg); - if (tbName) { - printf("from tb: %s\n", tbName); - } - } -} - -int32_t init_env() { - TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0); - if (pConn == NULL) { - return -1; - } - - TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1 vgroups 5"); - if (taos_errno(pRes) != 0) { - printf("error in create db, reason:%s\n", taos_errstr(pRes)); - return -1; - } - 
taos_free_result(pRes); - - pRes = taos_query(pConn, "use abc1"); - if (taos_errno(pRes) != 0) { - printf("error in use db, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, - "create stable if not exists st1 (ts timestamp, c1 int, c2 float, c3 binary(16)) tags(t1 int, t3 " - "nchar(8), t4 bool)"); - if (taos_errno(pRes) != 0) { - printf("failed to create super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table if not exists ct0 using st1 tags(1000, \"ttt\", true)"); - if (taos_errno(pRes) != 0) { - printf("failed to create child table tu1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "insert into ct0 values(now, 1, 2, 'a')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert into ct0, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table if not exists ct1 using st1(t1) tags(2000)"); - if (taos_errno(pRes) != 0) { - printf("failed to create child table ct1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table if not exists ct2 using st1(t1) tags(NULL)"); - if (taos_errno(pRes) != 0) { - printf("failed to create child table ct2, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "insert into ct1 values(now, 3, 4, 'b')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert into ct1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table if not exists ct3 using st1(t1) tags(3000)"); - if (taos_errno(pRes) != 0) { - printf("failed to create child table ct3, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "insert into ct3 values(now, 5, 6, 'c')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert into ct3, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - -#if 0 - pRes = taos_query(pConn, "alter table st1 add column c4 bigint"); - if (taos_errno(pRes) != 0) { - printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table st1 modify column c3 binary(64)"); - if (taos_errno(pRes) != 0) { - printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table st1 add tag t2 binary(64)"); - if (taos_errno(pRes) != 0) { - printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table ct3 set tag t1=5000"); - if (taos_errno(pRes) != 0) { - printf("failed to slter child table ct3, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "drop table ct3 ct1"); - if (taos_errno(pRes) != 0) { - printf("failed to drop child table ct3, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "drop table st1"); - if (taos_errno(pRes) != 0) { - printf("failed to drop super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table if not exists n1(ts timestamp, c1 int, c2 nchar(4))"); - if (taos_errno(pRes) != 0) 
{ - printf("failed to create normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table n1 add column c3 bigint"); - if (taos_errno(pRes) != 0) { - printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table n1 modify column c2 nchar(8)"); - if (taos_errno(pRes) != 0) { - printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table n1 rename column c3 cc3"); - if (taos_errno(pRes) != 0) { - printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table n1 comment 'hello'"); - if (taos_errno(pRes) != 0) { - printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "alter table n1 drop column c1"); - if (taos_errno(pRes) != 0) { - printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "drop table n1"); - if (taos_errno(pRes) != 0) { - printf("failed to drop normal table n1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table jt(ts timestamp, i int) tags(t json)"); - if (taos_errno(pRes) != 0) { - printf("failed to create super table jt, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table jt1 using jt tags('{\"k1\":1, \"k2\":\"hello\"}')"); - if (taos_errno(pRes) != 0) { - printf("failed to create super table jt, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create table jt2 using jt tags('')"); - if (taos_errno(pRes) != 0) { - printf("failed to create super table jt2, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, - "create stable if not exists st1 (ts timestamp, c1 int, c2 float, c3 binary(16)) tags(t1 int, t3 " - "nchar(8), t4 bool)"); - if (taos_errno(pRes) != 0) { - printf("failed to create super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "drop table st1"); - if (taos_errno(pRes) != 0) { - printf("failed to drop super table st1, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); -#endif - - return 0; -} - -int32_t create_topic() { - printf("create topic\n"); - TAOS_RES* pRes; - TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0); - if (pConn == NULL) { - return -1; - } - - pRes = taos_query(pConn, "use abc1"); - if (taos_errno(pRes) != 0) { - printf("error in use db, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - // pRes = taos_query(pConn, "create topic topic_ctb_column with meta as database abc1"); - pRes = taos_query(pConn, "create topic topic_ctb_column as select ts, c1, c2, c3 from st1"); - if (taos_errno(pRes) != 0) { - printf("failed to create topic topic_ctb_column, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - - pRes = taos_query(pConn, "create topic topic2 as select ts, c1, c2, c3 from st1"); - if (taos_errno(pRes) != 0) { - printf("failed to create topic topic_ctb_column, reason:%s\n", 
taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - -#if 0 - pRes = taos_query(pConn, "insert into tu1 values(now, 1, 1.0, 'bi1')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - pRes = taos_query(pConn, "insert into tu1 values(now+1d, 1, 1.0, 'bi1')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - pRes = taos_query(pConn, "insert into tu2 values(now, 2, 2.0, 'bi2')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); - pRes = taos_query(pConn, "insert into tu2 values(now+1d, 2, 2.0, 'bi2')"); - if (taos_errno(pRes) != 0) { - printf("failed to insert, reason:%s\n", taos_errstr(pRes)); - return -1; - } - taos_free_result(pRes); -#endif - - taos_close(pConn); - return 0; -} - -void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) { - printf("commit %d tmq %p param %p\n", code, tmq, param); -} - -tmq_t* build_consumer() { -#if 0 - TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0); - assert(pConn != NULL); - - TAOS_RES* pRes = taos_query(pConn, "use abc1"); - if (taos_errno(pRes) != 0) { - printf("error in use db, reason:%s\n", taos_errstr(pRes)); - } - taos_free_result(pRes); -#endif - - tmq_conf_t* conf = tmq_conf_new(); - tmq_conf_set(conf, "group.id", "tg2"); - tmq_conf_set(conf, "client.id", "my app 1"); - tmq_conf_set(conf, "td.connect.user", "root"); - tmq_conf_set(conf, "td.connect.pass", "taosdata"); - tmq_conf_set(conf, "msg.with.table.name", "true"); - tmq_conf_set(conf, "enable.auto.commit", "true"); - - /*tmq_conf_set(conf, "experimental.snapshot.enable", "true");*/ - - tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL); - tmq_t* tmq = tmq_consumer_new(conf, NULL, 0); - assert(tmq); - tmq_conf_destroy(conf); - return tmq; -} - -tmq_list_t* build_topic_list() { - tmq_list_t* topic_list = tmq_list_new(); - tmq_list_append(topic_list, "topic_ctb_column"); - /*tmq_list_append(topic_list, "tmq_test_db_multi_insert_topic");*/ - return topic_list; -} - -void basic_consume_loop(tmq_t* tmq, tmq_list_t* topics) { - int32_t code; - - if ((code = tmq_subscribe(tmq, topics))) { - fprintf(stderr, "%% Failed to start consuming topics: %s\n", tmq_err2str(code)); - printf("subscribe err\n"); - return; - } - int32_t cnt = 0; - while (running) { - TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, -1); - if (tmqmessage) { - cnt++; - msg_process(tmqmessage); - /*if (cnt >= 2) break;*/ - /*printf("get data\n");*/ - taos_free_result(tmqmessage); - /*} else {*/ - /*break;*/ - /*tmq_commit_sync(tmq, NULL);*/ - } - } - - code = tmq_consumer_close(tmq); - if (code) - fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code)); - else - fprintf(stderr, "%% Consumer closed\n"); -} - -void sync_consume_loop(tmq_t* tmq, tmq_list_t* topics) { - static const int MIN_COMMIT_COUNT = 1; - - int msg_count = 0; - int32_t code; - - if ((code = tmq_subscribe(tmq, topics))) { - fprintf(stderr, "%% Failed to start consuming topics: %s\n", tmq_err2str(code)); - return; - } - - tmq_list_t* subList = NULL; - tmq_subscription(tmq, &subList); - char** subTopics = tmq_list_to_c_array(subList); - int32_t sz = tmq_list_get_size(subList); - printf("subscribed topics: "); - for (int32_t i = 0; i < sz; i++) { - printf("%s, ", subTopics[i]); - } - printf("\n"); - tmq_list_destroy(subList); - - while (running) 
{ - TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 1000); - if (tmqmessage) { - msg_process(tmqmessage); - taos_free_result(tmqmessage); - - /*tmq_commit_sync(tmq, NULL);*/ - /*if ((++msg_count % MIN_COMMIT_COUNT) == 0) tmq_commit(tmq, NULL, 0);*/ - } - } - - code = tmq_consumer_close(tmq); - if (code) - fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code)); - else - fprintf(stderr, "%% Consumer closed\n"); -} - -int main(int argc, char* argv[]) { - if (argc > 1) { - printf("env init\n"); - if (init_env() < 0) { - return -1; - } - create_topic(); - } - tmq_t* tmq = build_consumer(); - tmq_list_t* topic_list = build_topic_list(); - basic_consume_loop(tmq, topic_list); - /*sync_consume_loop(tmq, topic_list);*/ -} +/* + * Copyright (c) 2019 TAOS Data, Inc. + * + * This program is free software: you can use, redistribute, and/or modify + * it under the terms of the GNU Affero General Public License, version 3 + * or later ("AGPL"), as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see . + */ + +#include +#include +#include +#include +#include +#include "taos.h" + +static int running = 1; +static char dbName[64] = "tmqdb"; +static char stbName[64] = "stb"; +static char topicName[64] = "topicname"; + +static int32_t msg_process(TAOS_RES* msg) { + char buf[1024]; + int32_t rows = 0; + + const char* topicName = tmq_get_topic_name(msg); + const char* dbName = tmq_get_db_name(msg); + int32_t vgroupId = tmq_get_vgroup_id(msg); + + printf("topic: %s\n", topicName); + printf("db: %s\n", dbName); + printf("vgroup id: %d\n", vgroupId); + + while (1) { + TAOS_ROW row = taos_fetch_row(msg); + if (row == NULL) break; + + TAOS_FIELD* fields = taos_fetch_fields(msg); + int32_t numOfFields = taos_field_count(msg); + int32_t* length = taos_fetch_lengths(msg); + int32_t precision = taos_result_precision(msg); + const char* tbName = tmq_get_table_name(msg); + rows++; + taos_print_row(buf, row, fields, numOfFields); + printf("row content from %s: %s\n", (tbName != NULL ? 
tbName : "null table"), buf); + } + + return rows; +} + +static int32_t init_env() { + TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0); + if (pConn == NULL) { + return -1; + } + + TAOS_RES* pRes; + // drop database if exists + printf("create database\n"); + pRes = taos_query(pConn, "drop database if exists tmqdb"); + if (taos_errno(pRes) != 0) { + printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + // create database + pRes = taos_query(pConn, "create database tmqdb"); + if (taos_errno(pRes) != 0) { + printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + // create super table + printf("create super table\n"); + pRes = taos_query(pConn, "create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))"); + if (taos_errno(pRes) != 0) { + printf("failed to create super table stb, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + // create sub tables + printf("create sub tables\n"); + pRes = taos_query(pConn, "create table tmqdb.ctb0 using tmqdb.stb tags(0, 'subtable0')"); + if (taos_errno(pRes) != 0) { + printf("failed to create super table ctb0, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "create table tmqdb.ctb1 using tmqdb.stb tags(1, 'subtable1')"); + if (taos_errno(pRes) != 0) { + printf("failed to create super table ctb1, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "create table tmqdb.ctb2 using tmqdb.stb tags(2, 'subtable2')"); + if (taos_errno(pRes) != 0) { + printf("failed to create super table ctb2, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "create table tmqdb.ctb3 using tmqdb.stb tags(3, 'subtable3')"); + if (taos_errno(pRes) != 0) { + printf("failed to create super table ctb3, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + // insert data + printf("insert data into sub tables\n"); + pRes = taos_query(pConn, "insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00')"); + if (taos_errno(pRes) != 0) { + printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11')"); + if (taos_errno(pRes) != 0) { + printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22')"); + if (taos_errno(pRes) != 0) { + printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + pRes = taos_query(pConn, "insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33')"); + if (taos_errno(pRes) != 0) { + printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + taos_close(pConn); + return 0; +} + +int32_t create_topic() { + printf("create topic\n"); + TAOS_RES* pRes; + TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0); + if (pConn == NULL) { + return -1; + } + + pRes = taos_query(pConn, "use tmqdb"); + if (taos_errno(pRes) != 0) { + printf("error in use tmqdb, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + // pRes = 
taos_query(pConn, "create topic topic_ctb_column with meta as database abc1"); + pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3 from tmqdb.stb where c1 > 1"); + if (taos_errno(pRes) != 0) { + printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes)); + return -1; + } + taos_free_result(pRes); + + taos_close(pConn); + return 0; +} + +void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) { + printf("tmq_commit_cb_print() code: %d, tmq: %p, param: %p\n", code, tmq, param); +} + +tmq_t* build_consumer() { + tmq_conf_res_t code; + tmq_conf_t* conf = tmq_conf_new(); + code = tmq_conf_set(conf, "enable.auto.commit", "true"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "group.id", "cgrpName"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "td.connect.user", "root"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "td.connect.pass", "taosdata"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "auto.offset.reset", "earliest"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "experimental.snapshot.enable", "true"); + if (TMQ_CONF_OK != code) return NULL; + code = tmq_conf_set(conf, "msg.with.table.name", "true"); + if (TMQ_CONF_OK != code) return NULL; + + tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL); + + tmq_t* tmq = tmq_consumer_new(conf, NULL, 0); + tmq_conf_destroy(conf); + return tmq; +} + +tmq_list_t* build_topic_list() { + tmq_list_t* topicList = tmq_list_new(); + int32_t code = tmq_list_append(topicList, "topicname"); + if (code) { + return NULL; + } + return topicList; +} + +void basic_consume_loop(tmq_t* tmq, tmq_list_t* topicList) { + int32_t code; + + if ((code = tmq_subscribe(tmq, topicList))) { + fprintf(stderr, "%% Failed to tmq_subscribe(): %s\n", tmq_err2str(code)); + return; + } + + int32_t totalRows = 0; + int32_t msgCnt = 0; + int32_t consumeDelay = 5000; + while (running) { + TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, consumeDelay); + if (tmqmsg) { + msgCnt++; + totalRows += msg_process(tmqmsg); + taos_free_result(tmqmsg); + } else { + break; + } + } + + fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows); +} + +int main(int argc, char* argv[]) { + int32_t code; + + if (init_env() < 0) { + return -1; + } + + if (create_topic() < 0) { + return -1; + } + + tmq_t* tmq = build_consumer(); + if (NULL == tmq) { + fprintf(stderr, "%% build_consumer() fail!\n"); + return -1; + } + + tmq_list_t* topic_list = build_topic_list(); + if (NULL == topic_list) { + return -1; + } + + basic_consume_loop(tmq, topic_list); + + code = tmq_unsubscribe(tmq); + if (code) { + fprintf(stderr, "%% Failed to unsubscribe: %s\n", tmq_err2str(code)); + } + else { + fprintf(stderr, "%% unsubscribe\n"); + } + + code = tmq_consumer_close(tmq); + if (code) { + fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code)); + } + else { + fprintf(stderr, "%% Consumer closed\n"); + } + + return 0; +} diff --git a/include/libs/stream/tstream.h b/include/libs/stream/tstream.h index 404d81465b811dc15e45b75e11d56dc8ca3cd1fe..103ca6a4f0e421e65d3664d7894706e3c7476e2b 100644 --- a/include/libs/stream/tstream.h +++ b/include/libs/stream/tstream.h @@ -66,6 +66,25 @@ enum { TASK_OUTPUT_STATUS__BLOCKED, }; +enum { + TASK_TRIGGER_STATUS__INACTIVE = 1, + TASK_TRIGGER_STATUS__ACTIVE, +}; + +enum { + TASK_LEVEL__SOURCE = 1, + 
TASK_LEVEL__AGG, + TASK_LEVEL__SINK, +}; + +enum { + TASK_OUTPUT__FIXED_DISPATCH = 1, + TASK_OUTPUT__SHUFFLE_DISPATCH, + TASK_OUTPUT__TABLE, + TASK_OUTPUT__SMA, + TASK_OUTPUT__FETCH, +}; + typedef struct { int8_t type; } SStreamQueueItem; @@ -202,29 +221,6 @@ typedef struct { int8_t reserved; } STaskSinkFetch; -enum { - TASK_EXEC__NONE = 1, - TASK_EXEC__PIPE, -}; - -enum { - TASK_DISPATCH__NONE = 1, - TASK_DISPATCH__FIXED, - TASK_DISPATCH__SHUFFLE, -}; - -enum { - TASK_SINK__NONE = 1, - TASK_SINK__TABLE, - TASK_SINK__SMA, - TASK_SINK__FETCH, -}; - -enum { - TASK_TRIGGER_STATUS__IN_ACTIVE = 1, - TASK_TRIGGER_STATUS__ACTIVE, -}; - typedef struct { int32_t nodeId; int32_t childId; @@ -237,11 +233,8 @@ typedef struct { typedef struct SStreamTask { int64_t streamId; int32_t taskId; - int8_t isDataScan; - int8_t execType; - int8_t sinkType; - int8_t dispatchType; - int8_t isStreamDistributed; + int8_t taskLevel; + int8_t outputType; int16_t dispatchMsgType; int8_t taskStatus; @@ -252,13 +245,12 @@ typedef struct SStreamTask { int32_t nodeId; SEpSet epSet; - // used for semi or single task, - // while final task should have processedVer for each child + // used for task source and sink, + // while task agg should have processedVer for each child int64_t recoverSnapVer; int64_t startVer; int64_t checkpointVer; int64_t processedVer; - // int32_t numOfVgroups; // children info SArray* childEpInfo; // SArray @@ -266,19 +258,13 @@ typedef struct SStreamTask { // exec STaskExec exec; - // TODO: unify sink and dispatch - - // local sink - union { - STaskSinkTb tbSink; - STaskSinkSma smaSink; - STaskSinkFetch fetchSink; - }; - - // remote dispatcher + // output union { STaskDispatcherFixedEp fixedEpDispatcher; STaskDispatcherShuffle shuffleDispatcher; + STaskSinkTb tbSink; + STaskSinkSma smaSink; + STaskSinkFetch fetchSink; }; int8_t inputStatus; @@ -292,9 +278,6 @@ typedef struct SStreamTask { int64_t triggerParam; void* timer; - // application storage - // void* ahandle; - // msg handle SMsgCb* pMsgCb; } SStreamTask; @@ -331,7 +314,7 @@ static FORCE_INLINE int32_t streamTaskInput(SStreamTask* pTask, SStreamQueueItem } if (pItem->type != STREAM_INPUT__GET_RES && pItem->type != STREAM_INPUT__CHECKPOINT && pTask->triggerParam != 0) { - atomic_val_compare_exchange_8(&pTask->triggerStatus, TASK_TRIGGER_STATUS__IN_ACTIVE, TASK_TRIGGER_STATUS__ACTIVE); + atomic_val_compare_exchange_8(&pTask->triggerStatus, TASK_TRIGGER_STATUS__INACTIVE, TASK_TRIGGER_STATUS__ACTIVE); } #if 0 @@ -346,18 +329,15 @@ static FORCE_INLINE void streamTaskInputFail(SStreamTask* pTask) { } static FORCE_INLINE int32_t streamTaskOutput(SStreamTask* pTask, SStreamDataBlock* pBlock) { - if (pTask->sinkType == TASK_SINK__TABLE) { - ASSERT(pTask->dispatchType == TASK_DISPATCH__NONE); + if (pTask->outputType == TASK_OUTPUT__TABLE) { pTask->tbSink.tbSinkFunc(pTask, pTask->tbSink.vnode, 0, pBlock->blocks); taosArrayDestroyEx(pBlock->blocks, (FDelete)blockDataFreeRes); taosFreeQitem(pBlock); - } else if (pTask->sinkType == TASK_SINK__SMA) { - ASSERT(pTask->dispatchType == TASK_DISPATCH__NONE); + } else if (pTask->outputType == TASK_OUTPUT__SMA) { pTask->smaSink.smaSink(pTask->smaSink.vnode, pTask->smaSink.smaId, pBlock->blocks); taosArrayDestroyEx(pBlock->blocks, (FDelete)blockDataFreeRes); taosFreeQitem(pBlock); } else { - ASSERT(pTask->dispatchType != TASK_DISPATCH__NONE); taosWriteQitem(pTask->outputQueue->queue, pBlock); } return 0; diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index 
f6d8ea51c4695fd9953567c07548066c95a3d23c..f836cd76acb25e4aa93e7e47bcc4176ddf9788ca 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -89,7 +89,7 @@ bool tsSmlDataFormat = // query int32_t tsQueryPolicy = 1; -int32_t tsQuerySmaOptimize = 1; +int32_t tsQuerySmaOptimize = 0; /* * denote if the server needs to compress response message at the application layer to client, including query rsp, diff --git a/source/dnode/mnode/impl/src/mndScheduler.c b/source/dnode/mnode/impl/src/mndScheduler.c index 9d7fa537bb3ed9fffde4dc5b49e37e7e0e4afc84..218f82df180caf2b7d6628b97b1d8172fa9bbcef 100644 --- a/source/dnode/mnode/impl/src/mndScheduler.c +++ b/source/dnode/mnode/impl/src/mndScheduler.c @@ -98,13 +98,11 @@ END: } int32_t mndAddSinkToTask(SMnode* pMnode, SStreamObj* pStream, SStreamTask* pTask) { - pTask->dispatchType = TASK_DISPATCH__NONE; - // sink if (pStream->smaId != 0) { - pTask->sinkType = TASK_SINK__SMA; + pTask->outputType = TASK_OUTPUT__SMA; pTask->smaSink.smaId = pStream->smaId; } else { - pTask->sinkType = TASK_SINK__TABLE; + pTask->outputType = TASK_OUTPUT__TABLE; pTask->tbSink.stbUid = pStream->targetStbUid; memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN); pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema); @@ -113,8 +111,6 @@ int32_t mndAddSinkToTask(SMnode* pMnode, SStreamObj* pStream, SStreamTask* pTask } int32_t mndAddDispatcherToInnerTask(SMnode* pMnode, SStreamObj* pStream, SStreamTask* pTask) { - pTask->sinkType = TASK_SINK__NONE; - bool isShuffle = false; if (pStream->fixedSinkVgId == 0) { @@ -122,7 +118,7 @@ int32_t mndAddDispatcherToInnerTask(SMnode* pMnode, SStreamObj* pStream, SStream ASSERT(pDb); if (pDb->cfg.numOfVgroups > 1) { isShuffle = true; - pTask->dispatchType = TASK_DISPATCH__SHUFFLE; + pTask->outputType = TASK_OUTPUT__SHUFFLE_DISPATCH; pTask->dispatchMsgType = TDMT_STREAM_TASK_DISPATCH; if (mndExtractDbInfo(pMnode, pDb, &pTask->shuffleDispatcher.dbInfo, NULL) < 0) { ASSERT(0); @@ -152,7 +148,7 @@ int32_t mndAddDispatcherToInnerTask(SMnode* pMnode, SStreamObj* pStream, SStream } } } else { - pTask->dispatchType = TASK_DISPATCH__FIXED; + pTask->outputType = TASK_OUTPUT__FIXED_DISPATCH; pTask->dispatchMsgType = TDMT_STREAM_TASK_DISPATCH; SArray* pArray = taosArrayGetP(pStream->tasks, 0); // one sink only @@ -178,7 +174,6 @@ int32_t mndAssignTaskToVg(SMnode* pMnode, SStreamTask* pTask, SSubplan* plan, co terrno = TSDB_CODE_QRY_INVALID_INPUT; return -1; } - ASSERT(pTask->dispatchType != TASK_DISPATCH__NONE || pTask->sinkType != TASK_SINK__NONE); return 0; } @@ -249,26 +244,20 @@ int32_t mndAddShuffleSinkTasksToStream(SMnode* pMnode, SStreamObj* pStream) { pTask->nodeId = pVgroup->vgId; pTask->epSet = mndGetVgroupEpset(pMnode, pVgroup); - // source - pTask->isDataScan = 0; - - // exec - pTask->execType = TASK_EXEC__NONE; + // type + pTask->taskLevel = TASK_LEVEL__SINK; // sink if (pStream->smaId != 0) { - pTask->sinkType = TASK_SINK__SMA; + pTask->outputType = TASK_OUTPUT__SMA; pTask->smaSink.smaId = pStream->smaId; } else { - pTask->sinkType = TASK_SINK__TABLE; + pTask->outputType = TASK_OUTPUT__TABLE; pTask->tbSink.stbUid = pStream->targetStbUid; memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN); pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema); ASSERT(pTask->tbSink.pSchemaWrapper); } - - // dispatch - pTask->dispatchType = TASK_DISPATCH__NONE; } return 0; } @@ -295,25 +284,19 @@ int32_t mndAddFixedSinkTaskToStream(SMnode* 
pMnode, SStreamObj* pStream) { #endif pTask->epSet = mndGetVgroupEpset(pMnode, &pStream->fixedSinkVg); - // source - pTask->isDataScan = 0; - - // exec - pTask->execType = TASK_EXEC__NONE; + pTask->taskLevel = TASK_LEVEL__SINK; // sink if (pStream->smaId != 0) { - pTask->sinkType = TASK_SINK__SMA; + pTask->outputType = TASK_OUTPUT__SMA; pTask->smaSink.smaId = pStream->smaId; } else { - pTask->sinkType = TASK_SINK__TABLE; + pTask->outputType = TASK_OUTPUT__TABLE; pTask->tbSink.stbUid = pStream->targetStbUid; memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN); pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema); } - // dispatch - pTask->dispatchType = TASK_DISPATCH__NONE; return 0; } @@ -338,6 +321,7 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { bool multiTarget = pDbObj->cfg.numOfVgroups > 1; if (totLevel == 2 || externalTargetDB || multiTarget) { + /*if (true) {*/ SArray* taskOneLevel = taosArrayInit(0, sizeof(void*)); taosArrayPush(pStream->tasks, &taskOneLevel); // add extra sink @@ -376,8 +360,7 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { pInnerTask->childEpInfo = taosArrayInit(0, sizeof(void*)); - // source - pInnerTask->isDataScan = 0; + pInnerTask->taskLevel = TASK_LEVEL__AGG; // trigger pInnerTask->triggerParam = pStream->triggerParam; @@ -388,9 +371,6 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { return -1; } - // exec - pInnerTask->execType = TASK_EXEC__PIPE; - #if 0 SDbObj* pSourceDb = mndAcquireDb(pMnode, pStream->sourceDb); ASSERT(pDbObj != NULL); @@ -452,19 +432,16 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { mndAddTaskToTaskSet(taskSourceLevel, pTask); // source - pTask->isDataScan = 1; + pTask->taskLevel = TASK_LEVEL__SOURCE; // add fixed vg dispatch - pTask->sinkType = TASK_SINK__NONE; pTask->dispatchMsgType = TDMT_STREAM_TASK_DISPATCH; - pTask->dispatchType = TASK_DISPATCH__FIXED; + pTask->outputType = TASK_OUTPUT__FIXED_DISPATCH; pTask->fixedEpDispatcher.taskId = pInnerTask->taskId; pTask->fixedEpDispatcher.nodeId = pInnerTask->nodeId; pTask->fixedEpDispatcher.epSet = pInnerTask->epSet; - // exec - pTask->execType = TASK_EXEC__PIPE; if (mndAssignTaskToVg(pMnode, pTask, plan, pVgroup) < 0) { sdbRelease(pSdb, pVgroup); qDestroyQueryPlan(pPlan); @@ -515,7 +492,7 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { mndAddTaskToTaskSet(taskOneLevel, pTask); // source - pTask->isDataScan = 1; + pTask->taskLevel = TASK_LEVEL__SOURCE; // trigger pTask->triggerParam = pStream->triggerParam; @@ -527,8 +504,6 @@ int32_t mndScheduleStream(SMnode* pMnode, SStreamObj* pStream) { mndAddSinkToTask(pMnode, pStream, pTask); } - // exec - pTask->execType = TASK_EXEC__PIPE; if (mndAssignTaskToVg(pMnode, pTask, plan, pVgroup) < 0) { sdbRelease(pSdb, pVgroup); qDestroyQueryPlan(pPlan); diff --git a/source/dnode/mnode/impl/src/mndSma.c b/source/dnode/mnode/impl/src/mndSma.c index 74cada7cacc177919b0c9dca18584a064abec6de..6411d06081539007d2cc275ca4a67d9cfcd184fb 100644 --- a/source/dnode/mnode/impl/src/mndSma.c +++ b/source/dnode/mnode/impl/src/mndSma.c @@ -795,11 +795,12 @@ static int32_t mndDropSma(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SSmaObj *p pStb = mndAcquireStb(pMnode, pSma->stb); if (pStb == NULL) goto _OVER; - pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_DB, pReq); + pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_DB, pReq); if (pTrans == NULL) goto _OVER; mDebug("trans:%d, used to drop 
sma:%s", pTrans->id, pSma->name); mndTransSetDbName(pTrans, pDb->name, NULL); + mndTransSetSerial(pTrans); char streamName[TSDB_TABLE_FNAME_LEN] = {0}; mndGetStreamNameFromSmaName(streamName, pSma->name); @@ -834,9 +835,6 @@ static int32_t mndDropSma(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SSmaObj *p code = 0; _OVER: - if(code != 0) { - ASSERT(0); - } mndTransDrop(pTrans); mndReleaseVgroup(pMnode, pVgroup); mndReleaseStb(pMnode, pStb); @@ -855,6 +853,7 @@ int32_t mndDropSmasByStb(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SStbObj *p if (pIter == NULL) break; if (pSma->stbUid == pStb->uid) { + mndTransSetSerial(pTrans); pVgroup = mndAcquireVgroup(pMnode, pSma->dstVgId); if (pVgroup == NULL) goto _OVER; @@ -935,7 +934,6 @@ static int32_t mndProcessDropSmaReq(SRpcMsg *pReq) { goto _OVER; } else { terrno = TSDB_CODE_MND_SMA_NOT_EXIST; - ASSERT(0); goto _OVER; } } diff --git a/source/dnode/mnode/impl/src/mndStream.c b/source/dnode/mnode/impl/src/mndStream.c index b8501db9fb2a372e9a8ad7a919a904c9296fdea9..902879701cfefb6b169b3732214abbffae8f8458 100644 --- a/source/dnode/mnode/impl/src/mndStream.c +++ b/source/dnode/mnode/impl/src/mndStream.c @@ -323,8 +323,7 @@ FAIL: } int32_t mndPersistTaskDeployReq(STrans *pTrans, const SStreamTask *pTask) { - ASSERT(pTask->isDataScan == 0 || pTask->isDataScan == 1); - if (pTask->isDataScan == 0 && pTask->sinkType == TASK_SINK__NONE) { + if (pTask->taskLevel == TASK_LEVEL__AGG) { ASSERT(taosArrayGetSize(pTask->childEpInfo) != 0); } SEncoder encoder; @@ -548,7 +547,7 @@ int32_t mndRecoverStreamTasks(SMnode *pMnode, STrans *pTrans, SStreamObj *pStrea SArray *pTasks = taosArrayGetP(pStream->tasks, i); int32_t sz = taosArrayGetSize(pTasks); SStreamTask *pTask = taosArrayGetP(pTasks, 0); - if (!pTask->isDataScan && pTask->execType != TASK_EXEC__NONE) { + if (pTask->taskLevel == TASK_LEVEL__AGG) { ASSERT(sz == 1); if (mndPersistTaskRecoverReq(pTrans, pTask) < 0) { return -1; @@ -564,8 +563,8 @@ int32_t mndRecoverStreamTasks(SMnode *pMnode, STrans *pTrans, SStreamObj *pStrea int32_t sz = taosArrayGetSize(pTasks); for (int32_t j = 0; j < sz; j++) { SStreamTask *pTask = taosArrayGetP(pTasks, j); - if (!pTask->isDataScan) break; - ASSERT(pTask->execType != TASK_EXEC__NONE); + if (pTask->taskLevel != TASK_LEVEL__SOURCE) break; + ASSERT(pTask->taskLevel != TASK_LEVEL__SINK); if (mndPersistTaskRecoverReq(pTrans, pTask) < 0) { return -1; } diff --git a/source/dnode/snode/src/snode.c b/source/dnode/snode/src/snode.c index 2561031bac637079d9426959833deca4b97da33b..cda4663285ec560c1ea635a56e242d62efa45d41 100644 --- a/source/dnode/snode/src/snode.c +++ b/source/dnode/snode/src/snode.c @@ -110,9 +110,6 @@ static int32_t sndProcessTaskDeployReq(SSnode *pNode, SRpcMsg *pMsg) { pTask->pMsgCb = &pNode->msgCb; - ASSERT(pTask->execType != TASK_EXEC__NONE); - - ASSERT(pTask->isDataScan == 0); pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, NULL); ASSERT(pTask->exec.executor); diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c index 84a1191ef3f916fb0ef0bf9bd9640eae6e98aeb1..62e37f048edcaea1710db79418bf77697f252c5d 100644 --- a/source/dnode/vnode/src/tq/tq.c +++ b/source/dnode/vnode/src/tq/tq.c @@ -604,8 +604,8 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) { int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask) { int32_t code = 0; - ASSERT(pTask->isDataScan == 0 || pTask->isDataScan == 1); - if (pTask->isDataScan == 0 && pTask->sinkType == TASK_SINK__NONE) { + + if (pTask->taskLevel == TASK_LEVEL__AGG) { 
ASSERT(taosArrayGetSize(pTask->childEpInfo) != 0); } @@ -624,32 +624,30 @@ int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask) { pTask->pMsgCb = &pTq->pVnode->msgCb; - // exec - if (pTask->execType != TASK_EXEC__NONE) { - // expand runners - if (pTask->isDataScan) { - SReadHandle handle = { - .meta = pTq->pVnode->pMeta, - .vnode = pTq->pVnode, - .initTqReader = 1, - }; - pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &handle); - } else { - SReadHandle mgHandle = { - .vnode = NULL, - .numOfVgroups = (int32_t)taosArrayGetSize(pTask->childEpInfo), - }; - pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &mgHandle); - } + // expand executor + if (pTask->taskLevel == TASK_LEVEL__SOURCE) { + SReadHandle handle = { + .meta = pTq->pVnode->pMeta, + .vnode = pTq->pVnode, + .initTqReader = 1, + }; + pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &handle); + ASSERT(pTask->exec.executor); + } else if (pTask->taskLevel == TASK_LEVEL__AGG) { + SReadHandle mgHandle = { + .vnode = NULL, + .numOfVgroups = (int32_t)taosArrayGetSize(pTask->childEpInfo), + }; + pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &mgHandle); ASSERT(pTask->exec.executor); } // sink /*pTask->ahandle = pTq->pVnode;*/ - if (pTask->sinkType == TASK_SINK__SMA) { + if (pTask->outputType == TASK_OUTPUT__SMA) { pTask->smaSink.vnode = pTq->pVnode; pTask->smaSink.smaSink = smaHandleRes; - } else if (pTask->sinkType == TASK_SINK__TABLE) { + } else if (pTask->outputType == TASK_OUTPUT__TABLE) { pTask->tbSink.vnode = pTq->pVnode; pTask->tbSink.tbSinkFunc = tqTableSink; @@ -715,7 +713,7 @@ int32_t tqProcessStreamTrigger(STQ* pTq, SSubmitReq* pReq, int64_t ver) { pIter = taosHashIterate(pTq->pStreamTasks, pIter); if (pIter == NULL) break; SStreamTask* pTask = *(SStreamTask**)pIter; - if (!pTask->isDataScan) continue; + if (pTask->taskLevel != TASK_LEVEL__SOURCE) continue; qDebug("data submit enqueue stream task: %d, ver: %" PRId64, pTask->taskId, ver); diff --git a/source/dnode/vnode/src/tq/tqRead.c b/source/dnode/vnode/src/tq/tqRead.c index f20c7e7e55f587ae72ce6864ade4ef25c5da0464..501789385323d84e1c97c7443472335930381ca4 100644 --- a/source/dnode/vnode/src/tq/tqRead.c +++ b/source/dnode/vnode/src/tq/tqRead.c @@ -416,7 +416,7 @@ int32_t tqUpdateTbUidList(STQ* pTq, const SArray* tbUidList, bool isAdd) { pIter = taosHashIterate(pTq->pStreamTasks, pIter); if (pIter == NULL) break; SStreamTask* pTask = *(SStreamTask**)pIter; - if (pTask->isDataScan) { + if (pTask->taskLevel == TASK_LEVEL__SOURCE) { int32_t code = qUpdateQualifiedTableId(pTask->exec.executor, tbUidList, isAdd); ASSERT(code == 0); } diff --git a/source/dnode/vnode/src/vnd/vnodeSync.c b/source/dnode/vnode/src/vnd/vnodeSync.c index 15861433196237d573cfdb14334dcd1dd27871ac..a269f81ddd73cb9c29ae4900c1ab319cf03af764 100644 --- a/source/dnode/vnode/src/vnd/vnodeSync.c +++ b/source/dnode/vnode/src/vnd/vnodeSync.c @@ -708,8 +708,8 @@ int32_t vnodeSyncOpen(SVnode *pVnode, char *path) { } setPingTimerMS(pVnode->sync, 5000); - setElectTimerMS(pVnode->sync, 2800); - setHeartbeatTimerMS(pVnode->sync, 900); + setElectTimerMS(pVnode->sync, 4000); + setHeartbeatTimerMS(pVnode->sync, 700); return 0; } diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index 8eb29ca22a63cd5a411610badd2570aa8eb8201c..8a0564c12935914e0880429d4097f99ba04fcd09 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -2234,7 +2234,7 @@ 
static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { blockDataCleanup(pResBlock); - int32_t numOfRows = 0; + //int32_t numOfRows = 0; while (1) { SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream); if (pBlock == NULL) { @@ -2263,7 +2263,8 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { SColumnInfoData* pDst = taosArrayGet(pResBlock->pDataBlock, dstSlot); char* v = colDataGetData(pSrc, i); - colDataAppend(pDst, numOfRows, v, false); + //colDataAppend(pDst, numOfRows, v, false); + colDataAppend(pDst, pResBlock->info.rows, v, false); } pResBlock->info.rows += 1; @@ -2312,12 +2313,47 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) { } } + // add current row if timestamp match + if (ts == pSliceInfo->current && pSliceInfo->current <= pSliceInfo->win.ekey) { + for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) { + SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[j]; + int32_t dstSlot = pExprInfo->base.resSchema.slotId; + int32_t srcSlot = pExprInfo->base.pParam[0].pCol->slotId; + + SColumnInfoData* pSrc = taosArrayGet(pBlock->pDataBlock, srcSlot); + SColumnInfoData* pDst = taosArrayGet(pResBlock->pDataBlock, dstSlot); + + char* v = colDataGetData(pSrc, i); + colDataAppend(pDst, pResBlock->info.rows, v, false); + } + + pResBlock->info.rows += 1; + doKeepPrevRows(pSliceInfo, pBlock, i); + + pSliceInfo->current = + taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision); + + if (pResBlock->info.rows >= pResBlock->info.capacity) { + break; + } + } + if (pSliceInfo->current > pSliceInfo->win.ekey) { doSetOperatorCompleted(pOperator); break; } } } + + //check if need to interpolate after ts range + while (pSliceInfo->current <= pSliceInfo->win.ekey) { + genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pBlock, pBlock->info.rows - 1, pResBlock); + pSliceInfo->current = + taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision); + if (pResBlock->info.rows >= pResBlock->info.capacity) { + break; + } + } } // restore the value @@ -2375,6 +2411,8 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doTimeslice, NULL, NULL, destroyBasicOperatorInfo, NULL, NULL, NULL); + blockDataEnsureCapacity(pInfo->pRes, pOperator->resultInfo.capacity); + code = appendDownstream(pOperator, &downstream, 1); return pOperator; diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c index 19c834955165de3e489d12e1bf1a91914fffa392..86cd0b495dddb993273dcc075b04e7a2f3cb4329 100644 --- a/source/libs/function/src/builtins.c +++ b/source/libs/function/src/builtins.c @@ -2298,7 +2298,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { { .name = "derivative", .type = FUNCTION_TYPE_DERIVATIVE, - .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | + .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_CUMULATIVE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC, .translateFunc = translateDerivative, .getEnvFunc = getDerivativeFuncEnv, diff --git a/source/libs/parser/inc/parUtil.h b/source/libs/parser/inc/parUtil.h index 896e2bc2398ebe50ed6deafe11d8e456403d8246..b010e4b9e3f17220179a37986af6c6c61de59b18 100644 --- a/source/libs/parser/inc/parUtil.h +++ b/source/libs/parser/inc/parUtil.h @@ -31,7 
+31,8 @@ extern "C" { #define parserDebug(param, ...) qDebug("PARSER: " param, ##__VA_ARGS__) #define parserTrace(param, ...) qTrace("PARSER: " param, ##__VA_ARGS__) -#define PK_TS_COL_INTERNAL_NAME "_rowts" +#define ROWTS_PSEUDO_COLUMN_NAME "_rowts" +#define C0_PSEUDO_COLUMN_NAME "_c0" typedef struct SMsgBuf { int32_t len; diff --git a/source/libs/parser/src/parAstCreater.c b/source/libs/parser/src/parAstCreater.c index b2006f8fc947c5a0d6e1e575a0f827b52439da45..d19c203ffeefce0fce6057b2a20788a1df8caa6f 100644 --- a/source/libs/parser/src/parAstCreater.c +++ b/source/libs/parser/src/parAstCreater.c @@ -443,19 +443,23 @@ SNode* createNotBetweenAnd(SAstCreateContext* pCxt, SNode* pExpr, SNode* pLeft, createOperatorNode(pCxt, OP_TYPE_GREATER_THAN, nodesCloneNode(pExpr), pRight)); } -static SNode* createPrimaryKeyCol(SAstCreateContext* pCxt) { +static SNode* createPrimaryKeyCol(SAstCreateContext* pCxt, const SToken* pFuncName) { CHECK_PARSER_STATUS(pCxt); SColumnNode* pCol = (SColumnNode*)nodesMakeNode(QUERY_NODE_COLUMN); CHECK_OUT_OF_MEM(pCol); pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID; - strcpy(pCol->colName, PK_TS_COL_INTERNAL_NAME); + if (NULL == pFuncName) { + strcpy(pCol->colName, ROWTS_PSEUDO_COLUMN_NAME); + } else { + strncpy(pCol->colName, pFuncName->z, pFuncName->n); + } return (SNode*)pCol; } SNode* createFunctionNode(SAstCreateContext* pCxt, const SToken* pFuncName, SNodeList* pParameterList) { CHECK_PARSER_STATUS(pCxt); if (0 == strncasecmp("_rowts", pFuncName->z, pFuncName->n) || 0 == strncasecmp("_c0", pFuncName->z, pFuncName->n)) { - return createPrimaryKeyCol(pCxt); + return createPrimaryKeyCol(pCxt, pFuncName); } SFunctionNode* func = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION); CHECK_OUT_OF_MEM(func); @@ -586,7 +590,7 @@ SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr) { CHECK_PARSER_STATUS(pCxt); SStateWindowNode* state = (SStateWindowNode*)nodesMakeNode(QUERY_NODE_STATE_WINDOW); CHECK_OUT_OF_MEM(state); - state->pCol = createPrimaryKeyCol(pCxt); + state->pCol = createPrimaryKeyCol(pCxt, NULL); if (NULL == state->pCol) { nodesDestroyNode((SNode*)state); CHECK_OUT_OF_MEM(state->pCol); @@ -600,7 +604,7 @@ SNode* createIntervalWindowNode(SAstCreateContext* pCxt, SNode* pInterval, SNode CHECK_PARSER_STATUS(pCxt); SIntervalWindowNode* interval = (SIntervalWindowNode*)nodesMakeNode(QUERY_NODE_INTERVAL_WINDOW); CHECK_OUT_OF_MEM(interval); - interval->pCol = createPrimaryKeyCol(pCxt); + interval->pCol = createPrimaryKeyCol(pCxt, NULL); if (NULL == interval->pCol) { nodesDestroyNode((SNode*)interval); CHECK_OUT_OF_MEM(interval->pCol); @@ -639,7 +643,7 @@ SNode* createGroupingSetNode(SAstCreateContext* pCxt, SNode* pNode) { SNode* createInterpTimeRange(SAstCreateContext* pCxt, SNode* pStart, SNode* pEnd) { CHECK_PARSER_STATUS(pCxt); - return createBetweenAnd(pCxt, createPrimaryKeyCol(pCxt), pStart, pEnd); + return createBetweenAnd(pCxt, createPrimaryKeyCol(pCxt, NULL), pStart, pEnd); } SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias) { @@ -752,7 +756,7 @@ SNode* addFillClause(SAstCreateContext* pCxt, SNode* pStmt, SNode* pFill) { if (QUERY_NODE_SELECT_STMT == nodeType(pStmt) && NULL != pFill) { SFillNode* pFillClause = (SFillNode*)pFill; nodesDestroyNode(pFillClause->pWStartTs); - pFillClause->pWStartTs = createPrimaryKeyCol(pCxt); + pFillClause->pWStartTs = createPrimaryKeyCol(pCxt, NULL); ((SSelectStmt*)pStmt)->pFill = (SNode*)pFillClause; } return pStmt; @@ -1731,7 +1735,7 @@ SNode* 
createCountFuncForDelete(SAstCreateContext* pCxt) { SFunctionNode* pFunc = (SFunctionNode*)nodesMakeNode(QUERY_NODE_FUNCTION); CHECK_OUT_OF_MEM(pFunc); strcpy(pFunc->functionName, "count"); - if (TSDB_CODE_SUCCESS != nodesListMakeStrictAppend(&pFunc->pParameterList, createPrimaryKeyCol(pCxt))) { + if (TSDB_CODE_SUCCESS != nodesListMakeStrictAppend(&pFunc->pParameterList, createPrimaryKeyCol(pCxt, NULL))) { nodesDestroyNode((SNode*)pFunc); CHECK_OUT_OF_MEM(NULL); } diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index d96cdca8a9405e891dca09431d6dac6388fdc25d..db14a4f1c3637a2cab577f027509169a34576a77 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -612,7 +612,8 @@ static int32_t createColumnsByTable(STranslateContext* pCxt, const STableNode* p } static bool isInternalPrimaryKey(const SColumnNode* pCol) { - return PRIMARYKEY_TIMESTAMP_COL_ID == pCol->colId && 0 == strcmp(pCol->colName, PK_TS_COL_INTERNAL_NAME); + return PRIMARYKEY_TIMESTAMP_COL_ID == pCol->colId && + (0 == strcmp(pCol->colName, ROWTS_PSEUDO_COLUMN_NAME) || 0 == strcmp(pCol->colName, C0_PSEUDO_COLUMN_NAME)); } static int32_t findAndSetColumn(STranslateContext* pCxt, SColumnNode** pColRef, const STableNode* pTable, @@ -2566,7 +2567,7 @@ static int32_t createDefaultFillNode(STranslateContext* pCxt, SNode** pOutput) { return TSDB_CODE_OUT_OF_MEMORY; } pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID; - strcpy(pCol->colName, PK_TS_COL_INTERNAL_NAME); + strcpy(pCol->colName, ROWTS_PSEUDO_COLUMN_NAME); pFill->pWStartTs = (SNode*)pCol; *pOutput = (SNode*)pFill; @@ -2652,7 +2653,7 @@ static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* p return TSDB_CODE_OUT_OF_MEMORY; } pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID; - strcpy(pCol->colName, PK_TS_COL_INTERNAL_NAME); + strcpy(pCol->colName, ROWTS_PSEUDO_COLUMN_NAME); bool found = false; int32_t code = findAndSetColumn(pCxt, &pCol, pTable, &found); if (TSDB_CODE_SUCCESS != code || !found) { @@ -3878,7 +3879,7 @@ static int32_t buildSampleAst(STranslateContext* pCxt, SSampleAstInfo* pInfo, ch return TSDB_CODE_OUT_OF_MEMORY; } ((SColumnNode*)pInterval->pCol)->colId = PRIMARYKEY_TIMESTAMP_COL_ID; - strcpy(((SColumnNode*)pInterval->pCol)->colName, PK_TS_COL_INTERNAL_NAME); + strcpy(((SColumnNode*)pInterval->pCol)->colName, ROWTS_PSEUDO_COLUMN_NAME); pCxt->createStream = true; int32_t code = translateQuery(pCxt, (SNode*)pSelect); diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index e6f7c4ceb8e2bd45699783f3226e3a757ec56505..719da19da637448fbb2d182d6256eb935f04e1de 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -436,8 +436,8 @@ static int32_t pushDownCondOptDealScan(SOptimizeContext* pCxt, SScanLogicNode* p SNode* pPrimaryKeyCond = NULL; SNode* pOtherCond = NULL; - int32_t code = filterPartitionCond(&pScan->node.pConditions, &pPrimaryKeyCond, &pScan->pTagIndexCond, &pScan->pTagCond, - &pOtherCond); + int32_t code = filterPartitionCond(&pScan->node.pConditions, &pPrimaryKeyCond, &pScan->pTagIndexCond, + &pScan->pTagCond, &pOtherCond); if (TSDB_CODE_SUCCESS == code && NULL != pScan->pTagCond) { code = pushDownCondOptRebuildTbanme(&pScan->pTagCond); } @@ -1711,7 +1711,7 @@ static bool eliminateProjOptCanChildConditionUseChildTargets(SLogicNode* pChild, if (!cxt.canUse) return false; } if (QUERY_NODE_LOGIC_PLAN_JOIN == nodeType(pChild) && NULL != 
((SJoinLogicNode*)pChild)->pOnConditions) { - SJoinLogicNode* pJoinLogicNode = (SJoinLogicNode*)pChild; + SJoinLogicNode* pJoinLogicNode = (SJoinLogicNode*)pChild; CheckNewChildTargetsCxt cxt = {.pNewChildTargets = pNewChildTargets, .canUse = false}; nodesWalkExpr(pJoinLogicNode->pOnConditions, eliminateProjOptCanUseNewChildTargetsImpl, &cxt); if (!cxt.canUse) return false; @@ -1768,7 +1768,7 @@ static int32_t eliminateProjOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* if (TSDB_CODE_SUCCESS == code) { NODES_CLEAR_LIST(pProjectNode->node.pChildren); nodesDestroyNode((SNode*)pProjectNode); - //if pChild is a project logic node, remove its projection which is not reference by its target. + // if pChild is a project logic node, remove its projection which is not reference by its target. alignProjectionWithTarget(pChild); } pCxt->optimized = true; @@ -2404,6 +2404,9 @@ static const SOptimizeRule optimizeRuleSet[] = { static const int32_t optimizeRuleNum = (sizeof(optimizeRuleSet) / sizeof(SOptimizeRule)); static void dumpLogicSubplan(const char* pRuleName, SLogicSubplan* pSubplan) { + if (0 == (qDebugFlag & DEBUG_DEBUG)) { + return; + } char* pStr = NULL; nodesNodeToString((SNode*)pSubplan, false, &pStr, NULL); if (NULL == pRuleName) { diff --git a/source/libs/planner/src/planSpliter.c b/source/libs/planner/src/planSpliter.c index 74c78b970d6edf973d5ca5370f936c96a0466406..e10f0586ca75c6d1e26b4ba61dad6ac37cceaec6 100644 --- a/source/libs/planner/src/planSpliter.c +++ b/source/libs/planner/src/planSpliter.c @@ -264,7 +264,7 @@ static bool stbSplNeedSplitJoin(bool streamQuery, SJoinLogicNode* pJoin) { static bool stbSplNeedSplit(bool streamQuery, SLogicNode* pNode) { switch (nodeType(pNode)) { case QUERY_NODE_LOGIC_PLAN_SCAN: - return stbSplIsMultiTbScan(streamQuery, (SScanLogicNode*)pNode); + return streamQuery ? 
false : stbSplIsMultiTbScan(streamQuery, (SScanLogicNode*)pNode); case QUERY_NODE_LOGIC_PLAN_JOIN: return stbSplNeedSplitJoin(streamQuery, (SJoinLogicNode*)pNode); case QUERY_NODE_LOGIC_PLAN_PARTITION: @@ -1423,6 +1423,9 @@ static const SSplitRule splitRuleSet[] = { static const int32_t splitRuleNum = (sizeof(splitRuleSet) / sizeof(SSplitRule)); static void dumpLogicSubplan(const char* pRuleName, SLogicSubplan* pSubplan) { + if (0 == (qDebugFlag & DEBUG_DEBUG)) { + return; + } char* pStr = NULL; nodesNodeToString((SNode*)pSubplan, false, &pStr, NULL); if (NULL == pRuleName) { diff --git a/source/libs/planner/src/planner.c b/source/libs/planner/src/planner.c index 0554779746ac6fc6f0ebf2badeee55d24f9b9ef5..1f531b0708518c247019dde6823d2659e041b76c 100644 --- a/source/libs/planner/src/planner.c +++ b/source/libs/planner/src/planner.c @@ -19,6 +19,9 @@ #include "scalar.h" static void dumpQueryPlan(SQueryPlan* pPlan) { + if (0 == (qDebugFlag & DEBUG_DEBUG)) { + return; + } char* pStr = NULL; nodesNodeToString((SNode*)pPlan, false, &pStr, NULL); planDebugL("QID:0x%" PRIx64 " Query Plan: %s", pPlan->queryId, pStr); @@ -42,6 +45,9 @@ int32_t qCreateQueryPlan(SPlanContext* pCxt, SQueryPlan** pPlan, SArray* pExecNo if (TSDB_CODE_SUCCESS == code) { code = createPhysiPlan(pCxt, pLogicPlan, pPlan, pExecNodeList); } + if (TSDB_CODE_SUCCESS == code) { + dumpQueryPlan(*pPlan); + } nodesDestroyNode((SNode*)pLogicSubplan); nodesDestroyNode((SNode*)pLogicPlan); @@ -79,6 +85,7 @@ static int32_t setSubplanExecutionNode(SPhysiNode* pNode, int32_t groupId, SDown } int32_t qSetSubplanExecutionNode(SSubplan* subplan, int32_t groupId, SDownstreamSourceNode* pSource) { + planDebug("QID:0x%" PRIx64 " set subplan execution node, groupId:%d", subplan->id.groupId, groupId); return setSubplanExecutionNode(subplan->pNode, groupId, pSource); } diff --git a/source/libs/scalar/src/filter.c b/source/libs/scalar/src/filter.c index 04328fda9ca22532045f9e8dabbab07e0bcec2af..2bdc8f138473e4d967d10490d916db920131173d 100644 --- a/source/libs/scalar/src/filter.c +++ b/source/libs/scalar/src/filter.c @@ -3246,6 +3246,10 @@ _return: } bool filterRangeExecute(SFilterInfo *info, SColumnDataAgg *pDataStatis, int32_t numOfCols, int32_t numOfRows) { + if (info->scalarMode) { + return true; + } + if (FILTER_EMPTY_RES(info)) { return false; } diff --git a/source/libs/stream/src/stream.c b/source/libs/stream/src/stream.c index 82da396b30a8941bc8329f2d147b6dd61cb8b7bd..30f0919ceeaedc52912189ec7150cb9c48d0000e 100644 --- a/source/libs/stream/src/stream.c +++ b/source/libs/stream/src/stream.c @@ -65,7 +65,7 @@ void streamSchedByTimer(void* param, void* tmrId) { } trigger->pBlock->info.type = STREAM_GET_ALL; - atomic_store_8(&pTask->triggerStatus, TASK_TRIGGER_STATUS__IN_ACTIVE); + atomic_store_8(&pTask->triggerStatus, TASK_TRIGGER_STATUS__INACTIVE); streamTaskInput(pTask, (SStreamQueueItem*)trigger); streamSchedExec(pTask); @@ -77,7 +77,7 @@ void streamSchedByTimer(void* param, void* tmrId) { int32_t streamSetupTrigger(SStreamTask* pTask) { if (pTask->triggerParam != 0) { pTask->timer = taosTmrStart(streamSchedByTimer, (int32_t)pTask->triggerParam, pTask, streamEnv.timer); - pTask->triggerStatus = TASK_TRIGGER_STATUS__IN_ACTIVE; + pTask->triggerStatus = TASK_TRIGGER_STATUS__INACTIVE; } return 0; } @@ -186,7 +186,7 @@ int32_t streamProcessDispatchReq(SStreamTask* pTask, SStreamDispatchReq* pReq, S if (exec) { streamTryExec(pTask); - if (pTask->dispatchType != TASK_DISPATCH__NONE) { + if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH || 
pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { streamDispatch(pTask); } } else { @@ -201,7 +201,7 @@ int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp) { qDebug("task %d receive dispatch rsp", pTask->taskId); - if (pTask->dispatchType == TASK_DISPATCH__SHUFFLE) { + if (pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { int32_t leftRsp = atomic_sub_fetch_32(&pTask->shuffleDispatcher.waitingRspCnt, 1); qDebug("task %d is shuffle, left waiting rsp %d", pTask->taskId, leftRsp); if (leftRsp > 0) return 0; @@ -222,7 +222,7 @@ int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp) { int32_t streamProcessRunReq(SStreamTask* pTask) { streamTryExec(pTask); - if (pTask->dispatchType != TASK_DISPATCH__NONE) { + if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH || pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { streamDispatch(pTask); } return 0; @@ -250,7 +250,7 @@ int32_t streamProcessRecoverRsp(SStreamTask* pTask, SStreamTaskRecoverRsp* pRsp) streamProcessRunReq(pTask); - if (pTask->isDataScan) { + if (pTask->taskLevel == TASK_LEVEL__SOURCE) { // scan data to recover pTask->inputStatus = TASK_INPUT_STATUS__RECOVER; pTask->taskStatus = TASK_STATUS__RECOVERING; @@ -272,12 +272,11 @@ int32_t streamProcessRetrieveReq(SStreamTask* pTask, SStreamRetrieveReq* pReq, S streamTaskEnqueueRetrieve(pTask, pReq, pRsp); - ASSERT(pTask->execType != TASK_EXEC__NONE); + ASSERT(pTask->taskLevel != TASK_LEVEL__SINK); streamSchedExec(pTask); /*streamTryExec(pTask);*/ - /*ASSERT(pTask->dispatchType != TASK_DISPATCH__NONE);*/ /*streamDispatch(pTask);*/ return 0; diff --git a/source/libs/stream/src/streamDispatch.c b/source/libs/stream/src/streamDispatch.c index ef7c10c8e1e2880c52fbf711b95ed461a1f8a7f2..66e689dd3ee7e751c116953c50276a42c8e31957 100644 --- a/source/libs/stream/src/streamDispatch.c +++ b/source/libs/stream/src/streamDispatch.c @@ -242,7 +242,7 @@ int32_t streamDispatchAllBlocks(SStreamTask* pTask, const SStreamDataBlock* pDat int32_t blockNum = taosArrayGetSize(pData->blocks); ASSERT(blockNum != 0); - if (pTask->dispatchType == TASK_DISPATCH__FIXED) { + if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH) { SStreamDispatchReq req = { .streamId = pTask->streamId, .dataSrcVgId = pData->srcVgId, @@ -282,7 +282,7 @@ int32_t streamDispatchAllBlocks(SStreamTask* pTask, const SStreamDataBlock* pDat taosArrayDestroy(req.dataLen); return code; - } else if (pTask->dispatchType == TASK_DISPATCH__SHUFFLE) { + } else if (pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { int32_t rspCnt = atomic_load_32(&pTask->shuffleDispatcher.waitingRspCnt); ASSERT(rspCnt == 0); @@ -393,11 +393,11 @@ int32_t streamBuildDispatchMsg(SStreamTask* pTask, const SStreamDataBlock* data, int32_t vgId = 0; int32_t downstreamTaskId = 0; // find ep - if (pTask->dispatchType == TASK_DISPATCH__FIXED) { + if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH) { vgId = pTask->fixedEpDispatcher.nodeId; *ppEpSet = &pTask->fixedEpDispatcher.epSet; downstreamTaskId = pTask->fixedEpDispatcher.taskId; - } else if (pTask->dispatchType == TASK_DISPATCH__SHUFFLE) { + } else if (pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { // TODO get ctbName for each block SSDataBlock* pBlock = taosArrayGet(data->blocks, 0); char* ctbName = buildCtbNameByGroupId(pTask->shuffleDispatcher.stbFullName, pBlock->info.groupId); @@ -439,8 +439,7 @@ FAIL: } int32_t streamDispatch(SStreamTask* pTask) { - ASSERT(pTask->dispatchType != TASK_DISPATCH__NONE); - ASSERT(pTask->sinkType == TASK_SINK__NONE); + 
ASSERT(pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH || pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH); int8_t old = atomic_val_compare_exchange_8(&pTask->outputStatus, TASK_OUTPUT_STATUS__NORMAL, TASK_OUTPUT_STATUS__WAIT); diff --git a/source/libs/stream/src/streamExec.c b/source/libs/stream/src/streamExec.c index 79c35f2889711dc27b485ab04b3dedebad21b576..e662c18a15e269aa31131841c10b1328e55a2e98 100644 --- a/source/libs/stream/src/streamExec.c +++ b/source/libs/stream/src/streamExec.c @@ -24,7 +24,7 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes) SStreamTrigger* pTrigger = (SStreamTrigger*)data; qSetMultiStreamInput(exec, pTrigger->pBlock, 1, STREAM_INPUT__DATA_BLOCK); } else if (pItem->type == STREAM_INPUT__DATA_SUBMIT) { - ASSERT(pTask->isDataScan); + ASSERT(pTask->taskLevel == TASK_LEVEL__SOURCE); SStreamDataSubmit* pSubmit = (SStreamDataSubmit*)data; qDebug("task %d %p set submit input %p %p %d 1", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef); qSetMultiStreamInput(exec, pSubmit->data, 1, STREAM_INPUT__DATA_SUBMIT); @@ -92,7 +92,7 @@ static FORCE_INLINE int32_t streamUpdateVer(SStreamTask* pTask, SStreamDataBlock } int32_t streamPipelineExec(SStreamTask* pTask, int32_t batchNum) { - ASSERT(pTask->execType != TASK_EXEC__NONE); + ASSERT(pTask->taskLevel != TASK_LEVEL__SINK); void* exec = pTask->exec.executor; @@ -139,8 +139,7 @@ int32_t streamPipelineExec(SStreamTask* pTask, int32_t batchNum) { return -1; } - if (pTask->dispatchType != TASK_DISPATCH__NONE) { - ASSERT(pTask->sinkType == TASK_SINK__NONE); + if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH || pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { streamDispatch(pTask); } } @@ -161,7 +160,7 @@ int32_t streamExecForAll(SStreamTask* pTask) { if (data == NULL) { data = qItem; streamQueueProcessSuccess(pTask->inputQueue); - if (pTask->execType == TASK_EXEC__NONE) { + if (pTask->taskLevel == TASK_LEVEL__SINK) { break; } } else { @@ -187,7 +186,7 @@ int32_t streamExecForAll(SStreamTask* pTask) { break; } - if (pTask->execType == TASK_EXEC__NONE) { + if (pTask->taskLevel == TASK_LEVEL__SINK) { ASSERT(((SStreamQueueItem*)data)->type == STREAM_INPUT__DATA_BLOCK); streamTaskOutput(pTask, data); continue; diff --git a/source/libs/stream/src/streamMeta.c b/source/libs/stream/src/streamMeta.c index be9dc81c3c72dc3437648bac8c48d653b17dff96..7dfeefb26127ae484cf0e776da289f2c44376105 100644 --- a/source/libs/stream/src/streamMeta.c +++ b/source/libs/stream/src/streamMeta.c @@ -52,15 +52,16 @@ SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskExpand expandF pMeta->ahandle = ahandle; pMeta->expandFunc = expandFunc; - + return pMeta; _err: - return NULL; } void streamMetaClose(SStreamMeta* pMeta) { - // - return; + tdbCommit(pMeta->db, &pMeta->txn); + tdbTbClose(pMeta->pTaskDb); + tdbTbClose(pMeta->pStateDb); + tdbClose(pMeta->db); } int32_t streamMetaAddTask(SStreamMeta* pMeta, SStreamTask* pTask) { @@ -123,13 +124,32 @@ int32_t streamMetaCommit(SStreamMeta* pMeta) { if (tdbCommit(pMeta->db, &pMeta->txn) < 0) { return -1; } + memset(&pMeta->txn, 0, sizeof(TXN)); + if (tdbTxnOpen(&pMeta->txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < + 0) { + return -1; + } + if (tdbBegin(pMeta->db, &pMeta->txn) < 0) { + return -1; + } return 0; } -int32_t streamMetaRollBack(SStreamMeta* pMeta) { - // TODO tdb rollback +int32_t streamMetaAbort(SStreamMeta* pMeta) { + if (tdbAbort(pMeta->db, &pMeta->txn) < 0) { + return -1; + } + 
memset(&pMeta->txn, 0, sizeof(TXN)); + if (tdbTxnOpen(&pMeta->txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < + 0) { + return -1; + } + if (tdbBegin(pMeta->db, &pMeta->txn) < 0) { + return -1; + } return 0; } + int32_t streamRestoreTask(SStreamMeta* pMeta) { TBC* pCur = NULL; if (tdbTbcOpen(pMeta->pTaskDb, &pCur, NULL) < 0) { @@ -153,6 +173,18 @@ int32_t streamRestoreTask(SStreamMeta* pMeta) { tDecoderInit(&decoder, (uint8_t*)pVal, vLen); tDecodeSStreamTask(&decoder, pTask); tDecoderClear(&decoder); + + if (pMeta->expandFunc(pMeta->ahandle, pTask) < 0) { + return -1; + } + + if (taosHashPut(pMeta->pTasks, &pTask->taskId, sizeof(int32_t), &pTask, sizeof(void*)) < 0) { + return -1; + } + } + + if (tdbTbcClose(pCur) < 0) { + return -1; } return 0; diff --git a/source/libs/stream/src/streamRecover.c b/source/libs/stream/src/streamRecover.c index dec23cd151fe687dafe44f53bb3653a4a1d2b75c..3530c05688086cba7f093e5974d9be83a16d98a8 100644 --- a/source/libs/stream/src/streamRecover.c +++ b/source/libs/stream/src/streamRecover.c @@ -88,14 +88,15 @@ int32_t tDecodeSMStreamTaskRecoverRsp(SDecoder* pDecoder, SMStreamTaskRecoverRsp } int32_t streamProcessFailRecoverReq(SStreamTask* pTask, SMStreamTaskRecoverReq* pReq, SRpcMsg* pRsp) { +#if 0 if (pTask->taskStatus != TASK_STATUS__FAIL) { return 0; } if (pTask->isStreamDistributed) { - if (pTask->isDataScan) { + if (pTask->taskType == TASK_TYPE__SOURCE) { pTask->taskStatus = TASK_STATUS__PREPARE_RECOVER; - } else if (pTask->execType != TASK_EXEC__NONE) { + } else if (pTask->taskType != TASK_TYPE__SINK) { pTask->taskStatus = TASK_STATUS__PREPARE_RECOVER; bool hasCheckpoint = false; int32_t childSz = taosArrayGetSize(pTask->childEpInfo); @@ -113,7 +114,7 @@ int32_t streamProcessFailRecoverReq(SStreamTask* pTask, SMStreamTaskRecoverReq* } } } else { - if (pTask->isDataScan) { + if (pTask->taskType == TASK_TYPE__SOURCE) { if (pTask->checkpointVer != -1) { // load from checkpoint } else { @@ -133,5 +134,6 @@ int32_t streamProcessFailRecoverReq(SStreamTask* pTask, SMStreamTaskRecoverReq* } } +#endif return 0; } diff --git a/source/libs/stream/src/streamTask.c b/source/libs/stream/src/streamTask.c index c4e946e1916a69512310980286e885998d8794f9..3a5498198960fd0ceff28a24297ec7d7c6456dd5 100644 --- a/source/libs/stream/src/streamTask.c +++ b/source/libs/stream/src/streamTask.c @@ -52,10 +52,8 @@ int32_t tEncodeSStreamTask(SEncoder* pEncoder, const SStreamTask* pTask) { /*if (tStartEncode(pEncoder) < 0) return -1;*/ if (tEncodeI64(pEncoder, pTask->streamId) < 0) return -1; if (tEncodeI32(pEncoder, pTask->taskId) < 0) return -1; - if (tEncodeI8(pEncoder, pTask->isDataScan) < 0) return -1; - if (tEncodeI8(pEncoder, pTask->execType) < 0) return -1; - if (tEncodeI8(pEncoder, pTask->sinkType) < 0) return -1; - if (tEncodeI8(pEncoder, pTask->dispatchType) < 0) return -1; + if (tEncodeI8(pEncoder, pTask->taskLevel) < 0) return -1; + if (tEncodeI8(pEncoder, pTask->outputType) < 0) return -1; if (tEncodeI16(pEncoder, pTask->dispatchMsgType) < 0) return -1; if (tEncodeI8(pEncoder, pTask->taskStatus) < 0) return -1; @@ -73,27 +71,23 @@ int32_t tEncodeSStreamTask(SEncoder* pEncoder, const SStreamTask* pTask) { if (tEncodeStreamEpInfo(pEncoder, pInfo) < 0) return -1; } - if (pTask->execType != TASK_EXEC__NONE) { + if (pTask->taskLevel != TASK_LEVEL__SINK) { if (tEncodeCStr(pEncoder, pTask->exec.qmsg) < 0) return -1; } - if (pTask->sinkType == TASK_SINK__TABLE) { + if (pTask->outputType == TASK_OUTPUT__TABLE) { if (tEncodeI64(pEncoder, 
pTask->tbSink.stbUid) < 0) return -1; if (tEncodeCStr(pEncoder, pTask->tbSink.stbFullName) < 0) return -1; if (tEncodeSSchemaWrapper(pEncoder, pTask->tbSink.pSchemaWrapper) < 0) return -1; - } else if (pTask->sinkType == TASK_SINK__SMA) { + } else if (pTask->outputType == TASK_OUTPUT__SMA) { if (tEncodeI64(pEncoder, pTask->smaSink.smaId) < 0) return -1; - } else if (pTask->sinkType == TASK_SINK__FETCH) { + } else if (pTask->outputType == TASK_OUTPUT__FETCH) { if (tEncodeI8(pEncoder, pTask->fetchSink.reserved) < 0) return -1; - } else { - ASSERT(pTask->sinkType == TASK_SINK__NONE); - } - - if (pTask->dispatchType == TASK_DISPATCH__FIXED) { + } else if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH) { if (tEncodeI32(pEncoder, pTask->fixedEpDispatcher.taskId) < 0) return -1; if (tEncodeI32(pEncoder, pTask->fixedEpDispatcher.nodeId) < 0) return -1; if (tEncodeSEpSet(pEncoder, &pTask->fixedEpDispatcher.epSet) < 0) return -1; - } else if (pTask->dispatchType == TASK_DISPATCH__SHUFFLE) { + } else if (pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { if (tSerializeSUseDbRspImp(pEncoder, &pTask->shuffleDispatcher.dbInfo) < 0) return -1; if (tEncodeCStr(pEncoder, pTask->shuffleDispatcher.stbFullName) < 0) return -1; } @@ -107,10 +101,8 @@ int32_t tDecodeSStreamTask(SDecoder* pDecoder, SStreamTask* pTask) { /*if (tStartDecode(pDecoder) < 0) return -1;*/ if (tDecodeI64(pDecoder, &pTask->streamId) < 0) return -1; if (tDecodeI32(pDecoder, &pTask->taskId) < 0) return -1; - if (tDecodeI8(pDecoder, &pTask->isDataScan) < 0) return -1; - if (tDecodeI8(pDecoder, &pTask->execType) < 0) return -1; - if (tDecodeI8(pDecoder, &pTask->sinkType) < 0) return -1; - if (tDecodeI8(pDecoder, &pTask->dispatchType) < 0) return -1; + if (tDecodeI8(pDecoder, &pTask->taskLevel) < 0) return -1; + if (tDecodeI8(pDecoder, &pTask->outputType) < 0) return -1; if (tDecodeI16(pDecoder, &pTask->dispatchMsgType) < 0) return -1; if (tDecodeI8(pDecoder, &pTask->taskStatus) < 0) return -1; @@ -131,29 +123,25 @@ int32_t tDecodeSStreamTask(SDecoder* pDecoder, SStreamTask* pTask) { taosArrayPush(pTask->childEpInfo, &pInfo); } - if (pTask->execType != TASK_EXEC__NONE) { + if (pTask->taskLevel != TASK_LEVEL__SINK) { if (tDecodeCStrAlloc(pDecoder, &pTask->exec.qmsg) < 0) return -1; } - if (pTask->sinkType == TASK_SINK__TABLE) { + if (pTask->outputType == TASK_OUTPUT__TABLE) { if (tDecodeI64(pDecoder, &pTask->tbSink.stbUid) < 0) return -1; if (tDecodeCStrTo(pDecoder, pTask->tbSink.stbFullName) < 0) return -1; pTask->tbSink.pSchemaWrapper = taosMemoryCalloc(1, sizeof(SSchemaWrapper)); if (pTask->tbSink.pSchemaWrapper == NULL) return -1; if (tDecodeSSchemaWrapper(pDecoder, pTask->tbSink.pSchemaWrapper) < 0) return -1; - } else if (pTask->sinkType == TASK_SINK__SMA) { + } else if (pTask->outputType == TASK_OUTPUT__SMA) { if (tDecodeI64(pDecoder, &pTask->smaSink.smaId) < 0) return -1; - } else if (pTask->sinkType == TASK_SINK__FETCH) { + } else if (pTask->outputType == TASK_OUTPUT__FETCH) { if (tDecodeI8(pDecoder, &pTask->fetchSink.reserved) < 0) return -1; - } else { - ASSERT(pTask->sinkType == TASK_SINK__NONE); - } - - if (pTask->dispatchType == TASK_DISPATCH__FIXED) { + } else if (pTask->outputType == TASK_OUTPUT__FIXED_DISPATCH) { if (tDecodeI32(pDecoder, &pTask->fixedEpDispatcher.taskId) < 0) return -1; if (tDecodeI32(pDecoder, &pTask->fixedEpDispatcher.nodeId) < 0) return -1; if (tDecodeSEpSet(pDecoder, &pTask->fixedEpDispatcher.epSet) < 0) return -1; - } else if (pTask->dispatchType == TASK_DISPATCH__SHUFFLE) { + } else if 
(pTask->outputType == TASK_OUTPUT__SHUFFLE_DISPATCH) { if (tDeserializeSUseDbRspImp(pDecoder, &pTask->shuffleDispatcher.dbInfo) < 0) return -1; if (tDecodeCStrTo(pDecoder, pTask->shuffleDispatcher.stbFullName) < 0) return -1; } diff --git a/source/libs/sync/inc/syncInt.h b/source/libs/sync/inc/syncInt.h index bc3275a9714f5db4f7866b18153953a1881bf8cb..82399f52b93b7d87e5adfdda116c1d60e6535239 100644 --- a/source/libs/sync/inc/syncInt.h +++ b/source/libs/sync/inc/syncInt.h @@ -212,6 +212,7 @@ void syncNodeRelease(SSyncNode* pNode); // raft state change -------------- void syncNodeUpdateTerm(SSyncNode* pSyncNode, SyncTerm term); +void syncNodeUpdateTermWithoutStepDown(SSyncNode* pSyncNode, SyncTerm term); void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr); void syncNodeBecomeLeader(SSyncNode* pSyncNode, const char* debugStr); diff --git a/source/libs/sync/src/syncAppendEntries.c b/source/libs/sync/src/syncAppendEntries.c index a7bc4df281719f7ee50d672518829d18a9abf65c..f31f3dd1aea037b36846cf0fda55a564220e12fb 100644 --- a/source/libs/sync/src/syncAppendEntries.c +++ b/source/libs/sync/src/syncAppendEntries.c @@ -717,24 +717,15 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc // maybe update commit index, leader notice me if (pMsg->commitIndex > ths->commitIndex) { - // has commit entry in local - if (pMsg->commitIndex <= ths->pLogStore->syncLogLastIndex(ths->pLogStore)) { - // advance commit index to sanpshot first - SSnapshot snapshot; - ths->pFsm->FpGetSnapshotInfo(ths->pFsm, &snapshot); - if (snapshot.lastApplyIndex >= 0 && snapshot.lastApplyIndex > ths->commitIndex) { - SyncIndex commitBegin = ths->commitIndex; - SyncIndex commitEnd = snapshot.lastApplyIndex; - ths->commitIndex = snapshot.lastApplyIndex; + SyncIndex lastIndex = ths->pLogStore->syncLogLastIndex(ths->pLogStore); - char eventLog[128]; - snprintf(eventLog, sizeof(eventLog), "commit by snapshot from index:%" PRId64 " to index:%" PRId64, - commitBegin, commitEnd); - syncNodeEventLog(ths, eventLog); - } + SyncIndex beginIndex = 0; + SyncIndex endIndex = -1; - SyncIndex beginIndex = ths->commitIndex + 1; - SyncIndex endIndex = pMsg->commitIndex; + // has commit entry in local + if (pMsg->commitIndex <= lastIndex) { + beginIndex = ths->commitIndex + 1; + endIndex = pMsg->commitIndex; // update commit index ths->commitIndex = pMsg->commitIndex; @@ -743,10 +734,22 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc code = ths->pLogStore->updateCommitIndex(ths->pLogStore, ths->commitIndex); ASSERT(code == 0); - code = syncNodeCommit(ths, beginIndex, endIndex, ths->state); + } else if (pMsg->commitIndex > lastIndex && ths->commitIndex < lastIndex) { + beginIndex = ths->commitIndex + 1; + endIndex = lastIndex; + + // update commit index, speed up + ths->commitIndex = lastIndex; + + // call back Wal + code = ths->pLogStore->updateCommitIndex(ths->pLogStore, ths->commitIndex); ASSERT(code == 0); } + + code = syncNodeCommit(ths, beginIndex, endIndex, ths->state); + ASSERT(code == 0); } + return 0; } } while (0); diff --git a/source/libs/sync/src/syncCommit.c b/source/libs/sync/src/syncCommit.c index c18c2cc0d596082b256a2a0ee5a670d347068cf6..a603cfff2762bbd4fea088c9a4ad120d0471fce0 100644 --- a/source/libs/sync/src/syncCommit.c +++ b/source/libs/sync/src/syncCommit.c @@ -73,7 +73,7 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) { ASSERT(pEntry != NULL); // cannot commit, even if quorum agree. need check term! 
- if (pEntry->term == pSyncNode->pRaftStore->currentTerm) { + if (pEntry->term <= pSyncNode->pRaftStore->currentTerm) { // update commit index newCommitIndex = index; diff --git a/source/libs/sync/src/syncMain.c b/source/libs/sync/src/syncMain.c index 37d0dff095c0b9be8138ef2b9700190263ba88fd..78004c0ad601b403ce1046ae41ac6ff995c6e7f9 100644 --- a/source/libs/sync/src/syncMain.c +++ b/source/libs/sync/src/syncMain.c @@ -1270,6 +1270,8 @@ int32_t syncNodeStopElectTimer(SSyncNode* pSyncNode) { atomic_add_fetch_64(&pSyncNode->electTimerLogicClockUser, 1); taosTmrStop(pSyncNode->pElectTimer); pSyncNode->pElectTimer = NULL; + + sTrace("vgId:%d, sync %s stop elect timer", pSyncNode->vgId, syncUtilState2String(pSyncNode->state)); return ret; } @@ -1343,7 +1345,8 @@ int32_t syncNodeStopHeartbeatTimer(SSyncNode* pSyncNode) { atomic_add_fetch_64(&pSyncNode->heartbeatTimerLogicClockUser, 1); taosTmrStop(pSyncNode->pHeartbeatTimer); pSyncNode->pHeartbeatTimer = NULL; - sTrace("vgId:%d, stop heartbeat timer", pSyncNode->vgId); + + sTrace("vgId:%d, sync %s stop heartbeat timer", pSyncNode->vgId, syncUtilState2String(pSyncNode->state)); return ret; } @@ -1562,7 +1565,7 @@ char* syncNode2Str(const SSyncNode* pSyncNode) { return serialized; } -void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) { +inline void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) { int32_t userStrLen = strlen(str); SSnapshot snapshot = {.data = NULL, .lastApplyIndex = -1, .lastApplyTerm = 0}; @@ -1634,7 +1637,7 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) { taosMemoryFree(pCfgStr); } -void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) { +inline void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) { int32_t userStrLen = strlen(str); SSnapshot snapshot = {.data = NULL, .lastApplyIndex = -1, .lastApplyTerm = 0}; @@ -1701,7 +1704,7 @@ void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) { taosMemoryFree(pCfgStr); } -char* syncNode2SimpleStr(const SSyncNode* pSyncNode) { +inline char* syncNode2SimpleStr(const SSyncNode* pSyncNode) { int len = 256; char* s = (char*)taosMemoryMalloc(len); @@ -1724,7 +1727,7 @@ char* syncNode2SimpleStr(const SSyncNode* pSyncNode) { return s; } -bool syncNodeInConfig(SSyncNode* pSyncNode, const SSyncCfg* config) { +inline bool syncNodeInConfig(SSyncNode* pSyncNode, const SSyncCfg* config) { bool b1 = false; bool b2 = false; @@ -1987,6 +1990,12 @@ void syncNodeUpdateTerm(SSyncNode* pSyncNode, SyncTerm term) { } } +void syncNodeUpdateTermWithoutStepDown(SSyncNode* pSyncNode, SyncTerm term) { + if (term > pSyncNode->pRaftStore->currentTerm) { + raftStoreSetTerm(pSyncNode->pRaftStore, term); + } +} + void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr) { // maybe clear leader cache if (pSyncNode->state == TAOS_SYNC_STATE_LEADER) { @@ -2614,7 +2623,7 @@ int32_t syncNodeOnClientRequestBatchCb(SSyncNode* ths, SyncClientRequestBatch* p // fsync once SSyncLogStoreData* pData = ths->pLogStore->data; SWal* pWal = pData->pWal; - walFsync(pWal, true); + walFsync(pWal, false); if (ths->replicaNum > 1) { // if multi replica, start replicate right now @@ -2797,11 +2806,28 @@ bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg) { } int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag) { + if (beginIndex > endIndex) { + return 0; + } + + // advance commit index to sanpshot first + SSnapshot snapshot = {0}; + ths->pFsm->FpGetSnapshotInfo(ths->pFsm, &snapshot); + if (snapshot.lastApplyIndex >= 
0 && snapshot.lastApplyIndex >= beginIndex) { + char eventLog[128]; + snprintf(eventLog, sizeof(eventLog), "commit by snapshot from index:%" PRId64 " to index:%" PRId64, beginIndex, + snapshot.lastApplyIndex); + syncNodeEventLog(ths, eventLog); + + // update begin index + beginIndex = snapshot.lastApplyIndex + 1; + } + int32_t code = 0; ESyncState state = flag; char eventLog[128]; - snprintf(eventLog, sizeof(eventLog), "commit wal from index:%" PRId64 " to index:%" PRId64, beginIndex, endIndex); + snprintf(eventLog, sizeof(eventLog), "commit by wal from index:%" PRId64 " to index:%" PRId64, beginIndex, endIndex); syncNodeEventLog(ths, eventLog); // execute fsm @@ -3040,7 +3066,7 @@ void syncLogRecvAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntries syncNodeEventLog(pSyncNode, logBuf); } -void syncLogSendAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s) { + void syncLogSendAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s) { char host[64]; uint16_t port; syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port); diff --git a/source/libs/sync/src/syncRaftLog.c b/source/libs/sync/src/syncRaftLog.c index bf440f04a06f7c26dfcae36a110a123c39c5bf49..b575e40d86884c9fdd688db03e4cc8492e9ea0d3 100644 --- a/source/libs/sync/src/syncRaftLog.c +++ b/source/libs/sync/src/syncRaftLog.c @@ -237,51 +237,6 @@ static int32_t raftLogAppendEntry(struct SSyncLogStore* pLogStore, SSyncRaftEntr return 0; } -#if 0 -static int32_t raftLogAppendEntry(struct SSyncLogStore* pLogStore, SSyncRaftEntry* pEntry) { - SSyncLogStoreData* pData = pLogStore->data; - SWal* pWal = pData->pWal; - - SyncIndex writeIndex = raftLogWriteIndex(pLogStore); - if (pEntry->index != writeIndex) { - sError("vgId:%d, wal write index error, entry-index:%" PRId64 " update to %" PRId64, pData->pSyncNode->vgId, - pEntry->index, writeIndex); - pEntry->index = writeIndex; - } - - int code = 0; - SWalSyncInfo syncMeta; - syncMeta.isWeek = pEntry->isWeak; - syncMeta.seqNum = pEntry->seqNum; - syncMeta.term = pEntry->term; - code = walWriteWithSyncInfo(pWal, pEntry->index, pEntry->originalRpcType, syncMeta, pEntry->data, pEntry->dataLen); - if (code != 0) { - int32_t err = terrno; - const char* errStr = tstrerror(err); - int32_t sysErr = errno; - const char* sysErrStr = strerror(errno); - - char logBuf[128]; - snprintf(logBuf, sizeof(logBuf), "wal write error, index:%" PRId64 ", err:%d %X, msg:%s, syserr:%d, sysmsg:%s", - pEntry->index, err, err, errStr, sysErr, sysErrStr); - syncNodeErrorLog(pData->pSyncNode, logBuf); - - ASSERT(0); - } - - // walFsync(pWal, true); - - do { - char eventLog[128]; - snprintf(eventLog, sizeof(eventLog), "write index:%" PRId64 ", type:%s,%d, type2:%s,%d", pEntry->index, - TMSG_INFO(pEntry->msgType), pEntry->msgType, TMSG_INFO(pEntry->originalRpcType), pEntry->originalRpcType); - syncNodeEventLog(pData->pSyncNode, eventLog); - } while (0); - - return code; -} -#endif - // entry found, return 0 // entry not found, return -1, terrno = TSDB_CODE_WAL_LOG_NOT_EXIST // other error, return -1 @@ -400,45 +355,6 @@ static int32_t raftLogGetLastEntry(SSyncLogStore* pLogStore, SSyncRaftEntry** pp //------------------------------- // log[0 .. 
n] -#if 0 -int32_t logStoreAppendEntry(SSyncLogStore* pLogStore, SSyncRaftEntry* pEntry) { - SSyncLogStoreData* pData = pLogStore->data; - SWal* pWal = pData->pWal; - - SyncIndex lastIndex = logStoreLastIndex(pLogStore); - ASSERT(pEntry->index == lastIndex + 1); - - int code = 0; - SWalSyncInfo syncMeta; - syncMeta.isWeek = pEntry->isWeak; - syncMeta.seqNum = pEntry->seqNum; - syncMeta.term = pEntry->term; - code = walWriteWithSyncInfo(pWal, pEntry->index, pEntry->originalRpcType, syncMeta, pEntry->data, pEntry->dataLen); - if (code != 0) { - int32_t err = terrno; - const char* errStr = tstrerror(err); - int32_t sysErr = errno; - const char* sysErrStr = strerror(errno); - - char logBuf[128]; - snprintf(logBuf, sizeof(logBuf), "wal write error, index:%" PRId64 ", err:%d %X, msg:%s, syserr:%d, sysmsg:%s", - pEntry->index, err, err, errStr, sysErr, sysErrStr); - syncNodeErrorLog(pData->pSyncNode, logBuf); - - ASSERT(0); - } - - // walFsync(pWal, true); - - char eventLog[128]; - snprintf(eventLog, sizeof(eventLog), "old write index:%" PRId64 ", type:%s,%d, type2:%s,%d", pEntry->index, - TMSG_INFO(pEntry->msgType), pEntry->msgType, TMSG_INFO(pEntry->originalRpcType), pEntry->originalRpcType); - syncNodeEventLog(pData->pSyncNode, eventLog); - - return code; -} -#endif - int32_t logStoreAppendEntry(SSyncLogStore* pLogStore, SSyncRaftEntry* pEntry) { SSyncLogStoreData* pData = pLogStore->data; SWal* pWal = pData->pWal; diff --git a/source/libs/sync/src/syncReplication.c b/source/libs/sync/src/syncReplication.c index 4f2dbcae6702b802bc93109fdc233785a241dc97..bc703e519cf9a0afbbb2788023a3cc8ea2338af1 100644 --- a/source/libs/sync/src/syncReplication.c +++ b/source/libs/sync/src/syncReplication.c @@ -140,9 +140,6 @@ int32_t syncNodeAppendEntriesPeersSnapshot2(SSyncNode* pSyncNode) { sError("vgId:%d, sync get pre term error, nextIndex:%" PRId64 ", update next-index:%" PRId64 ", match-index:%d, raftid:%" PRId64, pSyncNode->vgId, nextIndex, newNextIndex, SYNC_INDEX_INVALID, pDestId->addr); - - // syncNodeRestartNowHeartbeatTimer(pSyncNode); - syncNodeStartNowHeartbeatTimer(pSyncNode); return -1; } diff --git a/source/libs/sync/src/syncRequestVote.c b/source/libs/sync/src/syncRequestVote.c index bad32c5f911a5e0bf70aee4cbcda568590d5c36f..122a81930bec953f167030873ae2ed48bdafc555 100644 --- a/source/libs/sync/src/syncRequestVote.c +++ b/source/libs/sync/src/syncRequestVote.c @@ -51,15 +51,23 @@ int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg) { return -1; } + bool logOK = (pMsg->lastLogTerm > ths->pLogStore->getLastTerm(ths->pLogStore)) || + ((pMsg->lastLogTerm == ths->pLogStore->getLastTerm(ths->pLogStore)) && + (pMsg->lastLogIndex >= ths->pLogStore->getLastIndex(ths->pLogStore))); + // maybe update term if (pMsg->term > ths->pRaftStore->currentTerm) { syncNodeUpdateTerm(ths, pMsg->term); +#if 0 + if (logOK) { + syncNodeUpdateTerm(ths, pMsg->term); + } else { + syncNodeUpdateTermWithoutStepDown(ths, pMsg->term); + } +#endif } ASSERT(pMsg->term <= ths->pRaftStore->currentTerm); - bool logOK = (pMsg->lastLogTerm > ths->pLogStore->getLastTerm(ths->pLogStore)) || - ((pMsg->lastLogTerm == ths->pLogStore->getLastTerm(ths->pLogStore)) && - (pMsg->lastLogIndex >= ths->pLogStore->getLastIndex(ths->pLogStore))); bool grant = (pMsg->term == ths->pRaftStore->currentTerm) && logOK && ((!raftStoreHasVoted(ths->pRaftStore)) || (syncUtilSameId(&(ths->pRaftStore->voteFor), &(pMsg->srcId)))); if (grant) { @@ -94,48 +102,6 @@ int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg) { return 
ret; } -#if 0 -int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg) { - int32_t ret = 0; - - char logBuf[128] = {0}; - snprintf(logBuf, sizeof(logBuf), "==syncNodeOnRequestVoteCb== term:%" PRIu64, ths->pRaftStore->currentTerm); - syncRequestVoteLog2(logBuf, pMsg); - - if (pMsg->term > ths->pRaftStore->currentTerm) { - syncNodeUpdateTerm(ths, pMsg->term); - } - ASSERT(pMsg->term <= ths->pRaftStore->currentTerm); - - bool logOK = (pMsg->lastLogTerm > ths->pLogStore->getLastTerm(ths->pLogStore)) || - ((pMsg->lastLogTerm == ths->pLogStore->getLastTerm(ths->pLogStore)) && - (pMsg->lastLogIndex >= ths->pLogStore->getLastIndex(ths->pLogStore))); - bool grant = (pMsg->term == ths->pRaftStore->currentTerm) && logOK && - ((!raftStoreHasVoted(ths->pRaftStore)) || (syncUtilSameId(&(ths->pRaftStore->voteFor), &(pMsg->srcId)))); - if (grant) { - // maybe has already voted for pMsg->srcId - // vote again, no harm - raftStoreVote(ths->pRaftStore, &(pMsg->srcId)); - - // forbid elect for this round - syncNodeResetElectTimer(ths); - } - - SyncRequestVoteReply* pReply = syncRequestVoteReplyBuild(ths->vgId); - pReply->srcId = ths->myRaftId; - pReply->destId = pMsg->srcId; - pReply->term = ths->pRaftStore->currentTerm; - pReply->voteGranted = grant; - - SRpcMsg rpcMsg; - syncRequestVoteReply2RpcMsg(pReply, &rpcMsg); - syncNodeSendMsgById(&pReply->destId, ths, &rpcMsg); - syncRequestVoteReplyDestroy(pReply); - - return ret; -} -#endif - static bool syncNodeOnRequestVoteLogOK(SSyncNode* pSyncNode, SyncRequestVote* pMsg) { SyncTerm myLastTerm = syncNodeGetLastTerm(pSyncNode); SyncIndex myLastIndex = syncNodeGetLastIndex(pSyncNode); @@ -200,13 +166,21 @@ int32_t syncNodeOnRequestVoteSnapshotCb(SSyncNode* ths, SyncRequestVote* pMsg) { return -1; } + bool logOK = syncNodeOnRequestVoteLogOK(ths, pMsg); + // maybe update term if (pMsg->term > ths->pRaftStore->currentTerm) { syncNodeUpdateTerm(ths, pMsg->term); +#if 0 + if (logOK) { + syncNodeUpdateTerm(ths, pMsg->term); + } else { + syncNodeUpdateTermWithoutStepDown(ths, pMsg->term); + } +#endif } ASSERT(pMsg->term <= ths->pRaftStore->currentTerm); - bool logOK = syncNodeOnRequestVoteLogOK(ths, pMsg); bool grant = (pMsg->term == ths->pRaftStore->currentTerm) && logOK && ((!raftStoreHasVoted(ths->pRaftStore)) || (syncUtilSameId(&(ths->pRaftStore->voteFor), &(pMsg->srcId)))); if (grant) { diff --git a/source/libs/sync/src/syncRequestVoteReply.c b/source/libs/sync/src/syncRequestVoteReply.c index 566b80881f9786426f5aa62e6c44504f92db174e..55553d50485358a79adfadc625dd73ca0c05c251 100644 --- a/source/libs/sync/src/syncRequestVoteReply.c +++ b/source/libs/sync/src/syncRequestVoteReply.c @@ -93,65 +93,6 @@ int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg) return 0; } -#if 0 -int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg) { - int32_t ret = 0; - - char logBuf[128] = {0}; - snprintf(logBuf, sizeof(logBuf), "==syncNodeOnRequestVoteReplyCb== term:%" PRIu64, ths->pRaftStore->currentTerm); - syncRequestVoteReplyLog2(logBuf, pMsg); - - if (pMsg->term < ths->pRaftStore->currentTerm) { - sTrace("DropStaleResponse, receive term:%" PRIu64 ", current term:%" PRIu64 "", pMsg->term, - ths->pRaftStore->currentTerm); - return ret; - } - - // ASSERT(!(pMsg->term > ths->pRaftStore->currentTerm)); - // no need this code, because if I receive reply.term, then I must have sent for that term. 
- // if (pMsg->term > ths->pRaftStore->currentTerm) { - // syncNodeUpdateTerm(ths, pMsg->term); - // } - - if (pMsg->term > ths->pRaftStore->currentTerm) { - char logBuf[128] = {0}; - snprintf(logBuf, sizeof(logBuf), "syncNodeOnRequestVoteReplyCb error term, receive:%" PRIu64 " current:%" PRIu64, pMsg->term, - ths->pRaftStore->currentTerm); - syncNodePrint2(logBuf, ths); - sError("%s", logBuf); - return ret; - } - - ASSERT(pMsg->term == ths->pRaftStore->currentTerm); - - // This tallies votes even when the current state is not Candidate, - // but they won't be looked at, so it doesn't matter. - if (ths->state == TAOS_SYNC_STATE_CANDIDATE) { - votesRespondAdd(ths->pVotesRespond, pMsg); - if (pMsg->voteGranted) { - // add vote - voteGrantedVote(ths->pVotesGranted, pMsg); - - // maybe to leader - if (voteGrantedMajority(ths->pVotesGranted)) { - if (!ths->pVotesGranted->toLeader) { - syncNodeCandidate2Leader(ths); - - // prevent to leader again! - ths->pVotesGranted->toLeader = true; - } - } - } else { - ; - // do nothing - // UNCHANGED <> - } - } - - return ret; -} -#endif - int32_t syncNodeOnRequestVoteReplySnapshotCb(SSyncNode* ths, SyncRequestVoteReply* pMsg) { int32_t ret = 0; @@ -184,6 +125,14 @@ int32_t syncNodeOnRequestVoteReplySnapshotCb(SSyncNode* ths, SyncRequestVoteRepl // This tallies votes even when the current state is not Candidate, // but they won't be looked at, so it doesn't matter. if (ths->state == TAOS_SYNC_STATE_CANDIDATE) { + if (ths->pVotesRespond->term != pMsg->term) { + char logBuf[128]; + snprintf(logBuf, sizeof(logBuf), "vote respond error vote-respond-mgr term:%lu, msg term:%lu", + ths->pVotesRespond->term, pMsg->term); + syncNodeErrorLog(ths, logBuf); + return -1; + } + votesRespondAdd(ths->pVotesRespond, pMsg); if (pMsg->voteGranted) { // add vote diff --git a/source/libs/sync/test/sh/a.sh b/source/libs/sync/test/sh/a.sh index 751b42b9c22077d21cbc694392f1b0bab3a0f7d7..f4ffaa50621b0f7c0c859e03a8e1f32c7442b5c4 100644 --- a/source/libs/sync/test/sh/a.sh +++ b/source/libs/sync/test/sh/a.sh @@ -81,4 +81,16 @@ for file in `ls ${logpath}/log.dnode*vgId*`;do done +echo "" +echo "generate log.commit ..."
+tmpfile=${logpath}/log.commits.tmp +touch ${tmpfile} +for file in `ls ${logpath}/log.dnode*.vgId*.commit`;do + line=`cat ${file} | tail -n1` + echo $line | awk '{print $5, $0}' >> ${tmpfile} +done +cat ${tmpfile} | sort -k1 | awk 'BEGIN{vgid=$1}{if($1==vgid){print $0}else{print ""; print $0; vgid=$1;}}END{}' > ${logpath}/log.commits + exit 0 + + diff --git a/source/libs/sync/test/sh/insert.tpl.json b/source/libs/sync/test/sh/insert.tpl.json index 1d952b98e8dd855990d57fcf790a97db4281c1d5..631e490a2a0f6125e13182ca83dc2db2846f45f9 100644 --- a/source/libs/sync/test/sh/insert.tpl.json +++ b/source/libs/sync/test/sh/insert.tpl.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 8, - "thread_count_create_tbl": 8, + "create_table_thread_count": 8, "result_file": "./tpl_insert_result_tpl", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/cluster/TD-3693/insert1Data.json b/tests/pytest/cluster/TD-3693/insert1Data.json index 6900ce0366971a71a0e119f0b7cfc363f78cd656..ad83a3516042dab92164dc887dd4c7adadecc1b8 100644 --- a/tests/pytest/cluster/TD-3693/insert1Data.json +++ b/tests/pytest/cluster/TD-3693/insert1Data.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/cluster/TD-3693/insert2Data.json b/tests/pytest/cluster/TD-3693/insert2Data.json index e55fa996fb5099ba7d0702172671bb489ec28213..86495f0ce982bb3aab2321b56fa9ca611c405a93 100644 --- a/tests/pytest/cluster/TD-3693/insert2Data.json +++ b/tests/pytest/cluster/TD-3693/insert2Data.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/dockerCluster/insert.json b/tests/pytest/dockerCluster/insert.json index 32e1043c4e722c379d2256ed6bb7d7a11bd7a8da..ce8d7978fa7abfc1ea39ade8852e5eea7d1b254f 100644 --- a/tests/pytest/dockerCluster/insert.json +++ b/tests/pytest/dockerCluster/insert.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "databases": [{ diff --git a/tests/pytest/manualTest/TD-5114/insertDataDb3Replica2.json b/tests/pytest/manualTest/TD-5114/insertDataDb3Replica2.json index dc9de1626a4da72ad0dda91a3b42191ff27b165b..4b622c3f28ad41693739e55413f6d5c84a3f8cc6 100644 --- a/tests/pytest/manualTest/TD-5114/insertDataDb3Replica2.json +++ b/tests/pytest/manualTest/TD-5114/insertDataDb3Replica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/perfbenchmark/bug3433.py b/tests/pytest/perfbenchmark/bug3433.py index 7f2dfad40338e0fd710e908e8ccce940c128d4dc..3e7de39bed86c82c1c6143c82a7c8bb1cd1c5ccb 100644 --- a/tests/pytest/perfbenchmark/bug3433.py +++ b/tests/pytest/perfbenchmark/bug3433.py @@ -185,7 +185,7 @@ class TDTestCase: "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "/tmp/insert_res.txt", "confirm_parameter_prompt": 
"no", "insert_interval": 0, diff --git a/tests/pytest/perfbenchmark/joinPerformance.py b/tests/pytest/perfbenchmark/joinPerformance.py index b85c09926a5c93dae1edca645e02ae223569a933..d30bec6664167b9b52ad9499212770e28ff93ec3 100644 --- a/tests/pytest/perfbenchmark/joinPerformance.py +++ b/tests/pytest/perfbenchmark/joinPerformance.py @@ -168,7 +168,7 @@ class JoinPerf: "user": self.user, "password": self.password, "thread_count": cpu_count(), - "thread_count_create_tbl": cpu_count(), + "create_table_thread_count": cpu_count(), "result_file": "/tmp/insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/perfbenchmark/taosdemoInsert.py b/tests/pytest/perfbenchmark/taosdemoInsert.py index 774103aa853183b923edc0a4157f650a08d1eb76..a23797a62b87e9e045a08ef923969834ddee88f2 100644 --- a/tests/pytest/perfbenchmark/taosdemoInsert.py +++ b/tests/pytest/perfbenchmark/taosdemoInsert.py @@ -172,7 +172,7 @@ class Taosdemo: "user": self.user, "password": self.password, "thread_count": cpu_count(), - "thread_count_create_tbl": cpu_count(), + "create_table_thread_count": cpu_count(), "result_file": "/tmp/insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/query/nestedQuery/insertData.json b/tests/pytest/query/nestedQuery/insertData.json index 1aad170bb0d2f1a986d5ed7aac20b53f6456a794..18a843015c4f4fdad9cb748b15f0f04bf83517cd 100644 --- a/tests/pytest/query/nestedQuery/insertData.json +++ b/tests/pytest/query/nestedQuery/insertData.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/query/query1970YearsAf.py b/tests/pytest/query/query1970YearsAf.py index 6a5c0796ed1eb766519f4ff0f31d9b7c94f4a49a..e7e9fa5329f1489f51ddd4d20b6f8dede3940305 100644 --- a/tests/pytest/query/query1970YearsAf.py +++ b/tests/pytest/query/query1970YearsAf.py @@ -133,7 +133,7 @@ class TDTestCase: "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "/tmp/insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert-interlace.json b/tests/pytest/tools/insert-interlace.json index 0e17edf8fdc90379c93a08b861417c4fd5411d49..8d96c20fe7a86d0d07c248ea284334a9152899be 100644 --- a/tests/pytest/tools/insert-interlace.json +++ b/tests/pytest/tools/insert-interlace.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 5000, diff --git a/tests/pytest/tools/insert-tblimit-tboffset-createdb.json b/tests/pytest/tools/insert-tblimit-tboffset-createdb.json index bbac60872ef3e9341b69adeb0f6a4e67fb297ad8..e50e67943e9630789b54a148efb977b2c8269781 100644 --- a/tests/pytest/tools/insert-tblimit-tboffset-createdb.json +++ b/tests/pytest/tools/insert-tblimit-tboffset-createdb.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert-tblimit-tboffset-insertrec.json b/tests/pytest/tools/insert-tblimit-tboffset-insertrec.json index 
8f795338d25c05f21310bab7d020d436b4009e1a..fe4945483c0abea0d0546bc6e4482885250281b5 100644 --- a/tests/pytest/tools/insert-tblimit-tboffset-insertrec.json +++ b/tests/pytest/tools/insert-tblimit-tboffset-insertrec.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert-tblimit-tboffset.json b/tests/pytest/tools/insert-tblimit-tboffset.json index 2c2d86c4816e6cf6c9f3469e92b7b2a2f750ab66..92b28241a625d6c18435d5698998f72944a52da4 100644 --- a/tests/pytest/tools/insert-tblimit-tboffset.json +++ b/tests/pytest/tools/insert-tblimit-tboffset.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert-tblimit-tboffset0.json b/tests/pytest/tools/insert-tblimit-tboffset0.json index ce83ea3e606f80c38f247a44bccf61fc1394329b..0c1e00976b8c2a2878096ca0faebf8749b7a1e60 100644 --- a/tests/pytest/tools/insert-tblimit-tboffset0.json +++ b/tests/pytest/tools/insert-tblimit-tboffset0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert-tblimit1-tboffset.json b/tests/pytest/tools/insert-tblimit1-tboffset.json index b15aaf4eed2870468f43d49f0f6578c2d91dc528..ff002e9528f7c03c55f64b716665321c92235ee8 100644 --- a/tests/pytest/tools/insert-tblimit1-tboffset.json +++ b/tests/pytest/tools/insert-tblimit1-tboffset.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/insert.json b/tests/pytest/tools/insert.json index 523561dc6d22cec1152d0e698976b0f8a5cf66c5..4489730722d797ef59a9f1cb3f77f9a1109d1176 100644 --- a/tests/pytest/tools/insert.json +++ b/tests/pytest/tools/insert.json @@ -7,7 +7,7 @@ "password": "taosdata", "thread_count": 2, "num_of_records_per_req": 10, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "databases": [{ "dbinfo": { "name": "db01", diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json index a11261681a78b4edc85280c666d98db86f370d94..3c876c61c75b58708293f2068c4a804e37925566 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json index 080231551e306a458a4664adb7f9a68df63a1d52..b9162242d49b67a34d1edfea1a0d1914a4e355ce 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json +++ 
b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json index fe0ecbe2deed56e8ab2c90fc655ff92833215de7..3fbaeceeba129bd04446b33340e9c68670fe0fda 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json index 1af2952a6940bc78dcc589184f599f5a7d640f1d..6b0631da39c562c0fe78119cf27e39467ecf28c0 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json index 39c5e499096bd6082090f74f2c307629a18f56e2..bf9b0151544409c404285e069ff0c10523931512 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json index f4dbf1ee411377af6c3779d9e5cba6c3e233ed39..346fe31be929385d9f4618b290047321242665c0 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json index 84b511a44621d89b2f23f7fabe38fe0cac489ac6..65a2836a497c073d5814554b28124d5d687ca98f 100644 --- a/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json +++ b/tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - 
"thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/TD-3453/query-interrupt.json b/tests/pytest/tools/taosdemoAllTest/TD-3453/query-interrupt.json index 75dbcb443230f9528530962242aff1a3a4ac4789..b7b6c186e6e7db8a1ba38626004fd31c4c8ff869 100644 --- a/tests/pytest/tools/taosdemoAllTest/TD-3453/query-interrupt.json +++ b/tests/pytest/tools/taosdemoAllTest/TD-3453/query-interrupt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/TD-4985/query-limit-offset.json b/tests/pytest/tools/taosdemoAllTest/TD-4985/query-limit-offset.json index 0c2e9cf34ae9a7529d9430655c67594cb0202114..edb9ed7cb81604056c378d279183e7cc7a47e85e 100644 --- a/tests/pytest/tools/taosdemoAllTest/TD-4985/query-limit-offset.json +++ b/tests/pytest/tools/taosdemoAllTest/TD-4985/query-limit-offset.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.json b/tests/pytest/tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.json index e90474e872b050ccb33c4e40da76d86f14975b7a..b1d7dc49352c25aba0ece068af194a5f3b28ddba 100755 --- a/tests/pytest/tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.json +++ b/tests/pytest/tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-1s1tnt1r.json b/tests/pytest/tools/taosdemoAllTest/insert-1s1tnt1r.json index 21603b190272519373b5771616ad3679892653a5..c1c27cf6d770b3f20588405253e342e198c93bc4 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-1s1tnt1r.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-1s1tnt1r.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-1s1tntmr.json b/tests/pytest/tools/taosdemoAllTest/insert-1s1tntmr.json index c944c26915063c9e5169f8bb45442f87f47db423..360ec073703edea5a777bd48a150f65ee6fd97f4 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-1s1tntmr.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-1s1tntmr.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-disorder.json b/tests/pytest/tools/taosdemoAllTest/insert-disorder.json index 4908d3999cad2037a6ce90b9ab85ddcf69df2ddd..930496a877fff73b14627de215212d0f4591b481 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-disorder.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-disorder.json @@ -6,7 
+6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json index 03f531f52b74605bd101b246a9ad0b4cb4dbb7ff..12dadf80063ac4a69eee82936186902745f32380 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json index ce2a34627b68780105bbc0a6c233c8d8365b8569..759a3f074dac9362a18f6510664a6b02ebf8f24e 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-illegal.json b/tests/pytest/tools/taosdemoAllTest/insert-illegal.json index 6e438b33df5af7321cd40b125cee553f98032b02..321495782d86d64e1867cffa74715eaee7d72240 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-illegal.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-illegal.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-interlace-row.json b/tests/pytest/tools/taosdemoAllTest/insert-interlace-row.json index 54e646a5a049d36d83b1e6e56856ff1dda6aaa46..5dd37ee8b05d9518435837478b5d7c2646740482 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-interlace-row.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-interlace-row.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-interval-speed.json b/tests/pytest/tools/taosdemoAllTest/insert-interval-speed.json index 9a47a873dddaebb4710827b3cb60840252d62f4c..7fbee6fee078ce225276946e3cc19357723ec3d0 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-interval-speed.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-interval-speed.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 100, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-newdb.json b/tests/pytest/tools/taosdemoAllTest/insert-newdb.json index 2eb17b1aab5cf26a1cbde8456000a19dd1bef926..16e1f944812bdf8dc292aff162edb3908c380559 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-newdb.json +++ 
b/tests/pytest/tools/taosdemoAllTest/insert-newdb.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-newtable.json b/tests/pytest/tools/taosdemoAllTest/insert-newtable.json index abe277bf5b2bf3f60aebd96f315cc67fb0c9caeb..86c9359ffbb206517a0bb7a5937e0d6e7e716b90 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-newtable.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-newtable.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-nodbnodrop.json b/tests/pytest/tools/taosdemoAllTest/insert-nodbnodrop.json index 2dae7eb1d727632dca9cfaa6905d33c9fde39487..7eee9ce55bf93e6a57ba94d629167ee578ffdbfd 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-nodbnodrop.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-nodbnodrop.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-offset.json b/tests/pytest/tools/taosdemoAllTest/insert-offset.json index 642d01db3eb97a5611d5fe587d2e77929cb23e84..d3946cee3ced7ab9c4588fb0d39acfaec6049e74 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-offset.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-offset.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-renewdb.json b/tests/pytest/tools/taosdemoAllTest/insert-renewdb.json index 3ef4360aefbca9cb3cae8c04dfe2162075430bd9..c812b4971edfda29b8be030dc893299ce2484600 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-renewdb.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-renewdb.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-sample.json b/tests/pytest/tools/taosdemoAllTest/insert-sample.json index 5b25281e78361f7c27bd94d024a22afcaf870a77..e24e20067c2cefc18844683434a745daa3377b40 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-sample.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-sample.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert-timestep.json b/tests/pytest/tools/taosdemoAllTest/insert-timestep.json index 6432fde4baf3d7c7810236bdf2f02e99906b6e02..ceadfc677ae03837716a503c2a8f92d98c493a89 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert-timestep.json +++ b/tests/pytest/tools/taosdemoAllTest/insert-timestep.json @@ -6,7 +6,7 @@ "user": "root", 
"password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertBinaryLenLarge16374AllcolLar49151.json b/tests/pytest/tools/taosdemoAllTest/insertBinaryLenLarge16374AllcolLar49151.json index 4e59d8667964a909cffe9dd7f4367d814e7a917a..69ebe45e50b3a987e5eade2ab3de5b84f0451835 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertBinaryLenLarge16374AllcolLar49151.json +++ b/tests/pytest/tools/taosdemoAllTest/insertBinaryLenLarge16374AllcolLar49151.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertChildTab0.json b/tests/pytest/tools/taosdemoAllTest/insertChildTab0.json index 80d6817b5d09851be7e31c864e968a5b729e063e..8b7086530e7aedaf73a9a62a6ff8f28669e7e0a1 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertChildTab0.json +++ b/tests/pytest/tools/taosdemoAllTest/insertChildTab0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertChildTabLess0.json b/tests/pytest/tools/taosdemoAllTest/insertChildTabLess0.json index a35c28f0acd00ed01b627d2d0619bc8183d97f06..1e052ff2a47ccfa2b5af66e47980b40ec87891ed 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertChildTabLess0.json +++ b/tests/pytest/tools/taosdemoAllTest/insertChildTabLess0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNum4096.json b/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNum4096.json index 05d47c3611dd698d86d078805fac0785bd544479..c67b1dba1428a173a54f3225b386107e118b23c0 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNum4096.json +++ b/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNum4096.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNumLarge4096.json b/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNumLarge4096.json index e63b3613ba6fa004f80b1eeefb39bb0011d51b27..25e43aefa736baef9be6a9867fd7493b9b3ba458 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNumLarge4096.json +++ b/tests/pytest/tools/taosdemoAllTest/insertColumnsAndTagNumLarge4096.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertColumnsNum0.json b/tests/pytest/tools/taosdemoAllTest/insertColumnsNum0.json index 137e6083864580be49a0d02c5798f16f8046834a..af04d9c1a3557bb2e211da4808a5db2880b13b44 100644 --- 
a/tests/pytest/tools/taosdemoAllTest/insertColumnsNum0.json +++ b/tests/pytest/tools/taosdemoAllTest/insertColumnsNum0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertInterlaceRowsLarge1M.json b/tests/pytest/tools/taosdemoAllTest/insertInterlaceRowsLarge1M.json index 63a4a2ab58a67363d0b69bbf7552c76fd5948699..84a5fe94526923290472c36edb318035aa60d767 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertInterlaceRowsLarge1M.json +++ b/tests/pytest/tools/taosdemoAllTest/insertInterlaceRowsLarge1M.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertMaxNumPerReq.json b/tests/pytest/tools/taosdemoAllTest/insertMaxNumPerReq.json index f3212bc30dcbdb2d8183e1c6050fe3b23ee92748..d092a41483b21d97be50fec553d4e242f0599bfe 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertMaxNumPerReq.json +++ b/tests/pytest/tools/taosdemoAllTest/insertMaxNumPerReq.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReq0.json b/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReq0.json index 9711ead80ee17cb5f5b54c3439914262176c5633..45523618f0c25f044b9e7d1485b148896e20a861 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReq0.json +++ b/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReq0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReqless0.json b/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReqless0.json index 24c61cfa8cfbc810c573e3468730d33e2132eee7..a95c40f9eb6a06f9ca050d7eeed91153ab96535a 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReqless0.json +++ b/tests/pytest/tools/taosdemoAllTest/insertNumOfrecordPerReqless0.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertRestful.json b/tests/pytest/tools/taosdemoAllTest/insertRestful.json index ab7ee9a73b3414937f0843215d1d122448e1eedb..26770c3d09497c7e7810b519a2affa7d7c97c3c6 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertRestful.json +++ b/tests/pytest/tools/taosdemoAllTest/insertRestful.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertSigcolumnsNum4096.json b/tests/pytest/tools/taosdemoAllTest/insertSigcolumnsNum4096.json index 
d835822e8f81dd371558de1002ed68487ad0d5e7..74737b4dec837be2b2bb25b7189fa80b32684dc9 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertSigcolumnsNum4096.json +++ b/tests/pytest/tools/taosdemoAllTest/insertSigcolumnsNum4096.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertTagsNumLarge128.json b/tests/pytest/tools/taosdemoAllTest/insertTagsNumLarge128.json index 4c7cdfe39d0ed2dd15abcae7ac6bf75b371e13bf..e0e9f72a5631ec73f49146d13cc40d380b972c40 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertTagsNumLarge128.json +++ b/tests/pytest/tools/taosdemoAllTest/insertTagsNumLarge128.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insertTimestepMulRowsLargeint16.json b/tests/pytest/tools/taosdemoAllTest/insertTimestepMulRowsLargeint16.json index b563dcc94b3c69256f4b2a754e9244cef7874944..fdc1994782b9aba752835afedb50323c0be4508d 100644 --- a/tests/pytest/tools/taosdemoAllTest/insertTimestepMulRowsLargeint16.json +++ b/tests/pytest/tools/taosdemoAllTest/insertTimestepMulRowsLargeint16.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/insert_5M_rows.json b/tests/pytest/tools/taosdemoAllTest/insert_5M_rows.json index 0f1a874cc364736a68962c1d293fc8cdc78cd8c8..91d6c1a83710ed4984fff1195dd1fb76eb0e51f3 100644 --- a/tests/pytest/tools/taosdemoAllTest/insert_5M_rows.json +++ b/tests/pytest/tools/taosdemoAllTest/insert_5M_rows.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/manual_block1_comp.json b/tests/pytest/tools/taosdemoAllTest/manual_block1_comp.json index bdab459987a587554c001c239c570afd3e7f8636..45a718705a535b09ad64e1fff33542abb38c6d4e 100644 --- a/tests/pytest/tools/taosdemoAllTest/manual_block1_comp.json +++ b/tests/pytest/tools/taosdemoAllTest/manual_block1_comp.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/manual_block2.json b/tests/pytest/tools/taosdemoAllTest/manual_block2.json index 763421c7f3bdb47509c354818e02b9a2b20ce5bd..f01e55fb5341e5389672277c976e8a53f9a4b73e 100644 --- a/tests/pytest/tools/taosdemoAllTest/manual_block2.json +++ b/tests/pytest/tools/taosdemoAllTest/manual_block2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_A.json 
b/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_A.json index 0579aedf69a74ad111b8f92808f7046bd0de24c8..f097f15ee13bfa1c23e29870c8bbce45e878417e 100644 --- a/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_A.json +++ b/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_A.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_B.json b/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_B.json index d541cb656778fc59fc1f3746fadcca0ced456e0a..2df1fc42aad4ee6537f873c4ae87748bdd488112 100644 --- a/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_B.json +++ b/tests/pytest/tools/taosdemoAllTest/manual_change_time_1_1_B.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit1.json b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit1.json index c134391a5f759e32a8e9752deab7205e8cb1aa49..be1df2030fa136084e49468bb8f7048ce2753d89 100644 --- a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit1.json +++ b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit1.json @@ -7,7 +7,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit5.json b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit5.json index e9f759f8f7167c749bc3617545ee8c926248bf71..a8552404d51b4f8261af092e6ec0a022f7d5b6d6 100644 --- a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit5.json +++ b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit5.json @@ -7,7 +7,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit94.json b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit94.json index 9b46ff105b3217ee54ee6c0684136c7033995a05..316fbba4a019337c84a9b7eb649ba294e22e6e1b 100644 --- a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit94.json +++ b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-limit94.json @@ -7,7 +7,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-newdb.json b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-newdb.json index fdcaa131e6742f767a04ac52b7f9853b5757dcfb..d03b29d90fbd002288098a4cdb509421eb5003e0 100644 --- a/tests/pytest/tools/taosdemoAllTest/moredemo-offset-newdb.json +++ b/tests/pytest/tools/taosdemoAllTest/moredemo-offset-newdb.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": 
"no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/query-interrupt.json b/tests/pytest/tools/taosdemoAllTest/query-interrupt.json index 01028f68ad9a6f3aa870d0c1b1e38562e896abe4..1b276cb2b0afd9f0ed7d04f0a2609a4d17df3706 100644 --- a/tests/pytest/tools/taosdemoAllTest/query-interrupt.json +++ b/tests/pytest/tools/taosdemoAllTest/query-interrupt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/queryInsertdata.json b/tests/pytest/tools/taosdemoAllTest/queryInsertdata.json index 0fc789c7e30f1d0f74d4e10df635df738b4411be..8565e4a7111b654cefc0424af309e32e8f74b024 100644 --- a/tests/pytest/tools/taosdemoAllTest/queryInsertdata.json +++ b/tests/pytest/tools/taosdemoAllTest/queryInsertdata.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/queryInsertrestdata.json b/tests/pytest/tools/taosdemoAllTest/queryInsertrestdata.json index 940adfb61c6fc294f7b286514c2808269e8c9e66..0f9be9bdc3d3587178096a8f2eeaca01aaacf594 100644 --- a/tests/pytest/tools/taosdemoAllTest/queryInsertrestdata.json +++ b/tests/pytest/tools/taosdemoAllTest/queryInsertrestdata.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-stmt.json index a4baf73689e97f1494606b8ca243d13af024245f..443da39fa127c2f182f3b2adee54a0cb53fe285e 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "1174.out", "confirm_parameter_prompt": "no", "num_of_records_per_req": 51, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-taosc.json b/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-taosc.json index a7a514e9dc46cf62ce24fa81b22bfe9d2c58e654..bd5709ca5e36e252fe7ef496cb652f5db6320332 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-taosc.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/1174-large-taosc.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "1174.out", "confirm_parameter_prompt": "no", "num_of_records_per_req": 51, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt-random.json b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt-random.json index 3c38f926808c0e08fbb3087aad139ec15997101a..209f414c1b2c045d9c7312edd0f547cb1fb1a24b 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt-random.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt-random.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "1174.out", 
"confirm_parameter_prompt": "no", "num_of_records_per_req": 51, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt.json index 2ee489c7a3cff7deaa41bb2b17ed54ce00bbc217..903c8a9c93f947f87030efea313d503f63eb99ad 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "1174.out", "confirm_parameter_prompt": "no", "num_of_records_per_req": 51, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-taosc.json b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-taosc.json index 44da22aa3f54abe403c38c9ec11dcdbe346abfb9..dcbec40034801e0665ccdc23a4318a78d4c37d9a 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-taosc.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/1174-small-taosc.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 1, + "create_table_thread_count": 1, "result_file": "1174.out", "confirm_parameter_prompt": "no", "num_of_records_per_req": 51, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tnt1r-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tnt1r-stmt.json index b2805a38e51d86e80838efb753c0f10c94b2c5b4..1ea4de5cfe7d921065c5115ecbca368e1f0484a6 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tnt1r-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tnt1r-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tntmr-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tntmr-stmt.json index ac540befb637b0105a4f718228db11dc3f51ca01..86f2fa6c4d89a20f86d95ea3cc3965813cbb2974 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tntmr-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-1s1tntmr-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-disorder-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-disorder-stmt.json index 9a7ad93636f6578d0adb7553c2d912f38614301d..d634ab83690242bde9beca7aa39b8705e9aa52cc 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-disorder-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-disorder-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-N00-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-N00-stmt.json index 919b91839530c0bd5db3338d73698eed19aefda7..4b69118ef52832ec7b5586c64b63ebb0732de389 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-N00-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-N00-stmt.json 
@@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-Y00-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-Y00-stmt.json index dcf52931ad40788edd1f7f16f3e7cdd190792b16..32043996b6d5de87862ee1ea9e31715edf873aeb 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-Y00-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-drop-exist-auto-Y00-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-interlace-row-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-interlace-row-stmt.json index d2304ed537d5c18e81c2d93803947396ecb2ed5a..a1a0b89e48265c1a1a45c35fe3bb508e14aab7e5 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-interlace-row-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-interlace-row-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-interval-speed-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-interval-speed-stmt.json index d297240613de0e51dcab3e0582fd041858010eda..f5cea2ccc30c97e5c338e5737a003feafdbe7535 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-interval-speed-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-interval-speed-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 100, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-newdb-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-newdb-stmt.json index d117c5b3450e31a8736eea97d36d9d172c74e314..c3bdea61c6aa74b2dfae20299f0c1c8424de4c7d 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-newdb-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-newdb-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-newtable-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-newtable-stmt.json index 1b36b3cbe9cc520a625645bf1e1e5b89a6be2a11..e92644d33eef6de46e38c359b4bb4467dfdb2f8f 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-newtable-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-newtable-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-nodbnodrop-stmt.json 
b/tests/pytest/tools/taosdemoAllTest/stmt/insert-nodbnodrop-stmt.json index ea95736a00fba7630f8479699397f455b51db45c..0618c04b30dd41d87c72f97042a80c7e3b05ebdc 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-nodbnodrop-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-nodbnodrop-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-offset-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-offset-stmt.json index 8318de6672bbcb8c705648f593baf647d3b3f571..356ac38d147edc0368ae07fa7c37825f50241e7e 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-offset-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-offset-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-renewdb-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-renewdb-stmt.json index b6cb47f2c5f086fd50794fab7b84188ee1162bcf..2f8f6931667092aa8d5e3ba0c90eb0ccbc3860e1 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-renewdb-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-renewdb-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-sample-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-sample-stmt.json index 348e93ff8b5b0a1666d22cc017f376a1da120702..c1da95ba8c830e797362043418f5d64d7f524a24 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-sample-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-sample-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insert-timestep-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insert-timestep-stmt.json index edbaae60a14aa8c289d9f3854f654f3da27f37da..9522f0e7b5388d4967303983c38055c5183f0f3a 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insert-timestep-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insert-timestep-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file":"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertBinaryLenLarge16374AllcolLar49151-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertBinaryLenLarge16374AllcolLar49151-stmt.json index 1c72b4f402d67070b9b25d6ff8c83923148e1c92..bcbda0a301cdcf59e65d7b53bd51eb919d1f2705 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertBinaryLenLarge16374AllcolLar49151-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertBinaryLenLarge16374AllcolLar49151-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 
4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTab0-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTab0-stmt.json index 4626babd9519bd702373dc321a801075df655903..2b30aa3e9eea7c7cb91bece37e2ce16e02a71c0a 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTab0-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTab0-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTabLess0-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTabLess0-stmt.json index f140883de168d77ad83253532fecfee81c9dd7c9..f3c577b30cf29dd733bc5ac920d68ce3f8d6ee55 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTabLess0-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertChildTabLess0-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsAndTagNum4096-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsAndTagNum4096-stmt.json index d1d2db2df388a63b7587932cfc0b980f67cce62f..a0ff8872500163c1479a230fdd357334fac96f87 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsAndTagNum4096-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsAndTagNum4096-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsNum0-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsNum0-stmt.json index d79d4cace533578d4cf2d55430bef55dd64485c8..5ff9ec63a2aec226cbb1fd7850c1549e33af1c5b 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsNum0-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertColumnsNum0-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertInterlaceRowsLarge1M-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertInterlaceRowsLarge1M-stmt.json index eb0ab0f04ac8f602d83eb5271ae7f5eab86f7d10..79ce66097ba595b5f900e50a41e9e1f1d53a8fa4 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertInterlaceRowsLarge1M-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertInterlaceRowsLarge1M-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertMaxNumPerReq-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertMaxNumPerReq-stmt.json index 
489632c645e732eb9c0fe2fa358947b1e6ba585e..4b21f0a184d37c18aba14537a9cadd439d6a56a7 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertMaxNumPerReq-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertMaxNumPerReq-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReq0-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReq0-stmt.json index 19eb92bf4c8541eca4d6d3306d5e5772998ca719..9fb85aef23e45bcfeaa4e526796abf7cd5ce1e83 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReq0-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReq0-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReqless0-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReqless0-stmt.json index dbda4f74a1d209c5a112028e21fdec27ff390a14..80944de3f576574cf1c80c085bb0c4dc1a1b9d58 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReqless0-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertNumOfrecordPerReqless0-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertSigcolumnsNum4096-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertSigcolumnsNum4096-stmt.json index 966c285d2f7fc197dd8af6a7b8ea9c0caf58aa45..834ffb56d37dca082b8bc152851bc1e3e847b2be 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertSigcolumnsNum4096-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertSigcolumnsNum4096-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertTagsNumLarge128-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertTagsNumLarge128-stmt.json index c1fc02553fe501e7c09769d947b3f21acc96555e..f39aa94830cfe47ffd364e5f2a1419f130ea2ec8 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertTagsNumLarge128-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertTagsNumLarge128-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/insertTimestepMulRowsLargeint16-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/insertTimestepMulRowsLargeint16-stmt.json index ed3eb280f6869bed76de72bdf50b646bca4a245a..6345227788af3948dd049d09aefde0f4207eea73 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/insertTimestepMulRowsLargeint16-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/insertTimestepMulRowsLargeint16-stmt.json @@ -6,7 
+6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/stmt/nsertColumnsAndTagNumLarge4096-stmt.json b/tests/pytest/tools/taosdemoAllTest/stmt/nsertColumnsAndTagNumLarge4096-stmt.json index 1d7ad8a90eb95a86f109214f516d9484b11a53da..75a365bbff9c515ee31d94c96df1d18bc286d988 100644 --- a/tests/pytest/tools/taosdemoAllTest/stmt/nsertColumnsAndTagNumLarge4096-stmt.json +++ b/tests/pytest/tools/taosdemoAllTest/stmt/nsertColumnsAndTagNumLarge4096-stmt.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/subInsertdata.json b/tests/pytest/tools/taosdemoAllTest/subInsertdata.json index 1ca302a320897f7fc04dbbef9aa8a2fea2808724..f5e7ac3018c59ce5d9e8702c788b3e5cd8605994 100644 --- a/tests/pytest/tools/taosdemoAllTest/subInsertdata.json +++ b/tests/pytest/tools/taosdemoAllTest/subInsertdata.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/subInsertdataMaxsql100.json b/tests/pytest/tools/taosdemoAllTest/subInsertdataMaxsql100.json index ef6354627880bf3fde91567e5de3ee518fccb995..896a72598d73b4abc82c0ee8251c8155d54352bd 100644 --- a/tests/pytest/tools/taosdemoAllTest/subInsertdataMaxsql100.json +++ b/tests/pytest/tools/taosdemoAllTest/subInsertdataMaxsql100.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertMSDB.json b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertMSDB.json index b6e5847b54897814fb9c6e7b1c7f9cb4ed8d29f3..8211a92a2d999b798bc625cb4cffc7718ecf1bb4 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertMSDB.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertMSDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertNanoDB.json b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertNanoDB.json index ed97fea33e106aff8d2821a10191bd360a629a6b..304ff99c26dcddcd7a98cdbcf7da3d4c458dc4ec 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertNanoDB.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertNanoDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertUSDB.json b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertUSDB.json index db34bfc6b8a617b1a57ed687562bb09ade6c24c8..444e6564bea134286422c3d612f2719186ad324e 100644 --- 
a/tests/pytest/tools/taosdemoAllTest/taosdemoInsertUSDB.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoInsertUSDB.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabase.json b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabase.json index d029ddea219aca3ce79a19035e6ae1bead016795..67003a1fb5c1dae0810497bc480a88e3ab9b4919 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabase.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabase.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseInsertForSub.json b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseInsertForSub.json index f8a181d352fad7702cf97aaca9aea3aa1801cab1..7454af6521dc46c18c3c61a1c278011affd70654 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseInsertForSub.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseInsertForSub.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseNow.json b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseNow.json index b06ec55ef6157e46a435c0a10ef0144f7e648334..602a39ca24bce0580506d7ad833957d29ebf020a 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseNow.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabaseNow.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabasecsv.json b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabasecsv.json index 6a6a6da2979869690298978676641d3279cd69b0..79d3bc5ed824b541f21750a6f4d9a206172096e1 100644 --- a/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabasecsv.json +++ b/tests/pytest/tools/taosdemoAllTest/taosdemoTestNanoDatabasecsv.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 10, + "create_table_thread_count": 10, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tools/taosdemoPerformance.py b/tests/pytest/tools/taosdemoPerformance.py index 82c57a656dfea12f80fe4eb2b530742c5bfb0916..9a4b564319048921e349437d9ccc3927147017a9 100644 --- a/tests/pytest/tools/taosdemoPerformance.py +++ b/tests/pytest/tools/taosdemoPerformance.py @@ -94,7 +94,7 @@ class taosdemoPerformace: "user": "root", "password": "taosdata", "thread_count": 10, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "databases": [db] } diff --git a/tests/pytest/tsdb/insertDataDb1.json b/tests/pytest/tsdb/insertDataDb1.json index 
92735dad69790f51cc35878f4c81dc7d81a64b72..f771551b26f5f0a16401312aa613450b267e8249 100644 --- a/tests/pytest/tsdb/insertDataDb1.json +++ b/tests/pytest/tsdb/insertDataDb1.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tsdb/insertDataDb1Replica2.json b/tests/pytest/tsdb/insertDataDb1Replica2.json index a5fc525157c9d22084f137b9057b4ebe7d2e7c5f..ec84d71d88cef197678dba8fdc6b1b80e1614718 100644 --- a/tests/pytest/tsdb/insertDataDb1Replica2.json +++ b/tests/pytest/tsdb/insertDataDb1Replica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tsdb/insertDataDb2.json b/tests/pytest/tsdb/insertDataDb2.json index 02301e024271509642d4aa4c8fa5f19e2b39c939..494465d23c67036bed7b3994fece6e6a5c5f75de 100644 --- a/tests/pytest/tsdb/insertDataDb2.json +++ b/tests/pytest/tsdb/insertDataDb2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tsdb/insertDataDb2Newstab.json b/tests/pytest/tsdb/insertDataDb2Newstab.json index 2f5f2367b4445f58f67155381536f520a3422a7a..647a587cad3ad66590fb3ef770aa8ff35a74f31e 100644 --- a/tests/pytest/tsdb/insertDataDb2Newstab.json +++ b/tests/pytest/tsdb/insertDataDb2Newstab.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tsdb/insertDataDb2NewstabReplica2.json b/tests/pytest/tsdb/insertDataDb2NewstabReplica2.json index 67f3b2cd4f2cdb08fe8337a8372e35c0b6a2e02b..13cf2e561c88893d545e425bf2d107382387c3cc 100644 --- a/tests/pytest/tsdb/insertDataDb2NewstabReplica2.json +++ b/tests/pytest/tsdb/insertDataDb2NewstabReplica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/tsdb/insertDataDb2Replica2.json b/tests/pytest/tsdb/insertDataDb2Replica2.json index 3d033f13cc77ac9ecdf0803cf8d014c3b5a9a882..c651657a6d9e2f91d40d77c8b7d8b2d2f9d32939 100644 --- a/tests/pytest/tsdb/insertDataDb2Replica2.json +++ b/tests/pytest/tsdb/insertDataDb2Replica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/util/taosdemoCfg.py b/tests/pytest/util/taosdemoCfg.py index 7523a808980fc924672cb8eedc0a338be0f8d745..f708d303de06c8ab7639453d94f0bb63d445419b 100644 --- a/tests/pytest/util/taosdemoCfg.py +++ b/tests/pytest/util/taosdemoCfg.py @@ -50,7 +50,7 @@ class TDTaosdemoCfg: "user": "root", "password": "taosdata", "thread_count": cpu_count(), - "thread_count_create_tbl": cpu_count(), + "create_table_thread_count": cpu_count(), "result_file": 
"./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb1.json b/tests/pytest/wal/insertDataDb1.json index a14fe581412f9497b4c16b94213685f31e06aa0c..2dc0cf2b7f2b6c99fad4d049260ea1b5959d39e9 100644 --- a/tests/pytest/wal/insertDataDb1.json +++ b/tests/pytest/wal/insertDataDb1.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb1Replica2.json b/tests/pytest/wal/insertDataDb1Replica2.json index a5fc525157c9d22084f137b9057b4ebe7d2e7c5f..ec84d71d88cef197678dba8fdc6b1b80e1614718 100644 --- a/tests/pytest/wal/insertDataDb1Replica2.json +++ b/tests/pytest/wal/insertDataDb1Replica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb2.json b/tests/pytest/wal/insertDataDb2.json index 891a21f73e195996d7bb5d8539b22b88164efa0c..35232a633315e3281d3496290f4ce5165ac4a235 100644 --- a/tests/pytest/wal/insertDataDb2.json +++ b/tests/pytest/wal/insertDataDb2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb2Newstab.json b/tests/pytest/wal/insertDataDb2Newstab.json index 2f5f2367b4445f58f67155381536f520a3422a7a..647a587cad3ad66590fb3ef770aa8ff35a74f31e 100644 --- a/tests/pytest/wal/insertDataDb2Newstab.json +++ b/tests/pytest/wal/insertDataDb2Newstab.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb2NewstabReplica2.json b/tests/pytest/wal/insertDataDb2NewstabReplica2.json index 67f3b2cd4f2cdb08fe8337a8372e35c0b6a2e02b..13cf2e561c88893d545e425bf2d107382387c3cc 100644 --- a/tests/pytest/wal/insertDataDb2NewstabReplica2.json +++ b/tests/pytest/wal/insertDataDb2NewstabReplica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/pytest/wal/insertDataDb2Replica2.json b/tests/pytest/wal/insertDataDb2Replica2.json index 3d033f13cc77ac9ecdf0803cf8d014c3b5a9a882..c651657a6d9e2f91d40d77c8b7d8b2d2f9d32939 100644 --- a/tests/pytest/wal/insertDataDb2Replica2.json +++ b/tests/pytest/wal/insertDataDb2Replica2.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 4, - "thread_count_create_tbl": 4, + "create_table_thread_count": 4, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/script/tsim/parser/nestquery.sim b/tests/script/tsim/parser/nestquery.sim index 101205db74057a951c8978ac3c85fcc727170ae9..8cb7a3790bfabe18032024e928bc87db4aeae391 100644 --- a/tests/script/tsim/parser/nestquery.sim +++ b/tests/script/tsim/parser/nestquery.sim @@ -160,12 +160,12 @@ endi sql select stddev(c1) from 
(select c1 from nest_tb0); sql_error select percentile(c1, 20) from (select * from nest_tb0); -sql select interp(c1) from (select * from nest_tb0); +#sql select interp(c1) from (select * from nest_tb0); sql_error select derivative(val, 1s, 0) from (select c1 val from nest_tb0); sql_error select twa(c1) from (select c1 from nest_tb0); sql_error select irate(c1) from (select c1 from nest_tb0); sql_error select diff(c1), twa(c1) from (select * from nest_tb0); -sql_error select irate(c1), interp(c1), twa(c1) from (select * from nest_tb0); +#sql_error select irate(c1), interp(c1), twa(c1) from (select * from nest_tb0); sql select _wstart, apercentile(c1, 50) from (select * from nest_tb0) interval(1d) if $rows != 7 then diff --git a/tests/system-test/1-insert/manyVgroups.json b/tests/system-test/1-insert/manyVgroups.json index 20ac3205523af96cec2e7c646c6245c53a55c7e8..3b0fa96b08f73e26e11c35c89d6673268f764ddc 100644 --- a/tests/system-test/1-insert/manyVgroups.json +++ b/tests/system-test/1-insert/manyVgroups.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 8, - "thread_count_create_tbl": 8, + "create_table_thread_count": 8, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/system-test/1-insert/performanceInsert.json b/tests/system-test/1-insert/performanceInsert.json index de410c30f2fa1846d0318def447d1d09aff2cfea..7278a6f735f0e77bf96c3fc772ab5a46bc52631d 100644 --- a/tests/system-test/1-insert/performanceInsert.json +++ b/tests/system-test/1-insert/performanceInsert.json @@ -6,7 +6,7 @@ "user": "root", "password": "taosdata", "thread_count": 8, - "thread_count_create_tbl": 8, + "create_table_thread_count": 8, "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, diff --git a/tests/system-test/fulltest.sh b/tests/system-test/fulltest.sh index baf706ddfd70b59ccb584710ce60775e0722416f..1f6e8ce1f55fb3ea699e40429c4d214de87b39b0 100755 --- a/tests/system-test/fulltest.sh +++ b/tests/system-test/fulltest.sh @@ -27,7 +27,7 @@ python3 ./test.py -f 1-insert/alter_stable.py python3 ./test.py -f 1-insert/alter_table.py python3 ./test.py -f 1-insert/insertWithMoreVgroup.py python3 ./test.py -f 1-insert/table_comment.py -#python3 ./test.py -f 1-insert/time_range_wise.py #TD-18130 +python3 ./test.py -f 1-insert/time_range_wise.py python3 ./test.py -f 1-insert/block_wise.py python3 ./test.py -f 1-insert/create_retentions.py python3 ./test.py -f 1-insert/table_param_ttl.py @@ -199,25 +199,25 @@ python3 ./test.py -f 6-cluster/5dnode3mnodeStop2Follower.py -N 5 -M 3 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_createDb_replica1.py -N 4 -M 1 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica1_insertdatas.py -N 4 -M 1 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica1_insertdatas_querys.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_force_stop_all_dnodes.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_force_stop_all_dnodes.py -N 4 -M 1 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys_loop_restart_all_vnode.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys_loop_restart_follower.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys_loop_restart_all_vnode.py -N 4 -M 1 +# 
python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys_loop_restart_follower.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys_loop_restart_leader.py -N 4 -M 1 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_querys.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_all_dnodes.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_follower_sync.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_follower_sync.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_follower_unsync_force_stop.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_follower_unsync.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_leader_forece_stop.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_insertdatas_stop_leader.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_mnode3_insertdatas_querys.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_mnode3_insertdatas_querys.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_querydatas_stop_follower_force_stop.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_querydatas_stop_follower.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_querydatas_stop_follower.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_querydatas_stop_leader_force_stop.py -N 4 -M 1 # python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_querydatas_stop_leader.py -N 4 -M 1 python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_vgroups.py -N 4 -M 1 -python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_vgroups_stopOne.py -N 4 -M 1 +# python3 test.py -f 6-cluster/vnode/4dnode1mnode_basic_replica3_vgroups_stopOne.py -N 4 -M 1 python3 ./test.py -f 7-tmq/dropDbR3ConflictTransaction.py -N 3 diff --git a/tools/shell/src/shellEngine.c b/tools/shell/src/shellEngine.c index 4526ff2230a8b05e72c06bcdd60f0c98c2dbaa9a..f0bda821725bc00ca49c77ab43c4aca4a8d89d5b 100644 --- a/tools/shell/src/shellEngine.c +++ b/tools/shell/src/shellEngine.c @@ -22,7 +22,8 @@ static bool shellIsEmptyCommand(const char *cmd); static int32_t shellRunSingleCommand(char *command); -static int32_t shellRunCommand(char *command); +static void shellRecordCommandToHistory(char *command); +static int32_t shellRunCommand(char *command, bool recordHistory); static void shellRunSingleCommandImp(char *command); static char *shellFormatTimestamp(char *buf, int64_t val, int32_t precision); static int32_t shellDumpResultToFile(const char *fname, TAOS_RES *tres); @@ -101,11 +102,7 @@ int32_t shellRunSingleCommand(char *command) { return 0; } -int32_t shellRunCommand(char *command) { - if (shellIsEmptyCommand(command)) { - return 0; - } - +void shellRecordCommandToHistory(char *command) { SShellHistory *pHistory = &shell.history; if (pHistory->hstart == pHistory->hend || pHistory->hist[(pHistory->hend + SHELL_MAX_HISTORY_SIZE - 1) % SHELL_MAX_HISTORY_SIZE] == NULL || @@ -120,6 +117,14 @@ int32_t shellRunCommand(char *command) { pHistory->hstart = (pHistory->hstart + 1) % SHELL_MAX_HISTORY_SIZE; } } +} + +int32_t shellRunCommand(char *command, bool recordHistory) { + if 
(shellIsEmptyCommand(command)) { + return 0; + } + + if (recordHistory) shellRecordCommandToHistory(command); char quote = 0, *cmd = command; for (char c = *command++; c != 0; c = *command++) { @@ -826,11 +831,15 @@ void shellSourceFile(const char *file) { size_t cmd_len = 0; char *line = NULL; char fullname[PATH_MAX] = {0}; + char sourceFileCommand[PATH_MAX + 8] = {0}; if (taosExpandDir(file, fullname, PATH_MAX) != 0) { tstrncpy(fullname, file, PATH_MAX); } + sprintf(sourceFileCommand, "source %s;",fullname); + shellRecordCommandToHistory(sourceFileCommand); + TdFilePtr pFile = taosOpenFile(fullname, TD_FILE_READ | TD_FILE_STREAM); if (pFile == NULL) { fprintf(stderr, "failed to open file %s\r\n", fullname); @@ -853,9 +862,13 @@ void shellSourceFile(const char *file) { continue; } + if (line[read_len - 1] == '\r') { + line[read_len - 1] = ' '; + } + memcpy(cmd + cmd_len, line, read_len); printf("%s%s\r\n", shell.info.promptHeader, cmd); - shellRunCommand(cmd); + shellRunCommand(cmd, false); memset(cmd, 0, TSDB_MAX_ALLOWED_SQL_LEN); cmd_len = 0; } @@ -977,7 +990,7 @@ void *shellThreadLoop(void *arg) { } taosResetTerminalMode(); - } while (shellRunCommand(command) == 0); + } while (shellRunCommand(command, true) == 0); taosMemoryFreeClear(command); shellWriteHistory(); @@ -1019,7 +1032,7 @@ int32_t shellExecute() { if (pArgs->commands != NULL) { printf("%s%s\r\n", shell.info.promptHeader, pArgs->commands); char *cmd = strdup(pArgs->commands); - shellRunCommand(cmd); + shellRunCommand(cmd, true); taosMemoryFree(cmd); }
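The `shellEngine.c` hunks above split history recording out of `shellRunCommand` into `shellRecordCommandToHistory` and gate it behind a new `recordHistory` flag: the interactive loop calls `shellRunCommand(command, true)`, while `shellSourceFile` records a single `source <file>;` entry and then runs each line with `shellRunCommand(cmd, false)`, also dropping a trailing `'\r'` from lines read out of the file. The sketch below is illustrative only and is not part of the patch; it mirrors that calling convention under simplified, hypothetical names (`run_command`, `record_history`, `source_file`, and the file name `init.sql` are stand-ins, not the shell's actual symbols).

```c
/*
 * Minimal sketch of the record-then-run pattern introduced by the patch.
 * Hypothetical stand-ins: run_command ~ shellRunCommand,
 * record_history ~ shellRecordCommandToHistory, source_file ~ shellSourceFile.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_CMD 1024

static void record_history(const char *command) {
  /* The real shell appends to a history ring buffer; printing stands in for that. */
  printf("[history] %s\n", command);
}

static int run_command(const char *command, bool recordHistory) {
  if (command == NULL || command[0] == '\0') return 0;  /* empty input: nothing to do */
  if (recordHistory) record_history(command);           /* record only when asked to */
  printf("[exec] %s\n", command);
  return 0;
}

static void source_file(const char *path) {
  char sourceCmd[MAX_CMD];
  /* Record one "source <file>;" entry instead of every statement in the file. */
  snprintf(sourceCmd, sizeof(sourceCmd), "source %s;", path);
  record_history(sourceCmd);

  FILE *fp = fopen(path, "r");
  if (fp == NULL) {
    fprintf(stderr, "failed to open %s\n", path);
    return;
  }

  char line[MAX_CMD];
  while (fgets(line, sizeof(line), fp) != NULL) {
    line[strcspn(line, "\r\n")] = '\0';  /* strip trailing CR/LF, as the patch does for '\r' */
    run_command(line, false);            /* sourced statements execute but are not recorded */
  }
  fclose(fp);
}

int main(void) {
  run_command("show databases;", true);  /* interactive input: recorded in history */
  source_file("init.sql");               /* hypothetical file name */
  return 0;
}
```

The flag exists so the two call sites in the patch can differ: interactive input from `shellThreadLoop` and `-s` commands pass `true`, while file-sourced lines pass `false`, leaving only the `source` command itself in the history.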