diff --git a/docs-cn/02-intro.md b/docs-cn/02-intro.md index 8daea48e3e8e1b083f5cb0c04af0bef8ddb83122..949c21472dd29d51f2703034bd38ab95037e09c6 100644 --- a/docs-cn/02-intro.md +++ b/docs-cn/02-intro.md @@ -62,7 +62,7 @@ TDengine的主要功能如下:
-![TDengine技术生态图](eco_system.png) +![TDengine技术生态图](eco_system.webp)
图 1. TDengine技术生态图
diff --git a/docs-cn/12-taos-sql/08-interval.md b/docs-cn/12-taos-sql/08-interval.md index d62e11b0dbd0ba49ceedb3807e05361f060969b3..7c796e0046c5a740d393d71861828eb30bb3a5cc 100644 --- a/docs-cn/12-taos-sql/08-interval.md +++ b/docs-cn/12-taos-sql/08-interval.md @@ -11,7 +11,7 @@ TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如 INTERVAL 子句用于产生相等时间周期的窗口,SLIDING 用以指定窗口向前滑动的时间。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window )大小和每次前向增量时间(forward sliding times)。如图,[t0s, t0e] ,[t1s , t1e], [t2s, t2e] 是分别是执行三次连续查询的时间窗口范围,窗口的前向滑动的时间范围 sliding time 标识 。查询过滤、聚合等操作按照每个时间窗口为独立的单位执行。当 SLIDING 与 INTERVAL 相等的时候,滑动窗口即为翻转窗口。 -![时间窗口示意图](/img/sql/timewindow-1.png) +![时间窗口示意图](./timewindow-1.webp) INTERVAL 和 SLIDING 子句需要配合聚合和选择函数来使用。以下 SQL 语句非法: @@ -33,7 +33,7 @@ _ 从 2.1.5.0 版本开始,INTERVAL 语句允许的最短时间间隔调整为 使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。(状态窗口暂不支持对超级表使用) -![时间窗口示意图](/img/sql/timewindow-3.png) +![时间窗口示意图](./timewindow-3.webp) 使用 STATE_WINDOW 来确定状态窗口划分的列。例如: @@ -45,7 +45,7 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); 会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。 -![时间窗口示意图](/img/sql/timewindow-2.png) +![时间窗口示意图](./timewindow-2.webp) 在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用) diff --git a/docs-cn/12-taos-sql/timewindow-1.webp b/docs-cn/12-taos-sql/timewindow-1.webp new file mode 100644 index 0000000000000000000000000000000000000000..82747558e96df752a0010d85be79a4af07e4a1df Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-1.webp differ diff --git a/docs-cn/12-taos-sql/timewindow-2.webp b/docs-cn/12-taos-sql/timewindow-2.webp new file mode 100644 index 0000000000000000000000000000000000000000..8f1314ae34f7f5c5cca1d3cb80455f555fad38c3 Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-2.webp differ diff --git a/docs-cn/12-taos-sql/timewindow-3.webp b/docs-cn/12-taos-sql/timewindow-3.webp new file mode 100644 index 0000000000000000000000000000000000000000..5bd16e68e7fd5da6805551e9765975277cd5d4d9 Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-3.webp differ diff --git a/docs-cn/14-reference/03-connector/03-connector.mdx b/docs-cn/14-reference/03-connector/03-connector.mdx index c0e714f148a7821e070be38a5484484fdd747e9a..aac358bea0682a9bd0807f10dc0cb2d4ef1d7a7b 100644 --- a/docs-cn/14-reference/03-connector/03-connector.mdx +++ b/docs-cn/14-reference/03-connector/03-connector.mdx @@ -4,7 +4,7 @@ title: 连接器 TDengine 提供了丰富的应用程序开发接口,为了便于用户快速开发自己的应用,TDengine 支持了多种编程语言的连接器,其中官方连接器包括支持 C/C++、Java、Python、Go、Node.js、C# 和 Rust 的连接器。这些连接器支持使用原生接口(taosc)和 REST 接口(部分语言暂不支持)连接 TDengine 集群。社区开发者也贡献了多个非官方连接器,例如 ADO.NET 连接器、Lua 连接器和 PHP 连接器。 -![image-connector](/img/connector.png) +![image-connector](./connector.webp) ## 支持的平台 diff --git a/docs-cn/14-reference/03-connector/connector.webp b/docs-cn/14-reference/03-connector/connector.webp new file mode 100644 index 0000000000000000000000000000000000000000..040cf5c26c726b345b2e0e5363dd3c677bec61be Binary files /dev/null and b/docs-cn/14-reference/03-connector/connector.webp differ diff --git a/docs-cn/14-reference/03-connector/java.mdx b/docs-cn/14-reference/03-connector/java.mdx index 
55abf84fd50fe1c4b5b6a07b28731a00d4534a05..813e82e82cd322c5c397e9efc068d01e3f4cf14a 100644 --- a/docs-cn/14-reference/03-connector/java.mdx +++ b/docs-cn/14-reference/03-connector/java.mdx @@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem'; `taos-jdbcdriver` 是 TDengine 的官方 Java 语言连接器,Java 开发人员可以通过它开发存取 TDengine 数据库的应用软件。`taos-jdbcdriver` 实现了 JDBC driver 标准的接口,并提供两种形式的连接器。一种是通过 TDengine 客户端驱动程序(taosc)原生连接 TDengine 实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能,一种是通过 taosAdapter 提供的 REST 接口连接 TDengine 实例(2.4.0.0 及更高版本)。REST 连接实现的功能集合和原生连接有少量不同。 -![tdengine-connector](tdengine-jdbc-connector.png) +![tdengine-connector](tdengine-jdbc-connector.webp) 上图显示了两种 Java 应用使用连接器访问 TDengine 的两种方式: diff --git a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png deleted file mode 100644 index 1cb8401ea30b01d8db652ed4ea70ecc511de7461..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ diff --git a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp new file mode 100644 index 0000000000000000000000000000000000000000..0956d6005ffc5e90727d49d7566158affdda09c2 Binary files /dev/null and b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp differ diff --git a/docs-cn/14-reference/04-taosadapter.md b/docs-cn/14-reference/04-taosadapter.md index 90a31ec94c94559311e2c91cd34f75af7e87e9a0..5fc9a282815813a87c9c17b84e5ffafc2f4692e7 100644 --- a/docs-cn/14-reference/04-taosadapter.md +++ b/docs-cn/14-reference/04-taosadapter.md @@ -24,7 +24,7 @@ taosAdapter 提供以下功能: ## taosAdapter 架构图 -![taosAdapter Architecture](taosAdapter-architecture.png) +![taosAdapter Architecture](taosAdapter-architecture.webp) ## taosAdapter 部署方法 diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png deleted file mode 100644 index 4708f836feb21980f2db7fed4a55f799b23a6ec1..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp new file mode 100644 index 0000000000000000000000000000000000000000..a78e18028a94c2f6a783b08d992a25c791527407 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png deleted file mode 100644 index f2684e6eed70e8f56697eae42b495d6bd62815e8..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp new file mode 100644 index 0000000000000000000000000000000000000000..b152418d0902b8ebdf62ebce6705c10dd5ab4fbf Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png deleted file mode 100644 index 
74686691e4106b8646c3deee1e0ce73b2f53f1ea..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp new file mode 100644 index 0000000000000000000000000000000000000000..f58f48b7f17375cb8e62e7c0126ca3aea56a13f6 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png deleted file mode 100644 index 27964215567f9f961c0aeaf1b863188437008fb7..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp new file mode 100644 index 0000000000000000000000000000000000000000..00afcce013602dce0da17bfd033f65aaa8e43bb7 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png deleted file mode 100644 index b0d3abbf21ec4d4bd7bfb95fcc03a5f936b22665..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp new file mode 100644 index 0000000000000000000000000000000000000000..567e5694f9d7a035a3eb354493d3df8ed64db251 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png deleted file mode 100644 index 2b54cbeb83bcff12f20461a4f57f882e2073f231..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp new file mode 100644 index 0000000000000000000000000000000000000000..cc8a912810f35e53a6e5fa96ea0c81e334ffc0df Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png deleted file mode 100644 index eb3848657f13900c856ac595c20766465157e9c4..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp new file mode 100644 index 0000000000000000000000000000000000000000..651b716bc511ba2ed5db5e6fc6b0591ef150cbf6 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ diff --git 
a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png deleted file mode 100644 index d94b2e02ac9855bb3d2f77d8902e068839db364f..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp new file mode 100644 index 0000000000000000000000000000000000000000..8666193f59497180574fd2786266e5baabbe9761 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png deleted file mode 100644 index 654df2934597ce600a1dc2dcd0cab7e29de7076d..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp new file mode 100644 index 0000000000000000000000000000000000000000..7f38a76a2b899ffebc7aecd39c8ec4fd0b2da778 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png deleted file mode 100644 index e3afa22c0326d70567ec4529c83101c746daac87..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp new file mode 100644 index 0000000000000000000000000000000000000000..3d7fe932a23f3720e76e4217a7b5d1868d81fac8 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png deleted file mode 100644 index 198bf37141c86a66cdd91b47a331bcdeb83daaf8..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp new file mode 100644 index 0000000000000000000000000000000000000000..517123954efe4b94485fdab2e07be0d765f5daa2 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png deleted file mode 100644 index ace3aa3c2f8f14fabdac54bc25ae2d9449445b69..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp new file mode 100644 index 0000000000000000000000000000000000000000..6666296ac16e7a0c0ab3db23f0517f2089d09035 Binary files /dev/null and 
b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png deleted file mode 100644 index 7082e49f6beb8690c36f98a3f4ff2befdb8fd014..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp new file mode 100644 index 0000000000000000000000000000000000000000..6f74bc3a47a32de661ef25f787a947d823715810 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png deleted file mode 100644 index ffd4911b53854c42dbf0ff11838cb604fa694138..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp new file mode 100644 index 0000000000000000000000000000000000000000..acda3b24a6263815ac8b658709d2172300ca3b00 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png deleted file mode 100644 index 802c7366f921301bd7fbc62458e56b2d1eaf195c..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp new file mode 100644 index 0000000000000000000000000000000000000000..903e236e2a776dfef7f85c014662e8913a9033a5 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png deleted file mode 100644 index 019ec921b6f808671f4f864ddf3380159d4a0dcc..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp new file mode 100644 index 0000000000000000000000000000000000000000..14fcfe9d183e8804199708ae4492d0904a7c9d62 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png deleted file mode 100644 index 3963abb4ea8ae0e6f5557466f7a5b746c2d2ea3c..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ diff --git 
a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp new file mode 100644 index 0000000000000000000000000000000000000000..00b50cc619b030d1fb2be3a367183901d5c833e8 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png deleted file mode 100644 index 837100464b35a5cafac474723aef603f91945ebc..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp new file mode 100644 index 0000000000000000000000000000000000000000..06d0ff6ed50091a6340508bc5b2b3f78b65dcb18 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png deleted file mode 100644 index 98223df25499effac343ff5723544a3c289f18fa..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp new file mode 100644 index 0000000000000000000000000000000000000000..e2ec052b91e439a817f6e88b8afd0fcb4dcb7ef8 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png deleted file mode 100644 index 07aba348f02b4fb8ef68e79664920c119b842d4c..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp new file mode 100644 index 0000000000000000000000000000000000000000..665c035f9755b9472aee33cd61d3ab52831194b5 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png deleted file mode 100644 index 7e28939ead8bf3b6e2b4330e4f9b59c2e39b5c1c..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..7dc42eeba919fee7b438a453c00bb9fd0ac2d274 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png deleted file mode 100644 index 
981f640b14d18aa6f0682768d8405a232df500f6..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp new file mode 100644 index 0000000000000000000000000000000000000000..7ef081900f8de99c859193b69d49b3d6bc187909 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png deleted file mode 100644 index 94ef4fa5fe63e535118a81707b413c028ce01f70..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp new file mode 100644 index 0000000000000000000000000000000000000000..602452fc4c89424d8e17d46d74949b69be84dbe8 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png deleted file mode 100644 index 670cacc377c2801fa9437c3c132c5c7fbc361b0f..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp new file mode 100644 index 0000000000000000000000000000000000000000..35a3ebba781f24dbb0066993d1ca2f02659997d2 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png deleted file mode 100644 index d74cd36c96ee0fd24ddc6feae2da07824816f745..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..fb7958f1b9fbd43c8f63136024842790e711c490 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png deleted file mode 100644 index 0101e7430cb2ef673818de8bd3af53d0d082ad3f..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..49f1d88f4ad93286cd8582536e82b4dcc4ff271b Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ diff --git 
a/docs-cn/14-reference/07-tdinsight/index.md b/docs-cn/14-reference/07-tdinsight/index.md index a554d7ee6b36797940282fa8401df2f22c4cf579..d7511fde3b5b92b335d60026e56944b9e2b99398 100644 --- a/docs-cn/14-reference/07-tdinsight/index.md +++ b/docs-cn/14-reference/07-tdinsight/index.md @@ -233,33 +233,33 @@ sudo systemctl enable grafana-server 指向 **Configurations** -> **Data Sources** 菜单,然后点击 **Add data source** 按钮。 -![添加数据源按钮](./assets/howto-add-datasource-button.png) +![添加数据源按钮](./assets/howto-add-datasource-button.webp) 搜索并选择**TDengine**。 -![添加数据源](./assets/howto-add-datasource-tdengine.png) +![添加数据源](./assets/howto-add-datasource-tdengine.webp) 配置 TDengine 数据源。 -![数据源配置](./assets/howto-add-datasource.png) +![数据源配置](./assets/howto-add-datasource.webp) 保存并测试,正常情况下会报告 'TDengine Data source is working'。 -![数据源测试](./assets/howto-add-datasource-test.png) +![数据源测试](./assets/howto-add-datasource-test.webp) ### 导入仪表盘 指向 **+** / **Create** - **import**(或 `/dashboard/import` url)。 -![导入仪表盘和配置](./assets/import_dashboard.png) +![导入仪表盘和配置](./assets/import_dashboard.webp) 在 **Import via grafana.com** 位置键入仪表盘 ID `15167` 并 **Load**。 -![通过 grafana.com 导入](./assets/import-dashboard-15167.png) +![通过 grafana.com 导入](./assets/import-dashboard-15167.webp) 导入完成后,TDinsight 的完整页面视图如下所示。 -![显示](./assets/TDinsight-full.png) +![显示](./assets/TDinsight-full.webp) ## TDinsight 仪表盘详细信息 @@ -269,7 +269,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 集群状态 -![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.png) +![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp) 这部分包括集群当前信息和状态,告警信息也在此处(从左到右,从上到下)。 @@ -289,7 +289,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### DNodes 状态 -![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp) - **DNodes Status**:`show dnodes` 的简单表格视图。 - **DNodes Lifetime**:从创建 dnode 开始经过的时间。 @@ -298,14 +298,14 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### MNode 概述 -![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp) 1. **MNodes Status**:`show mnodes` 的简单表格视图。 2. **MNodes Number**:类似于`DNodes Number`,MNodes 数量变化。 ### 请求 -![tdinsight-requests](./assets/TDinsight-4-requests.png) +![tdinsight-requests](./assets/TDinsight-4-requests.webp) 1. **Requests Rate(Inserts per Second)**:平均每秒插入次数。 2. 
**Requests (Selects)**:查询请求数及变化率(count of second)。 @@ -313,7 +313,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 数据库 -![tdinsight-database](./assets/TDinsight-5-database.png) +![tdinsight-database](./assets/TDinsight-5-database.webp) 数据库使用情况,对变量 `$database` 的每个值即每个数据库进行重复多行展示。 @@ -325,7 +325,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### DNode 资源使用情况 -![dnode-usage](./assets/TDinsight-6-dnode-usage.png) +![dnode-usage](./assets/TDinsight-6-dnode-usage.webp) 数据节点资源使用情况展示,对变量 `$fqdn` 即每个数据节点进行重复多行展示。包括: @@ -346,13 +346,13 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 登录历史 -![登录历史](./assets/TDinsight-7-login-history.png) +![登录历史](./assets/TDinsight-7-login-history.webp) 目前只报告每分钟登录次数。 ### 监控 taosAdapter -![taosadapter](./assets/TDinsight-8-taosadapter.png) +![taosadapter](./assets/TDinsight-8-taosadapter.webp) 支持监控 taosAdapter 请求统计和状态详情。包括: diff --git a/docs-cn/14-reference/taosAdapter-architecture.png b/docs-cn/14-reference/taosAdapter-architecture.png deleted file mode 100644 index 08a9018553aae6f86b42d127b372d0cecfa9bdf8..0000000000000000000000000000000000000000 Binary files a/docs-cn/14-reference/taosAdapter-architecture.png and /dev/null differ diff --git a/docs-cn/14-reference/taosAdapter-architecture.webp b/docs-cn/14-reference/taosAdapter-architecture.webp new file mode 100644 index 0000000000000000000000000000000000000000..a4162b0a037c06d34191784716c51080b9f8a570 Binary files /dev/null and b/docs-cn/14-reference/taosAdapter-architecture.webp differ diff --git a/docs-cn/20-third-party/01-grafana.mdx b/docs-cn/20-third-party/01-grafana.mdx index 9a4c33d8aceb086ff8ba8dca0f38b1bcbf762005..f9f7a26aa1632a07406199d76b3ad4ef9f1ec3e0 100644 --- a/docs-cn/20-third-party/01-grafana.mdx +++ b/docs-cn/20-third-party/01-grafana.mdx @@ -64,15 +64,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示: -![img](/img/connections/add_datasource1.jpg) +![img](./add_datasource1.webp) 点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine 可选择添加,如下图所示: -![img](/img/connections/add_datasource2.jpg) +![img](./add_datasource2.webp) 进入数据源配置页面,按照默认提示修改相应配置即可: -![img](/img/connections/add_datasource3.jpg) +![img](./add_datasource3.webp) - Host: TDengine 集群中提供 REST 服务 (在 2.4 之前由 taosd 提供, 从 2.4 开始由 taosAdapter 提供)的组件所在服务器的 IP 地址与 TDengine REST 服务的端口号(6041),默认 http://localhost:6041。 - User:TDengine 用户名。 @@ -80,13 +80,13 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 点击 `Save & Test` 进行测试,成功会有如下提示: -![img](/img/connections/add_datasource4.jpg) +![img](./add_datasource4.webp) ### 创建 Dashboard 回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面: -![img](/img/connections/create_dashboard1.jpg) +![img](./create_dashboard1.webp) 如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询,具体说明如下: @@ -96,7 +96,7 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 按照默认提示查询当前 TDengine 部署所在服务器指定间隔系统内存平均使用量如下: -![img](/img/connections/create_dashboard2.jpg) +![img](./create_dashboard2.webp) > 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。 diff --git a/docs-cn/20-third-party/09-emq-broker.md b/docs-cn/20-third-party/09-emq-broker.md index f57ccb20e6517c51b55093d11fa767bef7d0c9a8..b9d099c145d89c4f8e8a3cfaa994bffa0085e280 100644 --- a/docs-cn/20-third-party/09-emq-broker.md +++ b/docs-cn/20-third-party/09-emq-broker.md @@ -45,25 +45,25 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 使用浏览器打开网址 
http://IP:18083 并登录 EMQX Dashboard。初次安装用户名为 `admin` 密码为:`public` -![img](./emqx/login-dashboard.png) +![img](./emqx/login-dashboard.webp) ### 创建规则(Rule) 选择左侧“规则引擎(Rule Engine)”中的“规则(Rule)”并点击“创建(Create)”按钮: -![img](./emqx/rule-engine.png) +![img](./emqx/rule-engine.webp) ### 编辑 SQL 字段 -![img](./emqx/create-rule.png) +![img](./emqx/create-rule.webp) ### 新增“动作(action handler)” -![img](./emqx/add-action-handler.png) +![img](./emqx/add-action-handler.webp) ### 新增“资源(Resource)” -![img](./emqx/create-resource.png) +![img](./emqx/create-resource.webp) 选择“发送数据到 Web 服务“并点击“新建资源”按钮: @@ -71,13 +71,13 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 选择“发送数据到 Web 服务“并填写 请求 URL 为 运行 taosAdapter 的服务器地址和端口(默认为 6041)。其他属性请保持默认值。 -![img](./emqx/edit-resource.png) +![img](./emqx/edit-resource.webp) ### 编辑“动作(action)” 编辑资源配置,增加 Authorization 认证的键/值配对项,相关文档请参考[ TDengine REST API 文档](https://docs.taosdata.com/reference/rest-api/)。在消息体中输入规则引擎替换模板。 -![img](./emqx/edit-action.png) +![img](./emqx/edit-action.webp) ## 编写模拟测试程序 @@ -164,7 +164,7 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 注意:代码中 CLIENT_NUM 在开始测试中可以先设置一个较小的值,避免硬件性能不能完全处理较大并发客户端数量。 -![img](./emqx/client-num.png) +![img](./emqx/client-num.webp) ## 执行测试模拟发送 MQTT 数据 @@ -173,19 +173,19 @@ npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org node mock.js ``` -![img](./emqx/run-mock.png) +![img](./emqx/run-mock.webp) ## 验证 EMQX 接收到数据 在 EMQX Dashboard 规则引擎界面进行刷新,可以看到有多少条记录被正确接收到: -![img](./emqx/check-rule-matched.png) +![img](./emqx/check-rule-matched.webp) ## 验证数据写入到 TDengine 使用 TDengine CLI 程序登录并查询相应数据库和表,验证数据是否被正确写入到 TDengine 中: -![img](./emqx/check-result-in-taos.png) +![img](./emqx/check-result-in-taos.webp) TDengine 详细使用方法请参考 [TDengine 官方文档](https://docs.taosdata.com/)。 EMQX 详细使用方法请参考 [EMQX 官方文档](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)。 diff --git a/docs-cn/20-third-party/11-kafka.md b/docs-cn/20-third-party/11-kafka.md index d12d5fab75671d8a1e7356e766d0e8979c6519c2..beb8f1bd6bb24aa2e2e3012c4f137bd00f3deb5c 100644 --- a/docs-cn/20-third-party/11-kafka.md +++ b/docs-cn/20-third-party/11-kafka.md @@ -9,11 +9,11 @@ TDengine Kafka Connector 包含两个插件: TDengine Source Connector 和 TDeng Kafka Connect 是 Apache Kafka 的一个组件,用于使其它系统,比如数据库、云服务、文件系统等能方便地连接到 Kafka。数据既可以通过 Kafka Connect 从其它系统流向 Kafka, 也可以通过 Kafka Connect 从 Kafka 流向其它系统。从其它系统读数据的插件称为 Source Connector, 写数据到其它系统的插件称为 Sink Connector。Source Connector 和 Sink Connector 都不会直接连接 Kafka Broker,Source Connector 把数据转交给 Kafka Connect。Sink Connector 从 Kafka Connect 接收数据。 -![](kafka/Kafka_Connect.png) +![](kafka/Kafka_Connect.webp) TDengine Source Connector 用于把数据实时地从 TDengine 读出来发送给 Kafka Connect。TDengine Sink Connector 用于 从 Kafka Connect 接收数据并写入 TDengine。 -![](kafka/streaming-integration-with-kafka-connect.png) +![](kafka/streaming-integration-with-kafka-connect.webp) ## 什么是 Confluent? @@ -26,7 +26,7 @@ Confluent 在 Kafka 的基础上增加很多扩展功能。包括: 5. 
管理和监控 Kafka 的 GUI —— Confluent 控制中心 这些扩展功能有的包含在社区版本的 Confluent 中,有的只有企业版能用。 -![](kafka/confluentPlatform.png) +![](kafka/confluentPlatform.webp) Confluent 企业版提供了 `confluent` 命令行工具管理各个组件。 diff --git a/docs-cn/20-third-party/add_datasource1.webp b/docs-cn/20-third-party/add_datasource1.webp new file mode 100644 index 0000000000000000000000000000000000000000..211edc4457abd0db6b0ef64636d61d65b5f43db6 Binary files /dev/null and b/docs-cn/20-third-party/add_datasource1.webp differ diff --git a/docs-cn/20-third-party/add_datasource2.webp b/docs-cn/20-third-party/add_datasource2.webp new file mode 100644 index 0000000000000000000000000000000000000000..8ab547231fee4d3b0874fcfe08c0ce152b0c53a1 Binary files /dev/null and b/docs-cn/20-third-party/add_datasource2.webp differ diff --git a/docs-cn/20-third-party/add_datasource3.webp b/docs-cn/20-third-party/add_datasource3.webp new file mode 100644 index 0000000000000000000000000000000000000000..d8a733360a09b4425c571f254a9ecb298c04b72f Binary files /dev/null and b/docs-cn/20-third-party/add_datasource3.webp differ diff --git a/docs-cn/20-third-party/add_datasource4.webp b/docs-cn/20-third-party/add_datasource4.webp new file mode 100644 index 0000000000000000000000000000000000000000..b1e0fc6e2b27df4af1bb5ad92756bcb5d4fda63e Binary files /dev/null and b/docs-cn/20-third-party/add_datasource4.webp differ diff --git a/docs-cn/20-third-party/create_dashboard1.webp b/docs-cn/20-third-party/create_dashboard1.webp new file mode 100644 index 0000000000000000000000000000000000000000..55eb388833e4df2a46f4d1cf6d346aa11429385d Binary files /dev/null and b/docs-cn/20-third-party/create_dashboard1.webp differ diff --git a/docs-cn/20-third-party/create_dashboard2.webp b/docs-cn/20-third-party/create_dashboard2.webp new file mode 100644 index 0000000000000000000000000000000000000000..bb40e407187718c52e9f617d8ebd3d25fd14b56b Binary files /dev/null and b/docs-cn/20-third-party/create_dashboard2.webp differ diff --git a/docs-cn/20-third-party/dashboard-15146.webp b/docs-cn/20-third-party/dashboard-15146.webp new file mode 100644 index 0000000000000000000000000000000000000000..fae586f5c74317621002416b2824830a7bdf3982 Binary files /dev/null and b/docs-cn/20-third-party/dashboard-15146.webp differ diff --git a/docs-cn/20-third-party/emqx/add-action-handler.png b/docs-cn/20-third-party/emqx/add-action-handler.png deleted file mode 100644 index 97a1f933ecfadfcab399938806d73c5a5ecc6427..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/add-action-handler.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/add-action-handler.webp b/docs-cn/20-third-party/emqx/add-action-handler.webp new file mode 100644 index 0000000000000000000000000000000000000000..4a8d105f711991226cfbd43b6e9ab07d7ccc686a Binary files /dev/null and b/docs-cn/20-third-party/emqx/add-action-handler.webp differ diff --git a/docs-cn/20-third-party/emqx/check-result-in-taos.png b/docs-cn/20-third-party/emqx/check-result-in-taos.png deleted file mode 100644 index c17a5c1ea2b9bbd49263056c8bf09c9aabab07d5..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/check-result-in-taos.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/check-result-in-taos.webp b/docs-cn/20-third-party/emqx/check-result-in-taos.webp new file mode 100644 index 0000000000000000000000000000000000000000..8fa040a86104fece02ddaf8986f0a67de316143d Binary files /dev/null and b/docs-cn/20-third-party/emqx/check-result-in-taos.webp differ diff --git 
a/docs-cn/20-third-party/emqx/check-rule-matched.png b/docs-cn/20-third-party/emqx/check-rule-matched.png deleted file mode 100644 index 9e9a466946a1afa857e2bbc07b14956dd0f984b6..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/check-rule-matched.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/check-rule-matched.webp b/docs-cn/20-third-party/emqx/check-rule-matched.webp new file mode 100644 index 0000000000000000000000000000000000000000..e5a614035739df859b27c817f3b9f41be444b513 Binary files /dev/null and b/docs-cn/20-third-party/emqx/check-rule-matched.webp differ diff --git a/docs-cn/20-third-party/emqx/client-num.png b/docs-cn/20-third-party/emqx/client-num.png deleted file mode 100644 index fff48cbf3b271c367079ddde425b3f9b014062f7..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/client-num.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/client-num.webp b/docs-cn/20-third-party/emqx/client-num.webp new file mode 100644 index 0000000000000000000000000000000000000000..a151b184843607d67b649babb3145bfb3e329cda Binary files /dev/null and b/docs-cn/20-third-party/emqx/client-num.webp differ diff --git a/docs-cn/20-third-party/emqx/create-resource.png b/docs-cn/20-third-party/emqx/create-resource.png deleted file mode 100644 index 58da4c391a3692b9f5fa348d952701eab8bcb746..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/create-resource.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/create-resource.webp b/docs-cn/20-third-party/emqx/create-resource.webp new file mode 100644 index 0000000000000000000000000000000000000000..bf9cccbe49c57f925c5e6b094a4c0d88a64242cb Binary files /dev/null and b/docs-cn/20-third-party/emqx/create-resource.webp differ diff --git a/docs-cn/20-third-party/emqx/create-rule.png b/docs-cn/20-third-party/emqx/create-rule.png deleted file mode 100644 index 73b0b6ee3e6065a142df98abe8c0dbb32b34f89d..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/create-rule.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/create-rule.webp b/docs-cn/20-third-party/emqx/create-rule.webp new file mode 100644 index 0000000000000000000000000000000000000000..13e8fc83d48d2fd9d0a303c707ef3024d3ee5203 Binary files /dev/null and b/docs-cn/20-third-party/emqx/create-rule.webp differ diff --git a/docs-cn/20-third-party/emqx/edit-action.png b/docs-cn/20-third-party/emqx/edit-action.png deleted file mode 100644 index 2a43ee369a439cf11cee23c11f25d6a84b26d7dc..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/edit-action.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/edit-action.webp b/docs-cn/20-third-party/emqx/edit-action.webp new file mode 100644 index 0000000000000000000000000000000000000000..7f6d2e36a82b1917930e5d3969115db9359674a0 Binary files /dev/null and b/docs-cn/20-third-party/emqx/edit-action.webp differ diff --git a/docs-cn/20-third-party/emqx/edit-resource.png b/docs-cn/20-third-party/emqx/edit-resource.png deleted file mode 100644 index 0a0b3560044f4ed6e0a8f040b74085a7e8948b1f..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/edit-resource.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/edit-resource.webp b/docs-cn/20-third-party/emqx/edit-resource.webp new file mode 100644 index 0000000000000000000000000000000000000000..fd5d278fab16bba4e04e1c348d4086dce77abb98 Binary files 
/dev/null and b/docs-cn/20-third-party/emqx/edit-resource.webp differ diff --git a/docs-cn/20-third-party/emqx/login-dashboard.png b/docs-cn/20-third-party/emqx/login-dashboard.png deleted file mode 100644 index d6c5035c98d860faf639ef6611c6719adf80c47b..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/login-dashboard.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/login-dashboard.webp b/docs-cn/20-third-party/emqx/login-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..f84cee668fb6efe1586515ba0dee3ae2f10a5b30 Binary files /dev/null and b/docs-cn/20-third-party/emqx/login-dashboard.webp differ diff --git a/docs-cn/20-third-party/emqx/rule-engine.png b/docs-cn/20-third-party/emqx/rule-engine.png deleted file mode 100644 index db110a837b024c82ee9d22f02dcd3a9d06abdd55..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/rule-engine.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/rule-engine.webp b/docs-cn/20-third-party/emqx/rule-engine.webp new file mode 100644 index 0000000000000000000000000000000000000000..c1711c8cc757cd73fef5cb941a1818756241f7f0 Binary files /dev/null and b/docs-cn/20-third-party/emqx/rule-engine.webp differ diff --git a/docs-cn/20-third-party/emqx/rule-header-key-value.png b/docs-cn/20-third-party/emqx/rule-header-key-value.png deleted file mode 100644 index b81b9a9684aa2f98d00b7ec21e5de411fb450312..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/rule-header-key-value.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/rule-header-key-value.webp b/docs-cn/20-third-party/emqx/rule-header-key-value.webp new file mode 100644 index 0000000000000000000000000000000000000000..e645b3822dffec86f4926e78a57eaffa1e7f4d8d Binary files /dev/null and b/docs-cn/20-third-party/emqx/rule-header-key-value.webp differ diff --git a/docs-cn/20-third-party/emqx/run-mock.png b/docs-cn/20-third-party/emqx/run-mock.png deleted file mode 100644 index 0da25818575247732d5d3a783aa52cf7ce24662c..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/emqx/run-mock.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/run-mock.webp b/docs-cn/20-third-party/emqx/run-mock.webp new file mode 100644 index 0000000000000000000000000000000000000000..ed33f1666d456f1ab40ed6830af4550d4c7ca037 Binary files /dev/null and b/docs-cn/20-third-party/emqx/run-mock.webp differ diff --git a/docs-cn/20-third-party/import_dashboard1.webp b/docs-cn/20-third-party/import_dashboard1.webp new file mode 100644 index 0000000000000000000000000000000000000000..d4fb374ce8bb75c8a0fbdbb9cab5b30eb29ab06d Binary files /dev/null and b/docs-cn/20-third-party/import_dashboard1.webp differ diff --git a/docs-cn/20-third-party/import_dashboard2.webp b/docs-cn/20-third-party/import_dashboard2.webp new file mode 100644 index 0000000000000000000000000000000000000000..9f74dc96be20ab64b5fb555aaccdaa1c1139b35c Binary files /dev/null and b/docs-cn/20-third-party/import_dashboard2.webp differ diff --git a/docs-cn/20-third-party/kafka/Kafka_Connect.png b/docs-cn/20-third-party/kafka/Kafka_Connect.png deleted file mode 100644 index f3dc02ea2a743c6e1ae5531e14f820e3adeca29a..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/kafka/Kafka_Connect.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/Kafka_Connect.webp b/docs-cn/20-third-party/kafka/Kafka_Connect.webp new file mode 100644 index 
0000000000000000000000000000000000000000..8f2000a749b0a2ccec9939abd144c53c44fbe171 Binary files /dev/null and b/docs-cn/20-third-party/kafka/Kafka_Connect.webp differ diff --git a/docs-cn/20-third-party/kafka/confluentPlatform.png b/docs-cn/20-third-party/kafka/confluentPlatform.png deleted file mode 100644 index f8e69f2c7f64d809996b2d1bf1370b67b8030850..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/kafka/confluentPlatform.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/confluentPlatform.webp b/docs-cn/20-third-party/kafka/confluentPlatform.webp new file mode 100644 index 0000000000000000000000000000000000000000..ff03d4e51aaaec85f07ff41ecda0fb9bd6cb2847 Binary files /dev/null and b/docs-cn/20-third-party/kafka/confluentPlatform.webp differ diff --git a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png deleted file mode 100644 index 26d8a866d706180c900d69bb6f57ca2dff0047dd..0000000000000000000000000000000000000000 Binary files a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp new file mode 100644 index 0000000000000000000000000000000000000000..120d534ec132cea2ccef6cf87a3ce680a5ac6e9c Binary files /dev/null and b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ diff --git a/docs-cn/21-tdinternal/01-arch.md b/docs-cn/21-tdinternal/01-arch.md index 6f479efc1ad13e27899e7819d194a2df59ed3ad1..456d4bea910fab46965600639cc3dd634daa15b9 100644 --- a/docs-cn/21-tdinternal/01-arch.md +++ b/docs-cn/21-tdinternal/01-arch.md @@ -11,7 +11,7 @@ TDengine 的设计是基于单个硬件、软件系统不可靠,基于任何 TDengine 分布式架构的逻辑结构图如下: -![TDengine架构示意图](/img/architecture/structure.png) +![TDengine架构示意图](./structure.webp)
图 1 TDengine架构示意图
@@ -63,7 +63,7 @@ TDengine 分布式架构的逻辑结构图如下: 为解释 vnode、mnode、taosc 和应用之间的关系以及各自扮演的角色,下面对写入数据这个典型操作的流程进行剖析。 -![TDengine典型的操作流程](/img/architecture/message.png) +![TDengine典型的操作流程](./message.webp)
图 2 TDengine 典型的操作流程
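The operation traced in Figure 2 starts with an ordinary write statement issued by the application. For reference, a minimal insert against the smart-meter table used elsewhere in these docs might look as follows (table name and values are purely illustrative):

```sql
-- The application submits the statement to taosc, which looks up the owning
-- vgroup from its cached metadata and forwards the write to that vgroup's master vnode.
INSERT INTO d1001 VALUES (NOW, 10.2, 220, 0.23);
```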
@@ -135,7 +135,7 @@ TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区 Master Vnode 遵循下面的写入流程: -![TDengine Master写入流程](/img/architecture/write_master.png) +![TDengine Master写入流程](./write_master.webp)
图 3 TDengine Master 写入流程
@@ -150,7 +150,7 @@ Master Vnode 遵循下面的写入流程: 对于 slave vnode,写入流程是: -![TDengine Slave 写入流程](/img/architecture/write_slave.png) +![TDengine Slave 写入流程](./write_slave.webp)
图 4 TDengine Slave 写入流程
@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14 TDengine 对每个数据采集点单独建表,但在实际应用中经常需要对不同的采集点数据进行聚合。为高效的进行聚合操作,TDengine 引入超级表(STable)的概念。超级表用来代表一特定类型的数据采集点,它是包含多张表的表集合,集合里每张表的模式(schema)完全一致,但每张表都带有自己的静态标签,标签可以有多个,可以随时增加、删除和修改。应用可通过指定标签的过滤条件,对一个 STable 下的全部或部分表进行聚合或统计操作,这样大大简化应用的开发。其具体流程如下图所示: -![多表聚合查询原理图](/img/architecture/multi_tables.png) +![多表聚合查询原理图](./multi_tables.webp)
图 5 多表聚合查询原理图
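The aggregation path in Figure 5 is what a tag-filtered STable query exercises: the subtables matching the tag condition are located first, then their time-series data is scanned and merged. A minimal sketch, assuming the documentation's usual smart-meter super table `meters` with a `groupid` tag:

```sql
-- Aggregate across every subtable of the super table whose tag matches the filter,
-- bucketing the result into one-hour time windows.
SELECT AVG(current), MAX(voltage)
FROM meters
WHERE groupid = 2 AND ts >= '2017-07-14 00:00:00'
INTERVAL(1h);
```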
diff --git a/docs-cn/21-tdinternal/dnode.webp b/docs-cn/21-tdinternal/dnode.webp new file mode 100644 index 0000000000000000000000000000000000000000..a56c7e4594df00a721cb48381d68ca3bc813cdc8 Binary files /dev/null and b/docs-cn/21-tdinternal/dnode.webp differ diff --git a/docs-cn/21-tdinternal/message.webp b/docs-cn/21-tdinternal/message.webp new file mode 100644 index 0000000000000000000000000000000000000000..a2a42abff3d6e932b41a3abe9feae4a5cc13c9e5 Binary files /dev/null and b/docs-cn/21-tdinternal/message.webp differ diff --git a/docs-cn/21-tdinternal/modules.webp b/docs-cn/21-tdinternal/modules.webp new file mode 100644 index 0000000000000000000000000000000000000000..718a6abccdbe40d4a0df5e3812fe0ab943a7c523 Binary files /dev/null and b/docs-cn/21-tdinternal/modules.webp differ diff --git a/docs-cn/21-tdinternal/multi_tables.webp b/docs-cn/21-tdinternal/multi_tables.webp new file mode 100644 index 0000000000000000000000000000000000000000..8f649e34a3a62d1b11b4403b2e743ff6b5e47be2 Binary files /dev/null and b/docs-cn/21-tdinternal/multi_tables.webp differ diff --git a/docs-cn/21-tdinternal/replica-forward.webp b/docs-cn/21-tdinternal/replica-forward.webp new file mode 100644 index 0000000000000000000000000000000000000000..512efd4eba8f23ad0f8607eaaf5525f51ecdcf0e Binary files /dev/null and b/docs-cn/21-tdinternal/replica-forward.webp differ diff --git a/docs-cn/21-tdinternal/replica-master.webp b/docs-cn/21-tdinternal/replica-master.webp new file mode 100644 index 0000000000000000000000000000000000000000..57030a11f563af2689dbcfd206183f410b121aee Binary files /dev/null and b/docs-cn/21-tdinternal/replica-master.webp differ diff --git a/docs-cn/21-tdinternal/replica-restore.webp b/docs-cn/21-tdinternal/replica-restore.webp new file mode 100644 index 0000000000000000000000000000000000000000..f282c2d4d23f517e3ef08e906cea7e9c5edc0b2a Binary files /dev/null and b/docs-cn/21-tdinternal/replica-restore.webp differ diff --git a/docs-cn/21-tdinternal/structure.webp b/docs-cn/21-tdinternal/structure.webp new file mode 100644 index 0000000000000000000000000000000000000000..b77a42c074b15302b5c3ab889fb550a46dd549b3 Binary files /dev/null and b/docs-cn/21-tdinternal/structure.webp differ diff --git a/docs-cn/21-tdinternal/vnode.webp b/docs-cn/21-tdinternal/vnode.webp new file mode 100644 index 0000000000000000000000000000000000000000..fae3104c89c542c26790b509d12ad56661082c32 Binary files /dev/null and b/docs-cn/21-tdinternal/vnode.webp differ diff --git a/docs-cn/21-tdinternal/write_master.webp b/docs-cn/21-tdinternal/write_master.webp new file mode 100644 index 0000000000000000000000000000000000000000..9624036ed3d46ed60924ead9ce5c61acee0f4652 Binary files /dev/null and b/docs-cn/21-tdinternal/write_master.webp differ diff --git a/docs-cn/21-tdinternal/write_slave.webp b/docs-cn/21-tdinternal/write_slave.webp new file mode 100644 index 0000000000000000000000000000000000000000..7c45dec11b00e6a738de458f9e1bedacfad75a96 Binary files /dev/null and b/docs-cn/21-tdinternal/write_slave.webp differ diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md index f63a6701eed2b4c5b98f577d5b2867ae6dada387..5bfc94c53410f6142b3bc24f696334c334cde933 100644 --- a/docs-cn/25-application/01-telegraf.md +++ b/docs-cn/25-application/01-telegraf.md @@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如 本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + Telegraf + Grafana 的 IT 运维系统。架构如下图: -![IT-DevOps-Solutions-Telegraf.png](/img/IT-DevOps-Solutions-Telegraf.png) 
+![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp) ## 安装步骤 @@ -75,7 +75,7 @@ sudo systemctl start telegraf 点击左侧齿轮图标并选择 `Plugins`,应该可以找到 TDengine data source 插件图标。 点击左侧加号图标并选择 `Import`,从 `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` 下载 dashboard JSON 文件后导入。之后可以看到如下界面的仪表盘: -![IT-DevOps-Solutions-telegraf-dashboard.png](/img/IT-DevOps-Solutions-telegraf-dashboard.png) +![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp) ## 总结 diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md index 5e6bc6577b2f4c8564e4533ced745d0b214ec748..5966f2d6544c78adb806d51e8a4157ba7dc420e9 100644 --- a/docs-cn/25-application/02-collectd.md +++ b/docs-cn/25-application/02-collectd.md @@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如 本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + collectd / statsD + Grafana 的 IT 运维系统。架构如下图: -![IT-DevOps-Solutions-Collectd-StatsD.png](/img/IT-DevOps-Solutions-Collectd-StatsD.png) +![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp) ## 安装步骤 @@ -81,12 +81,12 @@ repeater 部分添加 { host:'', port: Figure 1. TDengine Technical Ecosystem -On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance. +On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance. -## Suited Scenarios +## Typical Use Cases -As a high-performance, scalable and SQL supported time-series database, TDengine's typical application scenarios include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM, etc. This section makes a more detailed analysis of the applicable scenarios. +As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios. ### Characteristics and Requirements of Data Sources diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md index abc553ab6d90042cb2389ba0b71d3b5395dcebfd..d714bace1d978017910f3160df98a0cc28189f5d 100644 --- a/docs-en/04-concept/index.md +++ b/docs-en/04-concept/index.md @@ -2,7 +2,7 @@ title: Concepts --- -In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series data scenario. 
Assuming that each smart meter collects three metrics of current, voltage, and phase, there are multiple smart meters, and each meter has static attributes like location and group ID, the collected data will be similar to the following table: +In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; and 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
@@ -29,7 +29,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -38,7 +38,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -47,7 +47,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -56,7 +56,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -65,7 +65,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -74,7 +74,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -83,7 +83,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -92,7 +92,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -112,7 +112,7 @@ Label/Tag refers to the static properties of sensors, equipment or other types o ## Data Collection Point -Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipments, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car, so in this example the car would have three data collection points. +Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. ## Table @@ -122,10 +122,10 @@ To make full use of time-series data characteristics, TDengine adopts a strategy 1. Since the metric data from different DCP are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved. 2. For a DCP, the metric data generated by DCP is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed. -3. The metric data from a DCP is continuously stored in block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude. -4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, this allows for a higher compression rate. +3. 
The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude. +4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate. -If the metric data of multiple DCPs are traditionally written into a single table, due to the uncontrollable network delay, the timing of the data from different DCPs arriving at the server cannot be guaranteed, the writing operation must be protected by locks, and the metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest extent.** +If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.** TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used. @@ -139,7 +139,7 @@ In the design of TDengine, **a table is used to represent a specific data collec ## Subtable -When creating a table for a specific data collection point, the user can use a STable as a template and specifies the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is: +When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is: 1. Subtable is a table, all SQL commands applied on a regular table can be applied on subtable. 2. Subtable is a table with extensions, it has static tags (labels), and these tags can be added, deleted, and updated after it is created. But a regular table does not have tags. 3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable. @@ -151,7 +151,7 @@ The relationship between a STable and the subtables created based on this STable 2. The schema of metrics or labels cannot be adjusted through subtables, and it can only be changed via STable. Changes to the schema of a STable takes effect immediately for all associated subtables. 3. STable defines only one template and does not store any data or label information by itself. 
Therefore, data cannot be written to a STable, only to subtables. -Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs. +Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which reduces the number of data sets to be scanned which in turn greatly improves the performance of data aggregation across multiple DCPs. In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. @@ -167,4 +167,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet. -TDengine does not recommend using an IP address to access the cluster, FQDN is recommended for cluster management. +TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management. diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md index 39b2d02eca3c15aebd5715ee64e455781c8236e5..596d42d32f2fcba51a4326cd96220b4055d11704 100644 --- a/docs-en/05-get-started/index.md +++ b/docs-en/05-get-started/index.md @@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx"; ## Quick Install -The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the connectors of multiple languages, [RESTful interface](/reference/rest-api) is also provided by [taosAdapter](/reference/taosadapter) in TDengine. Prior to version 2.4.0.0, however, there is no taosAdapter, the RESTful interface is provided by the built-in HTTP service of taosd. +The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. 
In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd. TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future. diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md index 2e886cb8922c0731d08d62696854e13a6893bc9a..21b2149f4451e8e5d388a41f1a0a06b6adc00a96 100644 --- a/docs-en/07-develop/01-connect/index.md +++ b/docs-en/07-develop/01-connect/index.md @@ -1,7 +1,7 @@ --- sidebar_label: Connection title: Connect to TDengine -description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors." +description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors." --- import Tabs from "@theme/Tabs"; @@ -19,7 +19,7 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx"; import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx"; -Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/) +Any application programs running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/) ## Establish Connection @@ -31,12 +31,12 @@ There are two ways for a connector to establish connections to TDengine: Key differences: 1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc. -2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newere versions. +2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions. 3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade. 
## Install Client Driver taosc -If you are choosing to use native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use same version of taosc as the server. +If you are choosing to use the native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server. ### Install diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx index 97e32a17ff325a9f67ac0a732be3dd72ccca8888..6e7ec643078b500a33a06c7b60c99ed5c601c694 100644 --- a/docs-en/07-develop/05-continuous-query.mdx +++ b/docs-en/07-develop/05-continuous-query.mdx @@ -4,15 +4,15 @@ description: "Continuous query is a query that's executed automatically accordin title: "Continuous Query" --- -Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively. +Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window. It is essentially a simplified form of time-driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively. -Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine. +Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine. There are some differences between continuous query in TDengine and time window computation in stream computing: - The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes.
For example, if the time window is 1 day, then the result will only be generated at 23:59:59. -- If a historical data row is written in to a time widow for which the computation has been finished, the computation will not be performed again and the result will not be pushed to client again either. If the result has been written into TDengine, there will be no update for the result. -- In continuous query, if the result is pushed to client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server either. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous. +- If a historical data row is written in to a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated. +- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous. ## Syntax @@ -30,7 +30,7 @@ SLIDING: The time step for which the time window moves forward each time ## How to Use -In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement. +In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below. ```sql create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int); @@ -38,7 +38,7 @@ create table D1001 using meters tags ("Beijing.Chaoyang", 2); create table D1002 using meters tags ("Beijing.Haidian", 2); ``` -The average voltage for each time window of one minute with 30 seconds as the length of moving forward can be retrieved using below SQL statement. +The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds. ```sql select avg(voltage) from meters interval(1m) sliding(30s); @@ -50,13 +50,13 @@ Whenever the above SQL statement is executed, all the existing data will be comp select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s); ``` -Another easier way for same purpose is prepend `create table {tableName} as` before the `select`. +An easier way to achieve this is to prepend `create table {tableName} as` before the `select`. ```sql create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s); ``` -A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minutes, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example: +A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minute, i.e. 
the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example: ```sql taos> select * from avg_vol; @@ -68,16 +68,16 @@ taos> select * from avg_vol; 2020-07-29 13:39:00.000 | 223.0800000 | ``` -Please be noted that the minimum allowed time window is 10 milliseconds, and no upper limit. +Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit. -Besides, it's allowed to specify the start and end time of continuous query. If the start time is not specified, the timestamp of the first original row will be considered as the start time; if the end time is not specified, the continuous will be performed infinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in below SQL statement will be started from now and terminated one hour later. +It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later. ```sql create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s); ``` -`now` in above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. Besides, to avoid the trouble caused by the delay of original data as much as possible, the actual computation in continuous query is also started with a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result can only be available a little time later, normally within one minute, after the time window closes. +`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To avoid the trouble caused by a delay in receiving data as much as possible, the actual computation in a continuous query is started after a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the results are available a short time, usually within one minute, after the time window closes. ## How to Manage -`show streams` command can be used in TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query. +`show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query. diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx index 56f4ed83d8ebc6f21afbdd2eca2e01f11b313883..964bb2fd8d8ca284e1c79013868a5c44fbeb1aed 100644 --- a/docs-en/07-develop/06-subscribe.mdx +++ b/docs-en/07-develop/06-subscribe.mdx @@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx"; ## Introduction -According to the time series nature of the data, data inserting in TDengine is similar to data publishing in message queues, they both can be considered as a new data record with timestamp is inserted into the system. Data is stored in ascending order of timestamp inside TDengine, so essentially each table in TDengine can be considered as a message queue.
+Due to the nature of time series data, data inserting in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue. -Lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can used `select` statement to subscribe the data from one or more tables. The subscription and and state maintenance is performed on the client side, the client programs polls the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side. +A lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side, the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side. There are 3 major APIs related to subscription provided in the TDengine client driver. @@ -28,9 +28,9 @@ taos_consume taos_unsubscribe ``` -For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c). +For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c). -If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways: +If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways: The first way is to query on each sub table and record the last timestamp matching the criteria, then after some time query on the data later than recorded timestamp and repeat this process. The SQL statements for this way are as below. @@ -40,7 +40,7 @@ select * from D1002 where ts > {last_timestamp2} and current > 10; ... ``` -The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number. +The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number. 
A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below: @@ -48,7 +48,7 @@ A better way is to query on the STable, only one `select` is enough regardless o select * from meters where ts > {last_timestamp} and current > 10; ``` -However, how to choose `last_timestamp` becomes a new problem if using this way. Firstly, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Secondly, the time when the data from different meters may arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fasted" meters is used as `last_timestamp`, some data from other meters may be missed. +However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed. All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine. @@ -75,19 +75,19 @@ The parameter `sql` is a `select` statement in which `where` clause can be used select * from meters where current > 10; ``` -Please be noted that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added: +Please note that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added: ```sql select * from meters where ts > now - 1d and current > 10; ``` -The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on client side. +The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on the client side. -If the subscription named as `topic` doesn't exist, parameter `restart` would be ignored. If the subscription named as `topic` has been created before by the client program which then exited, when the client program is restarted to use this `topic`, parameter `restart` is used to determine retrieving data from beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again. +If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. 
If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from the beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again. -The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` would be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function. +The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function. -The last second parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode. +The second to last parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode. After a subscription is created, its data can be consumed and processed, below is the sample code of how to consume data in sync mode, in the else part if `if (async)`. @@ -149,22 +149,22 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) { taos_unsubscribe(tsub, keep); ``` -The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value in when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with same name as `topic` for each subscription, the subscription will be restarted from beginning if the corresponding progress file is removed. +The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, then the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription, the subscription will be restarted from the beginning if the corresponding progress file is removed. Now let's see the effect of the above sample code, assuming below prerequisites have been done.
- The sample code has been downloaded to local system - TDengine has been installed and launched properly on same system -- The database, STable, sub tables required in the sample code have been ready +- The database, STable, and subtables required in the sample code are ready -It's ready to launch below command in the directory where the sample code resides to compile and start the program. +Launch the command below in the directory where the sample code resides to compile and start the program. ```bash make ./subscribe -sql='select * from meters where current > 10;' ``` -After the program is started, open another terminal and launch TDengine CLI `taos`, then use below SQL commands to insert a row whose current is 12A into table **D1001**. +After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**. ```sql use test; @@ -232,7 +232,7 @@ Query OK, 5 row(s) in set (0.004896s) ### Run the Examples -The example programs firstly consume all historical data matching the criteria. +The example programs first consume all historical data matching the criteria. ```bash ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2 diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md index 13db6c363802abed290cfc4d4466d40e48852f3d..c99612ecfbf192d8ffb4e2edd33665ba9f994797 100644 --- a/docs-en/07-develop/07-cache.md +++ b/docs-en/07-develop/07-cache.md @@ -10,9 +10,9 @@ Caching the latest data provides the capability of retrieving data in millisecon The memory space used by TDengine cache is fixed in size, according to the configuration based on application requirement and system resources. Independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine, there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode. -Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows. +Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records to be efficient. -`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. +`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. 
```sql select last_row(voltage) from meters where location='Beijing.Chaoyang'; diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md index 122dd0d870ac42b62c4f9e694cf79eec3ca122a5..e3f55f290753f79ac1708337082ce90bb050b21f 100644 --- a/docs-en/07-develop/index.md +++ b/docs-en/07-develop/index.md @@ -2,15 +2,15 @@ title: Developer Guide --- -To develop an application using TDengine to process time-series data, we recommend taking the following steps: +To develop an application to process time-series data using TDengine, we recommend taking the following steps: -1. Choose the way for connection to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language. -2. Design the data model based on your own application scenarios. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" concept; learn about static labels, collected metrics, and subtables. According to the data characteristics, you may decide to create one or more databases, and you should design the STable schema to fit your data. -3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually. -4. Based on business requirements, find out what SQL query statements need to be written. +1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language. +2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data. +3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually. +4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL. 5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink. 6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka. -7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately. +7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately. 8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem. 
This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/). diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md index 8c921797ec038fb8afbf382a980b8f7a197fa898..844a026ff6feb840d62506d145131e5b05ac1ffd 100644 --- a/docs-en/10-cluster/01-deploy.md +++ b/docs-en/10-cluster/01-deploy.md @@ -6,15 +6,15 @@ title: Deployment ### Step 1 -The FQDN of all hosts need to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be guaranteed that each FQDN can be accessed (by ping, for example) from any other hosts. +The FQDN of all hosts needs to be set up properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other hosts. -On each host command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other. +On each host the command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other. :::note -- The host where the client program runs also needs to configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster. +- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster. -- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if firewall is enabled. +- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP ports 6030~6042 need to be open if a firewall is enabled. ::: @@ -28,7 +28,7 @@ Now it's time to install TDengine on all hosts without starting `taosd`, the ver ### Step 4 -Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine need to be configured properly. Please be noted that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
+Now each physical node (referred to as `dnode` hereinafter, an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host, multiple TDengine nodes can be started on a single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows. ```c // firstEp is the end point to connect to when any dnode starts @@ -44,9 +44,9 @@ serverPort 6030 #arbitrator ha.taosdata.com:6042 ``` -`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please also make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting. +`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting. -For all the dnodes in a TDengine cluster, below parameters must be configured as exactly same, any node whose configuration is different from dnodes already in the cluster can't join the cluster. +For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same, any node whose configuration is different from dnodes already in the cluster can't join the cluster. | **#** | **Parameter** | **Definition** | | ----- | ------------------ | --------------------------------------------------------------------------------- | @@ -61,7 +61,7 @@ For all the dnodes in a TDengine cluster, below parameters must be configured as | 9 | maxVgroupsPerDb | Maximum number vgroups that can be used by each DB | :::note -Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must be configured as same too for each dnode. +Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode. ::: @@ -92,7 +92,7 @@ From the above output, it is shown that the end point of the started dnode is "h There are a few steps necessary to add other dnodes in the cluster. -Firstly, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example. +First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please make sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be the same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example. Then, on the first dnode, use TDengine CLI `taos` to execute below command to add the end point of the dnode in the cluster.
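The add-dnode command itself falls outside the hunks shown here; as a minimal illustration (the second dnode's host name h2.taosdata.com is an assumed example, following the h1.taosdata.com naming used above), it has the form:

```sql
CREATE DNODE "h2.taosdata.com:6030";
```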
In the command "fqdn:port" should be quoted using double quotes. @@ -109,6 +109,6 @@ SHOW DNODES; If the status of the newly added dnode is offline, please check: - Whether the `taosd` process is running properly or not -- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not +- In the log file `taosdlog.0` to see whether the fqdn and port are correct The above process can be repeated to add more dnodes in the cluster. diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md index 3fcd68b29ce08519af9a0cde11d5361c6b4cd312..9d717be236e3e89114f58fc492223e3ad94fc9ea 100644 --- a/docs-en/10-cluster/02-cluster-mgmt.md +++ b/docs-en/10-cluster/02-cluster-mgmt.md @@ -3,7 +3,7 @@ sidebar_label: Operation title: Manage DNODEs --- -It has been introduced that how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.\ +The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually. :::note All the commands to be introduced in this chapter need to be run through TDengine CLI, sometimes it's necessary to use root privilege. @@ -12,7 +12,7 @@ All the commands to be introduced in this chapter need to be run through TDengin ## Show DNODEs -below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode. +The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode. ```sql SHOW DNODES; @@ -39,7 +39,7 @@ USE SOME_DATABASE; SHOW VGROUPS; ``` -The example output is as below: +The example output is below: ``` taos> show dnodes; @@ -87,7 +87,7 @@ taos> show dnodes; Query OK, 2 row(s) in set (0.001017s) ``` -It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get below example output, from which it can be seen that two dnodes are both in "ready" status. +It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get the example output below, from which it can be seen that two dnodes are both in "ready" status. ``` taos> show dnodes; @@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s) ## Drop DNODE -Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`. +Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`. 
```sql DROP DNODE "fqdn:port"; @@ -112,7 +112,7 @@ or DROP DNODE dnodeId; ``` -The example output is as below: +The example output is below: ``` taos> show dnodes; @@ -139,7 +139,7 @@ In the above example, when `show dnodes` is executed the first time, two dnodes - Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to other place. - Please be noted that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped. - Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode. -- dnodeID is allocated automatically and can't be interfered manually. dnodeID is generated in ascending order without duplication. +- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication. ::: @@ -155,7 +155,7 @@ ALTER DNODE BALANCE "VNODE:-DNODE:"; In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, `dest-dnodeId` specifies the target dnode. vgId (vgroup ID) can be shown by `SHOW VGROUPS `. -Firstly `show vgroups` is executed to show the vgroup distribution. +First `show vgroups` is executed to show the vgroup distribution. ``` taos> show vgroups; @@ -172,7 +172,7 @@ taos> show vgroups; Query OK, 8 row(s) in set (0.001314s) ``` -It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute below command in `taos` +It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos` ``` taos> alter dnode 3 balance "vnode:18-dnode:1"; @@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno :::note - Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0. -- Only vnode in normal state, i.e. master or slave, can be moved. vnode can't moved when its in status offline, unsynced or syncing. +- Only a vnode in normal state, i.e. master or slave, can be moved. vnode can't be moved when its in status offline, unsynced or syncing. - Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk. ::: diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md index 53c95be9e995a728b2b4053e4f204df58271716e..6e0c386abe4100ec59f60c1c90b3305e0d187c79 100644 --- a/docs-en/10-cluster/03-ha-and-lb.md +++ b/docs-en/10-cluster/03-ha-and-lb.md @@ -7,19 +7,19 @@ title: High Availability and Load Balancing High availability of vnode and mnode can be achieved through replicas in TDengine. -The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. 
The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas. +The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas. ```sql CREATE DATABASE demo replica 3; ``` -The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each group is determined by the number of replicas set for the DB. The vnodes in each vgroups store exactly same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in online state, the vgroup is able to serve data access. Otherwise the vgroup can't handle any data access for reading or inserting data. +The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data. There may be data for multiple DBs in a dnode. Once a dnode is down, multiple DBs may be affected. However, it's hard to say the cluster is guaranteed to work properly as long as over half of dnodes are online because vnodes are introduced and there may be complex mapping between vnodes and dnodes. ## High Availability of Mnode -Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in synchronous way. +Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in a synchronous way. There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. Command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster. 
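As a quick illustration of that command (a sketch; the exact output columns vary with the TDengine version):

```sql
SHOW MNODES;
```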
@@ -32,19 +32,19 @@ The end point and role/status (master, slave, unsynced, or offline) of all mnode For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher. :::note -If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. How to configure for them are different and have been described. +If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. ::: ## Load Balance -Load balance will be triggered in 3 cades without manual intervention. +Load balance will be triggered in 3 cases without manual intervention. - When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically. - When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically. - When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes. -- :::tip - Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled. +:::tip +Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled. ::: @@ -54,7 +54,7 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t - The dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data syncup is finished. -- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. System alert will be generated and automatic load balancing will be triggered too if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not be joined in the cluster automatically, it can only be joined manually by the system operator. +- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join in the cluster automatically, it can only be joined manually by the system operator. :::note If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted after all the vnodes or mnodes in the group become online and can exchange status, then the vgroup (or mnode group) is able to provide service. @@ -63,15 +63,15 @@ If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsyn ## Arbitrator -If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work master node can't be voted. Similar case is also applicable to mnode if the number of mnodes is set to an even number like 2. +If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work a master node can't be voted. 
A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2. -To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. With Arbitrator, any vgroup or mnode group can be considered as having number of member nodes and master node can be selected. +To resolve this problem, a new arbitrator component named `tarbitrator`, short for TDengine Arbitrator, was introduced. The Arbitrator simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnodes or mnodes, including the Arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally. -Normally, it's suggested to configure replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability. +Normally, it's suggested to configure the replica number of each DB or the system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower storage cost and high availability. The Arbitrator component is installed with the server package. For details about how to install it, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service. -In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number. +In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be set to the end point of the `tarbitrator` process. The Arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number. The Arbitrator can be shown by executing a command in the TDengine CLI `taos`; its role is shown as "arb". diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md index a19a54e01d5a6429e95958c2544072961b0cb66a..5a45a2ce7b08c67322265cf1bbd54ef66cbfc027 100644 --- a/docs-en/10-cluster/index.md +++ b/docs-en/10-cluster/index.md @@ -3,7 +3,7 @@ title: Cluster keywords: ["cluster", "high availability", "load balance", "scale out"] --- -TDengine has a native distributed design and provides the ability to scale out. A few of nodes can form a TDengine cluster. If you need to get higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source. +TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster.
If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source. This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing. diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md index 5cc3fa8cb43749fd40b808699f82a8761525cc6a..2044ff4f61d9da6bdc1c07b5361b89050193aa96 100644 --- a/docs-en/12-taos-sql/08-interval.md +++ b/docs-en/12-taos-sql/08-interval.md @@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset The `INTERVAL` clause is used to generate time windows of the same time interval, and `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window. -![Time Window](/img/sql/timewindow-1.png) +![Time Window](./timewindow-1.webp) `INTERVAL` and `SLIDING` should be used with aggregate functions and selection functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`. @@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV In case of using an integer, bool, or string to represent the device status at a moment, the continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now. -![Status Window](/img/sql/timewindow-3.png) +![Status Window](./timewindow-3.webp) `STATE_WINDOW` is used to specify the column based on which the status window is defined, for example: @@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val); The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
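As a concrete sketch of the session window described above, the query below reuses the `temp_tb_1` table from the earlier examples and sets the tolerance to 12 seconds to match the figure; the table name is only illustrative.

```sql
-- Rows whose adjacent timestamps differ by no more than 12 seconds
-- fall into the same session window.
SELECT COUNT(*), FIRST(ts)
FROM temp_tb_1
SESSION(ts, 12s);
```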
-![Session Window](/img/sql/timewindow-2.png) +![Session Window](./timewindow-2.webp) If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now. diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md index 611f2bf75eb2a234ae139ce65f2e78d356483bb7..32850e8c4b0a816cae94563079c79b94c8611bd5 100644 --- a/docs-en/12-taos-sql/index.md +++ b/docs-en/12-taos-sql/index.md @@ -3,9 +3,9 @@ title: TDengine SQL description: "The syntax supported by TDengine SQL " --- -This section explains the syntax about operating database, table, STable, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL. +This section explains the syntax for operating databases, tables, and STables, inserting data, selecting data, and functions, as well as some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL. -TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please be noted that TDengine SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, corresponding statements are not provided in TDengine SQL. +TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL. TDengine SQL doesn't support abbreviations for keywords; for example, `DESCRIBE` can't be abbreviated as `DESC`. @@ -16,7 +16,7 @@ Syntax Specifications used in this chapter: - | means one of a few options, excluding | itself. - … means the item prior to it can be repeated multiple times. -To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data: current, voltage, phase. The data model is as below: +To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data measurements: current, voltage, phase.
The data model is shown below: ```sql taos> DESCRIBE meters; diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp new file mode 100644 index 0000000000000000000000000000000000000000..82747558e96df752a0010d85be79a4af07e4a1df Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-1.webp differ diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp new file mode 100644 index 0000000000000000000000000000000000000000..8f1314ae34f7f5c5cca1d3cb80455f555fad38c3 Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-2.webp differ diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp new file mode 100644 index 0000000000000000000000000000000000000000..5bd16e68e7fd5da6805551e9765975277cd5d4d9 Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-3.webp differ diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx index 6be914bdb4b701f478b6b8b27366d6ebb5a39ec8..38eba73d0983951901a26eee3962e89007f6d30a 100644 --- a/docs-en/14-reference/03-connector/03-connector.mdx +++ b/docs-en/14-reference/03-connector/03-connector.mdx @@ -4,7 +4,7 @@ title: Connector TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector. -![image-connector](/img/connector.png) +![image-connector](./connector.webp) ## Supported platforms diff --git a/docs-en/14-reference/03-connector/connector.webp b/docs-en/14-reference/03-connector/connector.webp new file mode 100644 index 0000000000000000000000000000000000000000..040cf5c26c726b345b2e0e5363dd3c677bec61be Binary files /dev/null and b/docs-en/14-reference/03-connector/connector.webp differ diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx index 328907c4d781bdea8d30623e01d431cedbf8d0fa..0a1960be51145ebcab10b56243413549135f1c03 100644 --- a/docs-en/14-reference/03-connector/java.mdx +++ b/docs-en/14-reference/03-connector/java.mdx @@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem'; 'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). REST connections have slight differences in the set of implemented features compared to native connections.
-![tdengine-connector](tdengine-jdbc-connector.png) +![tdengine-connector](tdengine-jdbc-connector.webp) The preceding diagram shows two ways for a Java app to access TDengine via connector: diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png deleted file mode 100644 index 7541aaf98ad73cbddac44c34bd775b32ab3a735e..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp new file mode 100644 index 0000000000000000000000000000000000000000..37cf6d90a528e320d5cb7d6da502d3a5b10aa4ee Binary files /dev/null and b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp differ diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md index 85fd2923b02189d6f3cfd73efff784d12c3bb69a..de42e8a883d8b195b9d342f761e39458e557dfac 100644 --- a/docs-en/14-reference/04-taosadapter.md +++ b/docs-en/14-reference/04-taosadapter.md @@ -24,7 +24,7 @@ taosAdapter provides the following features. ## taosAdapter architecture diagram -![taosAdapter Architecture](taosAdapter-architecture.png) +![taosAdapter Architecture](taosAdapter-architecture.webp) ## taosAdapter Deployment Method diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png deleted file mode 100644 index 4708f836feb21980f2db7fed4a55f799b23a6ec1..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp new file mode 100644 index 0000000000000000000000000000000000000000..a78e18028a94c2f6a783b08d992a25c791527407 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png deleted file mode 100644 index f2684e6eed70e8f56697eae42b495d6bd62815e8..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp new file mode 100644 index 0000000000000000000000000000000000000000..b152418d0902b8ebdf62ebce6705c10dd5ab4fbf Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png deleted file mode 100644 index 74686691e4106b8646c3deee1e0ce73b2f53f1ea..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp new file mode 100644 index 0000000000000000000000000000000000000000..f58f48b7f17375cb8e62e7c0126ca3aea56a13f6 Binary files /dev/null and 
b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png deleted file mode 100644 index 27964215567f9f961c0aeaf1b863188437008fb7..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp new file mode 100644 index 0000000000000000000000000000000000000000..00afcce013602dce0da17bfd033f65aaa8e43bb7 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png deleted file mode 100644 index b0d3abbf21ec4d4bd7bfb95fcc03a5f936b22665..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp new file mode 100644 index 0000000000000000000000000000000000000000..567e5694f9d7a035a3eb354493d3df8ed64db251 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png deleted file mode 100644 index 2b54cbeb83bcff12f20461a4f57f882e2073f231..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp new file mode 100644 index 0000000000000000000000000000000000000000..cc8a912810f35e53a6e5fa96ea0c81e334ffc0df Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png deleted file mode 100644 index eb3848657f13900c856ac595c20766465157e9c4..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp new file mode 100644 index 0000000000000000000000000000000000000000..651b716bc511ba2ed5db5e6fc6b0591ef150cbf6 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png deleted file mode 100644 index d94b2e02ac9855bb3d2f77d8902e068839db364f..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp new 
file mode 100644 index 0000000000000000000000000000000000000000..8666193f59497180574fd2786266e5baabbe9761 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png deleted file mode 100644 index 654df2934597ce600a1dc2dcd0cab7e29de7076d..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp new file mode 100644 index 0000000000000000000000000000000000000000..7f38a76a2b899ffebc7aecd39c8ec4fd0b2da778 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png deleted file mode 100644 index e3afa22c0326d70567ec4529c83101c746daac87..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp new file mode 100644 index 0000000000000000000000000000000000000000..3d7fe932a23f3720e76e4217a7b5d1868d81fac8 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png deleted file mode 100644 index 198bf37141c86a66cdd91b47a331bcdeb83daaf8..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp new file mode 100644 index 0000000000000000000000000000000000000000..517123954efe4b94485fdab2e07be0d765f5daa2 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png deleted file mode 100644 index ace3aa3c2f8f14fabdac54bc25ae2d9449445b69..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp new file mode 100644 index 0000000000000000000000000000000000000000..6666296ac16e7a0c0ab3db23f0517f2089d09035 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png deleted file mode 100644 index 7082e49f6beb8690c36f98a3f4ff2befdb8fd014..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ diff --git 
a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp new file mode 100644 index 0000000000000000000000000000000000000000..6f74bc3a47a32de661ef25f787a947d823715810 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png deleted file mode 100644 index ffd4911b53854c42dbf0ff11838cb604fa694138..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp new file mode 100644 index 0000000000000000000000000000000000000000..acda3b24a6263815ac8b658709d2172300ca3b00 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png deleted file mode 100644 index 802c7366f921301bd7fbc62458e56b2d1eaf195c..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp new file mode 100644 index 0000000000000000000000000000000000000000..903e236e2a776dfef7f85c014662e8913a9033a5 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png deleted file mode 100644 index 019ec921b6f808671f4f864ddf3380159d4a0dcc..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp new file mode 100644 index 0000000000000000000000000000000000000000..14fcfe9d183e8804199708ae4492d0904a7c9d62 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png deleted file mode 100644 index 3963abb4ea8ae0e6f5557466f7a5b746c2d2ea3c..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp new file mode 100644 index 0000000000000000000000000000000000000000..00b50cc619b030d1fb2be3a367183901d5c833e8 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png deleted file mode 
100644 index 837100464b35a5cafac474723aef603f91945ebc..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp new file mode 100644 index 0000000000000000000000000000000000000000..06d0ff6ed50091a6340508bc5b2b3f78b65dcb18 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png deleted file mode 100644 index 98223df25499effac343ff5723544a3c289f18fa..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp new file mode 100644 index 0000000000000000000000000000000000000000..e2ec052b91e439a817f6e88b8afd0fcb4dcb7ef8 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png deleted file mode 100644 index 07aba348f02b4fb8ef68e79664920c119b842d4c..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp new file mode 100644 index 0000000000000000000000000000000000000000..665c035f9755b9472aee33cd61d3ab52831194b5 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png deleted file mode 100644 index 7e28939ead8bf3b6e2b4330e4f9b59c2e39b5c1c..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..7dc42eeba919fee7b438a453c00bb9fd0ac2d274 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png deleted file mode 100644 index 981f640b14d18aa6f0682768d8405a232df500f6..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp new file mode 100644 index 0000000000000000000000000000000000000000..7ef081900f8de99c859193b69d49b3d6bc187909 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp 
differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png deleted file mode 100644 index 94ef4fa5fe63e535118a81707b413c028ce01f70..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp new file mode 100644 index 0000000000000000000000000000000000000000..602452fc4c89424d8e17d46d74949b69be84dbe8 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png deleted file mode 100644 index 670cacc377c2801fa9437c3c132c5c7fbc361b0f..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp new file mode 100644 index 0000000000000000000000000000000000000000..35a3ebba781f24dbb0066993d1ca2f02659997d2 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png deleted file mode 100644 index d74cd36c96ee0fd24ddc6feae2da07824816f745..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..fb7958f1b9fbd43c8f63136024842790e711c490 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png deleted file mode 100644 index 0101e7430cb2ef673818de8bd3af53d0d082ad3f..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..49f1d88f4ad93286cd8582536e82b4dcc4ff271b Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md index 4850cecb334ff24cc9fcf3b9a6e394827730111c..dc337bf9fff2a9b60ea2f1c5110185a8ac683098 100644 --- a/docs-en/14-reference/07-tdinsight/index.md +++ b/docs-en/14-reference/07-tdinsight/index.md @@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button. 
-![Add data source button](./assets/howto-add-datasource-button.png) +![Add data source button](./assets/howto-add-datasource-button.webp) Search for and select **TDengine**. -![Add datasource](./assets/howto-add-datasource-tdengine.png) +![Add datasource](./assets/howto-add-datasource-tdengine.webp) Configure the TDengine datasource. -![Datasource Configuration](./assets/howto-add-datasource.png) +![Datasource Configuration](./assets/howto-add-datasource.webp) Save and test. It will report 'TDengine Data source is working' under normal circumstances. -![datasource test](./assets/howto-add-datasource-test.png) +![datasource test](./assets/howto-add-datasource-test.webp) ### Importing dashboards Point to **+** / **Create** - **import** (or `/dashboard/import` url). -![Import Dashboard and Configuration](./assets/import_dashboard.png) +![Import Dashboard and Configuration](./assets/import_dashboard.webp) Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**. -![Import via grafana.com](./assets/import-dashboard-15167.png) +![Import via grafana.com](./assets/import-dashboard-15167.webp) Once the import is complete, the full page view of TDinsight is shown below. -![show](./assets/TDinsight-full.png) +![show](./assets/TDinsight-full.webp) ## TDinsight dashboard details @@ -269,7 +269,7 @@ Details of the metrics are as follows. ### Cluster Status -![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.png) +![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp) This section contains the current information and status of the cluster; the alert information is also here (from left to right, top to bottom). @@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale ### DNodes Status -![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp) - **DNodes Status**: simple table view of `show dnodes`. - **DNodes Lifetime**: the time elapsed since the dnode was created. @@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale ### MNode Overview -![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp) 1. **MNodes Status**: a simple table view of `show mnodes`. 2. **MNodes Number**: similar to `DNodes Number`, the number of MNodes changes. ### Request -![tdinsight-requests](./assets/TDinsight-4-requests.png) +![tdinsight-requests](./assets/TDinsight-4-requests.webp) 1. **Requests Rate(Inserts per Second)**: average number of inserts per second. 2. **Requests (Selects)**: number of query requests and change rate (count per second). @@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale ### Database -![tdinsight-database](./assets/TDinsight-5-database.png) +![tdinsight-database](./assets/TDinsight-5-database.webp) Database usage, repeated for each value of the variable `$database`, i.e. multiple rows per database. @@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl ### DNode Resource Usage -![dnode-usage](./assets/TDinsight-6-dnode-usage.png) +![dnode-usage](./assets/TDinsight-6-dnode-usage.webp) Data node resource usage display, with multiple rows repeated for the variable `$fqdn`, i.e. each data node. Includes.
@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$ ### Login History -![Login History](./assets/TDinsight-7-login-history.png) +![Login History](./assets/TDinsight-7-login-history.webp) Currently, only the number of logins per minute is reported. ### Monitoring taosAdapter -![taosadapter](./assets/TDinsight-8-taosadapter.png) +![taosadapter](./assets/TDinsight-8-taosadapter.webp) Support monitoring taosAdapter request statistics and status details. Includes. diff --git a/docs-en/14-reference/taosAdapter-architecture.png b/docs-en/14-reference/taosAdapter-architecture.png deleted file mode 100644 index 08a9018553aae6f86b42d127b372d0cecfa9bdf8..0000000000000000000000000000000000000000 Binary files a/docs-en/14-reference/taosAdapter-architecture.png and /dev/null differ diff --git a/docs-en/14-reference/taosAdapter-architecture.webp b/docs-en/14-reference/taosAdapter-architecture.webp new file mode 100644 index 0000000000000000000000000000000000000000..a4162b0a037c06d34191784716c51080b9f8a570 Binary files /dev/null and b/docs-en/14-reference/taosAdapter-architecture.webp differ diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx index c1bfd4a96a4576df8570d8b480d5c2afe47e20b8..7239710e0aebdd95977d9b73a5a1a9fccd656542 100644 --- a/docs-en/20-third-party/01-grafana.mdx +++ b/docs-en/20-third-party/01-grafana.mdx @@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure. -![img](./grafana/add_datasource1.jpg) +![img](./grafana/add_datasource1.webp) Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure. -![img](./grafana/add_datasource2.jpg) +![img](./grafana/add_datasource2.webp) Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration. -![img](./grafana/add_datasource3.jpg) +![img](./grafana/add_datasource3.webp) - Host: IP address of the server where the components of the TDengine cluster provide REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`. - User: TDengine user name. @@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif Click `Save & Test` to test. The following is shown if the test succeeds. -![img](./grafana/add_datasource4.jpg) +![img](./grafana/add_datasource4.webp) ### Create Dashboard Go back to the main interface to create the Dashboard, and click Add Query to enter the panel query page: -![img](./grafana/create_dashboard1.jpg) +![img](./grafana/create_dashboard1.webp) As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below to run the query. @@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
-![img](./grafana/create_dashboard2.jpg) +![img](./grafana/create_dashboard2.webp) > For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/). diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md index 13562ba7f720499c23771437c5c6ba0f61819456..560c6463b59b00a362023d6cfa44cf833419a9ea 100644 --- a/docs-en/20-third-party/09-emq-broker.md +++ b/docs-en/20-third-party/09-emq-broker.md @@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here Use your browser to open the URL `http://IP:18083` and log in to EMQX Dashboard. The initial installation username is `admin` and the password is: `public`. -![img](./emqx/login-dashboard.png) +![img](./emqx/login-dashboard.webp) ### Creating Rule Select "Rule" in the "Rule Engine" on the left and click the "Create" button: -![img](./emqx/rule-engine.png) +![img](./emqx/rule-engine.webp) ### Edit SQL fields -![img](./emqx/create-rule.png) +![img](./emqx/create-rule.webp) ### Add "action handler" -![img](./emqx/add-action-handler.png) +![img](./emqx/add-action-handler.webp) ### Add "Resource" -![img](./emqx/create-resource.png) +![img](./emqx/create-resource.webp) Select "Data to Web Service" and click the "New Resource" button. @@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button. Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values. -![img](./emqx/edit-resource.png) +![img](./emqx/edit-resource.webp) ### Edit "action" Edit the resource configuration to add the key/value pairing for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body. -![img](./emqx/edit-action.png) +![img](./emqx/edit-action.webp) ## Compose program to mock data @@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization. Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test to avoid the hardware being unable to handle a larger number of concurrent clients. -![img](./emqx/client-num.png) +![img](./emqx/client-num.webp) ## Execute tests to simulate sending MQTT data @@ -172,19 +172,19 @@ npm install mqtt mockjs --save ---registry=https://registry.npm.taobao.org node mock.js ``` -![img](./emqx/run-mock.png) +![img](./emqx/run-mock.webp) ## Verify that EMQX is receiving data Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly: -![img](./emqx/check-rule-matched.png) +![img](./emqx/check-rule-matched.webp) ## Verify that data is written to TDengine Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly (an example query is sketched at the end of this section): -![img](./emqx/check-result-in-taos.png) +![img](./emqx/check-result-in-taos.webp) Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine. For EMQX, please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
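As referenced above, the verification step in the TDengine CLI could look like the sketch below. The database and table names here are hypothetical; use whatever names the EMQX rule's SQL template actually writes to.

```sql
-- Hypothetical names for illustration only; substitute the database/table used by your EMQX rule.
USE test;
SHOW TABLES;
SELECT COUNT(*) FROM sensor_data;
```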
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md index b9c7a3814a75a066b498438b6e632690697ae7ca..5aee6e044dcec77a9904f2ccfe5cb577eaa4d0ad 100644 --- a/docs-en/20-third-party/11-kafka.md +++ b/docs-en/20-third-party/11-kafka.md @@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect, and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither the Source Connector nor the Sink Connector connects directly to the Kafka Broker; the Source Connector transfers data to Kafka Connect, and the Sink Connector receives data from Kafka Connect. -![](kafka/Kafka_Connect.png) +![](kafka/Kafka_Connect.webp) TDengine Source Connector is used to read data from TDengine in real-time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine. -![](kafka/streaming-integration-with-kafka-connect.png) +![](kafka/streaming-integration-with-kafka-connect.webp) ## What is Confluent? @@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include: 5. GUI for managing and monitoring Kafka - Confluent Control Center Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version. -![](kafka/confluentPlatform.png) +![](kafka/confluentPlatform.webp) Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
diff --git a/docs-en/20-third-party/emqx/add-action-handler.png b/docs-en/20-third-party/emqx/add-action-handler.png deleted file mode 100644 index 97a1f933ecfadfcab399938806d73c5a5ecc6427..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/add-action-handler.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/add-action-handler.webp b/docs-en/20-third-party/emqx/add-action-handler.webp new file mode 100644 index 0000000000000000000000000000000000000000..4a8d105f711991226cfbd43b6e9ab07d7ccc686a Binary files /dev/null and b/docs-en/20-third-party/emqx/add-action-handler.webp differ diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.png b/docs-en/20-third-party/emqx/check-result-in-taos.png deleted file mode 100644 index c17a5c1ea2b9bbd49263056c8bf09c9aabab07d5..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/check-result-in-taos.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.webp b/docs-en/20-third-party/emqx/check-result-in-taos.webp new file mode 100644 index 0000000000000000000000000000000000000000..8fa040a86104fece02ddaf8986f0a67de316143d Binary files /dev/null and b/docs-en/20-third-party/emqx/check-result-in-taos.webp differ diff --git a/docs-en/20-third-party/emqx/check-rule-matched.png b/docs-en/20-third-party/emqx/check-rule-matched.png deleted file mode 100644 index 9e9a466946a1afa857e2bbc07b14956dd0f984b6..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/check-rule-matched.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/check-rule-matched.webp b/docs-en/20-third-party/emqx/check-rule-matched.webp new file mode 100644 index 0000000000000000000000000000000000000000..e5a614035739df859b27c817f3b9f41be444b513 Binary files /dev/null and b/docs-en/20-third-party/emqx/check-rule-matched.webp differ diff --git a/docs-en/20-third-party/emqx/client-num.png b/docs-en/20-third-party/emqx/client-num.png deleted file mode 100644 index fff48cbf3b271c367079ddde425b3f9b014062f7..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/client-num.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/client-num.webp b/docs-en/20-third-party/emqx/client-num.webp new file mode 100644 index 0000000000000000000000000000000000000000..a151b184843607d67b649babb3145bfb3e329cda Binary files /dev/null and b/docs-en/20-third-party/emqx/client-num.webp differ diff --git a/docs-en/20-third-party/emqx/create-resource.png b/docs-en/20-third-party/emqx/create-resource.png deleted file mode 100644 index 58da4c391a3692b9f5fa348d952701eab8bcb746..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/create-resource.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/create-resource.webp b/docs-en/20-third-party/emqx/create-resource.webp new file mode 100644 index 0000000000000000000000000000000000000000..bf9cccbe49c57f925c5e6b094a4c0d88a64242cb Binary files /dev/null and b/docs-en/20-third-party/emqx/create-resource.webp differ diff --git a/docs-en/20-third-party/emqx/create-rule.png b/docs-en/20-third-party/emqx/create-rule.png deleted file mode 100644 index 73b0b6ee3e6065a142df98abe8c0dbb32b34f89d..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/create-rule.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/create-rule.webp b/docs-en/20-third-party/emqx/create-rule.webp new file mode 100644 index 
0000000000000000000000000000000000000000..13e8fc83d48d2fd9d0a303c707ef3024d3ee5203 Binary files /dev/null and b/docs-en/20-third-party/emqx/create-rule.webp differ diff --git a/docs-en/20-third-party/emqx/edit-action.png b/docs-en/20-third-party/emqx/edit-action.png deleted file mode 100644 index 2a43ee369a439cf11cee23c11f25d6a84b26d7dc..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/edit-action.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/edit-action.webp b/docs-en/20-third-party/emqx/edit-action.webp new file mode 100644 index 0000000000000000000000000000000000000000..7f6d2e36a82b1917930e5d3969115db9359674a0 Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-action.webp differ diff --git a/docs-en/20-third-party/emqx/edit-resource.png b/docs-en/20-third-party/emqx/edit-resource.png deleted file mode 100644 index 0a0b3560044f4ed6e0a8f040b74085a7e8948b1f..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/edit-resource.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/edit-resource.webp b/docs-en/20-third-party/emqx/edit-resource.webp new file mode 100644 index 0000000000000000000000000000000000000000..fd5d278fab16bba4e04e1c348d4086dce77abb98 Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-resource.webp differ diff --git a/docs-en/20-third-party/emqx/login-dashboard.png b/docs-en/20-third-party/emqx/login-dashboard.png deleted file mode 100644 index d6c5035c98d860faf639ef6611c6719adf80c47b..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/login-dashboard.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/login-dashboard.webp b/docs-en/20-third-party/emqx/login-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..f84cee668fb6efe1586515ba0dee3ae2f10a5b30 Binary files /dev/null and b/docs-en/20-third-party/emqx/login-dashboard.webp differ diff --git a/docs-en/20-third-party/emqx/rule-engine.png b/docs-en/20-third-party/emqx/rule-engine.png deleted file mode 100644 index db110a837b024c82ee9d22f02dcd3a9d06abdd55..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/rule-engine.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/rule-engine.webp b/docs-en/20-third-party/emqx/rule-engine.webp new file mode 100644 index 0000000000000000000000000000000000000000..c1711c8cc757cd73fef5cb941a1818756241f7f0 Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-engine.webp differ diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.png b/docs-en/20-third-party/emqx/rule-header-key-value.png deleted file mode 100644 index b81b9a9684aa2f98d00b7ec21e5de411fb450312..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/rule-header-key-value.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.webp b/docs-en/20-third-party/emqx/rule-header-key-value.webp new file mode 100644 index 0000000000000000000000000000000000000000..e645b3822dffec86f4926e78a57eaffa1e7f4d8d Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-header-key-value.webp differ diff --git a/docs-en/20-third-party/emqx/run-mock.png b/docs-en/20-third-party/emqx/run-mock.png deleted file mode 100644 index 0da25818575247732d5d3a783aa52cf7ce24662c..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/emqx/run-mock.png and /dev/null differ diff --git 
a/docs-en/20-third-party/emqx/run-mock.webp b/docs-en/20-third-party/emqx/run-mock.webp new file mode 100644 index 0000000000000000000000000000000000000000..ed33f1666d456f1ab40ed6830af4550d4c7ca037 Binary files /dev/null and b/docs-en/20-third-party/emqx/run-mock.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource1.jpg b/docs-en/20-third-party/grafana/add_datasource1.jpg deleted file mode 100644 index 1f0f5110f312c57f3ec1788bbc02f04fac6ac142..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource1.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource1.webp b/docs-en/20-third-party/grafana/add_datasource1.webp new file mode 100644 index 0000000000000000000000000000000000000000..211edc4457abd0db6b0ef64636d61d65b5f43db6 Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource1.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource2.jpg b/docs-en/20-third-party/grafana/add_datasource2.jpg deleted file mode 100644 index fa7a83e00e96fae649910dff4edf5f5bdadd7850..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource2.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource2.webp b/docs-en/20-third-party/grafana/add_datasource2.webp new file mode 100644 index 0000000000000000000000000000000000000000..8ab547231fee4d3b0874fcfe08c0ce152b0c53a1 Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource2.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource3.jpg b/docs-en/20-third-party/grafana/add_datasource3.jpg deleted file mode 100644 index fc850ad08ff1174de972906842e0d5ee64e6e5cb..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource3.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource3.webp b/docs-en/20-third-party/grafana/add_datasource3.webp new file mode 100644 index 0000000000000000000000000000000000000000..d8a733360a09b4425c571f254a9ecb298c04b72f Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource3.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource4.jpg b/docs-en/20-third-party/grafana/add_datasource4.jpg deleted file mode 100644 index 3ba73e50d455111f8621f4165746078554c2d790..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource4.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource4.webp b/docs-en/20-third-party/grafana/add_datasource4.webp new file mode 100644 index 0000000000000000000000000000000000000000..b1e0fc6e2b27df4af1bb5ad92756bcb5d4fda63e Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource4.webp differ diff --git a/docs-en/20-third-party/grafana/create_dashboard1.jpg b/docs-en/20-third-party/grafana/create_dashboard1.jpg deleted file mode 100644 index 3b83c3a1714e9e7540e0b06239ef7c1c4f63fe2c..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/create_dashboard1.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/create_dashboard1.webp b/docs-en/20-third-party/grafana/create_dashboard1.webp new file mode 100644 index 0000000000000000000000000000000000000000..55eb388833e4df2a46f4d1cf6d346aa11429385d Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard1.webp differ diff --git a/docs-en/20-third-party/grafana/create_dashboard2.jpg 
b/docs-en/20-third-party/grafana/create_dashboard2.jpg deleted file mode 100644 index fe5d768ac55254251e0290bf257178f5ff28f5a5..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/grafana/create_dashboard2.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/create_dashboard2.webp b/docs-en/20-third-party/grafana/create_dashboard2.webp new file mode 100644 index 0000000000000000000000000000000000000000..bb40e407187718c52e9f617d8ebd3d25fd14b56b Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard2.webp differ diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.png b/docs-en/20-third-party/kafka/Kafka_Connect.png deleted file mode 100644 index f3dc02ea2a743c6e1ae5531e14f820e3adeca29a..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/kafka/Kafka_Connect.png and /dev/null differ diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.webp b/docs-en/20-third-party/kafka/Kafka_Connect.webp new file mode 100644 index 0000000000000000000000000000000000000000..8f2000a749b0a2ccec9939abd144c53c44fbe171 Binary files /dev/null and b/docs-en/20-third-party/kafka/Kafka_Connect.webp differ diff --git a/docs-en/20-third-party/kafka/confluentPlatform.png b/docs-en/20-third-party/kafka/confluentPlatform.png deleted file mode 100644 index f8e69f2c7f64d809996b2d1bf1370b67b8030850..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/kafka/confluentPlatform.png and /dev/null differ diff --git a/docs-en/20-third-party/kafka/confluentPlatform.webp b/docs-en/20-third-party/kafka/confluentPlatform.webp new file mode 100644 index 0000000000000000000000000000000000000000..ff03d4e51aaaec85f07ff41ecda0fb9bd6cb2847 Binary files /dev/null and b/docs-en/20-third-party/kafka/confluentPlatform.webp differ diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png deleted file mode 100644 index 26d8a866d706180c900d69bb6f57ca2dff0047dd..0000000000000000000000000000000000000000 Binary files a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp new file mode 100644 index 0000000000000000000000000000000000000000..120d534ec132cea2ccef6cf87a3ce680a5ac6e9c Binary files /dev/null and b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md index 9607c9b38709f6a320f82a8ee250afb407492627..2c430908e410c7ae8e6f09a3f7e2d059f906fda5 100644 --- a/docs-en/21-tdinternal/01-arch.md +++ b/docs-en/21-tdinternal/01-arch.md @@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software Logical structure diagram of TDengine distributed architecture as following: -![TDengine architecture diagram](structure.png) +![TDengine architecture diagram](structure.webp)
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes the data node (dnode), the TDengine client driver (TAOSC), and the application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit. @@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process. -![typical process of TDengine](message.png) +![typical process of TDengine](message.webp)
Figure 2: Typical process of TDengine
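Before the step-by-step description below, the following minimal sketch shows what step 1 looks like from the application side when using the C client (TAOSC). It is only an illustration under assumptions: the `power.d1001` table, its column values, and the connection parameters are hypothetical and do not come from this document.

```c
#include <stdio.h>
#include <taos.h>

int main() {
  // Connect with default credentials; host, user, and password are assumptions for this sketch.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (taos == NULL) {
    fprintf(stderr, "failed to connect to TDengine\n");
    return 1;
  }

  // Step 1 of the flow above: the application issues an insert through TAOSC.
  // TAOSC then resolves which vnode owns the target table and forwards the request,
  // which is the part described by the remaining steps.
  TAOS_RES *res = taos_query(taos, "INSERT INTO power.d1001 VALUES (NOW, 10.3, 219, 0.31)");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```

From the application's point of view the routing in the later steps is invisible; it only sees the SQL call and its result.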
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs. @@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but Master Vnode uses a writing process as follows: -![TDengine Master Writing Process](write_master.png) +![TDengine Master Writing Process](write_master.webp)
Figure 3: TDengine Master writing process
1. Master vnode receives the application data insertion request, verifies it, and moves to the next step; @@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows: For a slave vnode, the write process is as follows: -![TDengine Slave Writing Process](write_slave.png) +![TDengine Slave Writing Process](write_slave.webp)
Figure 4: TDengine Slave Writing Process
1. Slave vnode receives a data insertion request forwarded by Master vnode; @@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tags. A table can have multiple tags, which can be added, deleted, and modified at any time. Applications can aggregate or perform statistical operations on all or a subset of the tables under a STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure: -![Diagram of multi-table aggregation query](multi_tables.png) +![Diagram of multi-table aggregation query](multi_tables.webp)
Figure 5: Diagram of multi-table aggregation query
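As a hedged illustration of the aggregation described above, the sketch below runs a tag-filtered aggregate query over a STable through the C client. The STable `meters`, its tag `location`, the `power` database, and the connection parameters are assumed example names, not identifiers taken from this document.

```c
#include <stdio.h>
#include <taos.h>

int main() {
  // Connection parameters and the `power` database are assumptions for this sketch.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "power", 0);
  if (taos == NULL) {
    fprintf(stderr, "failed to connect to TDengine\n");
    return 1;
  }

  // The tag filter (location = ...) selects a subset of the tables under the
  // STable `meters`; TDengine aggregates across all matching tables per time window.
  TAOS_RES *res = taos_query(taos,
      "SELECT COUNT(*), AVG(current) FROM meters "
      "WHERE location = 'California.SanFrancisco' INTERVAL(1h)");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "query failed: %s\n", taos_errstr(res));
    taos_free_result(res);
    taos_close(taos);
    return 1;
  }

  TAOS_ROW row;
  int rows = 0;
  while ((row = taos_fetch_row(res)) != NULL) {
    rows++;  // each row is one time-window result for the filtered table set
  }
  printf("fetched %d aggregated rows\n", rows);

  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```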
1. Application sends a query condition to system; diff --git a/docs-en/21-tdinternal/dnode.png b/docs-en/21-tdinternal/dnode.png deleted file mode 100644 index cea87dcccba5d2761996e5dde998022d86487eb9..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/dnode.png and /dev/null differ diff --git a/docs-en/21-tdinternal/dnode.webp b/docs-en/21-tdinternal/dnode.webp new file mode 100644 index 0000000000000000000000000000000000000000..a56c7e4594df00a721cb48381d68ca3bc813cdc8 Binary files /dev/null and b/docs-en/21-tdinternal/dnode.webp differ diff --git a/docs-en/21-tdinternal/message.png b/docs-en/21-tdinternal/message.png deleted file mode 100644 index 715a8bd37ee9fe7e96eacce4e7ff563fedeefbee..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/message.png and /dev/null differ diff --git a/docs-en/21-tdinternal/message.webp b/docs-en/21-tdinternal/message.webp new file mode 100644 index 0000000000000000000000000000000000000000..a2a42abff3d6e932b41a3abe9feae4a5cc13c9e5 Binary files /dev/null and b/docs-en/21-tdinternal/message.webp differ diff --git a/docs-en/21-tdinternal/modules.png b/docs-en/21-tdinternal/modules.png deleted file mode 100644 index 10ae4703a6cbbf66afea325ce4c0f919f7769a07..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/modules.png and /dev/null differ diff --git a/docs-en/21-tdinternal/modules.webp b/docs-en/21-tdinternal/modules.webp new file mode 100644 index 0000000000000000000000000000000000000000..718a6abccdbe40d4a0df5e3812fe0ab943a7c523 Binary files /dev/null and b/docs-en/21-tdinternal/modules.webp differ diff --git a/docs-en/21-tdinternal/multi_tables.png b/docs-en/21-tdinternal/multi_tables.png deleted file mode 100644 index 0cefaab6a9a4cdd671c671f7c6186dea41415ff0..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/multi_tables.png and /dev/null differ diff --git a/docs-en/21-tdinternal/multi_tables.webp b/docs-en/21-tdinternal/multi_tables.webp new file mode 100644 index 0000000000000000000000000000000000000000..8f649e34a3a62d1b11b4403b2e743ff6b5e47be2 Binary files /dev/null and b/docs-en/21-tdinternal/multi_tables.webp differ diff --git a/docs-en/21-tdinternal/replica-forward.png b/docs-en/21-tdinternal/replica-forward.png deleted file mode 100644 index bf616e030b130603eceb5dccfd30b4a1dfa68ea5..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/replica-forward.png and /dev/null differ diff --git a/docs-en/21-tdinternal/replica-forward.webp b/docs-en/21-tdinternal/replica-forward.webp new file mode 100644 index 0000000000000000000000000000000000000000..512efd4eba8f23ad0f8607eaaf5525f51ecdcf0e Binary files /dev/null and b/docs-en/21-tdinternal/replica-forward.webp differ diff --git a/docs-en/21-tdinternal/replica-master.png b/docs-en/21-tdinternal/replica-master.png deleted file mode 100644 index cb33f1ce98661563693215d8fc73b003235c7668..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/replica-master.png and /dev/null differ diff --git a/docs-en/21-tdinternal/replica-master.webp b/docs-en/21-tdinternal/replica-master.webp new file mode 100644 index 0000000000000000000000000000000000000000..57030a11f563af2689dbcfd206183f410b121aee Binary files /dev/null and b/docs-en/21-tdinternal/replica-master.webp differ diff --git a/docs-en/21-tdinternal/replica-restore.png b/docs-en/21-tdinternal/replica-restore.png deleted file mode 100644 index 
1558e5ed0108d23efdc6b5d9ea0e44a1dff45d28..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/replica-restore.png and /dev/null differ diff --git a/docs-en/21-tdinternal/replica-restore.webp b/docs-en/21-tdinternal/replica-restore.webp new file mode 100644 index 0000000000000000000000000000000000000000..f282c2d4d23f517e3ef08e906cea7e9c5edc0b2a Binary files /dev/null and b/docs-en/21-tdinternal/replica-restore.webp differ diff --git a/docs-en/21-tdinternal/structure.png b/docs-en/21-tdinternal/structure.png deleted file mode 100644 index 4fc8f47ab0a30d95b85ba1d85105726ed981e56e..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/structure.png and /dev/null differ diff --git a/docs-en/21-tdinternal/structure.webp b/docs-en/21-tdinternal/structure.webp new file mode 100644 index 0000000000000000000000000000000000000000..b77a42c074b15302b5c3ab889fb550a46dd549b3 Binary files /dev/null and b/docs-en/21-tdinternal/structure.webp differ diff --git a/docs-en/21-tdinternal/vnode.png b/docs-en/21-tdinternal/vnode.png deleted file mode 100644 index e6148d4907cf9a18bc52251f712d5c685651b7f5..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/vnode.png and /dev/null differ diff --git a/docs-en/21-tdinternal/vnode.webp b/docs-en/21-tdinternal/vnode.webp new file mode 100644 index 0000000000000000000000000000000000000000..fae3104c89c542c26790b509d12ad56661082c32 Binary files /dev/null and b/docs-en/21-tdinternal/vnode.webp differ diff --git a/docs-en/21-tdinternal/write_master.png b/docs-en/21-tdinternal/write_master.png deleted file mode 100644 index ff2dfc20bfc2ecf956a2aab1a8965a7bbcae4387..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/write_master.png and /dev/null differ diff --git a/docs-en/21-tdinternal/write_master.webp b/docs-en/21-tdinternal/write_master.webp new file mode 100644 index 0000000000000000000000000000000000000000..9624036ed3d46ed60924ead9ce5c61acee0f4652 Binary files /dev/null and b/docs-en/21-tdinternal/write_master.webp differ diff --git a/docs-en/21-tdinternal/write_slave.png b/docs-en/21-tdinternal/write_slave.png deleted file mode 100644 index cacb2cb6bcc4f4d934e979862387e1345bbac078..0000000000000000000000000000000000000000 Binary files a/docs-en/21-tdinternal/write_slave.png and /dev/null differ diff --git a/docs-en/21-tdinternal/write_slave.webp b/docs-en/21-tdinternal/write_slave.webp new file mode 100644 index 0000000000000000000000000000000000000000..7c45dec11b00e6a738de458f9e1bedacfad75a96 Binary files /dev/null and b/docs-en/21-tdinternal/write_slave.webp differ diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md index 718e04ecd3dbd2a72feba3f5297d9da33a94ba83..07ab289ac2bbf44c219535fe128db69b34465c01 100644 --- a/docs-en/25-application/01-telegraf.md +++ b/docs-en/25-application/01-telegraf.md @@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines of configuration files. The architecture is as follows. 
-![IT-DevOps-Solutions-Telegraf.png](/img/IT-DevOps-Solutions-Telegraf.png) +![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp) ## Installation steps @@ -75,7 +75,7 @@ Log in to the Grafana interface using a web browser at `IP:3000`, with the syste Click on the gear icon on the left and select `Plugins`, and you should find the TDengine data source plugin icon. Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen. -![IT-DevOps-Solutions-telegraf-dashboard.png](/img/IT-DevOps-Solutions-telegraf-dashboard.png) +![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp) ## Wrap-up diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md index 2ac37618fafe11e71b215313e53f89b6c302f7cb..0ddea2855497f1dfdfce7a2aa6749e0c5ba1b9ff 100644 --- a/docs-en/25-application/02-collectd.md +++ b/docs-en/25-application/02-collectd.md @@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code but by simply modifying a few lines of configuration files. The architecture is shown in the following figure. -![IT-DevOps-Solutions-Collectd-StatsD.png](/img/IT-DevOps-Solutions-Collectd-StatsD.png) +![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp) ## Installation Steps @@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe Download the dashboard JSON from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select `Import`, and follow the instructions to import the JSON file. After that, you can see the dashboard in the following screen. -![IT-DevOps-Solutions-collectd-dashboard.png](/img/IT-DevOps-Solutions-collectd-dashboard.png) +![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp) #### Importing the collectd dashboard Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`. Click the plus icon on the left side and select `Import`, and follow the interface prompts to select the JSON file to import. After that, you can see the dashboard with the following interface. -![IT-DevOps-Solutions-collectd-dashboard.png](/img/IT-DevOps-Solutions-collectd-dashboard.png) +![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp) #### Importing the StatsD dashboard Download the dashboard JSON from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
-![IT-DevOps-Solutions-statsd-dashboard.png](/img/IT-DevOps-Solutions-statsd-dashboard.png) +![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp) ## Wrap-up diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md index 4cfeb892d821a1e5b7d5250615e7122e64b9882d..68d8a2b8cc25c80b8a647332df66874bee344715 100644 --- a/docs-en/25-application/03-immigrate.md +++ b/docs-en/25-application/03-immigrate.md @@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario. **Figure 1. Typical architecture in a DevOps scenario** -![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](/img/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg "Figure 1. Typical architecture in a DevOps scenario") +![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario") In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors that aggregate the information collected by the agents; systems for persistent data storage and management; and tools for visualizing the monitoring data (e.g., Grafana). @@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use. **Figure 2. Importing Grafana Templates** -![](/img/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg "Figure 2. Importing a Grafana Template") +![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template") After the above steps, you have completed the migration from OpenTSDB to TDengine. You can see that the whole process is straightforward: there is no need to write any code, and only a few configuration files need to be adjusted to complete the migration. @@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes If your application is particularly complex, or the application domain is not a DevOps scenario, you can continue reading the subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine. **Figure 3. System architecture after migration** -![IT-DevOps-Solutions-Immigrate-TDengine-Arch](/img/IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg "Figure 3. System architecture after migration completion") +![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3.
System architecture after migration completion") ## Migration evaluation and strategy for other scenarios diff --git a/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp new file mode 100644 index 0000000000000000000000000000000000000000..147a65b17bff2aa0e44faa206618bdce5664e1ca Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp new file mode 100644 index 0000000000000000000000000000000000000000..3ca99c835b33df8845adf1b52d8fb8eb63076e82 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..04811f61b9b318e129552d87cd48eabf6e99feab Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp new file mode 100644 index 0000000000000000000000000000000000000000..36930068758556f4de5b58321804a96401c64b22 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp new file mode 100644 index 0000000000000000000000000000000000000000..fd5461ec9b37be66cac4c17fb1f81fec76158330 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..879c27a1a5843c714ff3c33c1dccfa32a2154b82 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..1d4c655970b5f3fcb3be2d65d67eb42f08f35862 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp new file mode 100644 index 0000000000000000000000000000000000000000..105afcdb8312b23675f62ff6339d5e737b5cd958 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp differ diff --git a/include/libs/function/function.h b/include/libs/function/function.h index 7d3e969c4119cc2e4eaf140188e0f85ee62bcc6e..21b73090554cc951aac82b4d9adb1cb7d847bff2 100644 --- a/include/libs/function/function.h +++ b/include/libs/function/function.h @@ -39,6 +39,7 @@ typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInf typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx); typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock); typedef int32_t 
(*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput); +typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx); typedef struct SScalarFuncExecFuncs { FExecGetEnv getEnv; @@ -50,6 +51,7 @@ typedef struct SFuncExecFuncs { FExecInit init; FExecProcess process; FExecFinalize finalize; + FExecCombine combine; } SFuncExecFuncs; typedef struct SFileBlockInfo { diff --git a/include/libs/nodes/nodes.h b/include/libs/nodes/nodes.h index b9cb708c9c172fc522cfef3f7c41bdbd46149cae..3429d838de9f294b216b9e37a63d402a7b46dbba 100644 --- a/include/libs/nodes/nodes.h +++ b/include/libs/nodes/nodes.h @@ -212,6 +212,7 @@ typedef enum ENodeType { QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_INTERVAL, QUERY_NODE_PHYSICAL_PLAN_FILL, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW, + QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW, QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW, QUERY_NODE_PHYSICAL_PLAN_PARTITION, QUERY_NODE_PHYSICAL_PLAN_DISPATCH, diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index 6c4d14ffa10be13974e4651868fda955e41cebb7..d28da1f6083c59ba16d73c43eb9df76c669add47 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -296,6 +296,8 @@ typedef struct SSessionWinodwPhysiNode { int64_t gap; } SSessionWinodwPhysiNode; +typedef SSessionWinodwPhysiNode SStreamSessionWinodwPhysiNode; + typedef struct SStateWinodwPhysiNode { SWinodwPhysiNode window; SNode* pStateKey; diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index d01ec501ba215dae820a72a8dfa8ab473d5b8950..1ae6fa405f8649ba28e792a9245b637740ed8781 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -141,7 +141,9 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) { if (NULL == pTscObj) { tscDebug("tscObj rid %" PRIx64 " not exist", pRsp->connKey.tscRid); } else { - updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet); + if (pRsp->query->totalDnodes > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &pRsp->query->epSet)) { + updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet); + } pTscObj->connId = pRsp->query->connId; if (pRsp->query->killRid) { diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 68c47c2d13421cd34e327db37e31ae76774985ac..7d623072d664a4b9f1d77251812032f8c4fa4de1 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -24,7 +24,6 @@ #define EQUAL '=' #define QUOTE '"' #define SLASH '\\' -#define tsMaxSQLStringLen (1024*1024) #define JUMP_SPACE(sql) while (*sql != '\0'){if(*sql == SPACE) sql++;else break;} // comma , @@ -63,12 +62,11 @@ for (int i = 1; i < keyLen; ++i) { \ #define TS "_ts" #define TS_LEN 3 -#define VALUE "value" -#define VALUE_LEN 5 +#define VALUE "_value" +#define VALUE_LEN 6 #define BINARY_ADD_LEN 2 // "binary" 2 means " " #define NCHAR_ADD_LEN 3 // L"nchar" 3 means L" " -#define CHAR_SAVE_LENGTH 8 //================================================================================================= typedef TSDB_SML_PROTOCOL_TYPE SMLProtocolType; @@ -253,12 +251,20 @@ static int32_t smlGenerateSchemaAction(SSchema* colField, SHashObj* colHash, SSm return 0; } +static int32_t smlFindNearestPowerOf2(int32_t length){ + int32_t result = 1; + while(result <= length){ + result *= 2; + } + return result; +} + static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSize, int32_t* outBytes) { uint8_t type = field->type; char 
tname[TSDB_TABLE_NAME_LEN] = {0}; memcpy(tname, field->key, field->keyLen); if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR) { - int32_t bytes = field->length > CHAR_SAVE_LENGTH ? (2*field->length) : CHAR_SAVE_LENGTH; + int32_t bytes = smlFindNearestPowerOf2(field->length); int out = snprintf(buf, bufSize, "`%s` %s(%d)", tname, tDataTypes[field->type].name, bytes); *outBytes = out; @@ -273,8 +279,8 @@ static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSi static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) { int32_t code = 0; int32_t outBytes = 0; - char *result = (char *)taosMemoryCalloc(1, tsMaxSQLStringLen+1); - int32_t capacity = tsMaxSQLStringLen + 1; + char *result = (char *)taosMemoryCalloc(1, TSDB_MAX_ALLOWED_SQL_LEN); + int32_t capacity = TSDB_MAX_ALLOWED_SQL_LEN; uDebug("SML:0x%"PRIx64" apply schema action. action: %d", info->id, action->action); switch (action->action) { @@ -398,7 +404,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) { } if(taosArrayGetSize(cols) == 0){ outBytes = snprintf(pos, freeBytes,"`%s` %s(%d)", - tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, CHAR_SAVE_LENGTH); + tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, 1); pos += outBytes; freeBytes -= outBytes; *pos = ','; ++pos; --freeBytes; } @@ -508,6 +514,11 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) { if (code != TSDB_CODE_SUCCESS) { return code; } + + code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, -1); + if (code != TSDB_CODE_SUCCESS) { + return code; + } } else { uError("SML:0x%"PRIx64" load table meta error: %s", info->id, tstrerror(code)); return code; diff --git a/source/dnode/mnode/impl/src/mndProfile.c b/source/dnode/mnode/impl/src/mndProfile.c index b9ac82d890a12f2355834f415f4bbca65461873b..c9c52af0fe3ef377317530c26648c811d1112c95 100644 --- a/source/dnode/mnode/impl/src/mndProfile.c +++ b/source/dnode/mnode/impl/src/mndProfile.c @@ -379,7 +379,7 @@ static int32_t mndProcessQueryHeartBeat(SMnode *pMnode, SRpcMsg *pMsg, SClientHb } rspBasic->connId = pConn->id; - rspBasic->totalDnodes = 1; // TODO + rspBasic->totalDnodes = mndGetDnodeSize(pMnode); rspBasic->onlineDnodes = 1; // TODO mndGetMnodeEpSet(pMnode, &rspBasic->epSet); mndReleaseConn(pMnode, pConn); diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h index 8ac320b9aa2323b9ec92b6062c99da5c2b2452c9..2aad17d515b39c2abd49c06f943df13782d31887 100644 --- a/source/libs/executor/inc/executorimpl.h +++ b/source/libs/executor/inc/executorimpl.h @@ -361,6 +361,18 @@ typedef struct SCatchSupporter { int64_t* pKeyBuf; } SCatchSupporter; +typedef struct SStreamAggSupporter { + SArray* pResultRows; // SResultWindowInfo + int32_t keySize; + char* pKeyBuf; // window key buffer + SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file + int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row +} SStreamAggSupporter; + +typedef struct SessionWindowSupporter { + SStreamAggSupporter* pStreamAggSup; + int64_t gap; +} SessionWindowSupporter; typedef struct SStreamBlockScanInfo { SArray* pBlockLists; // multiple SSDatablock. SSDataBlock* pRes; // result SSDataBlock @@ -385,6 +397,7 @@ typedef struct SStreamBlockScanInfo { SInterval interval; // if the upstream is an interval operator, the interval info is also kept here. 
SCatchSupporter childAggSup; SArray* childIds; + SessionWindowSupporter sessionSup; } SStreamBlockScanInfo; typedef struct SSysTableScanInfo { @@ -550,6 +563,27 @@ typedef struct SSessionAggOperatorInfo { STimeWindowAggSupp twAggSup; } SSessionAggOperatorInfo; +typedef struct SResultWindowInfo { + SResultRowPosition pos; + STimeWindow win; + bool isOutput; +} SResultWindowInfo; + +typedef struct SStreamSessionAggOperatorInfo { + SOptrBasicInfo binfo; + SStreamAggSupporter streamAggSup; + SGroupResInfo groupResInfo; + int64_t gap; // session window gap + int32_t primaryTsIndex; // primary timestamp slot id + int32_t order; // current SSDataBlock scan order + STimeWindowAggSupp twAggSup; + SSDataBlock* pWinBlock; // window result + SqlFunctionCtx* pDummyCtx; // for combine + SSDataBlock* pDelRes; + SHashObj* pStDeleted; + void* pDelIterator; +} SStreamSessionAggOperatorInfo; + typedef struct STimeSliceOperatorInfo { SOptrBasicInfo binfo; SInterval interval; @@ -727,6 +761,9 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SExprInfo* SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, SNode* pOnCondition, SExecTaskInfo* pTaskInfo); SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, SExprInfo* pExpr, int32_t numOfOutput, SSDataBlock* pResBlock, SArray* pColMatchInfo, STableGroupInfo* pTableGroupInfo, SExecTaskInfo* pTaskInfo); +SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, + SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap, + int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo); #if 0 SOperatorInfo* createTableSeqScanOperatorInfo(void* pTsdbReadHandle, STaskRuntimeEnv* pRuntimeEnv); #endif @@ -761,13 +798,19 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi int32_t* length); STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts, SInterval* pInterval, int32_t precision, STimeWindow* win); -int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos, - TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item, - int32_t order); +int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, + int32_t startPos, TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item, + int32_t order); int32_t binarySearchForKey(char* pValue, int num, TSKEY key, int order); -int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, - const char* pKey, const char* pDir); - +int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey, + const char* pDir); +int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey); +SResultRow* getNewResultRow_rv(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int32_t interBufSize); +SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap, + int32_t* pIndex); +int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows, + int32_t start, int64_t gap, SHashObj* pStDeleted); +bool functionNeedToExecute(SqlFunctionCtx* pCtx); #ifdef __cplusplus } #endif diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c index e16b60e58b0808d07efcfe3ab0e4b1fb2450ef15..581cf5cacd982050e087d9b27abb51e7d85e556d 100644 --- 
a/source/libs/executor/src/executorimpl.c +++ b/source/libs/executor/src/executorimpl.c @@ -98,7 +98,6 @@ static int32_t getExprFunctionId(SExprInfo* pExprInfo) { } static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes); -static bool functionNeedToExecute(SqlFunctionCtx* pCtx); static void setBlockStatisInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pSDataBlock); @@ -937,7 +936,7 @@ int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char* return TSDB_CODE_SUCCESS; } -static bool functionNeedToExecute(SqlFunctionCtx* pCtx) { +bool functionNeedToExecute(SqlFunctionCtx* pCtx) { struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx); // in case of timestamp column, always generated results. @@ -4660,6 +4659,19 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo pOptr = createSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo); + } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW == type) { + SSessionWinodwPhysiNode* pSessionNode = (SSessionWinodwPhysiNode*)pPhyNode; + + STimeWindowAggSupp as = {.waterMark = pSessionNode->window.watermark, + .calTrigger = pSessionNode->window.triggerType}; + + SExprInfo* pExprInfo = createExprInfo(pSessionNode->window.pFuncs, NULL, &num); + SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc); + int32_t tsSlotId = ((SColumnNode*)pSessionNode->window.pTspk)->slotId; + + pOptr = + createStreamSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo); + } else if (QUERY_NODE_PHYSICAL_PLAN_PARTITION == type) { SPartitionPhysiNode* pPartNode = (SPartitionPhysiNode*)pPhyNode; SArray* pColList = extractPartitionColInfo(pPartNode->pPartitionKeys); @@ -5151,15 +5163,37 @@ int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo return TSDB_CODE_SUCCESS; } -int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, const char* pKey, - const char* pDir) { +int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey, + const char* pDir) { pCatchSup->keySize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY); pCatchSup->pKeyBuf = taosMemoryCalloc(1, pCatchSup->keySize); - int32_t pageSize = rowSize * 32; - int32_t bufSize = pageSize * 4096; - createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir); _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); pCatchSup->pWindowHashTable = taosHashInit(10000, hashFn, true, HASH_NO_LOCK); - ; - return TSDB_CODE_SUCCESS; + if (pCatchSup->pKeyBuf == NULL || pCatchSup->pWindowHashTable == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + + int32_t pageSize = rowSize * 32; + int32_t bufSize = pageSize * 4096; + return createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir); +} + +int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey) { + pSup->keySize = sizeof(int64_t) + sizeof(TSKEY); + pSup->pKeyBuf = taosMemoryCalloc(1, pSup->keySize); + pSup->pResultRows = taosArrayInit(1024, sizeof(SResultWindowInfo)); + if (pSup->pKeyBuf == NULL || pSup->pResultRows == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + + int32_t pageSize = 4096; + while (pageSize < pSup->resultRowSize * 4) { + pageSize <<= 1u; + } + // at least four pages need to be in buffer + int32_t bufSize = 4096 * 256; + if (bufSize <= pageSize) { + bufSize = pageSize * 4; + } + return 
createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, pKey, "/tmp/"); } diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index f77b80c53383341d68749496fc6a7878bd695b1b..6200e1dcb0cff95dbc1e3538e3a30af4d18a1481 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -645,6 +645,10 @@ static void doClearBufferedBlocks(SStreamBlockScanInfo* pInfo) { taosArrayClear(pInfo->pBlockLists); } +static bool isSessionWindow(SStreamBlockScanInfo* pInfo) { + return pInfo->sessionSup.pStreamAggSup != NULL; +} + static bool prepareDataScan(SStreamBlockScanInfo* pInfo) { SSDataBlock* pSDB = pInfo->pUpdateRes; if (pInfo->updateResIndex < pSDB->info.rows) { @@ -652,13 +656,25 @@ static bool prepareDataScan(SStreamBlockScanInfo* pInfo) { TSKEY *tsCols = (TSKEY*)pColDataInfo->pData; SResultRowInfo dumyInfo; dumyInfo.cur.pageId = -1; - STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], &pInfo->interval, - pInfo->interval.precision, NULL); + STimeWindow win; + if (isSessionWindow(pInfo)) { + SStreamAggSupporter* pAggSup = pInfo->sessionSup.pStreamAggSup; + int64_t gap = pInfo->sessionSup.gap; + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = getSessionTimeWindow(pAggSup->pResultRows, + tsCols[pInfo->updateResIndex], gap, &winIndex); + win = pCurWin->win; + pInfo->updateResIndex += updateSessionWindowInfo(pCurWin, tsCols, pSDB->info.rows, + pInfo->updateResIndex, gap, NULL); + } else { + win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], + &pInfo->interval, pInfo->interval.precision, NULL); + pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex, + win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC); + } STableScanInfo* pTableScanInfo = pInfo->pOperatorDumy->info; pTableScanInfo->cond.twindow = win; tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond); - pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex, - win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC); pTableScanInfo->scanTimes = 0; return true; } else { @@ -848,6 +864,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) { } else if (pInfo->scanMode == STREAM_SCAN_FROM_UPDATERES) { blockDataCleanup(pInfo->pRes); pInfo->scanMode = STREAM_SCAN_FROM_DATAREADER; + prepareDataScan(pInfo); return pInfo->pUpdateRes; } else if (pInfo->scanMode == STREAM_SCAN_FROM_DATAREADER) { SSDataBlock* pSDB = doDataScan(pInfo); @@ -924,13 +941,12 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) { if (rows == 0) { pOperator->status = OP_EXEC_DONE; - } else if (pInfo->interval.interval > 0) { + } else if (pInfo->pUpdateInfo) { SSDataBlock* upRes = getUpdateDataBlock(pInfo, true); //TODO(liuyao) get invertible from plan if (upRes) { pInfo->pUpdateRes = upRes; if (upRes->info.type == STREAM_REPROCESS) { pInfo->updateResIndex = 0; - prepareDataScan(pInfo); pInfo->scanMode = STREAM_SCAN_FROM_UPDATERES; } else if (upRes->info.type == STREAM_INVERT) { pInfo->scanMode = STREAM_SCAN_FROM_RES; @@ -1001,10 +1017,9 @@ SOperatorInfo* createStreamScanOperatorInfo(void* streamReadHandle, void* pDataR pInfo->scanMode = STREAM_SCAN_FROM_READERHANDLE; pInfo->pOperatorDumy = pOperatorDumy; pInfo->interval = pSTInfo->interval; + pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1}; - size_t childKeyBufSize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY); - 
initCatchSupporter(&pInfo->childAggSup, 1024, childKeyBufSize, - "StreamFinalInterval", TD_TMP_DIR_PATH); // TODO(liuyao) get row size from phy plan + initCatchSupporter(&pInfo->childAggSup, 1024, "StreamFinalInterval", "/tmp/"); // TODO(liuyao) get row size from phy plan pOperator->name = "StreamBlockScanOperator"; pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN; diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index deca2f380474d4845af09e3e06e45438af0c83ac..9346dbf54a7037d5f072619fa07fef6b4dfde58c 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -9,6 +9,7 @@ typedef enum SResultTsInterpType { } SResultTsInterpType; static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator); +static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator); /* * There are two cases to handle: @@ -1039,13 +1040,9 @@ static void setInverFunction(SqlFunctionCtx* pCtx, int32_t num, EStreamType type } } -void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, - int16_t bytes, uint64_t groupId, int32_t numOfOutput) { - SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId); - SResultRowPosition* p1 = - (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, - GET_RES_WINDOW_KEY_LEN(bytes)); - SResultRow* pResult = getResultRowByPos(pSup->pResultBuf, p1); +void doClearWindowImpl(SResultRowPosition* p1, SDiskbasedBuf* pResultBuf, + SOptrBasicInfo* pBinfo, int32_t numOfOutput) { + SResultRow* pResult = getResultRowByPos(pResultBuf, p1); SqlFunctionCtx* pCtx = pBinfo->pCtx; for (int32_t i = 0; i < numOfOutput; ++i) { pCtx[i].resultInfo = getResultCell(pResult, i, pBinfo->rowCellInfoOffset); @@ -1060,6 +1057,15 @@ void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, } } +void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, + int16_t bytes, uint64_t groupId, int32_t numOfOutput) { + SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId); + SResultRowPosition* p1 = + (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, + GET_RES_WINDOW_KEY_LEN(bytes)); + doClearWindowImpl(p1, pSup->pResultBuf, pBinfo, numOfOutput); +} + static void doClearWindows(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, SInterval* pIntrerval, int32_t tsIndex, int32_t numOfOutput, SSDataBlock* pBlock) { SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex); @@ -1112,8 +1118,8 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) { } if (pBlock->info.type == STREAM_REPROCESS) { - doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, - pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock); + doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, 0, + pOperator->numOfExprs, pBlock); qDebug("%s clear existed time window results for updates checked", GET_TASKID(pTaskInfo)); continue; } @@ -1644,9 +1650,10 @@ _error: return NULL; } -static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock, +static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock, int32_t tableGroupId) { SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info; + SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo); SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo; int32_t numOfOutput = pOperatorInfo->numOfExprs; SArray* pUpdated = 
taosArrayInit(4, POINTER_BYTES); @@ -1659,7 +1666,10 @@ static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRes if (pSDataBlock->pDataBlock != NULL) { SColumnInfoData* pColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); tsCols = (int64_t*)pColDataInfo->pData; + } else { + return pUpdated; } + int32_t startPos = ascScan ? 0 : (pSDataBlock->info.rows - 1); TSKEY ts = getStartTsKey(&pSDataBlock->info.window, tsCols, pSDataBlock->info.rows, ascScan); STimeWindow nextWin = getActiveTimeWindow(pInfo->aggSup.pResultBuf, pResultRowInfo, ts, @@ -1720,7 +1730,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock); continue; } - pUpdated = doHashInterval(pOperator, &pInfo->binfo.resultRowInfo, pBlock, 0); + pUpdated = doHashInterval(pOperator, pBlock, 0); } finalizeUpdatedResult(pOperator->numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pInfo->binfo.rowCellInfoOffset); @@ -1730,3 +1740,534 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { pOperator->status = OP_RES_TO_RETURN; return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes; } + +void destroyStreamAggSupporter(SStreamAggSupporter* pSup) { + taosArrayDestroy(pSup->pResultRows); + taosMemoryFreeClear(pSup->pKeyBuf); + destroyDiskbasedBuf(pSup->pResultBuf); +} + +void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) { + SStreamSessionAggOperatorInfo* pInfo = (SStreamSessionAggOperatorInfo*)param; + doDestroyBasicInfo(&pInfo->binfo, numOfOutput); + destroyStreamAggSupporter(&pInfo->streamAggSup); + cleanupGroupResInfo(&pInfo->groupResInfo); +} + +int32_t initBiasicInfo(SOptrBasicInfo* pBasicInfo, SExprInfo* pExprInfo, + int32_t numOfCols, SSDataBlock* pResultBlock, SDiskbasedBuf* pResultBuf) { + pBasicInfo->pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pBasicInfo->rowCellInfoOffset); + pBasicInfo->pRes = pResultBlock; + for (int32_t i = 0; i < numOfCols; ++i) { + pBasicInfo->pCtx[i].pBuf = pResultBuf; + } + return TSDB_CODE_SUCCESS; +} + +void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t nums) { + for (int i = 0; i < nums; i++) { + pDummy[i].functionId = pCtx[i].functionId; + } +} +void initDownStream(SOperatorInfo* downstream, SStreamSessionAggOperatorInfo* pInfo) { + ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN); + SStreamBlockScanInfo* pScanInfo = downstream->info; + pScanInfo->sessionSup = + (SessionWindowSupporter){.pStreamAggSup = &pInfo->streamAggSup, .gap = pInfo->gap}; + pScanInfo->pUpdateInfo = updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 60000 * 60 * 6); +} + +SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, + SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap, + int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo) { + SStreamSessionAggOperatorInfo* pInfo = + taosMemoryCalloc(1, sizeof(SStreamSessionAggOperatorInfo)); + SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo)); + if (pInfo == NULL || pOperator == NULL) { + goto _error; + } + + initResultSizeInfo(pOperator, 4096); + + int32_t code = initStreamAggSupporter(&pInfo->streamAggSup, "StreamSessionAggOperatorInfo"); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + + code = initBiasicInfo(&pInfo->binfo, pExprInfo, numOfCols, pResBlock, + pInfo->streamAggSup.pResultBuf); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + 
pInfo->streamAggSup.resultRowSize = getResultRowSize(pInfo->binfo.pCtx, numOfCols); + + pInfo->pDummyCtx = (SqlFunctionCtx*)taosMemoryCalloc(numOfCols, sizeof(SqlFunctionCtx)); + if (pInfo->pDummyCtx == NULL) { + goto _error; + } + initDummyFunction(pInfo->pDummyCtx, pInfo->binfo.pCtx, numOfCols); + + pInfo->twAggSup = *pTwAggSupp; + initResultRowInfo(&pInfo->binfo.resultRowInfo, 8); + initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window); + + pInfo->primaryTsIndex = tsSlotId; + pInfo->gap = gap; + pInfo->binfo.pRes = pResBlock; + pInfo->order = TSDB_ORDER_ASC; + _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); + pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK); + pInfo->pDelIterator = NULL; + pInfo->pDelRes = createOneDataBlock(pResBlock, false); + blockDataEnsureCapacity(pInfo->pDelRes, 64); + + pOperator->name = "StreamSessionWindowAggOperator"; + pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW; + pOperator->blocking = true; + pOperator->status = OP_NOT_OPENED; + pOperator->pExpr = pExprInfo; + pOperator->numOfExprs = numOfCols; + pOperator->info = pInfo; + pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doStreamSessionWindowAgg, + NULL, NULL, destroyStreamSessionAggOperatorInfo, aggEncodeResultRow, + aggDecodeResultRow, NULL); + pOperator->pTaskInfo = pTaskInfo; + initDownStream(downstream, pInfo); + code = appendDownstream(pOperator, &downstream, 1); + return pOperator; + +_error: + if (pInfo != NULL) { + destroyStreamSessionAggOperatorInfo(pInfo, numOfCols); + } + + taosMemoryFreeClear(pInfo); + taosMemoryFreeClear(pOperator); + pTaskInfo->code = code; + return NULL; +} + +typedef int64_t (*__get_value_fn_t)(void* data, int32_t index); + +int32_t binarySearch(void* keyList, int num, TSKEY key, int order, + __get_value_fn_t getValuefn) { + int firstPos = 0, lastPos = num - 1, midPos = -1; + int numOfRows = 0; + + if (num <= 0) return -1; + if (order == TSDB_ORDER_DESC) { + // find the first position which is smaller than the key + while (1) { + if (key >= getValuefn(keyList, lastPos)) return lastPos; + if (key == getValuefn(keyList, firstPos)) return firstPos; + if (key < getValuefn(keyList, firstPos)) return firstPos - 1; + + numOfRows = lastPos - firstPos + 1; + midPos = (numOfRows >> 1) + firstPos; + + if (key < getValuefn(keyList, midPos)) { + lastPos = midPos - 1; + } else if (key > getValuefn(keyList, midPos)) { + firstPos = midPos + 1; + } else { + break; + } + } + + } else { + // find the first position which is bigger than the key + while (1) { + if (key <= getValuefn(keyList, firstPos)) return firstPos; + if (key == getValuefn(keyList, lastPos)) return lastPos; + + if (key > getValuefn(keyList, lastPos)) { + lastPos = lastPos + 1; + if (lastPos >= num) + return -1; + else + return lastPos; + } + + numOfRows = lastPos - firstPos + 1; + midPos = (numOfRows >> 1) + firstPos; + + if (key < getValuefn(keyList, midPos)) { + lastPos = midPos - 1; + } else if (key > getValuefn(keyList, midPos)) { + firstPos = midPos + 1; + } else { + break; + } + } + } + + return midPos; +} + +int64_t getSessionWindowEndkey(void* data, int32_t index) { + SArray* pWinInfos = (SArray*) data; + SResultWindowInfo* pWin = taosArrayGet(pWinInfos, index); + return pWin->win.ekey; +} +static bool isInWindow(SResultWindowInfo* pWin, TSKEY ts, int64_t gap) { + int64_t sGap = ts - pWin->win.skey; + int64_t eGap = pWin->win.ekey - ts; + if ( (sGap < 0 && sGap >= -gap) || (eGap < 0 && eGap >= -gap) || (sGap >= 0 && 
eGap >= 0) ) { + return true; + } + return false; +} + +static SResultWindowInfo* insertNewSessionWindow(SArray* pWinInfos, TSKEY ts, + int32_t index) { + SResultWindowInfo win = + {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false}; + return taosArrayInsert(pWinInfos, index, &win); +} + +static SResultWindowInfo* addNewSessionWindow(SArray* pWinInfos, TSKEY ts) { + SResultWindowInfo win = + {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false}; + return taosArrayPush(pWinInfos, &win); +} + +SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap, + int32_t* pIndex) { + int32_t size = taosArrayGetSize(pWinInfos); + if (size == 0) { + return addNewSessionWindow(pWinInfos, ts); + } + // find the first position which is smaller than the key + int32_t index = binarySearch(pWinInfos, size, ts, TSDB_ORDER_DESC, + getSessionWindowEndkey); + SResultWindowInfo* pWin = NULL; + if (index >= 0) { + pWin = taosArrayGet(pWinInfos, index); + if (isInWindow(pWin, ts, gap)) { + *pIndex = index; + return pWin; + } + } + + if (index + 1 < size) { + pWin = taosArrayGet(pWinInfos, index + 1); + if (isInWindow(pWin, ts, gap)) { + *pIndex = index + 1; + return pWin; + } + } + + if (index == size - 1) { + *pIndex = taosArrayGetSize(pWinInfos); + return addNewSessionWindow(pWinInfos, ts); + } + *pIndex = index; + return insertNewSessionWindow(pWinInfos, ts, index); +} + +int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows, + int32_t start, int64_t gap, SHashObj* pStDeleted) { + for (int32_t i = start; i < rows; ++i) { + if (!isInWindow(pWinInfo, pTs[i], gap)) { + return i - start; + } + if (pWinInfo->win.skey > pTs[i]) { + if (pStDeleted && pWinInfo->isOutput) { + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY)); + pWinInfo->isOutput = false; + } + pWinInfo->win.skey = pTs[i]; + } + pWinInfo->win.ekey = TMAX(pWinInfo->win.ekey, pTs[i]); + } + return rows - start; +} + +static int32_t setWindowOutputBuf(SResultWindowInfo* pWinInfo, SResultRow** pResult, + SqlFunctionCtx* pCtx, int32_t groupId, int32_t numOfOutput, + int32_t* rowCellInfoOffset, SStreamAggSupporter* pAggSup, SExecTaskInfo* pTaskInfo) { + assert(pWinInfo->win.skey <= pWinInfo->win.ekey); + // too many time window in query + int32_t size = taosArrayGetSize(pAggSup->pResultRows); + if (size > MAX_INTERVAL_TIME_WINDOW) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); + } + + if (pWinInfo->pos.pageId == -1) { + *pResult = getNewResultRow_rv(pAggSup->pResultBuf, groupId, pAggSup->resultRowSize); + if (*pResult == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + initResultRow(*pResult); + + // add a new result set for a new group + pWinInfo->pos.pageId = (*pResult)->pageId; + pWinInfo->pos.offset = (*pResult)->offset; + } else { + *pResult = getResultRowByPos(pAggSup->pResultBuf, &pWinInfo->pos); + if (!(*pResult)) { + qError("getResultRowByPos return NULL, TID:%s", GET_TASKID(pTaskInfo)); + return TSDB_CODE_FAILED; + } + } + + // set time window for current result + (*pResult)->win = pWinInfo->win; + setResultRowInitCtx(*pResult, pCtx, numOfOutput, rowCellInfoOffset); + return TSDB_CODE_SUCCESS; +} + +static int32_t doOneWindowAgg(SStreamSessionAggOperatorInfo* pInfo, + SSDataBlock* pSDataBlock, SResultWindowInfo* pCurWin, SResultRow** pResult, + int32_t startIndex, int32_t winRows, int32_t numOutput, SExecTaskInfo* pTaskInfo ) { + SColumnInfoData* pColDataInfo 
= + taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); + TSKEY* tsCols = (int64_t*)pColDataInfo->pData; + int32_t code = setWindowOutputBuf(pCurWin, pResult, pInfo->binfo.pCtx, pSDataBlock->info.groupId, + numOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + if (code != TSDB_CODE_SUCCESS || (*pResult) == NULL) { + return TSDB_CODE_QRY_OUT_OF_MEMORY; + } + updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pCurWin->win, true); + doApplyFunctions(pTaskInfo, pInfo->binfo.pCtx, &pCurWin->win, + &pInfo->twAggSup.timeWindowData, startIndex, winRows, tsCols, pSDataBlock->info.rows, + numOutput, TSDB_ORDER_ASC); + return TSDB_CODE_SUCCESS; +} + +int32_t copyWinInfoToDataBlock(SSDataBlock* pBlock, SStreamAggSupporter* pAggSup, + int32_t start, int32_t num, int32_t numOfExprs, SOptrBasicInfo* pBinfo) { + for (int32_t i = start; i < num; i += 1) { + SResultWindowInfo* pWinInfo = taosArrayGet(pAggSup->pResultRows, start); + SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pWinInfo->pos.pageId); + SResultRow* pRow = (SResultRow*)((char*)bufPage + pWinInfo->pos.offset); + for (int32_t j = 0; j < numOfExprs; ++j) { + SResultRowEntryInfo* pResultInfo = getResultCell(pRow, j, pBinfo->rowCellInfoOffset); + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, j); + char* in = GET_ROWCELL_INTERBUF(pBinfo->pCtx[j].resultInfo); + colDataAppend(pColInfoData, pBlock->info.rows, in, pResultInfo->isNullRes); + } + pBlock->info.rows += pRow->numOfRows; + releaseBufPage(pAggSup->pResultBuf, bufPage); + } + blockDataUpdateTsWindow(pBlock, -1); + return TSDB_CODE_SUCCESS; +} + +int32_t getNumCompactWindow(SArray* pWinInfos, int32_t startIndex, int64_t gap) { + SResultWindowInfo* pCurWin = taosArrayGet(pWinInfos, startIndex); + int32_t size = taosArrayGetSize(pWinInfos); + // Just look for the window behind StartIndex + for (int32_t i = startIndex + 1; i < size; i++) { + SResultWindowInfo* pWinInfo = taosArrayGet(pWinInfos, i); + if (!isInWindow(pCurWin, pWinInfo->win.skey, gap)) { + return i - startIndex - 1; + } + } + + return size - startIndex - 1; +} + +void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, + int32_t numOfOutput, SExecTaskInfo* pTaskInfo) { + for (int32_t k = 0; k < numOfOutput; ++k) { + if (fmIsWindowPseudoColumnFunc(pDestCtx[k].functionId)) { + continue; + } + int32_t code = TSDB_CODE_SUCCESS; + if (functionNeedToExecute(&pDestCtx[k]) && pDestCtx[k].fpSet.combine != NULL) { + code = pDestCtx[k].fpSet.combine(&pDestCtx[k], &pSourceCtx[k]); + if (code != TSDB_CODE_SUCCESS) { + qError("%s apply functions error, code: %s", GET_TASKID(pTaskInfo), tstrerror(code)); + pTaskInfo->code = code; + longjmp(pTaskInfo->env, code); + } + } + } +} + +void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex, int32_t num, + int32_t groupId, int32_t numOfOutput, SExecTaskInfo* pTaskInfo, SHashObj* pStUpdated, SHashObj* pStDeleted) { + SResultWindowInfo* pCurWin = taosArrayGet(pInfo->streamAggSup.pResultRows, startIndex); + SResultRow* pCurResult = NULL; + setWindowOutputBuf(pCurWin, &pCurResult, pInfo->binfo.pCtx, groupId, + numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + num += startIndex + 1; + ASSERT(num <= taosArrayGetSize(pInfo->streamAggSup.pResultRows)); + // Just look for the window behind StartIndex + for (int32_t i = startIndex + 1; i < num; i++) { + SResultWindowInfo* pWinInfo = taosArrayGet(pInfo->streamAggSup.pResultRows, i); + SResultRow* pWinResult = NULL; + 
setWindowOutputBuf(pWinInfo, &pWinResult, pInfo->pDummyCtx, groupId, + numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + pCurWin->win.ekey = TMAX(pCurWin->win.ekey, pWinInfo->win.ekey); + compactFunctions(pInfo->binfo.pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo); + taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition)); + if (pWinInfo->isOutput) { + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY)); + pWinInfo->isOutput = false; + } + taosArrayRemove(pInfo->streamAggSup.pResultRows, i); + } +} + +static void doStreamSessionWindowAggImpl(SOperatorInfo* pOperator, + SSDataBlock* pSDataBlock, SHashObj* pStUpdated, SHashObj* pStDeleted) { + SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; + SStreamSessionAggOperatorInfo* pInfo = pOperator->info; + bool masterScan = true; + int32_t numOfOutput = pOperator->numOfExprs; + int64_t groupId = pSDataBlock->info.groupId; + int64_t gap = pInfo->gap; + int64_t code = TSDB_CODE_SUCCESS; + + int32_t step = 1; + bool ascScan = true; + TSKEY* tsCols = NULL; + SResultRow* pResult = NULL; + int32_t winRows = 0; + + if (pSDataBlock->pDataBlock != NULL) { + SColumnInfoData* pColDataInfo = + taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); + tsCols = (int64_t*)pColDataInfo->pData; + } else { + return ; + } + + SStreamAggSupporter* pAggSup = &pInfo->streamAggSup; + for(int32_t i = 0; i < pSDataBlock->info.rows; ) { + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = + getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex); + winRows = + updateSessionWindowInfo(pCurWin, tsCols, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted); + code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pTaskInfo); + if (code != TSDB_CODE_SUCCESS || pResult == NULL) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + } + // window start(end) key interpolation + // doWindowBorderInterpolation(pOperatorInfo, pSDataBlock, pInfo->binfo.pCtx, pResult, &nextWin, startPos, forwardStep, + // pInfo->order, false); + int32_t winNum = getNumCompactWindow(pAggSup->pResultRows, winIndex, gap); + if (winNum > 0) { + compactTimeWindow(pInfo, winIndex, winNum, groupId, numOfOutput, pTaskInfo, pStUpdated, pStDeleted); + } + + code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &(pCurWin->win.skey), sizeof(TSKEY)); + if (code != TSDB_CODE_SUCCESS) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + } + pCurWin->isOutput = true; + i += winRows; + } +} + +static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SOptrBasicInfo* pBinfo, + SSDataBlock* pBlock, int32_t tsIndex, int32_t numOfOutput, int64_t gap) { + SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex); + TSKEY *tsCols = (TSKEY*)pColDataInfo->pData; + int32_t step = 0; + for (int32_t i = 0; i < pBlock->info.rows; i += step) { + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = + getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex); + step = updateSessionWindowInfo(pCurWin, tsCols, pBlock->info.rows, i, gap, NULL); + doClearWindowImpl(&pCurWin->pos, pAggSup->pResultBuf, pBinfo, numOfOutput); + } +} + +static int32_t copyUpdateResult(SHashObj* pStUpdated, SArray* pUpdated, int32_t groupId) { + void* pData = NULL; + size_t keyLen = 0; + while((pData = taosHashIterate(pStUpdated, pData)) != NULL) { + void* key = taosHashGetKey(pData, &keyLen); + ASSERT(keyLen == sizeof(SResultRowPosition)); + 
SResKeyPos* pos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t)); + if (pos == NULL) { + return TSDB_CODE_QRY_OUT_OF_MEMORY; + } + pos->groupId = groupId; + pos->pos = *(SResultRowPosition*)key; + *(int64_t*)pos->key = *(uint64_t*)pData; + taosArrayPush(pUpdated, &pos); + } + return TSDB_CODE_SUCCESS; +} + +void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite) { + blockDataCleanup(pBlock); + size_t keyLen = 0; + while(( (*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) { + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0); + colDataAppend(pColInfoData, pBlock->info.rows, *Ite, false); + for (int32_t i = 1; i < pBlock->info.numOfCols; i++) { + pColInfoData = taosArrayGet(pBlock->pDataBlock, i); + colDataAppendNULL(pColInfoData, pBlock->info.rows); + } + pBlock->info.rows += 1; + if (pBlock->info.rows + 1 >= pBlock->info.capacity) { + break; + } + } + if ((*Ite) == NULL) { + taosHashClear(pStDeleted); + } +} + +static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator) { + if (pOperator->status == OP_EXEC_DONE) { + return NULL; + } + + SStreamSessionAggOperatorInfo* pInfo = pOperator->info; + SOptrBasicInfo* pBInfo = &pInfo->binfo; + if (pOperator->status == OP_RES_TO_RETURN) { + doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator); + if (pInfo->pDelRes->info.rows > 0) { + return pInfo->pDelRes; + } + doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, + pInfo->streamAggSup.pResultBuf); + if (pBInfo->pRes->info.rows == 0 || + !hashRemainDataInGroupInfo(&pInfo->groupResInfo)) { + doSetOperatorCompleted(pOperator); + } + return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes; + } + + _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); + SHashObj* pStUpdated = taosHashInit(64, hashFn, true, HASH_NO_LOCK); + SOperatorInfo* downstream = pOperator->pDownstream[0]; + while (1) { + SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream); + if (pBlock == NULL) { + break; + } + // the pDataBlock are always the same one, no need to call this again + setInputDataBlock(pOperator, pBInfo->pCtx, pBlock, TSDB_ORDER_ASC, MAIN_SCAN, true); + if (pBlock->info.type == STREAM_REPROCESS) { + doClearSessionWindows(&pInfo->streamAggSup, &pInfo->binfo, pBlock, 0, + pOperator->numOfExprs, pInfo->gap); + continue; + } + doStreamSessionWindowAggImpl(pOperator, pBlock, pStUpdated, pInfo->pStDeleted); + } + + // restore the value + pOperator->status = OP_RES_TO_RETURN; + SArray* pUpdated = taosArrayInit(16, POINTER_BYTES); + copyUpdateResult(pStUpdated, pUpdated, pBInfo->pRes->info.groupId); + taosHashCleanup(pStUpdated); + finalizeUpdatedResult(pOperator->numOfExprs, pInfo->streamAggSup.pResultBuf, pUpdated, + pInfo->binfo.rowCellInfoOffset); + initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated); + blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity); + doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator); + if (pInfo->pDelRes->info.rows > 0) { + return pInfo->pDelRes; + } + doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, + pInfo->streamAggSup.pResultBuf); + return pBInfo->pRes->info.rows == 0 ? 
NULL : pBInfo->pRes; +} diff --git a/source/libs/function/inc/builtins.h b/source/libs/function/inc/builtins.h index 3a753325bdffc3886af44a1f06a8a6d1a1dcd31b..3bd0f35bf5f8b29cd585ec841363b091b02211c5 100644 --- a/source/libs/function/inc/builtins.h +++ b/source/libs/function/inc/builtins.h @@ -37,6 +37,7 @@ typedef struct SBuiltinFuncDefinition { FScalarExecProcess sprocessFunc; FExecFinalize finalizeFunc; FExecProcess invertFunc; + FExecCombine combineFunc; } SBuiltinFuncDefinition; extern const SBuiltinFuncDefinition funcMgtBuiltins[]; diff --git a/source/libs/function/inc/builtinsimpl.h b/source/libs/function/inc/builtinsimpl.h index 3e2ccbc6b8fd86926f576eee274efa233a6ed95c..d041e08d35be44d97feb5ec09642f3543b002c44 100644 --- a/source/libs/function/inc/builtinsimpl.h +++ b/source/libs/function/inc/builtinsimpl.h @@ -27,6 +27,7 @@ bool functionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)); int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, char* finalResult); +int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); EFuncDataRequired countDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow); bool getCountFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); @@ -37,24 +38,29 @@ EFuncDataRequired statisDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWin bool getSumFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t sumFunction(SqlFunctionCtx *pCtx); int32_t sumInvertFunction(SqlFunctionCtx *pCtx); +int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool minmaxFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); bool getMinmaxFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t minFunction(SqlFunctionCtx* pCtx); int32_t maxFunction(SqlFunctionCtx *pCtx); int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); +int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); +int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getAvgFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool avgFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t avgFunction(SqlFunctionCtx* pCtx); int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t avgInvertFunction(SqlFunctionCtx* pCtx); +int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getStddevFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool stddevFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t stddevFunction(SqlFunctionCtx* pCtx); int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t stddevInvertFunction(SqlFunctionCtx* pCtx); +int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getLeastSQRFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool leastSQRFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); @@ -73,8 +79,10 @@ int32_t diffFunction(SqlFunctionCtx *pCtx); bool getFirstLastFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t firstFunction(SqlFunctionCtx *pCtx); +int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); int32_t lastFunction(SqlFunctionCtx *pCtx); int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); +int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* 
pSourceCtx); bool getTopBotFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv); int32_t topFunction(SqlFunctionCtx *pCtx); diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c index 2cec75c8d3f03270613241ed44973502ff1e72fb..b76ca1ec053ee55a1c24fd9df179f9fa73ea3cb3 100644 --- a/source/libs/function/src/builtins.c +++ b/source/libs/function/src/builtins.c @@ -745,7 +745,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = functionSetup, .processFunc = countFunction, .finalizeFunc = functionFinalize, - .invertFunc = countInvertFunction + .invertFunc = countInvertFunction, + .combineFunc = combineFunction, }, { .name = "sum", @@ -757,7 +758,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = functionSetup, .processFunc = sumFunction, .finalizeFunc = functionFinalize, - .invertFunc = sumInvertFunction + .invertFunc = sumInvertFunction, + .combineFunc = sumCombine, }, { .name = "min", @@ -768,7 +770,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getMinmaxFuncEnv, .initFunc = minmaxFunctionSetup, .processFunc = minFunction, - .finalizeFunc = minmaxFunctionFinalize + .finalizeFunc = minmaxFunctionFinalize, + .combineFunc = minCombine }, { .name = "max", @@ -779,7 +782,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getMinmaxFuncEnv, .initFunc = minmaxFunctionSetup, .processFunc = maxFunction, - .finalizeFunc = minmaxFunctionFinalize + .finalizeFunc = minmaxFunctionFinalize, + .combineFunc = maxCombine }, { .name = "stddev", @@ -790,7 +794,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = stddevFunctionSetup, .processFunc = stddevFunction, .finalizeFunc = stddevFinalize, - .invertFunc = stddevInvertFunction + .invertFunc = stddevInvertFunction, + .combineFunc = stddevCombine, }, { .name = "leastsquares", @@ -801,7 +806,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = leastSQRFunctionSetup, .processFunc = leastSQRFunction, .finalizeFunc = leastSQRFinalize, - .invertFunc = leastSQRInvertFunction + .invertFunc = leastSQRInvertFunction, }, { .name = "avg", @@ -812,7 +817,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = avgFunctionSetup, .processFunc = avgFunction, .finalizeFunc = avgFinalize, - .invertFunc = avgInvertFunction + .invertFunc = avgInvertFunction, + .combineFunc = avgCombine, }, { .name = "percentile", @@ -894,7 +900,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getFirstLastFuncEnv, .initFunc = functionSetup, .processFunc = firstFunction, - .finalizeFunc = functionFinalize + .finalizeFunc = functionFinalize, + .combineFunc = firstCombine, }, { .name = "last", @@ -904,7 +911,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getFirstLastFuncEnv, .initFunc = functionSetup, .processFunc = lastFunction, - .finalizeFunc = lastFinalize + .finalizeFunc = lastFinalize, + .combineFunc = lastCombine, }, { .name = "histogram", diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index ad92d095d5a292d366f127642e835b3dadda10dd..da842877dcdb9afcb2d32dbf47dada1864a35972 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -292,6 +292,24 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return pResInfo->numOfRes; } +int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = 
GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + int32_t bytes = pDestCtx->input.pData[0]->info.bytes; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (pSResInfo->numOfRes != 0 && + (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) > *(TSKEY*)(pSBuf + bytes)) ) { + memcpy(pDBuf, pSBuf, bytes); + *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes); + pDResInfo->numOfRes = 1; + } + return TSDB_CODE_SUCCESS; +} + int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)) { return 0; } @@ -388,6 +406,18 @@ int32_t countInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + *((int64_t*)pDBuf) += *((int64_t*)pSBuf); + + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + #define LIST_ADD_N(_res, _col, _start, _rows, _t, numOfElem) \ do { \ _t* d = (_t*)(_col->pData); \ @@ -537,6 +567,26 @@ int32_t sumInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SSumRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SSumRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_BOOL) { + pDBuf->isum += pSBuf->isum; + } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) { + pDBuf->usum += pSBuf->usum; + } else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) { + pDBuf->dsum += pSBuf->dsum; + } + + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + bool getSumFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SSumRes); return true; @@ -738,6 +788,24 @@ int32_t avgInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SAvgRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SAvgRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_INTEGER_TYPE(type)) { + pDBuf->sum.isum += pSBuf->sum.isum; + } else { + pDBuf->sum.dsum += pSBuf->sum.dsum; + } + pDBuf->count += pSBuf->count; + + return TSDB_CODE_SUCCESS; +} + int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { SInputColumnInfoData* pInput = &pCtx->input; int32_t type = pInput->pData[0]->info.type; @@ -1273,6 +1341,34 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple } } +int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int32_t isMinFunc) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SMinmaxResInfo* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SMinmaxResInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + if (IS_FLOAT_TYPE(type)) { + if (pSBuf->assign && + ( 
(((*(double*)&pDBuf->v) < (*(double*)&pSBuf->v)) ^ isMinFunc) || !pDBuf->assign ) ) { + *(double*) &pDBuf->v = *(double*) &pSBuf->v; + } + } else { + if ( pSBuf->assign && ( ((pDBuf->v < pSBuf->v) ^ isMinFunc) || !pDBuf->assign ) ) { + pDBuf->v = pSBuf->v; + } + } + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + +int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + return minMaxCombine(pDestCtx, pSourceCtx, 1); +} +int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + return minMaxCombine(pDestCtx, pSourceCtx, 0); +} + bool getStddevFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SStddevRes); return true; @@ -1491,6 +1587,25 @@ int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return functionFinalize(pCtx, pBlock); } +int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SStddevRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SStddevRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_INTEGER_TYPE(type)) { + pDBuf->isum += pSBuf->isum; + pDBuf->quadraticISum += pSBuf->quadraticISum; + } else { + pDBuf->dsum += pSBuf->dsum; + pDBuf->quadraticDSum += pSBuf->quadraticDSum; + } + pDBuf->count += pSBuf->count; + return TSDB_CODE_SUCCESS; +} + bool getLeastSQRFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SLeastSQRInfo); return true; @@ -1979,6 +2094,24 @@ int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return pResInfo->numOfRes; } +int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + int32_t bytes = pDestCtx->input.pData[0]->info.bytes; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (pSResInfo->numOfRes != 0 && + (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) < *(TSKEY*)(pSBuf + bytes)) ) { + memcpy(pDBuf, pSBuf, bytes); + *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes); + pDResInfo->numOfRes = 1; + } + return TSDB_CODE_SUCCESS; +} + bool getDiffFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SDiffInfo); return true; diff --git a/source/libs/function/src/functionMgt.c b/source/libs/function/src/functionMgt.c index 49b20ebc853e1f97f191b2f5d9d0c5396d241b60..506b0eb8da98444491b2f86f0e9951b71193de75 100644 --- a/source/libs/function/src/functionMgt.c +++ b/source/libs/function/src/functionMgt.c @@ -118,6 +118,7 @@ int32_t fmGetFuncExecFuncs(int32_t funcId, SFuncExecFuncs* pFpSet) { pFpSet->init = funcMgtBuiltins[funcId].initFunc; pFpSet->process = funcMgtBuiltins[funcId].processFunc; pFpSet->finalize = funcMgtBuiltins[funcId].finalizeFunc; + pFpSet->combine = funcMgtBuiltins[funcId].combineFunc; return TSDB_CODE_SUCCESS; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index f28885aad560d09e3aee28f524d2835d5b66f2de..8887b9841ac8dc907d3a9a71360db20674278cfd 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -230,6 +230,8 @@ const char* nodesNodeName(ENodeType type) { return "PhysiFill"; case 
QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: return "PhysiSessionWindow"; + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: + return "PhysiStreamSessionWindow"; case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return "PhysiStateWindow"; case QUERY_NODE_PHYSICAL_PLAN_PARTITION: @@ -2528,6 +2530,29 @@ static int32_t jsonToOrderByExprNode(const SJson* pJson, void* pObj) { return code; } +static const char* jkSessionWindowTsPrimaryKey = "TsPrimaryKey"; +static const char* jkSessionWindowGap = "Gap"; + +static int32_t sessionWindowNodeToJson(const void* pObj, SJson* pJson) { + const SSessionWindowNode * pNode = (const SSessionWindowNode*)pObj; + + int32_t code = tjsonAddObject(pJson, jkSessionWindowTsPrimaryKey, nodeToJson, pNode->pCol); + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddObject(pJson, jkSessionWindowGap, nodeToJson, pNode->pGap); + } + return code; +} + +static int32_t jsonToSessionWindowNode(const SJson* pJson, void* pObj) { + SSessionWindowNode* pNode = (SSessionWindowNode*)pObj; + + int32_t code = jsonToNodeObject(pJson, jkSessionWindowTsPrimaryKey, (SNode **)&pNode->pCol); + if (TSDB_CODE_SUCCESS == code) { + code = jsonToNodeObject(pJson, jkSessionWindowGap, (SNode **)&pNode->pGap); + } + return code; +} + static const char* jkIntervalWindowInterval = "Interval"; static const char* jkIntervalWindowOffset = "Offset"; static const char* jkIntervalWindowSliding = "Sliding"; @@ -3015,8 +3040,9 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) { return orderByExprNodeToJson(pObj, pJson); case QUERY_NODE_LIMIT: case QUERY_NODE_STATE_WINDOW: - case QUERY_NODE_SESSION_WINDOW: break; + case QUERY_NODE_SESSION_WINDOW: + return sessionWindowNodeToJson(pObj, pJson); case QUERY_NODE_INTERVAL_WINDOW: return intervalWindowNodeToJson(pObj, pJson); case QUERY_NODE_NODE_LIST: @@ -3096,6 +3122,7 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) { case QUERY_NODE_PHYSICAL_PLAN_FILL: return physiFillNodeToJson(pObj, pJson); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: return physiSessionWindowNodeToJson(pObj, pJson); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return physiStateWindowNodeToJson(pObj, pJson); @@ -3134,6 +3161,8 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) { return jsonToTempTableNode(pJson, pObj); case QUERY_NODE_ORDER_BY_EXPR: return jsonToOrderByExprNode(pJson, pObj); + case QUERY_NODE_SESSION_WINDOW: + return jsonToSessionWindowNode(pJson, pObj); case QUERY_NODE_INTERVAL_WINDOW: return jsonToIntervalWindowNode(pJson, pObj); case QUERY_NODE_NODE_LIST: @@ -3196,6 +3225,7 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) { case QUERY_NODE_PHYSICAL_PLAN_FILL: return jsonToPhysiFillNode(pJson, pObj); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: return jsonToPhysiSessionWindowNode(pJson, pObj); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return jsonToPhysiStateWindowNode(pJson, pObj); diff --git a/source/libs/nodes/src/nodesTraverseFuncs.c b/source/libs/nodes/src/nodesTraverseFuncs.c index e8274c3c8eaa916a6e2c3877cde6185b99a623d8..ae1ff5744bcc48eeaec661137e01eeaf01684636 100644 --- a/source/libs/nodes/src/nodesTraverseFuncs.c +++ b/source/libs/nodes/src/nodesTraverseFuncs.c @@ -517,6 +517,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext); break; case 
QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext); break; case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: { diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c index 3f7003dfa3b5a911e35c8823d1d883d9cca5bea7..c9c78b39124ccb0518bab97a3d8525b3fd8cfa5b 100644 --- a/source/libs/nodes/src/nodesUtilFuncs.c +++ b/source/libs/nodes/src/nodesUtilFuncs.c @@ -251,6 +251,8 @@ int32_t nodesNodeSize(ENodeType type) { return sizeof(SFillPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: return sizeof(SSessionWinodwPhysiNode); + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: + return sizeof(SStreamSessionWinodwPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return sizeof(SStateWinodwPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_PARTITION: @@ -664,6 +666,7 @@ void nodesDestroyNode(SNodeptr pNode) { destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode); break; case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode); break; case QUERY_NODE_PHYSICAL_PLAN_DISPATCH: diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index fcba2aa2d33a926b1608c03d22489bd86fdded8a..0f88a54e913c57c1fdc848317d7b8a85a4ac0e88 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -945,7 +945,8 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SWindowLogicNode* pWindowLogicNode, SPhysiNode** pPhyNode) { SSessionWinodwPhysiNode* pSession = (SSessionWinodwPhysiNode*)makePhysiNode( - pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW); + pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, + (pCxt->pPlanCxt->streamQuery ? 
QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW : QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW)); if (NULL == pSession) { return TSDB_CODE_OUT_OF_MEMORY; } diff --git a/source/libs/stream/src/tstreamUpdate.c b/source/libs/stream/src/tstreamUpdate.c index d21dadfe559d3dca81b34e3f6ade817ab1278a3b..75319a2354f638d6dab9d871bdd402cfb15ee2c4 100644 --- a/source/libs/stream/src/tstreamUpdate.c +++ b/source/libs/stream/src/tstreamUpdate.c @@ -127,7 +127,10 @@ static SScalableBf *getSBf(SUpdateInfo *pInfo, TSKEY ts) { if (pInfo->minTS < 0) { pInfo->minTS = (TSKEY)(ts / pInfo->interval * pInfo->interval); } - uint64_t index = (uint64_t)((ts - pInfo->minTS) / pInfo->interval); + int64_t index = (int64_t)((ts - pInfo->minTS) / pInfo->interval); + if (index < 0) { + return NULL; + } if (index >= pInfo->numSBFs) { uint64_t count = index + 1 - pInfo->numSBFs; windowSBfDelete(pInfo, count); diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 7c4f0fa2dd5d170f60f583aa87723a73f72be146..a4e39260375c57dac6b437b65df1ee2b1701a9d6 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -272,6 +272,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_EXIST, "Consumer not exist") TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_READY, "Consumer waiting for rebalance") TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOPIC_SUBSCRIBED, "Topic subscribed cannot be dropped") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_ALREADY_EXIST, "Stream already exists") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_NOT_EXIST, "Stream not exist") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_STREAM_OPTION, "Invalid stream option") + // mnode-sma TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_ALREADY_EXIST, "SMA already exists") TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_NOT_EXIST, "SMA does not exist") diff --git a/tests/script/jenkins/basic.txt b/tests/script/jenkins/basic.txt index 7f32407b29a3b1a251c6d95fadd24debe38e360d..1cc8b97f6fb2a5cf37161a7df55c5abc04118be3 100644 --- a/tests/script/jenkins/basic.txt +++ b/tests/script/jenkins/basic.txt @@ -67,6 +67,8 @@ # ---- stream ./test.sh -f tsim/stream/basic0.sim ./test.sh -f tsim/stream/basic1.sim +./test.sh -f tsim/stream/session0.sim +./test.sh -f tsim/stream/session1.sim # ---- transaction ./test.sh -f tsim/trans/lossdata1.sim diff --git a/tests/script/tsim/stream/session0.sim b/tests/script/tsim/stream/session0.sim new file mode 100644 index 0000000000000000000000000000000000000000..46b343632abd0347502b86e0978f2afd22c139a8 --- /dev/null +++ b/tests/script/tsim/stream/session0.sim @@ -0,0 +1,162 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 +system sh/exec.sh -n dnode1 -s start +sleep 50 +sql connect + +print =============== create database +sql create database test vgroups 1 +sql show databases +if $rows != 3 then + return -1 +endi + +print $data00 $data01 $data02 + +sql use test + + +sql create table t1(ts timestamp, a int, b int , c int, d double,id int); +sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), max(a), min(d), stddev(a), last(a), first(d), max(id) s from t1 session(ts,10s); +sql insert into t1 values(1648791213000,NULL,NULL,NULL,NULL,1); +sql insert into t1 values(1648791223001,10,2,3,1.1,2); +sql insert into t1 values(1648791233002,3,2,3,2.1,3); +sql insert into t1 values(1648791243003,NULL,NULL,NULL,NULL,4); +sql insert into t1 values(1648791213002,NULL,NULL,NULL,NULL,5) (1648791233012,NULL,NULL,NULL,NULL,6); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 3 then + print ======$data01 + return -1 +endi 
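+# With session(ts,10s), the six rows inserted above should fall into two windows:
+#   [1648791213000, 1648791223001] -> ids 1, 5, 2 (s = max(id) = 5)
+#   [1648791233002, 1648791243003] -> ids 3, 6, 4 (s = max(id) = 6)
+# Ordering by s desc puts the second window in row 0, hence count(*) = 3 checked above.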
+ +if $data02 != 3 then + print ======$data02 + return -1 +endi + +if $data03 != 3 then + print ======$data03 + return -1 +endi + +if $data04 != 2.100000000 then + print ======$data04 + return -1 +endi + +if $data05 != 0.000000000 then + print ======$data05 + return -1 +endi + +if $data06 != 3 then + print ======$data05 + return -1 +endi + +if $data07 != 2.100000000 then + print ======$data05 + return -1 +endi + +if $data08 != 6 then + print ======$data05 + return -1 +endi + +# row 1 + +if $data11 != 3 then + print ======$data01 + return -1 +endi + +if $data12 != 10 then + print ======$data02 + return -1 +endi + +if $data13 != 10 then + print ======$data03 + return -1 +endi + +if $data14 != 1.100000000 then + print ======$data04 + return -1 +endi + +if $data15 != 0.000000000 then + print ======$data05 + return -1 +endi + +if $data16 != 10 then + print ======$data05 + return -1 +endi + +if $data17 != 1.100000000 then + print ======$data05 + return -1 +endi + +if $data18 != 5 then + print ======$data05 + return -1 +endi + +sql insert into t1 values(1648791213000,1,2,3,1.0,7); +sql insert into t1 values(1648791223001,2,2,3,1.1,8); +sql insert into t1 values(1648791233002,3,2,3,2.1,9); +sql insert into t1 values(1648791243003,4,2,3,3.1,10); +sql insert into t1 values(1648791213002,4,2,3,4.1,11) ; +sql insert into t1 values(1648791213002,4,2,3,4.1,12) (1648791223009,4,2,3,4.1,13); + +sql select * from streamt order by s desc ; + +# row 0 +if $data01 != 7 then + print ======$data01 + return -1 +endi + +if $data02 != 9 then + print ======$data02 + return -1 +endi + +if $data03 != 4 then + print ======$data03 + return -1 +endi + +if $data04 != 1.100000000 then + print ======$data04 + return -1 +endi + +if $data05 != 0.816496581 then + print ======$data05 + return -1 +endi + +if $data06 != 3 then + print ======$data05 + return -1 +endi + +if $data07 != 1.100000000 then + print ======$data05 + return -1 +endi + +if $data08 != 13 then + print ======$data05 + return -1 +endi + +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/stream/session1.sim b/tests/script/tsim/stream/session1.sim new file mode 100644 index 0000000000000000000000000000000000000000..a44639ba7a5e17e51e6ac8190d991bfd2edf1a9e --- /dev/null +++ b/tests/script/tsim/stream/session1.sim @@ -0,0 +1,190 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 +system sh/exec.sh -n dnode1 -s start +sleep 50 +sql connect + +print =============== create database +sql create database test vgroups 1 +sql show databases +if $rows != 3 then + return -1 +endi + +print $data00 $data01 $data02 + +sql use test + + +sql create table t1(ts timestamp, a int, b int , c int, d double,id int); +sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), min(b), max(id) s from t1 session(ts,10s); +sql insert into t1 values(1648791210000,1,1,1,1.1,1); +sql insert into t1 values(1648791220000,2,2,2,2.1,2); +sql insert into t1 values(1648791230000,3,3,3,3.1,3); +sql insert into t1 values(1648791240000,4,4,4,4.1,4); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 4 then + print ======$data01 + return -1 +endi + +if $data02 != 10 then + print ======$data02 + return -1 +endi + +if $data03 != 1 then + print ======$data03 + return -1 +endi + +if $data04 != 4 then + print ======$data04 + return -1 +endi + +sql insert into t1 values(1648791250005,5,5,5,5.1,5); +sql insert into t1 values(1648791260006,6,6,6,6.1,6); +sql insert into t1 values(1648791270007,7,7,7,7.1,7); 
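+# The two out-of-order rows inserted next (ts 1648791240005 and 1648791250006) bridge the
+# 10s gaps between the earlier rows, so all rows up to ts 1648791260006 should collapse
+# into one session window (8 rows, ids 1,2,3,4,8,5,9,6), while the row at ts 1648791270007
+# (id 7) stays in its own window.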
+sql insert into t1 values(1648791240005,5,5,5,5.1,8) (1648791250006,6,6,6,6.1,9); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 8 then + print ======$data01 + return -1 +endi + +if $data02 != 32 then + print ======$data02 + return -1 +endi + +if $data03 != 1 then + print ======$data03 + return -1 +endi + +if $data04 != 9 then + print ======$data04 + return -1 +endi + +# row 1 +if $data11 != 1 then + print ======$data11 + return -1 +endi + +if $data12 != 7 then + print ======$data12 + return -1 +endi + +if $data13 != 7 then + print ======$data13 + return -1 +endi + +if $data14 != 7 then + print ======$data14 + return -1 +endi + +sql insert into t1 values(1648791280008,7,7,7,7.1,10) (1648791300009,8,8,8,8.1,11); +sql insert into t1 values(1648791260007,7,7,7,7.1,12) (1648791290008,7,7,7,7.1,13) (1648791290009,8,8,8,8.1,14); +sql insert into t1 values(1648791500000,7,7,7,7.1,15) (1648791520000,8,8,8,8.1,16) (1648791540000,8,8,8,8.1,17); +sql insert into t1 values(1648791530000,8,8,8,8.1,18); +sql insert into t1 values(1648791220000,10,10,10,10.1,19) (1648791290008,2,2,2,2.1,20) (1648791540000,17,17,17,17.1,21) (1648791500001,22,22,22,22.1,22); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 2 then + print ======$data01 + return -1 +endi + +if $data02 != 29 then + print ======$data02 + return -1 +endi + +if $data03 != 7 then + print ======$data03 + return -1 +endi + +if $data04 != 22 then + print ======$data04 + return -1 +endi + +# row 1 +if $data11 != 3 then + print ======$data11 + return -1 +endi + +if $data12 != 33 then + print ======$data12 + return -1 +endi + +if $data13 != 8 then + print ======$data13 + return -1 +endi + +if $data14 != 21 then + print ======$data14 + return -1 +endi + +# row 2 +if $data21 != 4 then + print ======$data21 + return -1 +endi + +if $data22 != 25 then + print ======$data22 + return -1 +endi + +if $data23 != 2 then + print ======$data23 + return -1 +endi + +if $data24 != 20 then + print ======$data24 + return -1 +endi + +# row 3 +if $data31 != 10 then + print ======$data31 + return -1 +endi + +if $data32 != 54 then + print ======$data32 + return -1 +endi + +if $data33 != 1 then + print ======$data33 + return -1 +endi + +if $data34 != 19 then + print ======$data34 + return -1 +endi + +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/sync/insertDataByRunBack.sim b/tests/script/tsim/sync/insertDataByRunBack.sim index c86cd3844bd3258b5cac4f7b4bbe5dd1c3e0dec2..00f0643b61c3066de4d3bda25f60c54a9cf22084 100644 --- a/tests/script/tsim/sync/insertDataByRunBack.sim +++ b/tests/script/tsim/sync/insertDataByRunBack.sim @@ -20,6 +20,8 @@ print $data[1][0] $data[1][1] $data[1][2] $data[1][3] if $rows == 2 then if $data[1][1] == stop then goto end_insert + elif $data[0][1] == stop then + goto end_insert endi endi @@ -47,6 +49,9 @@ endw if $loop_cnt == 0 then print ====> notify main to working for insert data sql insert into interaction values (now, 'working', 0, 0); + sql select * from interaction + print $data[0][0] $data[0][1] $data[0][2] $data[0][3] + print $data[1][0] $data[1][1] $data[1][2] $data[1][3] endi $loop_cnt = $loop_cnt + 1 goto loop_insert diff --git a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim index f568008a820c880628af0128bb848297d63d5ffe..fc501096e687c0b7681bbf9e7fcad706f7aafced 100644 --- a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim +++ 
b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim @@ -155,28 +155,13 @@ while $i < $tbNum sql create table $ctb using stb tags( $i ) $ntb = $ntbPrefix . $i sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10)) - -# $x = 0 -# while $x < $rowNum -# $binary = ' . binary -# $binary = $binary . $i -# $binary = $binary . ' -# -# sql insert into $ctb values ($tstart , $i , $x , $binary ) -# sql insert into $ntb values ($tstart , 999 , 999 , 'binary-ntb' ) -# $tstart = $tstart + 1 -# $x = $x + 1 -# endw - -# print ====> insert rows: $rowNum into $ctb and $ntb - $i = $i + 1 -# $tstart = 1640966400000 endw $totalTblNum = $tbNum * 2 -print ====>totalTblNum:$totalTblNum +sleep 1000 sql show tables +print ====> expect $totalTblNum and infinsert $rows in fact if $rows != $totalTblNum then return -1 endi @@ -222,6 +207,9 @@ endi $dnodeId = dnode . $dnodeId print ====> stop $dnodeId system sh/exec.sh -n $dnodeId -s stop -x SIGINT +sleep 1000 +print ====> start $dnodeId +system sh/exec.sh -n $dnodeId -s start $loop_cnt = 0 check_vg_ready_2: @@ -245,7 +233,7 @@ if $data[0][4] == LEADER then if $data[0][8] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][3] + print ---- vgroup $dnodeId leader switch to dnode $data[0][3] goto vg_ready_2 elif $data[0][6] == LEADER then if $data[0][4] != FOLLOWER then @@ -254,7 +242,7 @@ elif $data[0][6] == LEADER then if $data[0][8] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][5] + print ---- vgroup $dnodeId leader switch to dnode $data[0][5] goto vg_ready_2 elif $data[0][8] == LEADER then if $data[0][4] != FOLLOWER then @@ -263,7 +251,7 @@ elif $data[0][8] == LEADER then if $data[0][6] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][7] + print ---- vgroup $dnodeId leader switch to dnode $data[0][7] goto vg_ready_2 else goto check_vg_ready_2 @@ -272,8 +260,6 @@ vg_ready_2: $switch_loop_cnt = $switch_loop_cnt + 1 if $switch_loop_cnt < 3 then - print ====> start $dnodeId - system sh/exec.sh -n $dnodeId -s start goto switch_leader_loop endi diff --git a/tests/system-test/1-insert/insertWithMoreVgroup.py b/tests/system-test/1-insert/insertWithMoreVgroup.py index f0f35831dbd5a276c98e2eede114ea14b7bcc5b2..8d2870fc2cf068153a424d2b1613188c018c6463 100644 --- a/tests/system-test/1-insert/insertWithMoreVgroup.py +++ b/tests/system-test/1-insert/insertWithMoreVgroup.py @@ -294,7 +294,7 @@ class TDTestCase: return def test_case3(self): - self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 8, 1*10000) + self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 1, 1*10) # self.taosBenchCreate("test209","no","db2", "stb2", 1, 8, 1*10000) # self.taosBenchCreate("chenhaoran02","no","db1", "stb1", 1, 8, 1*10000) @@ -349,17 +349,17 @@ class TDTestCase: # run case def run(self): - # create database and tables。 - self.test_case1() - tdLog.debug(" LIMIT test_case1 ............ [OK]") + # # create database and tables。 + # self.test_case1() + # tdLog.debug(" LIMIT test_case1 ............ [OK]") # # taosBenchmark : create database and table # self.test_case2() # tdLog.debug(" LIMIT test_case2 ............ [OK]") - # # taosBenchmark:create database/table and insert data - # self.test_case3() - # tdLog.debug(" LIMIT test_case3 ............ [OK]") + # taosBenchmark:create database/table and insert data + self.test_case3() + tdLog.debug(" LIMIT test_case3 ............ 
[OK]") # # test qnode diff --git a/tests/system-test/1-insert/manyVgroups.json b/tests/system-test/1-insert/manyVgroups.json index 1c9aa1f28cb0d1eba5b2cf9488dc9d5be2d3f7c2..5dea41476c8cf7777b5a548f470577e03c576663 100644 --- a/tests/system-test/1-insert/manyVgroups.json +++ b/tests/system-test/1-insert/manyVgroups.json @@ -10,7 +10,7 @@ "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, - "interlace_rows": 100000, + "interlace_rows": 0, "num_of_records_per_req": 100, "databases": [ { @@ -29,8 +29,8 @@ "batch_create_tbl_num": 50000, "data_source": "rand", "insert_mode": "taosc", - "insert_rows": 10, - "interlace_rows": 100000, + "insert_rows": 1, + "interlace_rows": 0, "insert_interval": 0, "max_sql_len": 10000000, "disorder_ratio": 0, diff --git a/tests/system-test/2-query/check_tsdb.py b/tests/system-test/2-query/check_tsdb.py new file mode 100644 index 0000000000000000000000000000000000000000..33bf351207ebeacbfea514c2733700656e757d55 --- /dev/null +++ b/tests/system-test/2-query/check_tsdb.py @@ -0,0 +1,106 @@ +import taos +import sys +import datetime +import inspect + +from util.log import * +from util.sql import * +from util.cases import * +from util.dnodes import * + +class TDTestCase: + updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 , + "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143, + "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"fnDebugFlag":143} + def init(self, conn, logSql): + tdLog.debug(f"start to excute {__file__}") + tdSql.init(conn.cursor(), True) + + def prepare_datas(self): + tdSql.execute( + '''create table stb1 + (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) + tags (t1 int) + ''' + ) + + tdSql.execute( + ''' + create table t1 + (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) + ''' + ) + for i in range(4): + tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )') + + for i in range(9): + tdSql.execute( + f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )" + ) + tdSql.execute( + f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )" + ) + tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )") + tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )") + tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )") + tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )") + + tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + + tdSql.execute( + f'''insert into t1 values + ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL 
) + ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a ) + ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a ) + ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a ) + ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a ) + ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) + ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a ) + ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a ) + ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" ) + ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" ) + ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" ) + ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) + ''' + ) + + + def restart_taosd_query_sum(self): + + for i in range(5): + tdLog.info(" this is %d_th restart taosd " %i) + os.system("taos -s ' use db ;select c6 from stb1 ; '") + tdSql.execute("use db ") + tdSql.query("select count(*) from stb1") + tdSql.checkRows(1) + tdSql.query("select sum(c1),sum(c2),sum(c3),sum(c4),sum(c5),sum(c6) from stb1;") + tdSql.checkData(0,0,99) + tdSql.checkData(0,1,499995) + tdSql.checkData(0,2,4995) + tdSql.checkData(0,3,594) + tdSql.checkData(0,4,49.950001001) + tdSql.checkData(0,5,599.940000000) + tdDnodes.stop(1) + tdDnodes.start(1) + time.sleep(2) + + + + def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring + tdSql.prepare() + + tdLog.printNoPrefix("==========step1:create table ==============") + + self.prepare_datas() + + os.system("taos -s ' select c6 from stb1 ; '") + self.restart_taosd_query_sum() + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/system-test/7-tmq/subscribeStb.py b/tests/system-test/7-tmq/subscribeStb.py index fe05d2e2235cbdfb8022dac1e7b1919f5551e80f..a0b3668d47bb45a637da035f19da3cbe01dfa9c1 100644 --- a/tests/system-test/7-tmq/subscribeStb.py +++ b/tests/system-test/7-tmq/subscribeStb.py @@ -1377,9 +1377,9 @@ class TDTestCase: self.tmqCase1(cfgPath, buildPath) self.tmqCase2(cfgPath, buildPath) - self.tmqCase3(cfgPath, buildPath) - self.tmqCase4(cfgPath, buildPath) - self.tmqCase5(cfgPath, buildPath) + # self.tmqCase3(cfgPath, buildPath) + # self.tmqCase4(cfgPath, buildPath) + # self.tmqCase5(cfgPath, buildPath) def stop(self): tdSql.close() diff --git a/tests/system-test/7-tmq/subscribeStb0.py b/tests/system-test/7-tmq/subscribeStb0.py new file mode 100644 index 0000000000000000000000000000000000000000..1d56103059e84de3afbe14647f357b152ab291c3 --- /dev/null +++ b/tests/system-test/7-tmq/subscribeStb0.py @@ -0,0 +1,1391 @@ + +import taos +import sys +import time +import socket +import os +import threading +from enum import Enum + +from util.log import * +from util.sql import * +from util.cases import * +from util.dnodes import * + +class actionType(Enum): + CREATE_DATABASE = 0 + CREATE_STABLE = 1 + CREATE_CTABLE = 2 + INSERT_DATA = 3 + +class TDTestCase: + hostname = socket.gethostname() + #rpcDebugFlagVal = '143' + 
#clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''} + #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal + #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''} + #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal + #print ("===================: ", updatecfgDict) + + def init(self, conn, logSql): + tdLog.debug(f"start to excute {__file__}") + #tdSql.init(conn.cursor()) + tdSql.init(conn.cursor(), logSql) # output sql.txt file + + def getBuildPath(self): + selfPath = os.path.dirname(os.path.realpath(__file__)) + + if ("community" in selfPath): + projPath = selfPath[:selfPath.find("community")] + else: + projPath = selfPath[:selfPath.find("tests")] + + for root, dirs, files in os.walk(projPath): + if ("taosd" in files): + rootRealPath = os.path.dirname(os.path.realpath(root)) + if ("packaging" not in rootRealPath): + buildPath = root[:len(root) - len("/build/bin")] + break + return buildPath + + def newcur(self,cfg,host,port): + user = "root" + password = "taosdata" + con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port) + cur=con.cursor() + print(cur) + return cur + + def initConsumerTable(self,cdbName='cdb'): + tdLog.info("create consume database, and consume info table, and consume result table") + tdSql.query("create database if not exists %s vgroups 1"%(cdbName)) + tdSql.query("drop table if exists %s.consumeinfo "%(cdbName)) + tdSql.query("drop table if exists %s.consumeresult "%(cdbName)) + + tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName) + tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName) + + def initConsumerInfoTable(self,cdbName='cdb'): + tdLog.info("drop consumeinfo table") + tdSql.query("drop table if exists %s.consumeinfo "%(cdbName)) + tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName) + + def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'): + sql = "insert into %s.consumeinfo values "%cdbName + sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit) + tdLog.info("consume info sql: %s"%sql) + tdSql.query(sql) + + def selectConsumeResult(self,expectRows,cdbName='cdb'): + resultList=[] + while 1: + tdSql.query("select * from %s.consumeresult"%cdbName) + #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)) + if tdSql.getRows() == expectRows: + break + else: + time.sleep(5) + + for i in range(expectRows): + tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3))) + resultList.append(tdSql.getData(i , 3)) + + return resultList + + def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0): + shellCmd = 'nohup ' + if valgrind == 1: + logFile = cfgPath + '/../log/valgrind-tmq.log' + shellCmd = 'nohup valgrind --log-file=' + logFile + shellCmd += '--tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes ' + + shellCmd += 
buildPath + '/build/bin/tmq_sim -c ' + cfgPath + shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName) + shellCmd += "> /dev/null 2>&1 &" + tdLog.info(shellCmd) + os.system(shellCmd) + + def create_database(self,tsql, dbName,dropFlag=1,vgroups=4,replica=1): + if dropFlag == 1: + tsql.execute("drop database if exists %s"%(dbName)) + + tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica)) + tdLog.debug("complete to create database %s"%(dbName)) + return + + def create_stable(self,tsql, dbName,stbName): + tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName)) + tdLog.debug("complete to create %s.%s" %(dbName, stbName)) + return + + def create_ctables(self,tsql, dbName,stbName,ctbNum): + tsql.execute("use %s" %dbName) + pre_create = "create table" + sql = pre_create + #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname)) + for i in range(ctbNum): + sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1) + if (i > 0) and (i%100 == 0): + tsql.execute(sql) + sql = pre_create + if sql != pre_create: + tsql.execute(sql) + + tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName)) + return + + def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs=0): + tdLog.debug("start to insert data ............") + tsql.execute("use %s" %dbName) + pre_insert = "insert into " + sql = pre_insert + + if startTs == 0: + t = time.time() + startTs = int(round(t * 1000)) + + #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows)) + rowsOfSql = 0 + for i in range(ctbNum): + sql += " %s_%d values "%(stbName,i) + for j in range(rowsPerTbl): + sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j) + rowsOfSql += 1 + if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)): + tsql.execute(sql) + rowsOfSql = 0 + if j < rowsPerTbl - 1: + sql = "insert into %s_%d values " %(stbName,i) + else: + sql = "insert into " + #end sql + if sql != pre_insert: + #print("insert sql:%s"%sql) + tsql.execute(sql) + tdLog.debug("insert data ............ 
[OK]") + return + + def prepareEnv(self, **parameterDict): + # create new connector for my thread + tsql=self.newcur(parameterDict['cfg'], 'localhost', 6030) + + if parameterDict["actionType"] == actionType.CREATE_DATABASE: + self.create_database(tsql, parameterDict["dbName"]) + elif parameterDict["actionType"] == actionType.CREATE_STABLE: + self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"]) + elif parameterDict["actionType"] == actionType.CREATE_CTABLE: + self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + elif parameterDict["actionType"] == actionType.INSERT_DATA: + self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],parameterDict["batchNum"]) + else: + tdLog.exit("not support's action: ", parameterDict["actionType"]) + + return + + def tmqCase1(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 1: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db1', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 100 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + time.sleep(5) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("insert process end, and start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 1 end ...... 
") + + def tmqCase2(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 2: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db2', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + parameterDict2 = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db2', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb2', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict2['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_stable(tdSql, parameterDict2["dbName"], parameterDict2["stbName"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 100 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start create child tables of stb1 and stb2") + parameterDict['actionType'] = actionType.CREATE_CTABLE + parameterDict2['actionType'] = actionType.CREATE_CTABLE + + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2) + prepareEnvThread2.start() + + prepareEnvThread.join() + prepareEnvThread2.join() + + tdLog.info("start insert data into child tables of stb1 and stb2") + parameterDict['actionType'] = actionType.INSERT_DATA + parameterDict2['actionType'] = actionType.INSERT_DATA + + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2) + prepareEnvThread2.start() + + prepareEnvThread.join() + prepareEnvThread2.join() + + tdLog.info("insert process end, and start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 2 end ...... 
") + + def tmqCase3(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 3: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db3', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 20000, \ + 'batchNum': 50, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + time.sleep(3) + tdLog.info("drop som child table of stb1") + dropTblNum = 4 + tdSql.query("drop table if exists %s.%s_1"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_2"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_3"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_4"%(parameterDict["dbName"], parameterDict["stbName"])) + + tdLog.info("drop some child tables, then start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + remaindrowcnt = parameterDict["rowsPerTbl"] * (parameterDict["ctbNum"] - dropTblNum) + + if not (totalConsumeRows < expectrowcnt and totalConsumeRows > remaindrowcnt): + tdLog.info("act consume rows: %d, expect consume rows: between %d and %d"%(totalConsumeRows, remaindrowcnt, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 3 end ...... 
") + + def tmqCase4(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 4: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db4', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 4 end ...... 
") + + def tmqCase5(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 5: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db5', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != (expectrowcnt * (1 + 1/4)): + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 5 end ...... 
") + + def tmqCase6(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 6: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db6', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 6 end ...... 
") + + def tmqCase7(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 7: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db7', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 7 end ...... 
") + + def tmqCase8(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 8: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db8', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume 0 processor") + pollDelay = 10 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume 0 result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 1 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 2 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 and 2 result") + expectRows = 3 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*2: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2)) + tdLog.exit("tmq consume rows 
error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 8 end ...... ") + + def tmqCase9(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 9: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db9', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume 0 processor") + pollDelay = 10 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume 0 result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 1 processor") + self.initConsumerInfoTable() + consumerId = 1 + ifManualCommit = 0 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 2 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 and 2 result") + expectRows = 3 + resultList = 
self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*2: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 9 end ...... ") + + def tmqCase10(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 10: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db10', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume 0 processor") + pollDelay = 10 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume 0 result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 1 processor") + self.initConsumerInfoTable() + consumerId = 1 + ifManualCommit = 1 + self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit) + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt-10000: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 2 processor") + self.initConsumerInfoTable() + consumerId = 2 + ifManualCommit = 1 + self.insertConsumerInfo(consumerId, 
expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit) + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 and 2 result") + expectRows = 3 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*2: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 10 end ...... ") + + def tmqCase11(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 11: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db11', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + 
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 11 end ...... ") + + def tmqCase12(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 12: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db12', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 12 end ...... 
") + + def tmqCase13(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 13: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db13', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*(1/2+1/4): + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4))) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 2 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 3 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act 
consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 13 end ...... ") + + def run(self): + tdSql.prepare() + + buildPath = self.getBuildPath() + if (buildPath == ""): + tdLog.exit("taosd not found!") + else: + tdLog.info("taosd found in %s" % buildPath) + cfgPath = buildPath + "/../sim/psim/cfg" + tdLog.info("cfgPath: %s" % cfgPath) + + # self.tmqCase1(cfgPath, buildPath) + # self.tmqCase2(cfgPath, buildPath) + self.tmqCase3(cfgPath, buildPath) + self.tmqCase4(cfgPath, buildPath) + self.tmqCase5(cfgPath, buildPath) + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + +event = threading.Event() + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/system-test/fulltest.sh b/tests/system-test/fulltest.sh index ee650b778ef072c0f706a4d4acee8b60b3e5917b..01b8b569030b2f9274a0670b10a3e35cebe9673b 100755 --- a/tests/system-test/fulltest.sh +++ b/tests/system-test/fulltest.sh @@ -27,6 +27,7 @@ python3 ./test.py -f 2-query/join.py python3 ./test.py -f 2-query/cast.py python3 ./test.py -f 2-query/concat.py python3 ./test.py -f 2-query/concat_ws.py +python3 ./test.py -f 2-query/check_tsdb.py # python3 ./test.py -f 2-query/union.py # python3 ./test.py -f 2-query/union2.py # python3 ./test.py -f 2-query/union3.py @@ -62,11 +63,13 @@ python3 ./test.py -f 2-query/arccos.py python3 ./test.py -f 2-query/arctan.py python3 ./test.py -f 2-query/query_cols_tags_and_or.py python3 ./test.py -f 2-query/nestedQuery.py + python3 ./test.py -f 7-tmq/basic5.py python3 ./test.py -f 7-tmq/subscribeDb.py python3 ./test.py -f 7-tmq/subscribeDb1.py python3 ./test.py -f 7-tmq/subscribeStb.py +python3 ./test.py -f 7-tmq/subscribeStb0.py python3 ./test.py -f 7-tmq/subscribeStb1.py python3 ./test.py -f 7-tmq/subscribeStb2.py diff --git a/tests/test/c/tmqSim.c b/tests/test/c/tmqSim.c index e0f58d052f612fca7ad0a257d8e137d8d4a5a1f6..accd1dd080ec21a33cde9d803a4c4e361cb96b16 100644 --- a/tests/test/c/tmqSim.c +++ b/tests/test/c/tmqSim.c @@ -321,9 +321,16 @@ int32_t saveConsumeResult(SThreadInfo* pInfo) { TAOS* pConn = taos_connect(NULL, "root", "taosdata", NULL, 0); assert(pConn != NULL); + int64_t now = taosGetTimestampMs(); + // schema: ts timestamp, consumerid int, consummsgcnt bigint, checkresult int - sprintf(sqlStr, "insert into %s.consumeresult values (now, %d, %" PRId64 ", %" PRId64 ", %d)", g_stConfInfo.cdbName, - pInfo->consumerId, pInfo->consumeMsgCnt, pInfo->consumeRowCnt, pInfo->checkresult); + sprintf(sqlStr, "insert into %s.consumeresult values (%"PRId64", %d, %" PRId64 ", %" PRId64 ", %d)", + g_stConfInfo.cdbName, + now, + pInfo->consumerId, + pInfo->consumeMsgCnt, + pInfo->consumeRowCnt, + pInfo->checkresult); char tmpString[128]; taosFprintfFile(g_fp, "%s, consume id %d result: %s\n", getCurrentTimeString(tmpString), pInfo->consumerId ,sqlStr);
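
For context on the tmqSim.c hunk above: saveConsumeResult() used to let the server fill in the result timestamp with the SQL keyword `now`; it now takes an epoch-millisecond timestamp on the client via `taosGetTimestampMs()` and writes it explicitly. A minimal sketch of the statement it generates after that change (the database name `cdb` and the literal values are placeholders, not taken from this patch):

```sql
-- Roughly what saveConsumeResult() now emits. The value order follows the
-- sprintf in the patch: ts, consumerid, consummsgcnt, consumerowcnt, checkresult;
-- the consumerowcnt column is inferred from the values list, and `cdb` plus the
-- literals are placeholders.
INSERT INTO cdb.consumeresult VALUES (1640966400000, 0, 1000, 100000, 0);
```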
| current | voltage | phase | location                        | groupId |
|---------|---------|-------|---------------------------------|---------|
| 10.3    | 219     | 0.31  | Beijing.Chaoyang / San Jose     | 2       |
| 10.2    | 220     | 0.23  | Beijing.Chaoyang / San Jose     | 3       |
| 11.5    | 221     | 0.35  | Beijing.Haidian / Mountain View | 3       |
| 13.4    | 223     | 0.29  | Beijing.Haidian / Mountain View | 2       |
| 12.6    | 218     | 0.33  | Beijing.Chaoyang / San Jose     | 2       |
| 11.8    | 221     | 0.28  | Beijing.Haidian / Mountain View | 2       |
| 10.3    | 218     | 0.25  | Beijing.Chaoyang / San Jose     | 3       |
| 12.3    | 221     | 0.31  | Beijing.Chaoyang / San Jose     | 2       |
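
The rows above look like the sample smart-meter data used throughout the TDengine docs; the column headers in the reconstructed table are inferred from the value ranges, and each location cell keeps the two place names that were run together in the source. Assuming the usual schema for that example, a minimal sketch of matching DDL and one insert (the names `meters` and `d1001` are illustrative, not taken from this patch):

```sql
-- Assumed super table for the sample rows above (schema inferred, not from this patch).
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);

-- One child table per device, tagged with its location and group.
CREATE TABLE d1001 USING meters TAGS ('San Jose', 2);

-- A row corresponding to the first line of the table above.
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```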