@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different collection points. To perform such aggregations efficiently, TDengine introduces the concept of the super table (STable). An STable represents a specific type of data collection point: it is a set of tables whose schemas are identical, but each table carries its own static tags. A table can have multiple tags, and tags can be added, deleted, and modified at any time. By specifying tag filters, an application can run aggregation or statistical operations over all or a subset of the tables under an STable, which greatly simplifies application development. The process is shown in the following figure:
-![Diagram of multi-table aggregation query](./multi_tables.png)
+![Diagram of multi-table aggregation query](./multi_tables.webp)
Figure 5: Diagram of multi-table aggregation query
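As an illustration only (the STable `meters` and its `location` tag are assumptions, not from this page), a tag-filtered aggregation over an STable might look like the following sketch:

```sql
-- Hypothetical STable: one child table per smart meter, tagged with a location.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT)
  TAGS (location BINARY(64));

-- Aggregates across every child table whose tag matches the filter.
SELECT AVG(voltage), MAX(current) FROM meters WHERE location = 'beijing';
```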
diff --git a/docs-cn/21-tdinternal/02-replica.md b/docs-cn/21-tdinternal/02-replica.md
index 33c4f5e55f83d3de846c5b9512a19b866ee3d3c5..a0e3437c164b04b81c6e0009dfc69f5c44a602a9 100644
--- a/docs-cn/21-tdinternal/02-replica.md
+++ b/docs-cn/21-tdinternal/02-replica.md
@@ -93,7 +93,7 @@ TDengine uses a Master-Slave mode for synchronization; compared with the popular RAFT consistency
The specific flow is shown in the following figure:
-![replica-master.png](./replica-master.png)
+![replica-master.webp](./replica-master.webp)
The specific rules for selecting the master are as follows:
@@ -108,7 +108,7 @@ TDengine uses a Master-Slave mode for synchronization; compared with the popular RAFT consistency
If vnode A is the master and vnode B is a slave, vnode A can accept write requests from clients while vnode B cannot. When vnode A receives a write request, it follows the process below:
-![replica-forward.png](./replica-forward.png)
+![replica-forward.webp](./replica-forward.webp)
1. The application performs a basic validity check on the write request; if it passes, the request packet is stamped with a version number (version, monotonically increasing)
2. The application wraps the versioned write request in a WAL head and writes it to the WAL (Write Ahead Log)
@@ -143,7 +143,7 @@ TDengine uses a Master-Slave mode for synchronization; compared with the popular RAFT consistency
The entire data recovery process consists of two major steps: first recover the archived data (files), then recover the WAL. The specific flow is as follows:
-![replica-restore.png](./replica-restore.png)
+![replica-restore.webp](./replica-restore.webp)
1. Send a sync req to the master node over the already-established TCP connection
2. Upon receiving the sync req, the master, acting as a client, actively establishes a new TCP connection (syncFd) to vnode B dedicated to synchronization
diff --git a/docs-cn/21-tdinternal/03-taosd.md b/docs-cn/21-tdinternal/03-taosd.md
index db096d74441d44e67e254d216b44ecf60f791d8d..40959406b4e6f7b47808de7acca821278fd6cf37 100644
--- a/docs-cn/21-tdinternal/03-taosd.md
+++ b/docs-cn/21-tdinternal/03-taosd.md
@@ -9,7 +9,7 @@ title: The Design of taosd
taosd consists of modules such as rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, and monitor, as shown in the following figure:
-![modules.png](./modules.png)
+![modules.webp](./modules.webp)
The startup entry point of taosd is the dnode module, which then starts the other modules, including the optionally configured http and monitor modules. All messages exchanged between taosc or dnodes go through the rpc module. Based on the type of message received, the dnode module dispatches it to the message queue of a vnode or mnode, or consumes it itself. The dnode worker threads consume the messages in the queues and hand them to mnode or vnode for processing. The modules are briefly described below.
@@ -44,13 +44,13 @@ The RPC module also provides data compression: if the number of bytes in a packet exceeds the system
Message consumption in taosd is controlled by the dnode module through read/write thread pools; it is the hub of the system. The structure diagram of this module is as follows:
-![dnode.png](./dnode.png)
+![dnode.webp](./dnode.webp)
## VNODE Module
A vnode is an independent unit of data storage and query logic, but since a vnode can hold only one DB, it has no internal notion of account, DB, or user. For better modularity, encapsulation, and future extensibility, it has many submodules, including TSDB for storage, query for queries, sync for data replication, WAL for the database log, cq (continuous query) for continuous queries, and event for event-triggered stream computing. These submodules interact only with the vnode module and have no calling relationship with any other module. The module diagram is as follows:
-![vnode.png](./vnode.png)
+![vnode.webp](./vnode.webp)
Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its submodules. Its main functions are:
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
index 447568cbbdae3bef6e227f696bd35c8e7a7a147f..5bfc94c53410f6142b3bc24f696334c334cde933 100644
--- a/docs-cn/25-application/01-telegraf.md
+++ b/docs-cn/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ IT DevOps monitoring data is usually time-sensitive data, for example
This article introduces how to quickly build an IT DevOps system based on TDengine + Telegraf + Grafana without writing a single line of code, simply by modifying a few lines of configuration files. The architecture is shown in the following figure:
-![IT-DevOps-Solutions-Telegraf.png](./IT-DevOps-Solutions-Telegraf.png)
+![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
## Installation Steps
@@ -75,7 +75,7 @@ sudo systemctl start telegraf
Click the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
Click the plus icon on the left and select `Import`, download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, and import it. You will then see the following dashboard:
-![IT-DevOps-Solutions-telegraf-dashboard.png]./IT-DevOps-Solutions-telegraf-dashboard.png)
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Wrap-up
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
index 920e2de3a56632370d4b8b90a773453475deca93..5966f2d6544c78adb806d51e8a4157ba7dc420e9 100644
--- a/docs-cn/25-application/02-collectd.md
+++ b/docs-cn/25-application/02-collectd.md
@@ -16,7 +16,7 @@ IT DevOps monitoring data is usually time-sensitive data, for example
This article introduces how to quickly build an IT DevOps system based on TDengine + collectd / StatsD + Grafana without writing a single line of code, simply by modifying a few lines of configuration files. The architecture is shown in the following figure:
-![IT-DevOps-Solutions-Collectd-StatsD.png](./IT-DevOps-Solutions-Collectd-StatsD.png)
+![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## Installation Steps
@@ -81,12 +81,12 @@ In the repeater section add { host:'', port:
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 86cac5553a45c0d38609f414a73569c8c4dfece6..2044ff4f61d9da6bdc1c07b5361b89050193aa96 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
The `INTERVAL` clause is used to generate time windows of the same interval, and `SLIDING` specifies the step by which a time window moves forward. The query is performed on one time window at a time, and the window moves forward with time. When defining a continuous query, both the size of the time window and the forward sliding step need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are the time ranges of three time windows on which continuous queries are executed, and the step by which a window moves forward is marked `sliding time`. Query, filter, and aggregate operations are executed on each time window separately. When the step specified by `SLIDING` is the same as the interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
-![Time Window](./timewindow-1.png)
+![Time Window](./timewindow-1.webp)
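For instance, the following sketch opens a 10-minute window every 5 minutes over the table `d1001` used earlier in these docs (the `current` column is an assumption):

```sql
-- Each 10-minute window is aggregated independently; consecutive
-- windows start 5 minutes apart, so adjacent windows overlap.
SELECT AVG(current), MAX(current) FROM d1001
  WHERE ts >= '2017-07-14 00:00:00' AND ts < '2017-07-15 00:00:00'
  INTERVAL(10m) SLIDING(5m);
```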
`INTERVAL` and `SLIDING` must be used together with aggregate or selection functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV
When an integer, bool, or string is used to represent the status of a device at a given moment, continuous rows with the same status value belong to the same status window; once the status changes, the window closes. As shown in the following figure, there are two status windows: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. The status window is not applicable to STables for now.
-![Status Window](./timewindow-3.png)
+![Status Window](./timewindow-3.webp)
`STATE_WINDOW` is used to specify the column based on which the status window is defined, for example:
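The statement below is a sketch rather than this page's original example; it assumes the table `temp_tb_1` from the session-window section also has an integer `status` column:

```sql
-- Consecutive rows with the same `status` value fall into one window.
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
```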
@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e., the timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the session-window time limit is specified as 12 seconds, the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the 12-second limit.
-![Session Window](./timewindow-2.png)
+![Session Window](./timewindow-2.webp)
If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session windows are not supported on STables for now.
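The 12-second scenario in the figure corresponds to a query like this sketch:

```sql
-- A new session window starts whenever the gap between adjacent
-- timestamps exceeds 12 seconds.
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```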
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 90be1bac978563c551901983452ac083d9620d6f..38eba73d0983951901a26eee3962e89007f6d30a 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application programming interfaces). To facilitate users in developing their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both the native interface (taosc) and the REST interface (not yet supported in a few languages). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
-![image-connector](./connector.png)
+![image-connector](./connector.webp)
## Supported platforms
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 328907c4d781bdea8d30623e01d431cedbf8d0fa..0a1960be51145ebcab10b56243413549135f1c03 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the JDBC driver standard and provides two forms of connection. One connects to a TDengine instance natively through the TDengine client driver (taosc), supporting data writing, querying, subscription, schemaless writing, and the bind interface. The other connects to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The feature set implemented by REST connections differs slightly from that of native connections.
-![tdengine-connector](tdengine-jdbc-connector.png)
+![tdengine-connector](tdengine-jdbc-connector.webp)
The preceding diagram shows the two ways a Java application can access TDengine via the connector:
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
index 85fd2923b02189d6f3cfd73efff784d12c3bb69a..de42e8a883d8b195b9d342f761e39458e557dfac 100644
--- a/docs-en/14-reference/04-taosadapter.md
+++ b/docs-en/14-reference/04-taosadapter.md
@@ -24,7 +24,7 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
-![taosAdapter Architecture](taosAdapter-architecture.png)
+![taosAdapter Architecture](taosAdapter-architecture.webp)
## taosAdapter Deployment Method
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
index 4850cecb334ff24cc9fcf3b9a6e394827730111c..dc337bf9fff2a9b60ea2f1c5110185a8ac683098 100644
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ b/docs-en/14-reference/07-tdinsight/index.md
@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-![Add data source button](./assets/howto-add-datasource-button.png)
+![Add data source button](./assets/howto-add-datasource-button.webp)
Search for and select **TDengine**.
-![Add datasource](./assets/howto-add-datasource-tdengine.png)
+![Add datasource](./assets/howto-add-datasource-tdengine.webp)
Configure the TDengine datasource.
-![Datasource Configuration](./assets/howto-add-datasource.png)
+![Datasource Configuration](./assets/howto-add-datasource.webp)
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-![datasource test](./assets/howto-add-datasource-test.png)
+![datasource test](./assets/howto-add-datasource-test.webp)
### Importing dashboards
Point to **+** / **Create** - **import** (or the `/dashboard/import` URL).
-![Import Dashboard and Configuration](./assets/import_dashboard.png)
+![Import Dashboard and Configuration](./assets/import_dashboard.webp)
Type the dashboard ID `15167` in the **Import via grafana.com** field and click **Load**.
-![Import via grafana.com](./assets/import-dashboard-15167.png)
+![Import via grafana.com](./assets/import-dashboard-15167.webp)
Once the import is complete, the full page view of TDinsight is shown below.
-![show](./assets/TDinsight-full.png)
+![show](./assets/TDinsight-full.webp)
## TDinsight dashboard details
@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
-![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp)
This section contains the current information and status of the cluster; alert information is also shown here (from left to right, top to bottom).
@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
-![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp)
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
-![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp)
1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the change in the number of MNodes.
### Request
-![tdinsight-requests](./assets/TDinsight-4-requests.png)
+![tdinsight-requests](./assets/TDinsight-4-requests.webp)
1. **Requests Rate (Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and their rate of change (count per second).
@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale
### Database
-![tdinsight-database](./assets/TDinsight-5-database.png)
+![tdinsight-database](./assets/TDinsight-5-database.webp)
Database usage, repeated for each value of the variable `$database`, i.e., multiple rows per database.
@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
### DNode Resource Usage
-![dnode-usage](./assets/TDinsight-6-dnode-usage.png)
+![dnode-usage](./assets/TDinsight-6-dnode-usage.webp)
Data node resource usage, with the rows repeated for each value of the variable `$fqdn`, i.e., for each data node. Includes:
@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$
### Login History
-![Login History](./assets/TDinsight-7-login-history.png)
+![Login History](./assets/TDinsight-7-login-history.webp)
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
-![taosadapter](./assets/TDinsight-8-taosadapter.png)
+![taosadapter](./assets/TDinsight-8-taosadapter.webp)
Supports monitoring of taosAdapter request statistics and status details. Includes:
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index c1bfd4a96a4576df8570d8b480d5c2afe47e20b8..7239710e0aebdd95977d9b73a5a1a9fccd656542 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
-![img](./grafana/add_datasource1.jpg)
+![img](./grafana/add_datasource1.webp)
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-![img](./grafana/add_datasource2.jpg)
+![img](./grafana/add_datasource2.webp)
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-![img](./grafana/add_datasource3.jpg)
+![img](./grafana/add_datasource3.webp)
- Host: IP address of the server where the TDengine cluster provides the REST service (offered by taosd before 2.4 and by taosAdapter since 2.4), together with the REST service port number (6041); by default use `http://localhost:6041`.
- User: TDengine user name.
@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test`. A successful result looks as follows.
-![img](./grafana/add_datasource4.jpg)
+![img](./grafana/add_datasource4.webp)
### Create Dashboard
Go back to the main interface to create a dashboard, then click Add Query to enter the panel query page:
-![img](./grafana/create_dashboard1.jpg)
+![img](./grafana/create_dashboard1.webp)
As shown above, select the `TDengine` data source in `Query`, and enter the corresponding SQL in the query box below to run the query.
@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
Following the default prompts, query the average system memory usage for the specified interval on the server where the current TDengine deployment is located, as shown below.
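For reference, the SQL entered in the query box can look like the following sketch; the monitoring database `log`, its table `dn`, and the column `mem_system` are assumptions based on TDengine's built-in monitoring schema, and `$from`/`$to`/`$interval` are Grafana template variables.

```sql
-- Average system memory usage per interval within the dashboard's time range.
SELECT AVG(mem_system) FROM log.dn
  WHERE ts >= $from AND ts < $to INTERVAL($interval)
```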
-![img](./grafana/create_dashboard2.jpg)
+![img](./grafana/create_dashboard2.webp)
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 13562ba7f720499c23771437c5c6ba0f61819456..560c6463b59b00a362023d6cfa44cf833419a9ea 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
-![img](./emqx/login-dashboard.png)
+![img](./emqx/login-dashboard.webp)
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
-![img](./emqx/rule-engine.png)
+![img](./emqx/rule-engine.webp)
### Edit SQL fields
-![img](./emqx/create-rule.png)
+![img](./emqx/create-rule.webp)
### Add "action handler"
-![img](./emqx/add-action-handler.png)
+![img](./emqx/add-action-handler.webp)
### Add "Resource"
-![img](./emqx/create-resource.png)
+![img](./emqx/create-resource.webp)
Select "Data to Web Service" and click the "New Resource" button.
@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-![img](./emqx/edit-resource.png)
+![img](./emqx/edit-resource.webp)
### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule-engine replacement template in the message body.
-![img](./emqx/edit-action.png)
+![img](./emqx/edit-action.webp)
## Compose a program to mock data
@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, to avoid overloading hardware that cannot handle a larger number of concurrent clients.
-![img](./emqx/client-num.png)
+![img](./emqx/client-num.webp)
## Execute tests to simulate sending MQTT data
@@ -172,19 +172,19 @@ npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```
-![img](./emqx/run-mock.png)
+![img](./emqx/run-mock.webp)
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
-![img](./emqx/check-rule-matched.png)
+![img](./emqx/check-rule-matched.webp)
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-![img](./emqx/check-result-in-taos.png)
+![img](./emqx/check-result-in-taos.webp)
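Such a check might look like the following sketch in the TDengine CLI; the database name `test` and the STable name `sensor_data` are assumptions, so substitute whatever your rule actually writes to.

```sql
-- Confirm the target database exists, then count the ingested rows.
SHOW DATABASES;
USE test;
SHOW STABLES;
SELECT COUNT(*) FROM sensor_data;  -- assumed name; use your rule's target table
```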
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index b9c7a3814a75a066b498438b6e632690697ae7ca..5aee6e044dcec77a9904f2ccfe5cb577eaa4d0ad 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, and file systems, to connect to Kafka easily. Data can flow from other systems to Kafka, and from Kafka to other systems, via Kafka Connect. Plugins that read data from other systems are called Source Connectors, and plugins that write data to other systems are called Sink Connectors. Neither Source Connectors nor Sink Connectors connect directly to the Kafka Broker: a Source Connector transfers data to Kafka Connect, and a Sink Connector receives data from Kafka Connect.
-![](kafka/Kafka_Connect.png)
+![](kafka/Kafka_Connect.webp)
The TDengine Source Connector is used to read data from TDengine in real time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
-![](kafka/streaming-integration-with-kafka-connect.png)
+![](kafka/streaming-integration-with-kafka-connect.webp)
## What is Confluent?
@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka, including:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
-![](kafka/confluentPlatform.png)
+![](kafka/confluentPlatform.webp)
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 9607c9b38709f6a320f82a8ee250afb407492627..2c430908e410c7ae8e6f09a3f7e2d059f906fda5 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure diagram of the TDengine distributed architecture is as follows:
-![TDengine architecture diagram](structure.png)
+![TDengine architecture diagram](structure.webp)
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data nodes (dnode), the TDengine client driver (TAOSC), and applications (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationships among vnode, mnode, TAOSC, and application, and their respective roles, the following analyzes a typical data writing process.
-![typical process of TDengine](message.png)
+![typical process of TDengine](message.webp)
Figure 2: Typical process of TDengine
1. The application initiates a request to insert data through JDBC, ODBC, or other APIs (see the sketch below).
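A minimal sketch of such a request expressed in SQL; the table `d1001` echoes the earlier examples, and its three metric columns are assumed for illustration.

```sql
-- One data point written to the table of a single data collection point.
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```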
@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
The master vnode uses the following writing process:
-![TDengine Master Writing Process](write_master.png)
+![TDengine Master Writing Process](write_master.webp)
Figure 3: TDengine Master writing process
1. The master vnode receives the application's data insertion request, verifies it, and moves to the next step;
@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process is as follows:
-![TDengine Slave Writing Process](write_slave.png)
+![TDengine Slave Writing Process](write_slave.webp)
Figure 4: TDengine Slave Writing Process
1. The slave vnode receives a data insertion request forwarded by the master vnode;
@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. To perform aggregation operations efficiently, TDengine introduces the concept of STable. An STable represents a specific type of data collection point: it is a set of tables in which every table has the same schema, but each table carries its own static tags. A table can have multiple tags, and tags can be added, deleted, and modified at any time. By specifying tag filters, applications can run aggregation or statistical operations over all or a subset of the tables under an STable, which greatly simplifies application development. The process is shown in the following figure:
-![Diagram of multi-table aggregation query](multi_tables.png)
+![Diagram of multi-table aggregation query](multi_tables.webp)
Figure 5: Diagram of multi-table aggregation query
1. The application sends a query condition to the system (see the sketch below);
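A sketch of such a query condition, assuming a hypothetical STable `meters` carrying a `location` tag (not defined on this page):

```sql
-- TAOSC routes the request to every vnode holding matching child tables,
-- then merges the partial aggregates returned by each vnode.
SELECT COUNT(*), AVG(voltage) FROM meters WHERE location = 'beijing';
```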
diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md
index 4af7df310fe52b599f0b48d031606f5199bde4e1..07ab289ac2bbf44c219535fe128db69b34465c01 100644
--- a/docs-en/25-application/01-telegraf.md
+++ b/docs-en/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ Current mainstream IT DevOps systems usually include a data collection module, a
This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code, simply by modifying a few lines of configuration files. The architecture is as follows.
-![IT-DevOps-Solutions-Telegraf.png](./IT-DevOps-Solutions-Telegraf.png)
+![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
## Installation steps
@@ -75,7 +75,7 @@ Log in to the Grafana interface using a web browser at `IP:3000`, with the syste
Click on the gear icon on the left and select `Plugins`, you should find the TDengine data source plugin icon.
Click on the plus icon on the left and select `Import`, download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, and import it. You will then see the dashboard in the following screen.
-![IT-DevOps-Solutions-telegraf-dashboard.png](./IT-DevOps-Solutions-telegraf-dashboard.png)
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Wrap-up
diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md
index 1a3c8c9b058adb567a992cddfe93a6381cdce38e..0ddea2855497f1dfdfce7a2aa6749e0c5ba1b9ff 100644
--- a/docs-en/25-application/02-collectd.md
+++ b/docs-en/25-application/02-collectd.md
@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data
This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code, simply by modifying a few lines of configuration files. The architecture is shown in the following figure.
-![IT-DevOps-Solutions-Collectd-StatsD.png](./IT-DevOps-Solutions-Collectd-StatsD.png)
+![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## Installation Steps
@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left, select `Import`, and follow the instructions to import the JSON file. After that, the dashboard can be seen in the following screen.
-![IT-DevOps-Solutions-collectd-dashboard.png](./IT-DevOps-Solutions-collectd-dashboard.png)
+![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### import collectd dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select `Import`, and follow the interface prompts to select and import the JSON file. After that, the dashboard appears as shown in the following interface.
-![IT-DevOps-Solutions-collectd-dashboard.png](./IT-DevOps-Solutions-collectd-dashboard.png)
+![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### Importing the StatsD dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`, click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
-![IT-DevOps-Solutions-statsd-dashboard.png](./IT-DevOps-Solutions-statsd-dashboard.png)
+![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp)
## Wrap-up
diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
index b595e09556c8aac76cd8e9177ec51a09020d6552..68d8a2b8cc25c80b8a647332df66874bee344715 100644
--- a/docs-en/25-application/03-immigrate.md
+++ b/docs-en/25-application/03-immigrate.md
@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure
The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
**Figure 1. Typical architecture in a DevOps scenario**
-![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg "Figure 1. Typical architecture in a DevOps scenario")
+![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")
In this application scenario, Agent tools are deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors aggregate the information collected by the agents; storage systems persist and manage the data; and visualization tools (e.g., Grafana) present the monitoring data.
@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize
TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.
**Figure 2. Importing Grafana Templates**
-![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg "Figure 2. Importing a Grafana Template")
+![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template")
After the above steps, you have completed the migration from OpenTSDB to TDengine. As you can see, the whole process is straightforward: no code needs to be written, and only a few configuration files need to be adjusted.
@@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes
If your application is particularly complex, or the application domain is not a DevOps scenario, you can continue reading the subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine.
**Figure 3. System architecture after migration**
-![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg "Figure 3. System architecture after migration completion")
+![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion")
## Migration evaluation and strategy for other scenarios