+
+1. The application sends a query request to the system;
+2. taosc sends the super table name to the meta node (the management node);
+3. The management node returns to taosc the list of vnodes that the super table owns;
+4. taosc sends the computation request, together with the tag filter conditions, to the data nodes hosting those vnodes;
+5. Each vnode first searches in memory for the set of its tables that match the tag filter, then scans the stored time-series data, performs the requested aggregation, and returns its result to taosc;
+6. taosc performs the final aggregation over the results returned by the data nodes and hands the outcome back to the application.
+
+Because TDengine stores tag data separately from time-series data inside each vnode, filtering the tag data in memory first identifies the set of tables that must participate in the aggregation, which sharply reduces the data set to be scanned and thus greatly speeds up the aggregation. And because the data is spread across multiple vnodes/dnodes, the aggregation runs concurrently in multiple vnodes, accelerating it further. The aggregate functions and most other operations that work on regular tables also work on super tables, with exactly the same syntax; see TAOS SQL for details.
+
+### Precomputation
+
+To make query processing efficient, and exploiting the fact that IoT data is never modified, TDengine records statistical information about the data stored in each data block (the maximum, the minimum, and the sum) in the block header. We call this the precomputation unit. If a query involves all the data of an entire block, the precomputed results are used directly and the block contents are never read. Since the precomputed data is far smaller than the data blocks stored on disk, using it greatly reduces read I/O pressure and speeds up query processing when disk I/O is the bottleneck. The precomputation mechanism is similar in spirit to PostgreSQL's BRIN (block range index).
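+
+As a sketch, whole-block aggregates like the following can be served from the precomputation unit alone, again assuming the hypothetical `meters` super table introduced above:
+
+```sql
+-- MAX/MIN/SUM over complete data blocks are answered from the block-header
+-- statistics without reading the block contents from disk.
+SELECT MAX(current), MIN(current), SUM(current) FROM meters;
+```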
diff --git a/docs-cn/21-tdinternal/_category_.yml b/docs-cn/21-tdinternal/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..c7509bf66224fa94759de9a2ae82955e2a7eb82f
--- /dev/null
+++ b/docs-cn/21-tdinternal/_category_.yml
@@ -0,0 +1 @@
+label: Inside TDengine
\ No newline at end of file
diff --git a/docs-cn/21-tdinternal/dnode.webp b/docs-cn/21-tdinternal/dnode.webp
new file mode 100644
index 0000000000000000000000000000000000000000..a56c7e4594df00a721cb48381d68ca3bc813cdc8
Binary files /dev/null and b/docs-cn/21-tdinternal/dnode.webp differ
diff --git a/docs-cn/21-tdinternal/index.md b/docs-cn/21-tdinternal/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..63a746623e0dd955f61ba887a76f8ecf7eb16972
--- /dev/null
+++ b/docs-cn/21-tdinternal/index.md
@@ -0,0 +1,10 @@
+---
+title: Inside TDengine
+---
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs-cn/21-tdinternal/message.webp b/docs-cn/21-tdinternal/message.webp
new file mode 100644
index 0000000000000000000000000000000000000000..a2a42abff3d6e932b41a3abe9feae4a5cc13c9e5
Binary files /dev/null and b/docs-cn/21-tdinternal/message.webp differ
diff --git a/docs-cn/21-tdinternal/modules.webp b/docs-cn/21-tdinternal/modules.webp
new file mode 100644
index 0000000000000000000000000000000000000000..718a6abccdbe40d4a0df5e3812fe0ab943a7c523
Binary files /dev/null and b/docs-cn/21-tdinternal/modules.webp differ
diff --git a/docs-cn/21-tdinternal/multi_tables.webp b/docs-cn/21-tdinternal/multi_tables.webp
new file mode 100644
index 0000000000000000000000000000000000000000..8f649e34a3a62d1b11b4403b2e743ff6b5e47be2
Binary files /dev/null and b/docs-cn/21-tdinternal/multi_tables.webp differ
diff --git a/docs-cn/21-tdinternal/replica-forward.webp b/docs-cn/21-tdinternal/replica-forward.webp
new file mode 100644
index 0000000000000000000000000000000000000000..512efd4eba8f23ad0f8607eaaf5525f51ecdcf0e
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-forward.webp differ
diff --git a/docs-cn/21-tdinternal/replica-master.webp b/docs-cn/21-tdinternal/replica-master.webp
new file mode 100644
index 0000000000000000000000000000000000000000..57030a11f563af2689dbcfd206183f410b121aee
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-master.webp differ
diff --git a/docs-cn/21-tdinternal/replica-restore.webp b/docs-cn/21-tdinternal/replica-restore.webp
new file mode 100644
index 0000000000000000000000000000000000000000..f282c2d4d23f517e3ef08e906cea7e9c5edc0b2a
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-restore.webp differ
diff --git a/docs-cn/21-tdinternal/structure.webp b/docs-cn/21-tdinternal/structure.webp
new file mode 100644
index 0000000000000000000000000000000000000000..b77a42c074b15302b5c3ab889fb550a46dd549b3
Binary files /dev/null and b/docs-cn/21-tdinternal/structure.webp differ
diff --git a/docs-cn/21-tdinternal/vnode.webp b/docs-cn/21-tdinternal/vnode.webp
new file mode 100644
index 0000000000000000000000000000000000000000..fae3104c89c542c26790b509d12ad56661082c32
Binary files /dev/null and b/docs-cn/21-tdinternal/vnode.webp differ
diff --git a/docs-cn/21-tdinternal/write_master.webp b/docs-cn/21-tdinternal/write_master.webp
new file mode 100644
index 0000000000000000000000000000000000000000..9624036ed3d46ed60924ead9ce5c61acee0f4652
Binary files /dev/null and b/docs-cn/21-tdinternal/write_master.webp differ
diff --git a/docs-cn/21-tdinternal/write_slave.webp b/docs-cn/21-tdinternal/write_slave.webp
new file mode 100644
index 0000000000000000000000000000000000000000..7c45dec11b00e6a738de458f9e1bedacfad75a96
Binary files /dev/null and b/docs-cn/21-tdinternal/write_slave.webp differ
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
new file mode 100644
index 0000000000000000000000000000000000000000..95df8699ef85b02d6e9dba398c787644fc9089b2
--- /dev/null
+++ b/docs-cn/25-application/01-telegraf.md
@@ -0,0 +1,82 @@
+---
+sidebar_label: TDengine + Telegraf + Grafana
+title: Quickly Build an IT DevOps Visualization System with TDengine + Telegraf + Grafana
+---
+
+## Background
+
+TDengine is a big data platform designed and optimized by TAOS Data for IoT, connected vehicles, Industrial IoT, and IT operations and maintenance (DevOps). Since it was open sourced in July 2019, its innovative data modeling design, easy installation, easy-to-use programming interfaces, and powerful data ingestion and query performance have won it a large following among time-series data developers.
+
+IT DevOps metrics are typically time sensitive, for example:
+
+- System resource metrics: CPU, memory, IO, bandwidth, etc.
+- Software system metrics: health status, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other business-related metrics.
+
+A mainstream IT DevOps system today usually consists of a data collection module, a data storage module, and a visualization module. Telegraf and Grafana are among the most popular data collection and visualization tools, respectively. For the data storage module there are more choices, with OpenTSDB and InfluxDB being popular options. TDengine, as an emerging time-series big data platform, has decisive advantages in performance, reliability, and ease of management and maintenance.
+
+This article shows how to quickly build an IT DevOps system based on TDengine + Telegraf + Grafana without writing a single line of code, just by changing a few lines of configuration. The architecture is shown in the figure below:
+
+![TDengine Database IT-DevOps-Solutions-Telegraf](./IT-DevOps-Solutions-Telegraf.webp)
+
+## Installation Steps
+
+### Install Telegraf, Grafana, and TDengine
+
+To install Telegraf, Grafana, and TDengine, please refer to the official documentation for each product.
+
+### Telegraf
+
+Please refer to the [official documentation](https://portal.influxdata.com/downloads/).
+
+### Grafana
+
+Please refer to the [official documentation](https://grafana.com/grafana/download).
+
+### TDengine
+
+Download and install TDengine-server 2.4.0.x or a later version from the TAOS Data official [downloads](http://taosdata.com/cn/all-downloads/) page.
+
+## Data Pipeline Setup
+
+### Download the TDengine plugin into the Grafana plugin directory
+
+```bash
+wget -c https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip
+sudo unzip tdengine-datasource-3.1.3.zip -d /var/lib/grafana/plugins/
+sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
+echo -e "[plugins]\nallow_loading_unsigned_plugins = tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
+sudo systemctl restart grafana-server.service
+```
+
+### Edit /etc/telegraf/telegraf.conf
+
+Add the following to `/etc/telegraf/telegraf.conf`. For `database name`, fill in the name of the database in TDengine where you want the Telegraf data to be stored; for `TDengine server/cluster host`, `username`, and `password`, fill in the actual values for your TDengine deployment:
+
+```
+[[outputs.http]]
+  url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
+  method = "POST"
+  timeout = "5s"
+  username = "<TDengine's username>"
+  password = "<TDengine's password>"
+  data_format = "influx"
+  influx_max_line_bytes = 250
+```
+
+Then restart Telegraf:
+
+```bash
+sudo systemctl restart telegraf
+```
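+
+Optionally, you can verify from the TDengine CLI that data is arriving. This is only a sketch: it assumes you used `telegraf` as the database name above, and the super table names depend on which Telegraf input plugins are enabled:
+
+```sql
+SHOW DATABASES;
+USE telegraf;
+SHOW STABLES;
+```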
+
+### Import the Dashboard
+
+Point a web browser at `IP:3000` and log in to the Grafana UI; the initial system username and password are admin/admin.
+Click the gear icon on the left side and select `Plugins`; you should find the TDengine data source plugin icon there.
+Click the plus icon on the left side and select `Import`. Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` and import it. You should then see a dashboard like the one below:
+
+![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
+
+## Summary
+
+The above demonstrates how to quickly build a complete IT DevOps visualization system. Thanks to the schemaless protocol parsing capability introduced in TDengine 2.4.0.0 and TDengine's strong ecosystem integration, a user can build an efficient and easy-to-use IT DevOps system in just a few minutes. For TDengine's powerful ingestion and query performance and its other features, please refer to the official documentation and production case studies.
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
new file mode 100644
index 0000000000000000000000000000000000000000..78c61bb969092d7040ddcb3d02ce7bd29a784858
--- /dev/null
+++ b/docs-cn/25-application/02-collectd.md
@@ -0,0 +1,95 @@
+---
+sidebar_label: TDengine + collectd/StatsD + Grafana
+title: Quickly Build an IT DevOps Monitoring System with TDengine + collectd/StatsD + Grafana
+---
+
+## Background
+
+TDengine is a big data platform designed and optimized by TAOS Data for IoT, connected vehicles, Industrial IoT, and IT operations and maintenance (DevOps). Since it was open sourced in July 2019, its innovative data modeling design, easy installation, easy-to-use programming interfaces, and powerful data ingestion and query performance have won it a large following among time-series data developers.
+
+IT DevOps metrics are typically time sensitive, for example:
+
+- System resource metrics: CPU, memory, IO, bandwidth, etc.
+- Software system metrics: health status, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other business-related metrics.
+
+A mainstream IT DevOps system today usually consists of a data collection module, a data storage module, and a visualization module. collectd and StatsD are veteran open-source data collection tools with a wide user base; however, their own capabilities are limited, and they usually need to be combined with Telegraf, Grafana, and a time-series database to form a complete monitoring system. Newer versions of TDengine support multiple data protocols and can accept data written by collectd and StatsD directly, while providing Grafana dashboards for visualization.
+
+This article shows how to quickly build an IT DevOps system based on TDengine + collectd/StatsD + Grafana without writing a single line of code, just by changing a few lines of configuration. The architecture is shown in the figure below:
+
+![TDengine Database IT-DevOps-Solutions-Collectd-StatsD](./IT-DevOps-Solutions-Collectd-StatsD.webp)
+
+## Installation Steps
+
+To install collectd, StatsD, Grafana, and TDengine, please refer to the official documentation for each product.
+
+### Install collectd
+
+Please refer to the [official documentation](https://collectd.org/documentation.shtml).
+
+### Install StatsD
+
+Please refer to the [official documentation](https://github.com/statsd/statsd).
+
+### Install Grafana
+
+Please refer to the [official documentation](https://grafana.com/grafana/download).
+
+### Install TDengine
+
+Download and install TDengine-server 2.4.0.x or a later version from the TAOS Data official [downloads](http://taosdata.com/cn/all-downloads/) page.
+
+## Data Pipeline Setup
+
+### Download the TDengine plugin into the Grafana plugin directory
+
+```bash
+wget -c https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip
+sudo unzip tdengine-datasource-3.1.3.zip -d /var/lib/grafana/plugins/
+sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
+echo -e "[plugins]\nallow_loading_unsigned_plugins = tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
+sudo systemctl restart grafana-server.service
+```
+
+### Configure collectd
+
+Add the following to the `/etc/collectd/collectd.conf` file, filling in `host` and `port` with the values actually configured for TDengine and taosAdapter:
+
+```
+LoadPlugin network
+<Plugin network>
+  Server "<TDengine cluster/server host>" "<port for collectd>"
+</Plugin>
+```
+
+Then restart collectd:
+
+```bash
+sudo systemctl restart collectd
+```
+
+### Configure StatsD
+
+Add the following to the `config.js` file and then start StatsD, filling in `host` and `port` with the values actually configured for TDengine and taosAdapter:
+
+```
+add "./backends/repeater" to the backends section
+add { host: '<TDengine server/cluster host>', port: <port for StatsD> } to the repeater section
+```
+
+### Import the Dashboard
+
+Point a web browser at port 3000 of the server running Grafana (`host:3000`) and log in to the Grafana UI; the initial system username and password are `admin/admin`.
+Click the gear icon on the left side and select `Plugins`; you should find the TDengine data source plugin icon there.
+
+#### Import the collectd dashboard
+
+Download the dashboard JSON file from https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json, click the plus icon on the left side, select `Import`, and follow the prompts to import the JSON file. You should then see a dashboard like the one below:
+
+![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
+
+#### Import the StatsD dashboard
+
+Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`, click the plus icon on the left side, select `Import`, and follow the prompts to import the JSON file. You should then see a dashboard like the one below:
+
+![TDengine Database IT-DevOps-Solutions-statsd-dashboard](./IT-DevOps-Solutions-statsd-dashboard.webp)
+
+## Summary
+
+TDengine, as an emerging time-series big data platform, has decisive advantages in performance, reliability, and ease of management and maintenance. Thanks to the schemaless protocol parsing capability introduced in TDengine 2.4.0.0 and TDengine's strong ecosystem integration, a user can build an efficient and easy-to-use IT DevOps system, or adapt an existing one, in just a few minutes.
+
+For TDengine's powerful ingestion and query performance and its other features, please refer to the official documentation and production case studies.
diff --git a/docs-cn/25-application/03-immigrate.md b/docs-cn/25-application/03-immigrate.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d8946bc4a69639c5327ac1ffb6c0539ddbd0e63
--- /dev/null
+++ b/docs-cn/25-application/03-immigrate.md
@@ -0,0 +1,423 @@
+---
+sidebar_label: Migrate from OpenTSDB to TDengine
+title: Best Practices for Migrating OpenTSDB Applications to TDengine
+---
+
+As a distributed, scalable time-series database built on HBase, OpenTSDB benefited from its first-mover advantage: it was adopted by DevOps practitioners and widely applied to operations monitoring. In recent years, however, as new technologies such as cloud computing, microservices, and containerization have taken off, enterprise services have become more numerous, their architectures more complex, and their runtime environments more diverse, putting ever more pressure on system and operations monitoring. Against this backdrop, using OpenTSDB as the monitoring backend for DevOps increasingly suffers from its performance problems and slow feature development, along with the resulting rises in deployment cost and drops in operating efficiency; these problems only grow more severe as systems scale up.
+
+In this context, and to serve the fast-growing IoT big data market and its technical requirements, TAOS Data developed the innovative big data platform TDengine after absorbing the strengths of many traditional relational databases, NoSQL databases, stream computing engines, and message queues. TDengine has unique advantages in time-series big data processing, and it can effectively solve the problems that OpenTSDB currently faces.
+
+Compared with OpenTSDB, TDengine has the following distinctive features:
+
+- Data writing and querying performance far exceeds that of OpenTSDB;
+- An efficient compression mechanism for time-series data, compressing data to less than 1/5 of its original size on disk;
+- Very simple installation and deployment: a single installation package with no third-party software dependencies, and the whole deployment completes in seconds;
+- The built-in functions cover all of OpenTSDB's query functions, and TDengine supports many more time-series query functions, scalar functions, and aggregate functions, as well as advanced features such as multiple time-window aggregations, join queries, expression evaluation, multiple group-by aggregations, user-defined sorting, and user-defined functions. With SQL-like syntax, the learning cost is essentially zero;
+- Support for up to 128 tags, with a total tag length of up to 16 KB;
+- Besides the REST interface, interfaces for C/C++, Java, Python, Go, Rust, Node.js, C#, Lua (community contributed), and PHP (community contributed), plus enterprise-standard connector protocols such as JDBC.
+
+Migrating applications that currently run on OpenTSDB to TDengine not only cuts compute and storage resource consumption and shrinks the deployment footprint, it also greatly reduces operations and maintenance costs, making management simpler and easier and lowering total cost of ownership. Like OpenTSDB, TDengine is open source; the difference is that, beyond the single-node edition, TDengine has also open sourced its cluster edition, sweeping away any worry about vendor lock-in.
+
+In what follows we use the most typical and widely used scenario, operations monitoring (DevOps), to explain how to migrate an OpenTSDB application to TDengine quickly, safely, and reliably without writing any code. Later sections go deeper, to help with migrations in non-DevOps scenarios.
+
+## Quick Migration for DevOps Applications
+
+### 1. Typical application scenario
+
+The overall architecture of a typical DevOps scenario is shown in the figure below (Figure 1).
+
+**Figure 1. Typical architecture of a DevOps scenario**
+
+![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp)
+
+This scenario comprises Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors that aggregate what the Agents gather; a system for persistent data storage and management; and a visualization tool for the monitoring data (e.g., Grafana).
+
+The Agents deployed on the application nodes feed runtime metrics from various sources to collectd/StatsD, collectd/StatsD pushes the aggregated data to the OpenTSDB cluster for storage, and Grafana dashboards then visualize the data.
+
+### 2. Migration services
+
+- **Install and deploy TDengine**
+
+First install TDengine: download the latest stable version from the official website and install it. For help with the various installation packages, see the blog post ["Installing and Uninstalling TDengine's Various Installation Packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
+
+Note: after the installation completes, do not start the `taosd` service immediately; start it only after the parameters have been configured correctly.
+
+- **Adjust the data collector configuration**
+
+TDengine 2.4 ships with a component called taosAdapter. taosAdapter is a stateless component that scales out quickly and elastically; it is compatible with InfluxDB's Line Protocol and OpenTSDB's telnet/JSON write protocols, providing rich data ingestion capabilities that effectively reduce the cost and difficulty of migrating applications.
+
+Users can deploy taosAdapter instances elastically based on their requirements, rapidly scaling data-write throughput as the scenario demands and providing guarantees for data ingestion across different application scenarios.
+
+Through taosAdapter, data collected by collectd or StatsD can be pushed to TDengine directly, enabling a seamless and effortless migration of the application scenario. taosAdapter also accepts data from Telegraf, Icinga, TCollector, and node_exporter; see [taosAdapter](/reference/taosadapter/) for details.
+
+If you use collectd, edit its configuration file at the default location `/etc/collectd/collectd.conf` so that it points at the IP address and port of the node where taosAdapter is deployed. Assuming the taosAdapter IP address is 192.168.1.130 and its port is 6046, configure it as follows:
+
+```
+LoadPlugin write_tsdb
+<Plugin write_tsdb>
+  <Node>
+    Host "192.168.1.130"
+    Port "6046"
+    HostTags "status=production"
+    StoreRates false
+    AlwaysAppendDS false
+  </Node>
+</Plugin>
+```
+
+This makes collectd push its data to taosAdapter using its write-to-OpenTSDB plugin, and taosAdapter then calls the API to write the data into TDengine, completing the ingestion path. If you are using StatsD, adjust its configuration file accordingly.
+
+- **Adjust the dashboard system**
+
+Once data is being written into TDengine normally, you can adapt Grafana to visualize the data written to TDengine. For obtaining and using the Grafana plugin that TDengine provides, see [Connections with Other Tools](/third-party/grafana).
+
+TDengine provides two default sets of dashboard templates; simply import the templates from the Grafana directory into Grafana to activate them.
+
+**Figure 2. Importing Grafana templates**
+
+![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp)
+
+With these steps done, the migration from OpenTSDB to TDengine is complete. As you can see, the whole process is very simple: no code needs to be written, and only a few configuration files need adjusting to finish the entire migration.
+
+### 3. Post-migration architecture
+
+After the migration, the overall system architecture is as shown below (Figure 3). Throughout the whole process the collection side, the data-writing side, and the monitoring/visualization side all remain stable; apart from a handful of configuration adjustments, nothing important changes. OpenTSDB is used overwhelmingly in DevOps scenarios, where simple parameter changes complete the migration from OpenTSDB to TDengine, after which you benefit from TDengine's more powerful processing capability and query performance.
+
+In the vast majority of DevOps scenarios, if you have a small OpenTSDB cluster (3 nodes or fewer) serving as the DevOps storage backend, relying on OpenTSDB as the persistence layer for data storage and queries, you can replace it safely with TDengine and save considerable compute and storage resources. With the same compute resource allocation, a single TDengine server can provide the service capacity of 3 to 5 OpenTSDB nodes. For larger deployments, a TDengine cluster is required.
+
+If your application is especially complex, or your domain is not a DevOps scenario, read on: the following sections cover the advanced topics of migrating OpenTSDB applications to TDengine more comprehensively and in more depth.
+
+**Figure 3. System architecture after the migration**
+
+![TDengine Database IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp)
+
+## Migration Evaluation and Strategy for Other Scenarios
+
+### 1. Differences between TDengine and OpenTSDB
+
+This section walks through the differences between OpenTSDB and TDengine at the level of system functionality. After reading it you will be able to evaluate thoroughly whether a complex OpenTSDB-based application can be migrated to TDengine, and what to watch out for after migrating.
+
+TDengine currently supports only Grafana for dashboard visualization, so if your application uses a frontend dashboard other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash) or [Status Wolf](https://github.com/box/StatusWolf)), that frontend cannot be migrated to TDengine directly; it must be re-adapted to Grafana before it will work.
+
+As of version 2.3.0.x, TDengine supports only collectd and StatsD as data collection and aggregation software; support for more such software will be added over time. If your collection side uses a different aggregator, it must be adapted to these two systems before the data can be written. Besides these two aggregation protocols, TDengine also accepts data written directly via InfluxDB's line protocol and OpenTSDB's write protocols (telnet and JSON format), so you can rewrite your data push logic to write data using a line protocol that TDengine supports.
+
+In addition, if your application uses the following OpenTSDB features, there are things to know before migrating:
+
+1. `/api/stats`: if your application uses this feature to monitor OpenTSDB's service status and has built logic around it, that status-reading logic needs to be re-adapted to TDengine. TDengine provides a brand-new mechanism for monitoring cluster status that covers your application's monitoring and maintenance needs.
+2. `/api/tree`: if you rely on this feature for hierarchical organization and maintenance of timelines, it cannot be migrated to TDengine directly. TDengine organizes timelines in a database -> super table -> subtable hierarchy: all timelines belonging to the same super table sit at the same level in the system, but multi-level structures in the application logic can be simulated by constructing distinct tag values.
+3. `Rollup And PreAggregates`: with rollups and preaggregates, the application must decide where to read the rollup results and where to fall back to the raw results; the opacity of this structure makes application logic extremely complex and entirely non-portable. We consider this strategy a compromise made when a time-series database cannot deliver high-performance aggregation. TDengine does not currently support automatic downsampling across multiple timelines or (time-range) preaggregation; thanks to its high-performance query engine, it can deliver very fast query responses without relying on rollups or (time-range) preaggregation, while keeping your application's query logic much simpler.
+4. `Rate`: TDengine provides two functions for computing the rate of change of a value: Derivative (whose result matches InfluxDB's Derivative) and IRate (whose result matches the IRate function in Prometheus). The results of these two functions differ slightly from Rate, but overall they are more powerful. Moreover, **TDengine supports a counterpart for every calculation function OpenTSDB provides, and TDengine's query functions go far beyond those that OpenTSDB supports,** which can greatly simplify your application's processing logic.
+
+With the above you should understand the changes that a migration to TDengine brings; this information will also help you judge correctly whether moving your application to TDengine is acceptable, so you can experience the powerful time-series data processing capability and convenience that TDengine provides.
+
+### 2. Migration strategy
+
+First migrate the OpenTSDB-based system by working through the data schema design, system capacity estimation, and data-write adaptation involved; then split the data flow and adapt the application; run the two systems in parallel for a while; and finally migrate the historical data into TDengine. Of course, if part of your application strongly depends on the OpenTSDB features above and you do not want to stop using them, you can keep the original OpenTSDB system running while starting TDengine to provide the main service.
+
+## Data Model Design
+
+On the one hand, TDengine requires a strict schema definition for the data it ingests. On the other hand, TDengine's data model is richer than OpenTSDB's: the multi-value model can cover every need that the single-value model satisfies.
+
+Now assume a DevOps scenario in which we use collectd to gather base device metrics, including memory, swap, and disk. The schema in OpenTSDB is as follows:
+
+| No. | metric | value name | type   | tag1 | tag2        | tag3                 | tag4      | tag5   |
+| --- | ------ | ---------- | ------ | ---- | ----------- | -------------------- | --------- | ------ |
+| 1   | memory | value      | double | host | memory_type | memory_type_instance | source    | n/a    |
+| 2   | swap   | value      | double | host | swap_type   | swap_type_instance   | source    | n/a    |
+| 3   | disk   | value      | double | host | disk_point  | disk_instance        | disk_type | source |
+
+TDengine requires a schema for the stored data, i.e., a super table must be created and its schema specified before data is written. There are two ways to establish the schema: 1) take full advantage of TDengine's native support for OpenTSDB-format writes by calling the TDengine API to write the data (as text lines or JSON) and have the single-value model built automatically. This approach requires no major changes to the data-writing application and no format conversion of the written data.
+
+At the C language level, TDengine provides the `taos_schemaless_insert()` function to write OpenTSDB-format data directly (in earlier versions the function was named `taos_insert_lines()`). See the example code schemaless.c in the installation package directory.
+
+2) On the basis of a full understanding of TDengine's data model, and in view of the characteristics of the generated data, manually establish a mapping from OpenTSDB to an adjusted TDengine data model. TDengine supports both the multi-value and the single-value model; since all OpenTSDB mappings are single-value, we recommend modeling with the single-value model in TDengine.
+
+- **Single-value model**.
+
+The concrete steps are as follows: use the metric name as the name of the TDengine super table, which is created with two base data columns, timestamp and value; the super table's tags are equivalent to the metric's tag set, and the number of tags equals the metric's tag count. Subtable names are derived by a fixed rule: `metric + '_' + tag1_value + '_' + tag2_value + '_' + tag3_value ...` becomes the subtable name.
+
+Create 3 super tables in TDengine:
+
+```sql
+create stable memory(ts timestamp, val double) tags(host binary(12), memory_type binary(20), memory_type_instance binary(20), source binary(20));
+create stable swap(ts timestamp, val double) tags(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
+create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
+```
+
+Subtables are created dynamically at insert time, like this:
+
+```sql
+insert into memory_vm130_memory_buffered_collectd using memory tags('vm130', 'memory', 'buffer', 'collectd') values(1632979445000, 3.0656);
+```
+
+The system will end up with about 340 subtables and 3 super tables. Note that if concatenating the tag values produces a subtable name that exceeds the system limit (191 bytes), some encoding (e.g., MD5) must be applied to bring it down to an acceptable length.
+
+- **Multi-value model**
+
+To take advantage of TDengine's multi-value model, the following must hold first: the different collected quantities share the same collection frequency and can arrive **at the same time** at the data writer via the message queue, so that multiple metrics can be written in a single SQL statement. The metric name again becomes the super table name, forming a multi-column model for data collected at the same frequency and arriving together; subtable names follow a fixed naming rule. Each of the metrics above contains only a single measurement value, so it cannot be converted to the multi-value model; a sketch of what such a table could look like follows.
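+
+For illustration only, a hedged sketch of a multi-value super table for the hypothetical case where, say, temperature and humidity are sampled together by the same device (which is not the case for the collectd metrics above):
+
+```sql
+-- One row carries several simultaneously sampled values.
+CREATE STABLE sensors (ts TIMESTAMP, temperature DOUBLE, humidity DOUBLE)
+  TAGS (host BINARY(12), source BINARY(20));
+```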
+
+## Data Forwarding and Application Adaptation
+
+Subscribe to the data from the message queue and start the adapted writer program to write the data.
+
+After writes have been running for a while, you can use SQL to check whether the volume of written data matches the expected write volume. Count the data with the following SQL statement:
+
+```sql
+select count(*) from memory
+```
+
+After the query completes, if the written data shows no discrepancy from expectations and the writer program itself reports no errors, you can confirm that the written data is complete and valid.
+
+TDengine does not support queries in OpenTSDB's query syntax, but it provides equivalent support for every kind of OpenTSDB query. See Appendix 1 for how to adjust the corresponding query processing for your application; for a full picture of the query types TDengine supports, consult the TDengine user manual.
+
+TDengine supports the standard JDBC 3.0 interface for operating on databases, and you can also use connectors for other high-level languages to query and read data in your application. See the user manual for specific operations and usage.
+
+## Historical Data Migration
+
+### 1. Automatic migration with a tool
+
+To ease the migration of historical data, we provide a plugin for the data synchronization tool DataX that can write data into TDengine automatically. Note that DataX's automated migration supports only the single-value model.
+
+For how to use DataX and how to write data into TDengine with it, see [TDengine Data Migration Based on DataX](https://www.taosdata.com/blog/2021/10/26/3156.html).
+
+In our hands-on experience with DataX, we found that running multiple processes to migrate multiple metrics concurrently raises migration throughput substantially. Some records from the migration follow; we hope they serve as a reference for your own migration work.
+
+| Number of DataX instances (concurrent processes) | Migration speed (records/s) |
+| ------------------------------------------------ | --------------------------- |
+| 1                                                 | ~139,000                    |
+| 2                                                 | ~218,000                    |
+| 3                                                 | ~249,000                    |
+| 5                                                 | ~295,000                    |
+| 10                                                | ~330,000                    |
+
+ (Note: the test data came from a single-node machine with an Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz, 16 cores, and 64 GB RAM; channel and batchSize were 8 and 1000 respectively, and each record contained 10 tags.)
+
+### 2. Manual data migration
+
+If you need to write data with the multi-value model, you must develop your own tool to export data from OpenTSDB, confirm which timelines can be merged and imported into the same timeline, and then write the timelines that can be imported together into the database via SQL.
+
+Manual migration requires attention to two issues:
+
+1) When storing exported data on disk, the disk needs enough space to fully hold the exported data files. To avoid a disk space crunch after a full export, adopt a partial-export mode: export the timelines belonging to one super table first, import those data files into TDengine, and then move on.
+
+2) Under full system load, if sufficient spare compute and I/O resources remain, build a multithreaded import mechanism to maximize migration throughput. Given the heavy CPU load imposed by data parsing, cap the maximum number of parallel tasks to avoid overloading the whole system while importing historical data.
+
+Thanks to TDengine's operational simplicity, there is no index maintenance or data format conversion to handle anywhere in the process; it all runs sequentially.
+
+Once the historical data has been fully imported into TDengine, the two systems are running simultaneously, and you can then switch query requests over to TDengine, achieving a seamless application cutover.
+
+## Appendix 1: OpenTSDB Query Function Mapping
+
+### Avg
+
+Equivalent function: avg
+
+Example:
+
+```sql
+SELECT avg(val) FROM (SELECT first(val) val FROM super_table WHERE ts >= startTime and ts <= endTime INTERVAL(20s) Fill(linear)) INTERVAL(20s)
+```
+
+Notes:
+
+1. The INTERVAL value inside the subquery must equal the INTERVAL value of the outer query.
+2. In TDengine, interpolation is handled with the help of a subquery, as shown above: specify the interpolation type in the inner query. Since OpenTSDB interpolates values linearly, use fill(linear) in the fill clause to declare the interpolation type. The functions below that share this interpolation requirement are all handled the same way.
+3. The 20s parameter of INTERVAL means the inner query produces results in 20-second time windows. In a real query, adjust it to the interval between records so that the interpolated results are equivalent to the original data.
+4. Because of OpenTSDB's peculiar interpolation strategy and mechanism, its interpolate-then-aggregate approach means its aggregate query results can never match TDengine's exactly. For downsampling, however, TDengine and OpenTSDB produce identical results (OpenTSDB uses entirely different interpolation strategies for aggregate queries and downsampling queries).
+
+### Count
+
+Equivalent function: count
+
+Example:
+
+```sql
+select count(*) from super_table_name;
+```
+
+### Dev
+
+Equivalent function: stddev
+
+Example:
+
+```sql
+Select stddev(val) from table_name
+```
+
+### Estimated percentiles
+
+Equivalent function: apercentile
+
+Example:
+
+```sql
+Select apercentile(col1, 50, "t-digest") from table_name
+```
+
+Note:
+
+1. During approximate query processing, OpenTSDB uses the t-digest algorithm by default, so to obtain identical results, specify the algorithm in the apercentile function. TDengine supports two different approximation algorithms, declared via "default" and "t-digest".
+
+### First
+
+Equivalent function: first
+
+Example:
+
+```sql
+Select first(col1) from table_name
+```
+
+### Last
+
+Equivalent function: last
+
+Example:
+
+```sql
+Select last(col1) from table_name
+```
+
+### Max
+
+Equivalent function: max
+
+Example:
+
+```sql
+Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+```
+
+Note: the Max aggregator requires interpolation, for the reasons given above.
+
+### Min
+
+Equivalent function: min
+
+Example:
+
+```sql
+Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
+```
+
+### MimMax
+
+Equivalent function: max
+
+```sql
+Select max(val) from table_name
+```
+
+Note: this function has no interpolation requirement, so the result can be computed directly.
+
+### MimMin
+
+Equivalent function: min
+
+```sql
+Select min(val) from table_name
+```
+
+Note: this function has no interpolation requirement, so the result can be computed directly.
+
+### Percentile
+
+Equivalent function: percentile
+
+Note: TDengine's percentile function returns an exact percentile and applies to a single table; for a super table, use the apercentile function described under Estimated percentiles above.
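+
+A minimal example, assuming the same single table as in the sections above:
+
+```sql
+Select percentile(val, 50) from table_name
+```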
+
+### Sum
+
+Equivalent function: sum
+
+```sql
+Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+```
+
+Note: OpenTSDB's sum aggregator performs linear interpolation, hence the interpolating subquery above; for a sum without interpolation, see Zimsum below.
+
+### Zimsum
+
+Equivalent function: sum
+
+```sql
+Select sum(val) from table_name
+```
+
+Note: this function has no interpolation requirement, so the result can be computed directly.
+
+A complete example:
+
+```json
+// OpenTSDB query JSON
+query = {
+  "start": 1510560000,
+  "end": 1515000009,
+  "queries": [{
+    "aggregator": "count",
+    "metric": "cpu.usage_user"
+  }]
+}
+```
+
+```sql
+--Equivalent SQL:
+SELECT count(*)
+FROM `cpu.usage_user`
+WHERE ts >= 1510560000 AND ts <= 1515000009
+```
+
+## Appendix 2: Resource Estimation Method
+
+### Data generation environment
+
+We keep the environment assumed earlier, with 3 metrics: temperature and humidity are written at one record every 5 seconds, with 100,000 timelines; air quality is written at one record every 10 seconds, with 10,000 timelines; and the query request rate is 500 QPS.
+
+### Storage resource estimation
+
+Suppose the number of sensor devices generating data to store is `n`, the data generation rate is `t` records per second, and each record is `L` bytes long; the data generated per day is then `n × t × L × 86400` bytes. Assuming a compression ratio of C, the daily data volume is `(n × t × L × 86400)/C` bytes. Storage is provisioned to hold 1.5 years of data; in production TDengine's compression ratio C is generally between 5 and 7. Adding 20% headroom to the final result, the required storage is:
+
+```matlab
+(n × t × L × 86400) × (365 × 1.5) × (1 + 20%) / C
+```
+
+Plugging the parameters into the formula, and ignoring tag information, the raw data generated per year is 11.8 TB. Note that since tag information is associated with each timeline in TDengine rather than with every record, the volume that actually needs storing is somewhat lower than the volume generated, and this tag data can be neglected as a whole. Assuming a compression ratio of 5, the retained data finally amounts to 2.56 TB.
+
+### Storage device selection considerations
+
+Choose hard disks with good random read performance; use SSDs whenever possible. Disks with good random read performance are a huge help to query performance and raise the system's overall query responsiveness. To obtain good query performance, the single-threaded random read IOPS of the storage device should not fall below 1,000, and 5,000 IOPS or more is preferable. To assess the random read I/O capability of your devices, we recommend running a performance evaluation with `fio` to confirm whether they meet the large-file random read requirement.
+
+Disk write performance matters little to TDengine. TDengine writes in an append-only fashion, so as long as sequential write performance is reasonable, ordinary SAS disks and SSDs both satisfy TDengine's requirements for disk write performance.
+
+### Compute resource estimation
+
+Owing to the particular nature of IoT data, once the data generation rate is fixed, TDengine's write path consumes a relatively constant amount of (compute and storage) resources. Per the description in the [TDengine Operation Guide](/operation/), 22,000 writes per second in that system consume less than one CPU core.
+
+For estimating the CPU needed by queries: suppose the application requires 10,000 QPS from the database and each query consumes about 1 ms of CPU time; then each core provides 1,000 QPS, and satisfying 10,000 QPS requires at least 10 cores. For the system's overall CPU load to stay below 50%, the cluster needs twice that, i.e., 20 cores.
+
+### Memory resource estimation
+
+By default the database allocates a 16 MB × 3 buffer for each vnode. With a cluster of 22 CPU cores, 22 vnodes are created by default; at 1,000 tables per vnode, this accommodates all the tables. A block then fills in about an hour and a half, triggering a flush to disk, and no tuning is needed; the 22 vnodes together require about 1 GB of memory cache. Considering the memory needed by queries, and assuming roughly 50 MB per query, 500 concurrent queries need about 25 GB.
+
+In summary, you can use a single 16-core 32 GB machine, or a cluster of two 8-core 16 GB machines.
+
+## Appendix 3: Cluster Deployment and Startup
+
+TDengine provides abundant documentation covering the many aspects of cluster installation and deployment; the relevant documents are listed here for your reference.
+
+### Cluster deployment
+
+First install TDengine: download the latest stable version from the official website, extract it, and run install.sh to install it. For help with the various installation packages, see the blog post ["Installing and Uninstalling TDengine's Various Installation Packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
+
+Note: after installation, do not start the `taosd` service immediately; start `taosd` only after the parameters have been configured correctly.
+
+### Set runtime parameters and start the service
+
+To ensure the system can obtain the information it needs to run, set the following key parameters correctly on the server side:
+
+FQDN, firstEp, secondEp, dataDir, logDir, tmpDir, serverPort. For the meaning of each parameter and the requirements for setting them, see the document ["TDengine Cluster Installation and Management"](/cluster/).
+
+Follow the same steps to set the parameters on every node that needs to run, start the `taosd` service, and then add the dnodes to the cluster.
+
+Finally start the `taos` command-line program and run the command `show dnodes`; if you can see all the nodes that joined the cluster, the cluster was built successfully. For the specific procedure and caveats, see the document ["TDengine Cluster Installation and Management"](/cluster/).
+
+## Appendix 4: Super Table Names
+
+OpenTSDB metric names contain dots ("."), for example a metric named "cpu.usage_user". But the dot has a special meaning in TDengine: it is the separator between database and table names. TDengine also provides an escape character so that users can use keywords or special separators (such as dots) in (super) table names: wrap the table name in backquotes, e.g. `` `cpu.usage_user` `` is then a legal (super) table name.
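+
+A minimal sketch, assuming a metric named `cpu.usage_user` with a single value column:
+
+```sql
+-- Backquotes let the dot appear inside the table name instead of acting
+-- as the database/table separator.
+CREATE STABLE `cpu.usage_user` (ts TIMESTAMP, val DOUBLE) TAGS (host BINARY(12));
+SELECT count(*) FROM `cpu.usage_user`;
+```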
+
+## Appendix 5: Reference Articles
+
+1. [Quickly Build an IT DevOps Monitoring System with TDengine + collectd/StatsD + Grafana](/application/collectd/)
+2. [Writing Collected Data Directly into TDengine via collectd](/third-party/collectd/)
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp b/docs-cn/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp
new file mode 100644
index 0000000000000000000000000000000000000000..147a65b17bff2aa0e44faa206618bdce5664e1ca
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp
new file mode 100644
index 0000000000000000000000000000000000000000..3ca99c835b33df8845adf1b52d8fb8eb63076e82
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp
new file mode 100644
index 0000000000000000000000000000000000000000..04811f61b9b318e129552d87cd48eabf6e99feab
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp
new file mode 100644
index 0000000000000000000000000000000000000000..36930068758556f4de5b58321804a96401c64b22
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-Telegraf.webp b/docs-cn/25-application/IT-DevOps-Solutions-Telegraf.webp
new file mode 100644
index 0000000000000000000000000000000000000000..fd5461ec9b37be66cac4c17fb1f81fec76158330
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-Telegraf.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-collectd-dashboard.webp b/docs-cn/25-application/IT-DevOps-Solutions-collectd-dashboard.webp
new file mode 100644
index 0000000000000000000000000000000000000000..879c27a1a5843c714ff3c33c1dccfa32a2154b82
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-collectd-dashboard.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-statsd-dashboard.webp b/docs-cn/25-application/IT-DevOps-Solutions-statsd-dashboard.webp
new file mode 100644
index 0000000000000000000000000000000000000000..1d4c655970b5f3fcb3be2d65d67eb42f08f35862
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-statsd-dashboard.webp differ
diff --git a/docs-cn/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp b/docs-cn/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp
new file mode 100644
index 0000000000000000000000000000000000000000..105afcdb8312b23675f62ff6339d5e737b5cd958
Binary files /dev/null and b/docs-cn/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp differ
diff --git a/docs-cn/25-application/_category_.yml b/docs-cn/25-application/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..f43a4601b6c269822cbc0de1b7ed99dfdc70cfe5
--- /dev/null
+++ b/docs-cn/25-application/_category_.yml
@@ -0,0 +1 @@
+label: Application Practice
diff --git a/docs-cn/25-application/index.md b/docs-cn/25-application/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1305cf230f78b68f988918921540a1df05f0931f
--- /dev/null
+++ b/docs-cn/25-application/index.md
@@ -0,0 +1,10 @@
+---
+title: Application Practice
+---
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs-cn/27-train-faq/01-faq.md b/docs-cn/27-train-faq/01-faq.md
new file mode 100644
index 0000000000000000000000000000000000000000..f298d7e14dec682b58a76ce1d7f1c10970ab2738
--- /dev/null
+++ b/docs-cn/27-train-faq/01-faq.md
@@ -0,0 +1,241 @@
+---
+title: Frequently Asked Questions
+---
+
+## Reporting Issues
+
+If the information in this FAQ does not help and you need technical support and assistance from the TDengine team, please package the contents of the following two directories:
+
+1. /var/log/taos (if the default path has not been changed)
+2. /etc/taos
+
+Attach the necessary problem description, including the TDengine version in use, platform and environment information, the operations performed when the problem occurred, the symptoms of the problem and the approximate time it happened, then submit an issue on [GitHub](https://github.com/taosdata/TDengine).
+
+To make sure there is enough debug information, if the problem is reproducible, please edit the /etc/taos/taos.cfg file, append the line "debugFlag 135" (without the quotes) at the end, restart taosd, reproduce the problem, and then submit the issue. You can also set taosd's log level temporarily with the following SQL statement:
+
+```
+ alter dnode <dnode_id> debugFlag 135;
+```
+
+When the system is running normally, however, be sure to set debugFlag back to 131; otherwise huge volumes of log output will degrade system performance.
+
+## Frequently Asked Questions
+
+### 1. What should I pay attention to when upgrading from a pre-2.0 version of TDengine to 2.0 or above? ☆☆☆
+
+Version 2.0 is a complete rewrite of the previous versions, and configuration files and data files are not compatible. Be sure to do the following before upgrading:
+
+1. Delete the configuration file: `sudo rm -rf /etc/taos/taos.cfg`
+2. Delete the log files: `sudo rm -rf /var/log/taos/`
+3. Provided the data is definitely no longer needed, delete the data files: `sudo rm -rf /var/lib/taos/`
+4. Install the latest stable version of TDengine
+5. If data needs to be migrated, or if the data files are damaged, contact the official TAOS Data technical support team for assistance
+
+### 2. On Windows, JDBCDriver cannot find the dynamic link library. What should I do?
+
+See the [technical blog post](https://www.taosdata.com/blog/2019/12/03/950.html) written for this problem.
+
+### 3. Creating a table reports "more dnodes are needed"
+
+See the [technical blog post](https://www.taosdata.com/blog/2019/12/03/965.html) written for this problem.
+
+### 4. How do I make TDengine generate a core file when it crashes?
+
+See the [technical blog post](https://www.taosdata.com/blog/2019/12/06/974.html) written for this problem.
+
+### 5. What should I do about the error "Unable to establish connection"?
+
+When the client encounters a connection failure, check the following:
+
+1. Check the network environment
+
+   - Cloud server: check whether the cloud server's security group allows access to TCP/UDP ports 6030-6042
+   - Local virtual machine: check whether the host can be pinged, and avoid using `localhost` as the hostname
+   - Corporate server: in a NAT network environment, be sure to check whether the server can return messages to the client
+
+2. Make sure the client and server versions are exactly the same; the open-source community edition and the enterprise edition must not be mixed either
+
+3. On the server, run `systemctl status taosd` to check the state of *taosd*; if it is not running, start *taosd*
+
+4. Confirm that the client connects with the correct server FQDN (Fully Qualified Domain Name, obtainable by running the Linux command hostname -f on the server). For FQDN configuration, see: [One Article to Understand TDengine's FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html).
+
+5. Ping the server FQDN. If there is no response, check your network, the DNS settings, or the system hosts file of the client machine. When connecting to a TDengine cluster, the client must be able to ping the FQDNs of all cluster nodes.
+
+6. Check the firewall settings (ufw status on Ubuntu, firewall-cmd --list-port on CentOS) and make sure TCP/UDP traffic on ports 6030-6042 can pass between all hosts of the cluster.
+
+7. For JDBC on Linux (similarly for ODBC, Python, Go, etc.), make sure *libtaos.so* is in the directory */usr/local/taos/driver* and that */usr/local/taos/driver* is in the system library search path *LD_LIBRARY_PATH*
+
+8. For JDBC, ODBC, Python, Go, etc. on Windows, make sure *C:\TDengine\driver\taos.dll* is in your system library search path (placing *taos.dll* in the directory _C:\Windows\System32_ is recommended)
+
+9. If the connection failure still cannot be isolated
+
+   - On Linux, use the command-line tool nc to check whether TCP and UDP connections on a given port work
+     To check whether a UDP port connection works: `nc -vuz {hostIP} {port} `
+     To check whether the server-side TCP port accepts connections: `nc -l {port}`
+     To check whether the client-side TCP connection works: `nc {hostIP} {port}`
+
+   - On Windows, use the PowerShell command Test-NetConnection -ComputerName {fqdn} -Port {port} to check whether the server-side port is reachable
+
+10. You can also use the network connectivity test embedded in the taos program to verify whether the specified ports between server and client are open (both TCP and UDP): [Guide to TDengine's Embedded Network Test Tool](https://www.taosdata.com/blog/2020/09/08/1816.html).
+
+### 6. What should I do about the errors "Unexpected generic error in RPC" or "Unable to resolve FQDN"?
+
+These errors occur when the client or a data node cannot resolve the FQDN (Fully Qualified Domain Name). For the TAOS shell or client applications, check the following:
+
+1. Check that the FQDN of the server being connected to is correct. For FQDN configuration, see: [One Article to Understand TDengine's FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html)
+2. If the network has a DNS server configured, check that it is working properly
+3. If the network has no DNS server, check the hosts file on the client machine to see whether the FQDN is present and maps to the correct IP address
+4. If the network configuration is fine, the client machine must be able to ping that FQDN, otherwise the client cannot connect to the server
+5. If the server has run TDengine before and its hostname was changed, check whether dnodeEps.json in the data directory matches the currently configured EP; the default path is /var/lib/taos/dnode. Normally we recommend switching to a new data directory, or backing up and then deleting the old one, to avoid this problem.
+6. Check whether /etc/hosts and /etc/hostname contain the pre-configured FQDN
+
+### 7. Why do I still get an "Invalid SQL" error even though the syntax is correct?
+
+If you have confirmed the syntax is correct: in versions before 2.0, check whether the SQL statement exceeds 64 KB in length; if it does, this error is also returned.
+
+### 8. Are validation queries supported?
+
+TDengine does not yet have a dedicated set of validation queries; we suggest using the system-monitoring database "log" for this purpose.
+
+
+
+### 9. Can I delete or update a record?
+
+TDengine does not currently support deletion; it may be supported in the future based on user demand.
+
+Starting from 2.0.8.0, TDengine supports updating data that has already been written. To use updates, the database must be created with the UPDATE 1 parameter, after which the INSERT INTO command can be used to update data already written with the same timestamp. The UPDATE parameter cannot be changed with ALTER DATABASE. In a database created without UPDATE 1, writing data with an identical timestamp neither modifies the earlier data nor raises an error.
+
+Also note that with UPDATE set to 0, data with an identical timestamp that arrives later is silently discarded without error, yet is still counted in affected rows (so the return value of an INSERT cannot be used for timestamp deduplication). The main reason for this design is that TDengine treats written data as a stream: regardless of timestamp collisions, TDengine considers that the originating device genuinely produced this data. The UPDATE parameter only controls how such a stream is persisted: with UPDATE 0, data written earlier overrides data written later; with UPDATE 1, data written later overrides data written earlier. Which to choose depends on whether the earlier or the later data should prevail in subsequent use and statistics; see the sketch below.
+
+In addition, from version 2.1.7.0, UPDATE can be set to 2, meaning "partial column updates are supported". That is, with UPDATE 1, if an updated row omits values for some columns, those columns are set to NULL; with UPDATE 2, the omitted columns keep the corresponding values of the existing row.
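+
+A minimal sketch of the UPDATE 1 behavior, using hypothetical database and table names:
+
+```sql
+CREATE DATABASE demo UPDATE 1;
+USE demo;
+CREATE TABLE t1 (ts TIMESTAMP, val INT);
+INSERT INTO t1 VALUES ('2022-01-01 00:00:00.000', 1);
+-- Same timestamp: with UPDATE 1 the later write overrides the earlier one,
+-- so val becomes 2; with UPDATE 0 this row would be silently discarded.
+INSERT INTO t1 VALUES ('2022-01-01 00:00:00.000', 2);
+```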
+
+### 10. How do I create a table with more than 1024 columns?
+
+With version 2.0 and above, 1024 columns are supported by default; versions before 2.0 allowed TDengine to create tables of at most 250 columns. If you really exceed the limit, we suggest logically splitting the wide table into several smaller tables according to the data's characteristics. (From version 2.1.7.0, the maximum number of columns per table was raised to 4096.)
+
+### 11. What is the most efficient way to write data?
+
+Batch inserts. Each write statement can insert multiple records into one table at once, or insert multiple records into multiple tables at once; see the example below.
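+
+A sketch of both batch forms, with hypothetical table names:
+
+```sql
+-- Multiple records into one table:
+INSERT INTO d1 VALUES ('2022-01-01 00:00:00.000', 10) ('2022-01-01 00:00:01.000', 11);
+-- Multiple records into multiple tables in one statement:
+INSERT INTO d1 VALUES ('2022-01-01 00:00:02.000', 12)
+            d2 VALUES ('2022-01-01 00:00:02.000', 20);
+```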
+
+### 12. Chinese characters in nchar data inserted on Windows are parsed as garbage. How do I fix it?
+
+If nchar data inserted on Windows contains Chinese characters, first make sure the system region is set to China (configurable in the Control Panel); the `taos` client in cmd should then work properly. If you are developing a Java application in an IDE such as Eclipse or IntelliJ, make sure the file encoding in the IDE is set to GBK (the default encoding type in Java), and then initialize the client configuration when creating the Connection, with the following statements:
+
+```JAVA
+Class.forName("com.taosdata.jdbc.TSDBDriver");
+Properties properties = new Properties();
+properties.setProperty(TSDBDriver.LOCALE_KEY, "UTF-8");
+Connection = DriverManager.getConnection(url, properties);
+```
+
+### 13. The Windows client cannot display Chinese characters properly?
+
+Windows systems generally store Chinese characters in GBK/GB18030, whereas TDengine's default character set is UTF-8. When a TDengine client is used on Windows, the client driver converts all characters to UTF-8 before sending them to the server for storage; so during application development, just configure the current Chinese character set correctly when calling the interfaces.
+
+[For v2.2.1.5 and later] If the TDengine CLI tool taos cannot input or display Chinese properly on Windows 10, add the following to the client-side taos.cfg:
+
+```
+locale C
+charset UTF-8
+```
+
+### 14. JDBC error: the executed SQL is not a DML or a DDL?
+
+Please upgrade to the latest JDBC driver; see the [Java Connector](/reference/connector/java)
+
+### 15. taos connect failed, reason: invalid timestamp
+
+The usual cause is that the server and client clocks are not synchronized. Synchronize with a time server (use the ntpdate command on Linux; on Windows, enable automatic time synchronization in the system time settings).
+
+### 16. Table names are displayed incompletely
+
+Because the taos shell has limited display width in the terminal, longer table names may be displayed incompletely, and operating on such a truncated name triggers a "Table does not exist" error. This can be fixed by changing the maxBinaryDisplayWidth setting in taos.cfg, by entering the command set max_binary_display_width 100 directly, or by appending \G to the statement to change how the result is displayed.
+
+### 17. How do I migrate data?
+
+TDengine uniquely identifies a machine by its hostname. When moving data files from machine A to machine B, note the following:
+
+ - For versions 2.0.0.0 through 2.0.6.x, reconfigure machine B's hostname to machine A's hostname.
+ - For 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN corresponding to the dnodeId in dnodeEps.json, and restart. Make sure this file is exactly the same on all machines.
+ - The storage structures of versions 1.x and 2.x are incompatible; use a migration tool or your own application to export and import the data.
+
+### 18. How do I temporarily adjust the log level in the command-line program taos?
+
+For the convenience of debugging, the command-line program taos gained two new log-related commands starting from version 2.0.16:
+
+```sql
+ALTER LOCAL flag_name flag_value;
+```
+
+This means: within the current command-line program, change the log level of a particular module (effective only for the current command-line program; if taos is restarted, it must be set again):
+
+ - flag_name can be: debugFlag, cDebugFlag, tmrDebugFlag, uDebugFlag, rpcDebugFlag
+ - flag_value can be: 131 (log errors and warnings), 135 (log errors, warnings, and debug messages), 143 (log errors, warnings, debug, and trace messages)
+
+```sql
+ALTER LOCAL RESETLOG;
+```
+
+This clears all log files generated on this machine by the client.
+
+
+
+### 19. How do I resolve build failures of the Go components?
+
+TDengine 2.3.0.0 and later include taosAdapter, a standalone component written in Go that must run separately. It replaces the httpd previously built into taosd, providing the original httpd functionality plus data ingestion from a variety of other software (Prometheus, Telegraf, collectd, StatsD, and so on).
+To build the latest develop branch code, first run `git submodule update --init --recursive` to download the taosAdapter repository code before compiling.
+
+The default build now compiles taosAdapter automatically, and Go 1.14 or later is required. Go build errors are often caused by problems accessing go modules from within China and can be resolved by setting Go environment variables:
+
+```sh
+go env -w GO111MODULE=on
+go env -w GOPROXY=https://goproxy.cn,direct
+```
+
+If you prefer to keep using the previous built-in httpd, disable the taosAdapter build by running
+`cmake .. -DBUILD_HTTP=true` to use the original built-in httpd.
+
+### 20. How do I find out how much storage my data occupies?
+
+By default, TDengine's data files are stored under /var/lib/taos and its log files under /var/log/taos.
+
+To see the total size occupied by all data files, run the shell command `du -sh /var/lib/taos/vnode --exclude='wal'`. The WAL directory is excluded here because under continuous writes its size is nearly constant, and it is emptied every time TDengine is shut down normally and the data is flushed to disk.
+
+To see the size occupied by a single database, select the database in the command-line program taos, run `show vgroups;`, and use the returned VGroup ids to inspect the sizes of the corresponding folders under /var/lib/taos/vnode.
+
+To see only the data block distribution and size of a specific (super) table, see the [_block_dist function](https://docs.taosdata.com/taos-sql/select/#_block_dist-%E5%87%BD%E6%95%B0) and the example below.
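+
+A minimal example, assuming a super table named `meters` in the current database:
+
+```sql
+SELECT _block_dist() FROM meters;
+```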
+
+### 21. How do I make the client connection string highly available?
+
+See the [technical blog post](https://www.taosdata.com/blog/2021/04/16/2287.html) written for this question
+
+### 22. How is time zone information in timestamps handled?
+
+In TDengine, the time zone of a timestamp is always handled on the client side, never by the server. Concretely, the client converts timestamps in SQL statements to the UTC time zone (i.e., Unix timestamps) before handing them to the server for writing and querying; when reading data, the server likewise serves raw data in UTC, and upon receipt the client converts the timestamps, based on local settings, into the time zone required by the local system for display.
+
+When handling timestamp strings, the client applies the following logic:
+
+1. By default, with nothing specially configured, the client uses the time zone setting of the operating system it runs on.
+2. If the timezone parameter is set in taos.cfg, the client follows that configuration file setting.
+3. If the timezone is specified explicitly when establishing the database connection in the connector driver of C/C++/Java/Python and other languages, that specified setting wins; for example, the Java connector's JDBC URL has a timezone parameter.
+4. When writing SQL, you can also use Unix timestamps directly (e.g., `1554984068000`) or timestamp strings with an explicit time zone, i.e., RFC 3339 format (e.g., `2013-04-12T15:52:01.123+08:00`) or ISO 8601 format (e.g., `2013-04-12T15:52:01.123+0800`); the values of such timestamps are then unaffected by any other time zone setting. See the example below.
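+
+A minimal illustration of item 4, with a hypothetical table `t1`:
+
+```sql
+-- Both rows refer to the same instant regardless of the client's time zone.
+INSERT INTO t1 VALUES (1554984068000, 1);
+INSERT INTO t1 VALUES ('2013-04-12T15:52:01.123+08:00', 2);
+```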
+
+### 23. Which network ports does TDengine 2.0 use?
+
+For the network ports used, see the documentation: [serverport](/reference/config/#serverport)
+
+Note that the ports listed in the documentation assume the default port 6030; if the setting in the configuration file is changed, the listed ports shift accordingly. Administrators can adjust firewall settings with reference to the above information.
+
+### 24. Why does the RESTful interface not respond, Grafana fail to add TDengine as a data source, or TDengineGUI fail to connect even with port 6041 selected?
+
+As of TDengine 2.4.0.0, taosAdapter became part of the TDengine server software and is the bridge and adapter between TDengine clusters and applications. Before that, the RESTful interface and related functionality were provided by the HTTP service built into taosd; now these functions require starting the taosAdapter service with the command: ```systemctl start taosadapter```
+
+Note that taosAdapter's log path must be configured separately; the default path is /var/log/taos. Its logLevel has 8 levels and defaults to info; setting it to panic turns off log output. Mind the free space of the operating system's / partition; the configuration can be changed via command-line parameters, environment variables, or the configuration file, which defaults to /etc/taos/taosadapter.toml.
+
+For a detailed introduction to the taosAdapter component, see the documentation: [taosAdapter](https://docs.taosdata.com/reference/taosadapter/)
+
+### 25. What should I do about OOM?
+
+OOM is a protection mechanism of the operating system: when OS memory (including SWAP) runs short, it kills some processes to keep the operating system stable. Insufficient memory usually has two causes: the remaining memory falls below vm.min_free_kbytes, or a program requests more memory than remains. There is also the case where memory is sufficient but a program occupies a special memory address; this can trigger OOM too.
+
+TDengine preallocates memory for each vnode; the number of vnodes per database is governed by maxVgroupsPerDb, and the memory occupied by each vnode by Blocks and Cache. To prevent OOM, plan memory properly at project inception and configure SWAP sensibly. Note also that querying excessive amounts of data can make memory usage spike, depending on the specific query. TDengine Enterprise has optimized memory management with a new allocator; users with higher stability requirements may consider the Enterprise edition.
diff --git a/docs-cn/27-train-faq/03-docker.md b/docs-cn/27-train-faq/03-docker.md
new file mode 100644
index 0000000000000000000000000000000000000000..7791569b25e102b4634f0fb899fc0973cacc0aa1
--- /dev/null
+++ b/docs-cn/27-train-faq/03-docker.md
@@ -0,0 +1,330 @@
+---
+title: Quickly Experience TDengine via Docker
+---
+
+While deploying TDengine services in production via Docker is not recommended, Docker shields the environmental differences of the underlying operating systems well and is well suited for installing and running TDengine during development, testing, or a first try. In particular, Docker makes it fairly easy to try TDengine on macOS and Windows without installing a virtual machine or renting an extra Linux server. Also, starting from version 2.0.14.0, the TDengine images support the X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run Docker, such as NAS devices, Raspberry Pi, and embedded development boards, can also experience TDengine by following this document.
+
+The step-by-step walkthrough below explains how to quickly set up a single-node TDengine environment via Docker to support development and testing.
+
+## Download Docker
+
+To download Docker itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).
+
+After installation, check the Docker version in a command-line terminal. If the version number prints normally, the Docker environment was installed successfully.
+
+```bash
+$ docker -v
+Docker version 20.10.3, build 48d30b5
+```
+
+## Run TDengine in a Docker Container
+
+### Run the TDengine server in a Docker container
+
+```bash
+$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
+```
+
+This command starts a Docker container running the TDengine server and maps the container's ports 6030 to 6049 to the same ports on the host. If the host is already running a TDengine server occupying those ports, map the container's ports to a different unused port range instead (see [TDengine 2.0 Port Description](/train-faq/faq#port) for details). Both TCP and UDP ports must be open so that TDengine clients can operate the TDengine server.
+
+- **docker run**: run a container via Docker
+- **-d**: run the container in the background
+- **-p**: specify port mappings. Note: without port mappings you can still enter the Docker container to use TDengine services or develop applications; you just cannot serve clients outside the container
+- **tdengine/tdengine**: the officially released TDengine application image to pull
+- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the long string returned is the container ID, which can also be used to refer to the container
+
+Going further, you can start the TDengine server container with the `--name` option to name the container `tdengine`, the `--hostname` option to set the hostname to `tdengine-server`, and `-v` to mount local directories into the container, synchronizing data between host and container to prevent data loss after the container is deleted.
+
+```bash
+docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+```
+
+- **--name tdengine**: set the container name; the container can be accessed via this name
+- **--hostname=tdengine-server**: set the hostname of the Linux system inside the container; by mapping the hostname to an IP we avoid problems with the container IP changing.
+- **-v**: map host directories into container directories to avoid data loss after the container is deleted.
+
+### Confirm the container is running correctly with docker ps
+
+```bash
+docker ps
+```
+
+Example output:
+
+```
+CONTAINER ID IMAGE COMMAND CREATED STATUS ···
+c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
+```
+
+- **docker ps**: list all containers in the running state.
+- **CONTAINER ID**: container ID.
+- **IMAGE**: the image in use.
+- **COMMAND**: the command run when the container started.
+- **CREATED**: when the container was created.
+- **STATUS**: container status. Up means running.
+
+### Enter the Docker container for development with docker exec
+
+```bash
+$ docker exec -it tdengine /bin/bash
+root@tdengine-server:~/TDengine-server-2.4.0.4#
+```
+
+- **docker exec**: enter the container via docker exec; the container does not stop when you exit.
+- **-i**: interactive mode.
+- **-t**: allocate a terminal.
+- **tdengine**: the container name; adjust it according to what the docker ps command returns.
+- **/bin/bash**: run bash to interact after entering the container.
+
+Inside the container, run the taos shell client program.
+
+```bash
+root@tdengine-server:~/TDengine-server-2.4.0.4# taos
+
+Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
+Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
+
+taos>
+```
+
+The TDengine CLI has successfully connected to the server and printed the welcome message and version information. If it fails, an error message is printed instead.
+
+In the TDengine CLI, you can create/drop databases, tables, and super tables with SQL commands, and insert into and query them. See the [TAOS SQL documentation](/taos-sql/) for details; a small smoke test is sketched below.
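+
+For instance, a minimal smoke test (all names are arbitrary):
+
+```sql
+CREATE DATABASE demo;
+USE demo;
+CREATE TABLE t (ts TIMESTAMP, speed INT);
+INSERT INTO t VALUES (NOW, 10);
+SELECT * FROM t;
+```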
+
+### Access the TDengine server in the Docker container from the host
+
+After the TDengine Docker container has been started with the correct ports mapped via the -p option, you can access the TDengine running in the container from the host simply with the taos shell command.
+
+```
+$ taos
+
+Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
+Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
+
+taos>
+```
+
+You can also use curl on the host to access the TDengine server inside the Docker container via the RESTful port.
+
+```
+curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
+```
+
+Example output:
+
+```
+{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
+```
+
+This command accesses the TDengine server via the REST API, connecting to the local port 6041, which shows that the connection succeeded.
+
+For details on the TDengine REST API, see the [official documentation](/reference/rest-api/).
+
+### Run the TDengine server and taosAdapter with a Docker container
+
+Docker containers for TDengine versions after 2.4.0.0 ship with taosAdapter, an independently running component that replaces the http server previously built into the taosd process of earlier TDengine versions. taosAdapter supports writing to and querying the TDengine server through the RESTful interface and provides data ingestion interfaces compatible with InfluxDB/OpenTSDB, allowing InfluxDB/OpenTSDB applications to be ported to TDengine seamlessly. In the new Docker images taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command. taosAdapter can also be run on its own in a docker run command, without taosd.
+
+Note: if taosAdapter runs in the container, map whatever additional ports you need; for the default port configuration and how to change it, see the [taosAdapter documentation](/reference/taosadapter/).
+
+Run the TDengine 2.4.0.4 image with Docker (taosd + taosAdapter):
+
+```bash
+docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
+```
+
+Run the TDengine 2.4.0.4 image with Docker (taosAdapter only; requires the firstEp configuration item or the TAOS_FIRST_EP environment variable):
+
+```bash
+docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
+```
+
+Run the TDengine 2.4.0.4 image with Docker (taosd only):
+
+```bash
+docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
+```
+
+Verify that the RESTful interface works with curl:
+
+```bash
+curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
+```
+
+Example output:
+
+```
+{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
+```
+
+### Application example: write data from the host into the TDengine server in the Docker container with taosBenchmark
+
+1. Run taosBenchmark (formerly named taosdemo) from the host command line to write data into the TDengine server in the Docker container
+
+ ```bash
+ $ taosBenchmark
+
+ taosBenchmark is simulating data generated by power equipments monitoring...
+
+ host: 127.0.0.1:6030
+ user: root
+ password: taosdata
+ configDir:
+ resultFile: ./output.txt
+ thread num of insert data: 10
+ thread num of create table: 10
+ top insert interval: 0
+ number of records per req: 30000
+ max sql length: 1048576
+ database count: 1
+ database[0]:
+ database[0] name: test
+ drop: yes
+ replica: 1
+ precision: ms
+ super table count: 1
+ super table[0]:
+ stbName: meters
+ autoCreateTable: no
+ childTblExists: no
+ childTblCount: 10000
+ childTblPrefix: d
+ dataSource: rand
+ iface: taosc
+ insertRows: 10000
+ interlaceRows: 0
+ disorderRange: 1000
+ disorderRatio: 0
+ maxSqlLen: 1048576
+ timeStampStep: 1
+ startTimestamp: 2017-07-14 10:40:00.000
+ sampleFormat:
+ sampleFile:
+ tagsFile:
+ columnCount: 3
+ column[0]:FLOAT column[1]:INT column[2]:FLOAT
+ tagCount: 2
+ tag[0]:INT tag[1]:BINARY(16)
+
+ Press enter key to continue or Ctrl-C to stop
+ ```
+
+   After pressing Enter, the command automatically creates a super table meters under the database test; under the super table are 10,000 tables named "d0" through "d9999", each with 10,000 records. Each record has four fields (ts, current, voltage, phase), with timestamps from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId: groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.SanDiego".
+
+   In total, 100 million records are inserted.
+
+2. Enter the TDengine CLI and view the data generated by taosBenchmark.
+
+   - **Enter the command line.**
+
+ ```bash
+   root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
+
+ Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
+ Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
+
+ taos>
+ ```
+
+   - **Show the databases.**
+
+ ```bash
+   taos> show databases;
+ name | created_time | ntables | vgroups | ···
+ test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
+ log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
+
+ ```
+
+   - **Show the super tables.**
+
+ ```bash
+   taos> use test;
+   Database changed.
+
+   taos> show stables;
+ name | created_time | columns | tags | tables |
+ ============================================================================================
+ meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
+ Query OK, 1 row(s) in set (0.003259s)
+
+ ```
+
+   - **List rows from a table, limiting the output to ten.**
+
+ ```bash
+   taos> select * from test.t0 limit 10;
+
+ DB error: Table does not exist (0.002857s)
+ taos> select * from test.d0 limit 10;
+ ts | current | voltage | phase |
+ ======================================================================================
+ 2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
+ 2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
+ 2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
+ 2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
+ 2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
+ 2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
+ 2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
+ 2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
+ 2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
+ 2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
+ Query OK, 10 row(s) in set (0.016791s)
+
+ ```
+
+   - **View the tag values of table d0.**
+
+ ```bash
+   taos> select groupid, location from test.d0;
+ groupid | location |
+ =================================
+         0 | California.SanDiego            |
+ Query OK, 1 row(s) in set (0.003490s)
+ ```
+
+### Application example: write into TDengine from data collection agents
+
+taosAdapter supports multiple data collection agents (such as Telegraf, StatsD, collectd, etc.). Here we simulate only StatsD writing data, by running the following command on the host:
+
+```
+echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
+```
+
+Then you can use the taos shell to query the database statsd that taosAdapter created automatically, and the contents of the super table foo:
+
+```
+taos> show databases;
+ name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+====================================================================================================================================================================================================================================================================================
+ log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
+ statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+Query OK, 2 row(s) in set (0.002112s)
+
+taos> use statsd;
+Database changed.
+
+taos> show stables;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
+Query OK, 1 row(s) in set (0.001160s)
+
+taos> select * from foo;
+ ts | value | metric_type |
+=======================================================================================
+ 2021-12-28 09:21:48.840820836 | 1 | counter |
+Query OK, 1 row(s) in set (0.001639s)
+
+taos>
+```
+
+As shown, the simulated data has been written into TDengine.
+
+## Stop the TDengine Service Running in Docker
+
+```bash
+docker stop tdengine
+```
+
+- **docker stop**: stop the specified running Docker container.
diff --git a/docs-cn/27-train-faq/_category_.yml b/docs-cn/27-train-faq/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..16b32bc38fd3ef88313150cf89e32b15696fe7ff
--- /dev/null
+++ b/docs-cn/27-train-faq/_category_.yml
@@ -0,0 +1 @@
+label: FAQ & Others
diff --git a/docs-cn/27-train-faq/index.md b/docs-cn/27-train-faq/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..b42bff0288fc8ab59810a7d7121be28ddf781551
--- /dev/null
+++ b/docs-cn/27-train-faq/index.md
@@ -0,0 +1,10 @@
+---
+title: FAQ & Others
+---
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
\ No newline at end of file
diff --git a/docs-cn/eco_system.webp b/docs-cn/eco_system.webp
new file mode 100644
index 0000000000000000000000000000000000000000..d60c38e97c67fa7b2acc703b2ba777d19ae5be13
Binary files /dev/null and b/docs-cn/eco_system.webp differ
diff --git a/docs-en/01-index.md b/docs-en/01-index.md
new file mode 100644
index 0000000000000000000000000000000000000000..f5b7f3e0f61507efbb09506b48548c12317e700b
--- /dev/null
+++ b/docs-en/01-index.md
@@ -0,0 +1,27 @@
+---
+title: TDengine Documentation
+sidebar_label: Documentation Home
+slug: /
+---
+
+TDengine is a [high-performance](https://tdengine.com/fast), [scalable](https://tdengine.com/scalable) time series database with [SQL support](https://tdengine.com/sql-support). This document is the TDengine user manual. It introduces the basic as well as novel concepts in TDengine, and talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design and other topics. It’s written mainly for architects, developers and system administrators.
+
+To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
+
+TDengine greatly improves the efficiency of data ingestion, querying and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read ["Concepts"](./concept) thoroughly.
+
+If you are a developer, please read the ["Developer Guide"](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
+
+We live in the era of big data, and scale-up is unable to meet the growing needs of business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but it also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster, please refer to ["Cluster"](./cluster).
+
+TDengine uses ubiquitous SQL as its query language, which greatly reduces learning and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation and time-weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
+
+If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please thoroughly read the ["Administration"](./operation) section.
+
+If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the ["Reference"](./reference) chapter.
+
+If you are very interested in the internal design of TDengine, please read the chapter ["Inside TDengine"](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study the TDengine code or even contribute code, please read this chapter carefully.
+
+TDengine is an open source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation, or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
+
+Together, we make a difference.
diff --git a/docs-en/02-intro/_category_.yml b/docs-en/02-intro/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..a3d691e87b15eaf6a62030a130179ffe2e8e5fa6
--- /dev/null
+++ b/docs-en/02-intro/_category_.yml
@@ -0,0 +1 @@
+label: Introduction
diff --git a/docs-en/02-intro/eco_system.webp b/docs-en/02-intro/eco_system.webp
new file mode 100644
index 0000000000000000000000000000000000000000..d60c38e97c67fa7b2acc703b2ba777d19ae5be13
Binary files /dev/null and b/docs-en/02-intro/eco_system.webp differ
diff --git a/docs-en/02-intro/index.md b/docs-en/02-intro/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6766f910f4d7560b782bf02ffa97922523e6167
--- /dev/null
+++ b/docs-en/02-intro/index.md
@@ -0,0 +1,113 @@
+---
+title: Introduction
+toc_max_heading_level: 2
+---
+
+TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
+
+This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
+
+## Major Features
+
+The major features are listed below:
+
+1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB Line](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) among others.
+2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
+3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
+4. Support for [user defined functions](/develop/udf).
+5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
+6. Support for [continuous query](/develop/continuous-query).
+7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
+8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
+9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
+10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
+11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
+12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
+13. Provides a [REST API](/reference/rest-api/).
+14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
+15. Supports seamless integration with Google Data Studio.
+
+For more details on features, please read through the entire documentation.
+
+## Competitive Advantages
+
+Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
+
+- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
+
+- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-the-box scalability and high availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
+
+- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series data. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible, schemaless data ingestion.
+
+- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
+
+- **Seamless Integration**: Without a single line of code, TDengine provides seamless, configurable integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More third-party tools are being integrated.
+
+- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
+
+- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
+
+- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
+
+With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced:
+
+1. With its superior performance, computing and storage resource requirements are reduced significantly.
+2. With SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly.
+3. With its simple architecture and zero management, operation and maintenance costs are reduced.
+
+## Technical Ecosystem
+This is how TDengine would be situated in a typical time-series data processing platform:
+
+
+![TDengine Technical Ecosystem](eco_system.webp)
+
+Figure 1. TDengine Technical Ecosystem
+
+On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
+
+## Typical Use Cases
+
+As a high-performance, scalable and SQL-supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
+
+### Characteristics and Requirements of Data Sources
+
+| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
+| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry.|
+| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
+| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
+
+### System Architecture Requirements
+
+| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
+| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
+| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
+
+### System Function Requirements
+
+| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.|
+| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
+
+### System Performance Requirements
+
+| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
+| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products.|
+| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
+
+### System Maintenance Requirements
+
+| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
+| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
+| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
+
+## Comparison with other databases
+
+- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/23/4975.html)
+- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
+- [TDengine vs InfluxDB, OpenTSDB, Cassandra, MySQL, ClickHouse](https://www.tdengine.com/downloads/TDengine_Testing_Report_en.pdf)
+- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
+- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
+- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
diff --git a/docs-en/04-concept/_category_.yml b/docs-en/04-concept/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..12c659a9265e86d0e74d88a751c19d5d715e9fe0
--- /dev/null
+++ b/docs-en/04-concept/_category_.yml
@@ -0,0 +1 @@
+label: Concepts
\ No newline at end of file
diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..850f705146c4829db579f14be1a686ef9052f678
--- /dev/null
+++ b/docs-en/04-concept/index.md
@@ -0,0 +1,170 @@
+---
+title: Concepts
+---
+
+In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. each smart meter collects three metrics, i.e. current, voltage, and phase; 2. there are multiple smart meters; and 3. each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
+
+
+
+Each row contains the device ID, timestamp, collected metrics (current, voltage, phase as above), and static tags (location and groupId in Table 1) associated with the device. Each smart meter generates a row (measurement) at a pre-defined time interval or when triggered by an external event. The device produces a sequence of measurements with associated timestamps.
+
+## Metric
+
+Metric refers to a physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc. It changes with time, and its data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases.
+
+## Label/Tag
+
+Label/Tag refers to a static property of sensors, equipment or other types of data collection devices, which does not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time.
+
+## Data Collection Point
+
+Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same timestamp. Some complex equipment has multiple data collection points, and the sampling rates of these collection points may differ and be fully independent of each other. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
+
+## Table
+
+Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process it, with a short learning curve. You need to create a database and tables, then insert data points and execute queries to explore the data.
+
+To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices D1001, D1002, D1003, and D1004 to store the data collected. This design has several benefits:
+
+1. Since the metric data from different DCPs is fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
+2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
+3. The metric data from a DCP is continuously stored, block by block. Reading data for a period of time therefore greatly reduces random read operations and improves read and query performance by orders of magnitude.
+4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Because the values of a single metric usually change only gradually over a time range, this allows for a higher compression rate.
+
+If, as in the traditional approach, the metric data of multiple DCPs were written into a single table, then due to uncontrollable network delays the order in which data from different DCPs arrives at the server could not be guaranteed, write operations would have to be protected by locks, and the metric data of one DCP could not be guaranteed to be stored continuously. **One table for one data collection point ensures the best possible insert and query performance for a single data collection point.**
+
+TDengine suggests using the DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage and phase above). Each metric has a corresponding column in the table, and the data type of a column can be int, float, string, and so on. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index and won't build an index on any metric stored. Column-wise storage is used.
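+
+As an illustration only, a plain table for meter D1001 could look like the sketch below; the column names and types follow the smart meter example above, and the first column is the mandatory timestamp. (In practice you would usually create such a table from a STable, as the following sections show.)
+
+```sql
+-- Illustrative sketch: one table for one data collection point (meter D1001).
+-- The first column must be the timestamp, which TDengine uses as the index.
+CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
+```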
+
+## Super Table (STable)
+
+The design of one table for one data collection point requires a huge number of tables, which are difficult to manage. Furthermore, applications often need to perform aggregation operations across DCPs, and such aggregation becomes complicated with so many tables. To support efficient aggregation over multiple tables, TDengine introduces the STable (Super Table) concept.
+
+STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
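+
+For example, for the smart meters above, the STable could be sketched as follows; the metric schema comes first and the tag schema follows `TAGS` (the same schema appears later in the Data Model chapter):
+
+```sql
+-- A sketch of a STable for the smart meter example.
+CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
+  TAGS (location BINARY(64), groupId INT);
+```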
+
+In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
+
+## Subtable
+
+When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP (see the sketch after the list below). **A table created using a STable as its template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
+1. A subtable is a table; all SQL commands applied on a regular table can be applied on a subtable.
+2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created, whereas a regular table does not have tags.
+3. A subtable belongs to only one STable, and a STable may have many subtables. Regular tables do not belong to a STable.
+4. A regular table cannot be converted into a subtable, and vice versa.
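+
+A minimal sketch, assuming the `meters` STable described above: creating a subtable only requires naming the STable template and supplying the tag values for this specific DCP.
+
+```sql
+-- Create subtable d1001 from the STable template "meters",
+-- binding the static tag values of this data collection point.
+CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
+```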
+
+The relationship between a STable and the subtables created based on this STable is as follows:
+
+1. A STable contains multiple subtables with the same metric schema but with different tag values.
+2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
+3. A STable defines only a template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
+
+Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of only these subtables to perform the aggregation operation. This greatly reduces the number of data sets to be scanned and thereby greatly improves the performance of data aggregation across multiple DCPs.
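+
+As a sketch, assuming the `meters` STable above: an aggregation across all meters in group 2 could look like the query below. TDengine first picks the subtables whose tags match the filter, then scans only their time-series data.
+
+```sql
+-- Aggregate over every subtable of "meters" whose groupId tag is 2.
+SELECT AVG(current), MAX(voltage) FROM meters WHERE groupId = 2;
+```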
+
+In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
+
+## Database
+
+A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. Different types of DCPs often have different data characteristics, including the frequency of data collection, data retention time, the number of replications, the size of data blocks, whether data is allowed to be updated, and so on. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
+
+In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
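+
+As an illustration, a database with its own retention and replication policy could be created with a sketch like the following (the parameter values are arbitrary here):
+
+```sql
+-- Keep data for 365 days and maintain 3 replicas; STables are then created inside it.
+CREATE DATABASE power KEEP 365 REPLICA 3;
+USE power;
+```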
+
+## FQDN & End Point
+
+FQDN (Fully Qualified Domain Name) is the full domain name of a specific computer or host on the Internet. An FQDN consists of two parts: hostname and domain name. For example, the FQDN of a mail server might be mail.tdengine.com: the hostname is mail, and the host is located in the domain tdengine.com. DNS (Domain Name System) is responsible for translating FQDNs into IP addresses. For systems without DNS, this can be solved by configuring the hosts file.
+
+Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
+
+TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
diff --git a/docs-en/05-get-started/_apt_get_install.mdx b/docs-en/05-get-started/_apt_get_install.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..40f6cad1f672a97fd28e6d4b5795d32b2ff0d26c
--- /dev/null
+++ b/docs-en/05-get-started/_apt_get_install.mdx
@@ -0,0 +1,26 @@
+`apt-get` can be used to install TDengine from the official package repository.
+
+**Package Repository**
+
+```
+wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
+echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
+```
+
+The repository required for installing beta versions can be configured as below:
+
+```
+echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
+```
+
+**Install With apt-get**
+
+```
+sudo apt-get update
+apt-cache policy tdengine
+sudo apt-get install tdengine
+```
+
+:::tip
+`apt-get` can only be used on Debian or Ubuntu Linux.
+:::
diff --git a/docs-en/05-get-started/_category_.yml b/docs-en/05-get-started/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..043ae21554ffd8f274c6afe41c5ae5e7da742b26
--- /dev/null
+++ b/docs-en/05-get-started/_category_.yml
@@ -0,0 +1 @@
+label: Get Started
diff --git a/docs-en/05-get-started/_pkg_install.mdx b/docs-en/05-get-started/_pkg_install.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cf10497c96ba1d777e45340b0312d97c127b6fcb
--- /dev/null
+++ b/docs-en/05-get-started/_pkg_install.mdx
@@ -0,0 +1,17 @@
+import PkgList from "/components/PkgList";
+
+It's very easy to install TDengine: it takes only a few minutes from downloading to finishing the installation.
+
+For the convenience of users, from version 2.4.0.10 the standard server-side installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark` and sample code. If only the `taosd` server and the C/C++ connector are required, you can also choose to download the lite package.
+
+Three kinds of packages are provided: tar.gz, rpm and deb. The tar.gz package in particular is provided for the convenience of enterprise customers on different kinds of operating systems; it includes `taosdump` and the TDinsight installation script, which are normally only provided in the taos-tools rpm and deb packages.
+
+Between two major release versions, some beta versions may be delivered for users to try some new features.
+
+
+
+For the details please refer to [Install and Uninstall](/operation/pkg-install).
+
+To see the details of versions, please refer to [Download List](https://tdengine.com/all-downloads) and [Release Notes](https://github.com/taosdata/TDengine/releases).
+
+
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..56958ef3ec1c206ee0cff45c67fd3c3a6fa6753a
--- /dev/null
+++ b/docs-en/05-get-started/index.md
@@ -0,0 +1,171 @@
+---
+title: Get Started
+description: 'Install TDengine from Docker image, apt-get or package, and run TAOS CLI and taosBenchmark to experience the features'
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import PkgInstall from "./\_pkg_install.mdx";
+import AptGetInstall from "./\_apt_get_install.mdx";
+
+## Quick Install
+
+The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
+
+TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
+
+
+
+If Docker is already installed on your computer, execute the following command:
+
+```shell
+docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+```
+
+Make sure the container is running:
+
+```shell
+docker ps
+```
+
+Enter the container and start a bash shell:
+
+```shell
+docker exec -it <container name> bash
+```
+
+Then you can execute the Linux commands and access TDengine.
+
+For detailed steps, please visit [Experience TDengine via Docker](/train-faq/docker).
+
+:::info
+Starting from 2.4.0.10, besides taosd, the TDengine docker image includes: taos, taosAdapter, taosdump, taosBenchmark, TDinsight, scripts and sample code. Once the TDengine container is started, it will start both taosAdapter and taosd automatically to support the RESTful interface.
+
+:::
+
+
+
+
+
+
+
+
+
+
+If you would like to check the source code, build the package yourself or contribute to the project, please check the [TDengine GitHub Repository](https://github.com/taosdata/TDengine).
+
+
+
+
+## Quick Launch
+
+After installation, you can launch the TDengine service with the `systemctl` command, which starts `taosd`.
+
+```bash
+systemctl start taosd
+```
+
+Check if taosd is running:
+
+```bash
+systemctl status taosd
+```
+
+If everything is fine, you can run TDengine command-line interface `taos` to access TDengine and test it out yourself.
+
+:::info
+
+- systemctl requires _root_ privileges. If you are not _root_, please add sudo before the command.
+- To get feedback and keep improving the product, TDengine collects some basic usage information, but you can turn it off by setting telemetryReporting to 0 in the configuration file taos.cfg.
+- TDengine uses FQDN (usually hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs, to ensure that the FQDN can be resolved.
+- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory is flushed to disk. This may take time depending on the cache size.
+
+TDengine supports installation on systems that run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management. Use `which systemctl` to check whether `systemd` is installed:
+
+```bash
+which systemctl
+```
+
+If the system does not have `systemd`, you can start TDengine manually by executing `/usr/local/taos/bin/taosd`.
+
+:::
+
+## Command Line Interface
+
+To manage a running TDengine instance, or to execute ad-hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as the TDengine CLI), taos. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
+
+```bash
+taos
+```
+
+If it connects to the TDengine server successfully, it will print out the version and a welcome message. If it fails, it will print out an error message; please check the [FAQ](/train-faq/faq) for troubleshooting connection issues. The TDengine CLI's prompt is:
+
+```cmd
+taos>
+```
+
+Inside the TDengine CLI, you can execute SQL commands to create/drop databases and tables, and run queries. Each SQL command must end with a semicolon. For example:
+
+```sql
+create database demo;
+use demo;
+create table t (ts timestamp, speed int);
+insert into t values ('2019-07-15 00:00:00', 10);
+insert into t values ('2019-07-15 01:00:00', 20);
+select * from t;
+ ts | speed |
+========================================
+ 2019-07-15 00:00:00.000 | 10 |
+ 2019-07-15 01:00:00.000 | 20 |
+Query OK, 2 row(s) in set (0.003128s)
+```
+
+Besides executing SQL commands, system administrators can check the running status, add/drop user accounts and manage the running instances. The TAOS CLI together with the client driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../reference/taos-shell/).
+
+## Experience the blazing fast speed
+
+After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
+
+```bash
+taosBenchmark
+```
+
+This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has the tags "location" and "groupId": groupId is set from 1 to 10 randomly, and location is set to "California.SanFrancisco" or "California.SanDiego".
+
+This command will insert 100 million rows into the database quickly. The time to insert depends on the hardware configuration; it takes only a dozen seconds on a regular PC server.
+
+taosBenchmark provides command-line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check [reference for taosBenchmark](/reference/taosbenchmark)
+
+## Experience query speed
+
+After using taosBenchmark to insert a number of rows of data, you can execute queries from the TDengine CLI to experience the lightning fast query speed.
+
+Query the total number of rows in the super table "meters":
+
+```sql
+taos> select count(*) from test.meters;
+```
+
+Query the average, maximum and minimum of 100 million rows:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters;
+```
+
+Query the total number of rows with location="California.SanFrancisco":
+
+```sql
+taos> select count(*) from test.meters where location="California.SanFrancisco";
+```
+
+Query the average, maximum and minimum of all rows with groupId=10:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+```
+
+Query the average, maximum and minimum for table d10 in 10-second time intervals:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+```
diff --git a/docs-en/07-develop/01-connect/_category_.yml b/docs-en/07-develop/01-connect/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..83f9754f582f541ca62c7ff8701698dd949c3f99
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_category_.yml
@@ -0,0 +1 @@
+label: Connect
diff --git a/docs-en/07-develop/01-connect/_connect_c.mdx b/docs-en/07-develop/01-connect/_connect_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..174bf45c4e2f26bab8f57c098f9f8f00d2f5064d
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_c.mdx
@@ -0,0 +1,3 @@
+```c title="Native Connection"
+{{#include docs-examples/c/connect_example.c}}
+```
diff --git a/docs-en/07-develop/01-connect/_connect_cs.mdx b/docs-en/07-develop/01-connect/_connect_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..52ea2d437123a26bd87e6f3fdc05a17141f9f835
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_cs.mdx
@@ -0,0 +1,8 @@
+```csharp title="Native Connection"
+{{#include docs-examples/csharp/ConnectExample.cs}}
+```
+
+:::info
+C# connector supports only native connection for now.
+
+:::
diff --git a/docs-en/07-develop/01-connect/_connect_go.mdx b/docs-en/07-develop/01-connect/_connect_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1dd5d67e3533bba21960269e49e3d843b026efc8
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_go.mdx
@@ -0,0 +1,17 @@
+#### Unified Database Access Interface
+
+```go title="Native Connection"
+{{#include docs-examples/go/connect/cgoexample/main.go}}
+```
+
+```go title="REST Connection"
+{{#include docs-examples/go/connect/restexample/main.go}}
+```
+
+#### Advanced Features
+
+The af package of driver-go can also be used to establish a connection. With this package, some advanced features of TDengine, like parameter binding and subscription, can be used.
+
+```go title="Establish native connection using af package"
+{{#include docs-examples/go/connect/afconn/main.go}}
+```
diff --git a/docs-en/07-develop/01-connect/_connect_java.mdx b/docs-en/07-develop/01-connect/_connect_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1c3e9326bf2ae597ffba683250dd43986e670469
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_java.mdx
@@ -0,0 +1,15 @@
+```java title="Native Connection"
+{{#include docs-examples/java/src/main/java/com/taos/example/JNIConnectExample.java}}
+```
+
+```java title="REST Connection"
+{{#include docs-examples/java/src/main/java/com/taos/example/RESTConnectExample.java:main}}
+```
+
+When using a REST connection, the feature of bulk pulling can be enabled if the size of the resulting data set is huge.
+
+```java title="Enable Bulk Pulling" {4}
+{{#include docs-examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
+```
+
+For more configuration details about the connection, please refer to the [Java Connector](/reference/connector/java).
diff --git a/docs-en/07-develop/01-connect/_connect_node.mdx b/docs-en/07-develop/01-connect/_connect_node.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..489b0386e991ee1e8ddd173205637b75ae5a0c95
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_node.mdx
@@ -0,0 +1,7 @@
+```js title="Native Connection"
+{{#include docs-examples/node/nativeexample/connect.js}}
+```
+
+```js title="REST Connection"
+{{#include docs-examples/node/restexample/connect.js}}
+```
diff --git a/docs-en/07-develop/01-connect/_connect_python.mdx b/docs-en/07-develop/01-connect/_connect_python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..44b7586fadbf618231fce7753d3b4b68853a7f57
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_python.mdx
@@ -0,0 +1,3 @@
+```python title="Native Connection"
+{{#include docs-examples/python/connect_example.py}}
+```
diff --git a/docs-en/07-develop/01-connect/_connect_r.mdx b/docs-en/07-develop/01-connect/_connect_r.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..09c3d71ac35b1134d3089247daea9a13db4129e2
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_r.mdx
@@ -0,0 +1,3 @@
+```r title="Native Connection"
+{{#include docs-examples/R/connect_native.r:demo}}
+```
diff --git a/docs-en/07-develop/01-connect/_connect_rust.mdx b/docs-en/07-develop/01-connect/_connect_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..aa19f58de6c9bab69df0663e5369402ab1a8f899
--- /dev/null
+++ b/docs-en/07-develop/01-connect/_connect_rust.mdx
@@ -0,0 +1,8 @@
+```rust title="Native Connection/REST Connection"
+{{#include docs-examples/rust/nativeexample/examples/connect.rs}}
+```
+
+:::note
+For Rust connector, the connection depends on the feature being used. If "rest" feature is enabled, then only the implementation for "rest" is compiled and packaged.
+
+:::
diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9217b828d0d08c4ff1eacd27406d4e3bfba8eac
--- /dev/null
+++ b/docs-en/07-develop/01-connect/index.md
@@ -0,0 +1,240 @@
+---
+sidebar_label: Connect
+title: Connect
+description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import ConnJava from "./\_connect_java.mdx";
+import ConnGo from "./\_connect_go.mdx";
+import ConnRust from "./\_connect_rust.mdx";
+import ConnNode from "./\_connect_node.mdx";
+import ConnPythonNative from "./\_connect_python.mdx";
+import ConnCSNative from "./\_connect_cs.mdx";
+import ConnC from "./\_connect_c.mdx";
+import ConnR from "./\_connect_r.mdx";
+import InstallOnWindows from "../../14-reference/03-connector/\_windows_install.mdx";
+import InstallOnLinux from "../../14-reference/03-connector/\_linux_install.mdx";
+import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
+import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
+
+Any application running on any platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, applications can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/).
+
+## Establish Connection
+
+There are two ways for a connector to establish connections to TDengine:
+
+1. Connection through the REST API provided by the taosAdapter component; this is hereinafter called a "REST connection".
+2. Connection through the TDengine client driver (taosc); this is hereinafter called a "native connection".
+
+Key differences:
+
+1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
+2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
+3. The REST connection is more accessible, with cross-platform support; however, it results in a 30% performance downgrade.
+
+## Install Client Driver taosc
+
+If you choose to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If you choose to use the REST connection, or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
+
+### Install
+
+
+
+
+
+
+
+
+
+
+### Verify
+
+After the above installation and configuration are done, and after making sure the TDengine service has been started and is in service, the TDengine command-line interface `taos` can be launched to access TDengine.
+
+
+
+
+
+
+
+
+
+
+## Install Connectors
+
+
+
+
+If `maven` is used to manage the project, you only need to add the following dependency in `pom.xml`.
+
+```xml
+<dependency>
+  <groupId>com.taosdata.jdbc</groupId>
+  <artifactId>taos-jdbcdriver</artifactId>
+  <version>2.0.38</version>
+</dependency>
+```
+
+
+
+
+Install from PyPI using `pip`:
+
+```
+pip install taospy
+```
+
+Install from Git URL:
+
+```
+pip install git+https://github.com/taosdata/taos-connector-python.git
+```
+
+
+
+
+You only need to add the `driver-go` dependency in `go.mod`.
+
+```go-mod title=go.mod
+module goexample
+
+go 1.17
+
+require github.com/taosdata/driver-go/v2 develop
+```
+
+:::note
+`driver-go` uses `cgo` to wrap the APIs provided by taosc, and `cgo` needs `gcc` to compile C source code, so please make sure you have a working `gcc` on your system.
+
+:::
+
+
+
+
+You only need to add the `libtaos` dependency in `Cargo.toml`.
+
+```toml title=Cargo.toml
+[dependencies]
+libtaos = { version = "0.4.2"}
+```
+
+:::info
+The Rust connector uses different features to distinguish the way a connection is established. To establish a REST connection, please enable the `rest` feature:
+
+```toml
+libtaos = { version = "*", features = ["rest"] }
+```
+
+:::
+
+
+
+
+The Node.js connector provides different packages for establishing different kinds of connections.
+
+1. Install Node.js Native Connector
+
+```
+npm i td2.0-connector
+```
+
+:::note
+It's recommended to use a Node.js version between `node-v12.8.0` and `node-v13.0.0`.
+:::
+
+2. Install Node.js REST Connector
+
+```
+npm i td2.0-rest-connector
+```
+
+
+
+
+You only need to add a reference to [TDengine.Connector](https://www.nuget.org/packages/TDengine.Connector/) in the project configuration file.
+
+```xml title=csharp.csproj {12}
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net6.0</TargetFramework>
+    <ImplicitUsings>enable</ImplicitUsings>
+    <Nullable>enable</Nullable>
+    <StartupObject>TDengineExample.AsyncQueryExample</StartupObject>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="TDengine.Connector" Version="*" />
+    <!-- the exact version was elided in the source; pin to a released version in practice -->
+  </ItemGroup>
+
+</Project>
+```
+
+Or add by `dotnet` command.
+
+```
+dotnet add package TDengine.Connector
+```
+
+:::note
+The sample code below is based on dotnet 6.0; it may need to be adjusted if your dotnet version is different.
+
+:::
+
+
+
+
+1. Download [taos-jdbcdriver-2.0.38-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/2.0.38/).
+2. Install the dependency package `RJDBC`:
+
+```R
+install.packages("RJDBC")
+```
+
+
+
+
+If the client driver (taosc) is already installed, then the C connector is already available.
+
+
+
+
+
+## Establish Connection
+
+Prior to establishing a connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with the FQDN configured to "localhost" and the serverPort configured to "6030".
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+:::tip
+If the connection fails, in most cases it's caused by an improper FQDN or firewall configuration. Please refer to the section "Unable to establish connection" in the [FAQ](https://docs.taosdata.com/train-faq/faq).
+
+:::
diff --git a/docs-en/07-develop/02-model/_category_.yml b/docs-en/07-develop/02-model/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..a2b49eb879c593b29cba1b1bfab3f5b2b615c1e6
--- /dev/null
+++ b/docs-en/07-develop/02-model/_category_.yml
@@ -0,0 +1,2 @@
+label: Data Model
+
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..86853aaaa3f7285fe042a892e2ec903d57894111
--- /dev/null
+++ b/docs-en/07-develop/02-model/index.mdx
@@ -0,0 +1,93 @@
+---
+title: Data Model
+---
+
+The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
+
+## Create Database
+
+The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others, which determine how you create and configure the database. For example, days to keep, number of replicas, data block size, and whether data updates are allowed are configurable parameters determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, many parameters can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, the time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file and so on. Below is an example of the SQL statement to create a database.
+
+```sql
+CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
+```
+
+In the above SQL statement:
+- a database named "power" will be created
+- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
+- a new data file will be created every 10 days
+- the number of memory blocks is 6
+- data is allowed to be updated
+
+For more details please refer to [Database](/taos-sql/database).
+
+After creating a database, the current database in use can be switched using the SQL command `USE`. For example, the SQL statement below switches the current database to `power`. Without a current database specified, table names must be preceded with the corresponding database name.
+
+```sql
+USE power;
+```
+
+:::note
+
+- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
+- JOIN operations can't be performed on tables from two different databases.
+- Timestamp needs to be specified when inserting rows or querying historical rows.
+
+:::
+
+## Create STable
+
+In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the SQL statement below can be used to create the super table.
+
+```sql
+CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+```
+
+:::note
+If you are using versions prior to 2.0.15, the `STable` keyword needs to be replaced with `TABLE`.
+
+:::
+
+Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
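+
+As an illustration only, the tag schema of a STable could later be adjusted with statements like the following sketch (the tag name here is made up):
+
+```sql
+-- Add a hypothetical tag to the STable, then drop it again.
+ALTER STABLE meters ADD TAG voltagetype BINARY(20);
+ALTER STABLE meters DROP TAG voltagetype;
+```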
+
+For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For the electrical power system, we need to create STables for meters, transformers, busbars and switches respectively. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage, and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for such devices.
+
+At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, and one or more STables can exist in a database.
+
+## Create Table
+
+A specific table needs to be created for each data collection point. Similar to RDBMS, a table name and schema are required to create a table. Additionally, one or more tag values need to be specified for each table. To create a table, a STable is used as the template and the values are specified for its tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
+
+```sql
+CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
+```
+
+In the above SQL statement, "d1001" is the table name and "meters" is the STable name, followed by the values of the tags "location" and "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
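+
+For instance, assuming the table d1001 above, a tag value could later be changed with a sketch like this:
+
+```sql
+-- Move meter d1001 to group 3 by updating its tag value.
+ALTER TABLE d1001 SET TAG groupId=3;
+```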
+
+In the TDengine system, it's recommended to create a table for a data collection point via a STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
+
+:::warning
+It's not recommended to create a table in a database while using a STable from another database as template.
+
+:::
+
+:::tip
+It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as a tag value.
+
+:::
+
+## Create Table Automatically
+
+In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
+
+```sql
+INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
+```
+
+In the above SQL statement, a row with the value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as the template, with the tag values `"California.SanFrancisco", 2`.
+
+For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting).
+
+## Single Column vs Multiple Column
+
+A multi-column data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.
+
+However, there is another kind of design, the single-column data model, in which a table is created for each metric. This means that a STable is required for each kind of metric. For example, in a single-column model, 3 STables would be required for current, voltage and phase.
+
+It's recommended to use a multi-column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and the corresponding STable schema would need to be changed frequently too. In such cases, it's more convenient to use a single-column data model.
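+
+To make the contrast concrete, below is a hypothetical sketch of the two designs for the smart meter example. The multi-column variant matches the `meters` STable used throughout this chapter; the single-column variant needs one STable per metric (the `*_sc` names are made up for illustration).
+
+```sql
+-- Multi-column model: metrics sampled together live in one STable.
+CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
+  TAGS (location BINARY(64), groupId INT);
+
+-- Single-column model (hypothetical): one STable per metric.
+CREATE STABLE current_sc (ts TIMESTAMP, value FLOAT) TAGS (location BINARY(64), groupId INT);
+CREATE STABLE voltage_sc (ts TIMESTAMP, value INT) TAGS (location BINARY(64), groupId INT);
+CREATE STABLE phase_sc (ts TIMESTAMP, value FLOAT) TAGS (location BINARY(64), groupId INT);
+```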
diff --git a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..397b1a14fd76c1372c79eb88575f2bf21cb62050
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
@@ -0,0 +1,130 @@
+---
+sidebar_label: Insert Using SQL
+title: Insert Using SQL
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaSQL from "./_java_sql.mdx";
+import JavaStmt from "./_java_stmt.mdx";
+import PySQL from "./_py_sql.mdx";
+import PyStmt from "./_py_stmt.mdx";
+import GoSQL from "./_go_sql.mdx";
+import GoStmt from "./_go_stmt.mdx";
+import RustSQL from "./_rust_sql.mdx";
+import RustStmt from "./_rust_stmt.mdx";
+import NodeSQL from "./_js_sql.mdx";
+import NodeStmt from "./_js_stmt.mdx";
+import CsSQL from "./_cs_sql.mdx";
+import CsStmt from "./_cs_stmt.mdx";
+import CSQL from "./_c_sql.mdx";
+import CStmt from "./_c_stmt.mdx";
+
+## Introduction
+
+Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can also be used to insert data manually.
+
+### Insert Single Row
+
+The below SQL statement is used to insert one row into table "d1001".
+
+```sql
+INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
+```
+
+### Insert Multiple Rows
+
+Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
+
+```sql
+INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
+```
+
+### Insert into Multiple Tables
+
+Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
+
+```sql
+INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
+```
+
+For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
+
+:::info
+
+- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
+- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
+
+:::
+
+:::warning
+
+- If the timestamp of the row to be inserted already exists in the table, the behavior depends on the value of the parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
+- The timestamp to be inserted must be newer than the current time minus the parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days can't be inserted. The timestamp to be inserted also can't be newer than the current time plus the parameter `DAYS`. If `DAYS` is set to 2, data with a timestamp more than 2 days in the future can't be inserted.
+
+:::
+
+## Examples
+
+### Insert Using SQL
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+:::note
+
+1. With either native connection or REST connection, the above samples can work well.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
+
+:::
+
+### Insert with Parameter Binding
+
+TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. Starting with versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has improved significantly, improving insert performance by avoiding the cost of parsing SQL statements.
+
+Parameter binding is available only with a native connection.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx b/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..be46ebf0c97a29b57c1b57eb8ea5c9394f85b93a
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
@@ -0,0 +1,70 @@
+---
+sidebar_label: InfluxDB Line Protocol
+title: InfluxDB Line Protocol
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaLine from "./_java_line.mdx";
+import PyLine from "./_py_line.mdx";
+import GoLine from "./_go_line.mdx";
+import RustLine from "./_rust_line.mdx";
+import NodeLine from "./_js_line.mdx";
+import CsLine from "./_cs_line.mdx";
+import CLine from "./_c_line.mdx";
+
+## Introduction
+
+In the InfluxDB Line protocol format, a single line of text is used to represent one row of data. Each line contains 4 parts as shown below.
+
+```
+measurement,tag_set field_set timestamp
+```
+
+- `measurement` will be used as the name of the STable
+- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`
+- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`
+- `timestamp` is the primary key timestamp corresponding to this row of data
+
+For example:
+
+```
+meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
+```
+
+:::note
+
+- All the data in `tag_set` will be converted to nchar type automatically.
+- Each value in `field_set` must be self-descriptive of its data type. For example, `1.2f32` means 1.2 of float type; without a type suffix, a numeric value will be treated as type double.
+- Multiple kinds of precision can be used for the `timestamp` field, ranging from nanoseconds (ns) to hours (h).
+
+:::
+
+For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol).
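+
+As a reference for what the connector call looks like, below is a minimal sketch using the C connector's schemaless API. It assumes a database named `test` already exists and keeps error handling to a single check; the complete, runnable per-language examples follow.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+int main() {
+  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
+  // One line of InfluxDB line protocol; the timestamp is in microseconds.
+  char *lines[] = {
+      "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500"};
+  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_LINE_PROTOCOL,
+                                         TSDB_SML_TIMESTAMP_MICRO_SECONDS);
+  if (taos_errno(res) != 0) {
+    printf("schemaless insert failed: %s\n", taos_errstr(res));
+  }
+  taos_free_result(res);
+  taos_close(taos);
+  taos_cleanup();
+  return 0;
+}
+```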
+
+
+## Examples
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..18a695cda8efbef075451ff53e542d9e69c58e0b
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
@@ -0,0 +1,84 @@
+---
+sidebar_label: OpenTSDB Line Protocol
+title: OpenTSDB Line Protocol
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaTelnet from "./_java_opts_telnet.mdx";
+import PyTelnet from "./_py_opts_telnet.mdx";
+import GoTelnet from "./_go_opts_telnet.mdx";
+import RustTelnet from "./_rust_opts_telnet.mdx";
+import NodeTelnet from "./_js_opts_telnet.mdx";
+import CsTelnet from "./_cs_opts_telnet.mdx";
+import CTelnet from "./_c_opts_telnet.mdx";
+
+## Introduction
+
+In the OpenTSDB line protocol, a single line of text represents one row of data. OpenTSDB employs a single-column data model, so each line can contain only a single metric value, but there can be multiple tags. Each line contains 4 parts as below:
+
+```
+<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
+```
+
+- `metric` will be used as the STable name.
+- `timestamp` is the timestamp of the current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precisions are supported.
+- `value` is the collected metric value, which must be numeric; the corresponding column name is "value".
+- The last part is the tag set, separated by spaces; all tags will be converted to nchar type automatically.
+
+For example:
+
+```txt
+meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
+```
+
+Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
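+
+The connector call is the same schemaless API as for the InfluxDB line protocol, just with a different protocol constant. As a minimal sketch, assuming `taos` is an established connection:
+
+```c
+// One row in OpenTSDB telnet format; the precision is inferred from the timestamp length.
+char *lines[] = {"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3"};
+TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_TELNET_PROTOCOL,
+                                       TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
+```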
+
+## Examples
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+In the above sample code, 2 STables will be created automatically and each STable has 4 rows of data.
+
+```cmd
+taos> use test;
+Database changed.
+
+taos> show STables;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
+ meters.voltage | 2022-03-30 17:04:10.882 | 2 | 2 | 2 |
+Query OK, 2 row(s) in set (0.002544s)
+
+taos> select tbname, * from `meters.current`;
+ tbname | ts | value | groupid | location |
+==================================================================================================================================
+ t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
+ t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LosAngeles |
+ t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
+ t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
+Query OK, 4 row(s) in set (0.005399s)
+```
diff --git a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3a239440311c736159d6060db5e730c5e5665bcb
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
@@ -0,0 +1,99 @@
+---
+sidebar_label: OpenTSDB JSON Protocol
+title: OpenTSDB JSON Protocol
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaJson from "./_java_opts_json.mdx";
+import PyJson from "./_py_opts_json.mdx";
+import GoJson from "./_go_opts_json.mdx";
+import RustJson from "./_rust_opts_json.mdx";
+import NodeJson from "./_js_opts_json.mdx";
+import CsJson from "./_cs_opts_json.mdx";
+import CJson from "./_c_opts_json.mdx";
+
+## Introduction
+
+A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
+
+```json
+[
+ {
+ "metric": "sys.cpu.nice",
+ "timestamp": 1346846400,
+ "value": 18,
+ "tags": {
+ "host": "web01",
+ "dc": "lga"
+ }
+ },
+ {
+ "metric": "sys.cpu.nice",
+ "timestamp": 1346846400,
+ "value": 9,
+ "tags": {
+ "host": "web02",
+ "dc": "lga"
+ }
+ }
+]
+```
+
+Similar to the OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp to be used, `value` represents the collected metric, and `tags` is the tag set.
+
+
+Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.
+
+:::note
+- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
+- Only data in array format is accepted and so an array must be used even if there is only one row.
+
+:::
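+
+At the connector level, the same schemaless API is used, with the whole JSON array passed as a single element. As a minimal sketch, assuming `taos` is an established connection:
+
+```c
+// The entire JSON document (an array, even for a single row) is one element of `lines`.
+char *lines[] = {
+    "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3,"
+    " \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}]"};
+TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_JSON_PROTOCOL,
+                                       TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
+```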
+
+## Examples
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+The above sample code will create 2 STables automatically, each with 2 rows of data.
+
+```cmd
+taos> use test;
+Database changed.
+
+taos> show STables;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
+ meters.voltage | 2022-03-29 16:05:25.200 | 2 | 2 | 1 |
+Query OK, 2 row(s) in set (0.001954s)
+
+taos> select * from `meters.current`;
+ ts | value | groupid | location |
+===================================================================================================================
+ 2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | California.SanFrancisco |
+ 2022-03-28 09:56:51.250 | 12.600000000 | 2.000000000 | California.SanFrancisco |
+Query OK, 2 row(s) in set (0.004076s)
+```
diff --git a/docs-en/07-develop/03-insert-data/_c_line.mdx b/docs-en/07-develop/03-insert-data/_c_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..5ef2e9af774c54e9f090357286f83d2280c2ab11
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_c_line.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/line_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_json.mdx b/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..22ad2e0122797248a372734aac0f3a16a1356530
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/json_protocol_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..508d7bc98a149f49766bcd0a474ffe226cbe30bb
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/telnet_line_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_sql.mdx b/docs-en/07-develop/03-insert-data/_c_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..f4153fd2c427677a338d0c377663d0335f2672f0
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_c_sql.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/insert_example.c}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_stmt.mdx b/docs-en/07-develop/03-insert-data/_c_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..7f5ef23a849689c36e732b6fd374a131695c9090
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_c_stmt.mdx
@@ -0,0 +1,6 @@
+```c title=Single Row Binding
+{{#include docs-examples/c/stmt_example.c}}
+```
+```c title=Multiple Row Binding 72:117
+{{#include docs-examples/c/multi_bind_example.c}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_category_.yml b/docs-en/07-develop/03-insert-data/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..e515d60e09ec44894e2c42f38fee74fe4286e17f
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_category_.yml
@@ -0,0 +1 @@
+label: Insert Data
diff --git a/docs-en/07-develop/03-insert-data/_cs_line.mdx b/docs-en/07-develop/03-insert-data/_cs_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..9c275ee3d7c7a1e52fbb34dbae922004543ee3ce
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_cs_line.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/InfluxDBLineExample.cs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3d538b8506b298241faecd8098f89571359135c9
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/OptsJsonExample.cs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c53bf3d7233115351e5af03b7d9e6318aa4a0da6
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/OptsTelnetExample.cs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_cs_sql.mdx b/docs-en/07-develop/03-insert-data/_cs_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c7688bfbe77a1135424d829fe9b29fbb1bc93ae2
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_cs_sql.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/SQLInsertExample.cs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_cs_stmt.mdx b/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..97c3b910ffeb9e0c88fc143a02014115e819c147
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/StmtInsertExample.cs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_go_line.mdx b/docs-en/07-develop/03-insert-data/_go_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cd225945b70e28bef2ca7fdaf0d9be0ad7ffc18c
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_go_line.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/insert/line/main.go}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_json.mdx b/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..0c0d3e5b6330e046988cdd02234285ec67e92f01
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/insert/json/main.go}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d5ca40cc146e62412476289853e8e2739e0e9e4b
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/insert/telnet/main.go}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_go_sql.mdx b/docs-en/07-develop/03-insert-data/_go_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..613a65add1741eb763a4b24e65d180d05f7d670f
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_go_sql.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/insert/sql/main.go}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_go_stmt.mdx b/docs-en/07-develop/03-insert-data/_go_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c32bc21fb9bcaf45059e4f47df73fb57f047ed1c
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_go_stmt.mdx
@@ -0,0 +1,8 @@
+```go
+{{#include docs-examples/go/insert/stmt/main.go}}
+```
+
+:::tip
+`github.com/taosdata/driver-go/v2/wrapper` module in driver-go is the wrapper for C API, it can be used to insert data with parameter binding.
+
+:::
diff --git a/docs-en/07-develop/03-insert-data/_java_line.mdx b/docs-en/07-develop/03-insert-data/_java_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..2e59a5d4701b2a2ab04ec5711845dc5c80067a1e
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_java_line.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_json.mdx b/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..826a1a07d9405cb193849f9d21e5444f68517914
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..954dcc1a482a150dea0b190e1e0593adbfbde796
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_java_sql.mdx b/docs-en/07-develop/03-insert-data/_java_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a863378defe43b1f22c1f98087a34f053a7d6619
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_java_sql.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java:insert}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_java_stmt.mdx b/docs-en/07-develop/03-insert-data/_java_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..54443e535fa84bdf8dc9161ed4ad00f50b26266c
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_java_stmt.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_js_line.mdx b/docs-en/07-develop/03-insert-data/_js_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..172c9bc17b8cff8b2620720b235a9c8e69bd4197
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_js_line.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/influxdb_line_example.js}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_json.mdx b/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..20ac9ec91e8dc6675828b16d7da0acb09afd3b5f
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/opentsdb_json_example.js}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c3c8c40bd642f4f443de88e3db006ad50724d514
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/opentsdb_telnet_example.js}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_js_sql.mdx b/docs-en/07-develop/03-insert-data/_js_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..f5e17c76892a57a94192a95451b508b1c176c984
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_js_sql.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/insert_example.js}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_js_stmt.mdx b/docs-en/07-develop/03-insert-data/_js_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..964d7ddc11b90031b70936efb85fbaabe873ddbb
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_js_stmt.mdx
@@ -0,0 +1,12 @@
+```js title=Single Row Binding
+{{#include docs-examples/node/nativeexample/param_bind_example.js}}
+```
+
+```js title=Multiple Row Binding
+{{#include docs-examples/node/nativeexample/multi_bind_example.js:insertData}}
+```
+
+:::info
+Multiple row binding is better in performance than single row binding, but it can only be used with `INSERT` statement while single row binding can be used for other SQL statements besides `INSERT`.
+
+:::
diff --git a/docs-en/07-develop/03-insert-data/_py_line.mdx b/docs-en/07-develop/03-insert-data/_py_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d3bb1ebb3403b53fa43bfc9d5d1a0de9764d7583
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_py_line.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs-examples/python/line_protocol_example.py}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_json.mdx b/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cfbfe13ccfdb4f3f34b77300812863fdf70d0f59
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs-examples/python/json_protocol_example.py}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..14bc65a7a3da815abadf7f25c8deffeac666c8d7
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs-examples/python/telnet_line_protocol_example.py}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_py_sql.mdx b/docs-en/07-develop/03-insert-data/_py_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c0e15b8ec115b9244d50a47c9eafec04bcfdd70c
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_py_sql.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs-examples/python/native_insert_example.py}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_py_stmt.mdx b/docs-en/07-develop/03-insert-data/_py_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..16d98f54329ad0d3dfb463392f5c1d41c9aab25b
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_py_stmt.mdx
@@ -0,0 +1,12 @@
+```py title=Single Row Binding
+{{#include docs-examples/python/bind_param_example.py}}
+```
+
+```py title=Multiple Row Binding
+{{#include docs-examples/python/multi_bind_example.py:bind_batch}}
+```
+
+:::info
+Multiple row binding is better in performance than single row binding, but it can only be used with `INSERT` statement while single row binding can be used for other SQL statements besides `INSERT`.
+
+:::
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_rust_line.mdx b/docs-en/07-develop/03-insert-data/_rust_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..696ddb7b854751b8dee01047066f97f74212933f
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_rust_line.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..97d9052dacd1894cc7548a59951ecfaad9caee87
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..14021f43d8aff30c35dc30c5d278d4e51f375024
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_rust_sql.mdx b/docs-en/07-develop/03-insert-data/_rust_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..8e8013e4ad734efcc262ea2f750b82210a538e49
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_rust_sql.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/restexample/examples/insert_example.rs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/_rust_stmt.mdx b/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..590a7a0e717426ed0235331c49dfc578bc55b2f7
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/nativeexample/examples/stmt_example.rs}}
+```
diff --git a/docs-en/07-develop/03-insert-data/index.md b/docs-en/07-develop/03-insert-data/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a71e719a56448e4b535632e570ce8a04d2282bb
--- /dev/null
+++ b/docs-en/07-develop/03-insert-data/index.md
@@ -0,0 +1,12 @@
+---
+title: Insert Data
+---
+
+TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row or in batches, from one or more data collection points simultaneously, and with multiple threads; out-of-order and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol are the three schemaless insert protocols supported by TDengine. With schemaless protocols, it's not necessary to create STables and tables in advance; the schemas are adjusted automatically based on the data being inserted.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs-en/07-develop/04-query-data/_c.mdx b/docs-en/07-develop/04-query-data/_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..76c9067e2f6af19465cf7c52c3e9b48bb868547d
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_c.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/query_example.c}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_c_async.mdx b/docs-en/07-develop/04-query-data/_c_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..09f3d3b3ff6d6644f837642ef41db459ba7c5753
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_c_async.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/async_query_example.c:demo}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_category_.yml b/docs-en/07-develop/04-query-data/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..809db34621a63505ceace7ba182e07c698bdbddb
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_category_.yml
@@ -0,0 +1 @@
+label: Query Data
diff --git a/docs-en/07-develop/04-query-data/_cs.mdx b/docs-en/07-develop/04-query-data/_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..2ab52feb564eff0fe251bc9900ea2539171e5dba
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_cs.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/QueryExample.cs}}
+```
diff --git a/docs-en/07-develop/04-query-data/_cs_async.mdx b/docs-en/07-develop/04-query-data/_cs_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..f868994b303e62016b5e2f9304275135855c6ae5
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_cs_async.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/AsyncQueryExample.cs}}
+```
diff --git a/docs-en/07-develop/04-query-data/_go.mdx b/docs-en/07-develop/04-query-data/_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..417c12315c06517e2f3de850ac9a379b7714b519
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_go.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/query/sync/main.go}}
+```
diff --git a/docs-en/07-develop/04-query-data/_go_async.mdx b/docs-en/07-develop/04-query-data/_go_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..72fff411b980a0dcbdcaf4274722c63e0351db6f
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_go_async.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/query/async/main.go}}
+```
diff --git a/docs-en/07-develop/04-query-data/_java.mdx b/docs-en/07-develop/04-query-data/_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..519b9266144486231caf3ee593e973d438941ee4
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_java.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java}}
+```
diff --git a/docs-en/07-develop/04-query-data/_js.mdx b/docs-en/07-develop/04-query-data/_js.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c5e4c4f3fc20d3940a2bc6e13e6a5dea8a15ff13
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_js.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/query_example.js}}
+```
diff --git a/docs-en/07-develop/04-query-data/_js_async.mdx b/docs-en/07-develop/04-query-data/_js_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c65d54ed12f6c4bbeb333e0de0ba9ca4638bff84
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_js_async.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/async_query_example.js}}
+```
diff --git a/docs-en/07-develop/04-query-data/_py.mdx b/docs-en/07-develop/04-query-data/_py.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..aeae42a15e5c39b7e9d227afc424e77658109705
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_py.mdx
@@ -0,0 +1,11 @@
+Result set is iterated row by row.
+
+```py
+{{#include docs-examples/python/query_example.py:iter}}
+```
+
+Result set is retrieved as a whole, each row is converted to a dict and returned.
+
+```py
+{{#include docs-examples/python/query_example.py:fetch_all}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_py_async.mdx b/docs-en/07-develop/04-query-data/_py_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ed6880ae64e59a860e7dc75a5d3c1ad5d2614d01
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_py_async.mdx
@@ -0,0 +1,8 @@
+```py
+{{#include docs-examples/python/async_query_example.py}}
+```
+
+:::note
+This sample code can't be run on Windows systems for now.
+
+:::
diff --git a/docs-en/07-develop/04-query-data/_rust.mdx b/docs-en/07-develop/04-query-data/_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..742d70fd025ff44b573eedf78441c9d73defad45
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/_rust.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs-examples/rust/restexample/examples/query_example.rs}}
+```
diff --git a/docs-en/07-develop/04-query-data/index.mdx b/docs-en/07-develop/04-query-data/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a212fa9529215fc24c55c95a166cfc1a407359b2
--- /dev/null
+++ b/docs-en/07-develop/04-query-data/index.mdx
@@ -0,0 +1,186 @@
+---
+sidebar_label: Query Data
+title: Query data
+description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaQuery from "./_java.mdx";
+import PyQuery from "./_py.mdx";
+import GoQuery from "./_go.mdx";
+import RustQuery from "./_rust.mdx";
+import NodeQuery from "./_js.mdx";
+import CsQuery from "./_cs.mdx";
+import CQuery from "./_c.mdx";
+import PyAsync from "./_py_async.mdx";
+import NodeAsync from "./_js_async.mdx";
+import CsAsync from "./_cs_async.mdx";
+import CAsync from "./_c_async.mdx";
+
+## Introduction
+
+SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
+
+- Query on single column or multiple columns
+- Filter on tags or data columns: >, <, =, <\>, like
+- Grouping of results: `Group By`
+- Sorting of results: `Order By`
+- Limit the number of results: `Limit/Offset`
+- Arithmetic on columns of numeric types or aggregate results
+- Join query with timestamp alignment
+- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
+
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
+
+```sql
+select * from d1001 where voltage > 215 order by ts desc limit 2;
+```
+
+```title=Output
+taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
+ ts | current | voltage | phase |
+======================================================================================
+ 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
+ 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
+Query OK, 2 row(s) in set (0.001100s)
+```
+
+To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
+
+For detailed query syntax please refer to [Select](/taos-sql/select).
+
+## Aggregation among Tables
+
+In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
+
+In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated.
+
+### Example 1
+
+In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
+
+```
+taos> SELECT AVG(voltage) FROM meters GROUP BY location;
+ avg(voltage) | location |
+=============================================================
+ 222.000000000 | California.LosAngeles |
+ 219.200000000 | California.SanFrancisco |
+Query OK, 2 row(s) in set (0.002136s)
+```
+
+### Example 2
+
+In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
+
+```
+taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
+ count(*) | max(current) |
+==================================
+ 5 | 13.4 |
+Query OK, 1 row(s) in set (0.002136s)
+```
+
+Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked to indicate whether they support STables or not.
+
+## Down Sampling and Interpolation
+
+In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from table d1001.
+
+```
+taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:00.000 | 10.300000191 |
+ 2018-10-03 14:38:10.000 | 24.900000572 |
+Query OK, 2 row(s) in set (0.000883s)
+```
+
+Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
+
+```
+taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:04.000 | 10.199999809 |
+ 2018-10-03 14:38:05.000 | 32.900000572 |
+ 2018-10-03 14:38:06.000 | 11.500000000 |
+ 2018-10-03 14:38:15.000 | 12.600000381 |
+ 2018-10-03 14:38:16.000 | 36.000000000 |
+Query OK, 5 row(s) in set (0.001538s)
+```
+
+Down sampling also supports a time offset. For example, the SQL statement below can be used to get the sum of current from all meters, with each time window shifted by an offset of 500 milliseconds.
+
+```
+taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:04.500 | 11.189999809 |
+ 2018-10-03 14:38:05.500 | 31.900000572 |
+ 2018-10-03 14:38:06.500 | 11.600000000 |
+ 2018-10-03 14:38:15.500 | 12.300000381 |
+ 2018-10-03 14:38:16.500 | 35.000000000 |
+Query OK, 5 row(s) in set (0.001521s)
+```
+
+In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, many algorithms, like FFT, require the data to be aligned on the same time interval, and application programs traditionally have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
+
+If there is no data in a time range, interpolation can be performed in TDengine using the `FILL` clause of an `INTERVAL` query; for example, `FILL(PREV)` fills an empty time window with the previous non-empty value.
+
+For more details please refer to [Aggregate by Window](/taos-sql/interval).
+
+## Examples
+
+### Query
+
+In the section describing [Insert](/develop/insert-data/sql-writing), a database named `power` is created and some data are inserted into STable `meters`. The sample code below demonstrates how to query the data in this STable.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+:::note
+
+1. The above sample code works with either a REST connection or a native connection.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless.
+
+:::
+
+### Asynchronous Query
+
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. On comparable hardware and software environments, the async API is 2 to 4 times faster than the sync APIs. The async API works in non-blocking mode: a call returns before the operation finishes, so the calling thread can switch to other work, which improves the performance of the whole application. Async APIs perform especially well over poor networks.
+
+Please note that async query can only be used with a native connection.
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1aea5783fc8116a4e02a4b5345d341707cd399ea
--- /dev/null
+++ b/docs-en/07-develop/05-continuous-query.mdx
@@ -0,0 +1,83 @@
+---
+sidebar_label: Continuous Query
+description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
+title: "Continuous Query"
+---
+
+A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time-driven stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding step need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.
+
+A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
+
+There are some differences between continuous query in TDengine and time window computation in stream computing:
+
+- In stream computing, the computation is performed and the result is returned in real time, whereas the computation in a continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
+- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
+- In a continuous query, if the result is pushed to a client, the client status is not cached on the server side and exactly-once semantics are not guaranteed by the server. If the client program crashes, a new time window will be generated starting from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed to be valid and continuous.
+
+## Syntax
+
+```sql
+[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
+ FROM {tb_name_list}
+ [WHERE where_condition]
+ [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
+
+```
+
+INTERVAL: The size of the time window over which the continuous query is performed
+
+SLIDING: The step by which the time window moves forward each time
+
+## How to Use
+
+In this section, the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
+
+```sql
+create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
+create table D1001 using meters tags ("California.SanFrancisco", 2);
+create table D1002 using meters tags ("California.LosAngeles", 2);
+```
+
+The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
+
+```sql
+select avg(voltage) from meters interval(1m) sliding(30s);
+```
+
+Whenever the above SQL statement is executed, all the existing data will be computed again. If the computation needs to be performed automatically every 30 seconds on the data of the past one minute, the above SQL statement needs to be revised as below, in which `{startTime}` stands for the beginning timestamp of the latest time window.
+
+```sql
+select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
+```
+
+An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.
+
+```sql
+create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
+```
+
+A table named `avg_vol` will be created automatically, and then every 30 seconds the `select` statement will be executed on the data of the past 1 minute, i.e. the latest time window, with the result written into table `avg_vol`. The client program just needs to query table `avg_vol`. For example:
+
+```sql
+taos> select * from avg_vol;
+ ts | avg_voltage_ |
+===================================================
+ 2020-07-29 13:37:30.000 | 222.0000000 |
+ 2020-07-29 13:38:00.000 | 221.3500000 |
+ 2020-07-29 13:38:30.000 | 220.1700000 |
+ 2020-07-29 13:39:00.000 | 223.0800000 |
+```
+
+Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
+
+It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
+
+```sql
+create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
+```
+
+`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To minimize problems caused by delays in receiving data, the actual computation of a continuous query is started after a small delay. That means that once a time window closes, the computation is not started immediately; the results are normally available within a short time, typically less than one minute, after the time window closes.
+
+## How to Manage
+
+The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and the `kill stream` command can be used to terminate a continuous query.
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..782fcdbaf221419dd231bd10958e26b8f4f856e5
--- /dev/null
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -0,0 +1,259 @@
+---
+sidebar_label: Data Subscription
+description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
+title: Data Subscription
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import Java from "./_sub_java.mdx";
+import Python from "./_sub_python.mdx";
+import Go from "./_sub_go.mdx";
+import Rust from "./_sub_rust.mdx";
+import Node from "./_sub_node.mdx";
+import CSharp from "./_sub_cs.mdx";
+import CDemo from "./_sub_c.mdx";
+
+## Introduction
+
+Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
+
+A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
+
+There are 3 major APIs related to subscription provided in the TDengine client driver.
+
+```c
+taos_subscribe
+taos_consume
+taos_unsubscribe
+```
+
+For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
+
+If we want to be notified and take some action whenever the current from some meters exceeds a threshold, like 10A, there are two ways:
+
+The first way is to query each subtable and record the last timestamp matching the criteria. Then, after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this approach are as below.
+
+```sql
+select * from D1001 where ts > {last_timestamp1} and current > 10;
+select * from D1002 where ts > {last_timestamp2} and current > 10;
+...
+```
+
+The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large.
+
+A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below:
+
+```sql
+select * from meters where ts > {last_timestamp} and current > 10;
+```
+
+However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
+
+All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
+
+The first step is to create subscription using `taos_subscribe`.
+
+```c
+TAOS_SUB* tsub = NULL;
+if (async) {
+ // create an asynchronous subscription, the callback function will be called every 1s
+ tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
+} else {
+ // create a synchronous subscription, 'taos_consume' needs to be called manually
+ tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
+}
+```
+
+The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, and then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, while async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass it to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time-consuming operations in the callback function.
+
+The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.
+
+The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
+
+```sql
+select * from meters where current > 10;
+```
+
+Please note that all the data will be processed because no start time is specified. If we only want to process data for the past day, a time-related condition can be added:
+
+```sql
+select * from meters where ts > now - 1d and current > 10;
+```
+
+The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
+
+If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, then when the client program is restarted with that subscription name, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the point where the subscription was last broken.
+
+If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
+
+The last parameter of `taos_subscribe` is the polling interval, in milliseconds. In sync mode, if the time difference between two continuous invocations of `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
+
+The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is ignored in sync mode.
+
+After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
+
+```c
+if (async) {
+ getchar();
+} else while(1) {
+ TAOS_RES* res = taos_consume(tsub);
+ if (res == NULL) {
+ printf("failed to consume data.");
+ break;
+ } else {
+ print_result(res, blockFetch);
+ getchar();
+ }
+}
+```
+
+In the above sample code for the else condition, there is an infinite loop. Each time carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
+
+```c
+void print_result(TAOS_RES* res, int blockFetch) {
+ TAOS_ROW row = NULL;
+ int num_fields = taos_num_fields(res);
+ TAOS_FIELD* fields = taos_fetch_fields(res);
+ int nRows = 0;
+ if (blockFetch) {
+ nRows = taos_fetch_block(res, &row);
+ for (int i = 0; i < nRows; i++) {
+ char temp[256];
+ taos_print_row(temp, row + i, fields, num_fields);
+ puts(temp);
+ }
+ } else {
+ while ((row = taos_fetch_row(res))) {
+ char temp[256];
+ taos_print_row(temp, row, fields, num_fields);
+ puts(temp);
+ nRows++;
+ }
+ }
+ printf("%d rows consumed.\n", nRows);
+}
+```
+
+In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
+
+In async mode, consuming data is simpler as shown below.
+
+```c
+void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
+ print_result(res, *(int*)param);
+}
+```
+
+`taos_unsubscribe` can be invoked to terminate a subscription.
+
+```c
+taos_unsubscribe(tsub, keep);
+```
+
+The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription. (Note: The default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on Windows, so you need to change `DataDir` to an existing directory.) The subscription will be restarted from the beginning if the corresponding progress file is removed.
+
+Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
+
+- The sample code has been downloaded to the local system
+- TDengine has been installed and launched properly on the same system
+- The database, STable, and subtables required in the sample code are ready
+
+Run the commands below in the directory where the sample code resides to compile the program and start it.
+
+```bash
+make
+./subscribe -sql='select * from meters where current > 10;'
+```
+
+After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
+
+```sql
+use test;
+insert into D1001 values(now, 12, 220, 1);
+```
+
+Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.
+
+## Examples
+
+The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
+
+### Prepare Data
+
+```bash
+# create database "power"
+taos> create database power;
+# use "power" as the database in following operations
+taos> use power;
+# create super table "meters"
+taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
+# create tables using the schema defined by super table "meters"
+taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
+taos> create table d1002 using meters tags ("California.LosAngeles", 2);
+# insert some rows
+taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
+taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
+# filter out the rows in which current is bigger than 10A
+taos> select * from meters where current > 10;
+ ts | current | voltage | phase | location | groupid |
+===========================================================================================================
+ 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 |
+ 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 |
+ 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
+ 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
+ 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
+Query OK, 5 row(s) in set (0.004896s)
+```
+
+### Example Programs
+
+
+
+
+
+
+
+
+ {/*
+
+ */}
+
+
+
+ {/*
+
+
+
+
+ */}
+
+
+
+
+
+### Run the Examples
+
+The example programs first consume all historical data matching the criteria.
+
+```bash
+ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
+ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
+ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
+```
+
+Next, use TDengine CLI to insert a new row.
+
+```
+# taos
+taos> use power;
+taos> insert into d1001 values(now, 12.4, 220, 1);
+```
+
+Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
+
+```
+ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
+```
diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md
new file mode 100644
index 0000000000000000000000000000000000000000..743452faff6a2be8466318a7dab61a44e33c3664
--- /dev/null
+++ b/docs-en/07-develop/07-cache.md
@@ -0,0 +1,19 @@
+---
+sidebar_label: Cache
+title: Cache
+description: "The latest row of each table is kept in cache to provide high performance query of latest state."
+---
+
+The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as an insert-driven cache management policy; it is different from read-driven cache management, which is more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest cached data to disk when cache usage reaches a threshold. In IoT use cases it is the current state, i.e. the latest or most recent data, that is important. The cache policy in TDengine, like much of its design and architecture, is based on the nature of IoT data.
+
+Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as a caching system, without deploying a separate caching system. This simplifies the system architecture and minimizes operational costs. Note that, unlike a key-value caching system, TDengine empties the cache after a restart and does not reload data from disk into cache.
+
+The memory space used by the TDengine cache is fixed in size and configurable. It should be allocated based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine. There is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
+
+The memory pool is divided into blocks; data is stored in memory in row format and each block follows the FIFO policy. The size of each block is determined by the configuration parameter `cache`, and the number of blocks for each vnode is determined by the parameter `blocks`, so for each vnode the total cache size is `cache * blocks`. To be efficient, a cache block should be large enough to hold at least dozens of records per table.
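+
+For example (a sketch; the database name and the values below are illustrative, not defaults), the two parameters can be set when creating a database:
+
+```sql
+-- each cache block is 16 MB and each vnode gets 6 blocks,
+-- so every vnode of this database can use up to 16 * 6 = 96 MB of cache
+CREATE DATABASE power CACHE 16 BLOCKS 6;
+```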
+
+The `last_row` function can be used to retrieve the last row of a table or an STable, to quickly show the current state of devices on a monitoring screen. For example, the SQL statement below retrieves the latest voltage of all meters in San Francisco, California.
+
+```sql
+select last_row(voltage) from meters where location='California.SanFrancisco';
+```
diff --git a/docs-en/07-develop/08-udf.md b/docs-en/07-develop/08-udf.md
new file mode 100644
index 0000000000000000000000000000000000000000..49bc95bd91a4c31d42d2b21ef05d69225f1bd963
--- /dev/null
+++ b/docs-en/07-develop/08-udf.md
@@ -0,0 +1,240 @@
+---
+sidebar_label: UDF
+title: User Defined Functions (UDF)
+description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand query capability"
+---
+
+In some use cases, built-in functions are not adequate for the query capability required by application programs. With UDF, the functions developed by users can be utilized by the query framework to meet business and application requirements. UDF normally takes one column of data as input, but can also support the result of a sub-query as input.
+
+UDFs written in C/C++ have been supported by TDengine since version 2.2.0.0.
+
+
+## Types of UDF
+
+Two kinds of functions can be implemented by UDF: scalar functions and aggregate functions.
+
+A scalar function returns one output row for each input row, while an aggregate function returns either 0 or 1 row for a whole input data set.
+
+In the case of a scalar function you only have to implement the "normal" function template.
+
+In the case of an aggregate function, in addition to the "normal" function, you also need to implement the "merge" and "finalize" function templates even if the implementation is empty. This will become clear in the sections below.
+
+### Scalar Function
+
+As mentioned earlier, a scalar UDF only has to implement the "normal" function template. The function template below can be used to define your own scalar function.
+
+`void udfNormalFunc(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`
+
+`udfNormalFunc` is a placeholder for the actual function name. A function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed, to control the data exchange between a UDF and TDengine.
+
+- Definitions of the parameters:
+
+  - data: input data
+  - itype: the type of input data; for details please refer to [type definition in column_meta](/reference/rest-api/), for example 4 represents INT
+  - ibytes: the number of bytes consumed by each value in the input data
+  - otype: the type of output data, similar to itype
+  - obytes: the number of bytes consumed by each value in the output data
+  - numOfRows: the number of rows in the input data
+  - ts: the column of timestamps corresponding to the input data
+  - dataOutput: the buffer for output data, whose total size is `obytes * numOfRows`
+  - interBuf: the buffer for an intermediate result. Its size is specified by the `BUFSIZE` parameter when creating a UDF. It is normally used when the intermediate result is not the same as the final result. This buffer is allocated and freed by TDengine.
+  - tsOutput: the column of timestamps corresponding to the output data; it can be used to output timestamps together with the output data if it's not NULL
+  - numOfOutput: the number of rows in the output data
+  - buf: for the state exchange between the UDF and TDengine
+
+ [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of a very simple UDF implementation, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a passed in column, which can be filtered using the `where` clause, and outputs the result.
+
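+For illustration, below is a minimal sketch in the shape of this template, in the spirit of add_one.c: it assumes INT input and output, treats `SUdfInit` as opaque, and omits the NULL-value handling of the real example, so please refer to the linked source for the complete implementation.
+
+```c
+typedef struct SUdfInit SUdfInit; /* opaque in this sketch; only passed through */
+
+/* Scalar UDF in the udfNormalFunc template: for INT input (itype 4),
+   write value + 1 into the output buffer for every input row. */
+void add_one(char* data, short itype, short ibytes, int numOfRows,
+             long long* ts, char* dataOutput, char* interBuf, char* tsOutput,
+             int* numOfOutput, short otype, short obytes, SUdfInit* buf) {
+  int i;
+  for (i = 0; i < numOfRows; ++i) {
+    ((int*)dataOutput)[i] = ((int*)data)[i] + 1;
+  }
+  *numOfOutput = numOfRows; /* a scalar UDF outputs one row per input row */
+}
+```
+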
+### Aggregate Function
+
+For aggregate UDF, as mentioned earlier you must implement a "normal" function template (described above) and also implement the "merge" and "finalize" templates.
+
+#### Merge Function Template
+
+The function template below can be used to define your own merge function for an aggregate UDF.
+
+`void udfMergeFunc(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
+
+`udfMergeFunc` is a placeholder for the actual function name. The function implemented with the above template is used to aggregate intermediate results, and it can only be used in an aggregate query on an STable.
+
+Definitions of the parameters:
+
+- data: an array of output data; if interBuf is used, it is an array of interBuf
+- numOfRows: the number of rows in `data`
+- dataOutput: the buffer for output data, the same size as that of the final result; if the result is not final, it can be put in interBuf, i.e. `data`
+- numOfOutput: the number of rows in the output data
+- buf: for the state exchange between the UDF and TDengine
+
+#### Finalize Function Template
+
+The function template below can be used to finalize the result of your own UDF; it is normally needed when interBuf is used.
+
+`void udfFinalizeFunc(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
+
+`udfFinalizeFunc` is a placeholder for the actual function name. Definitions of the parameters are as follows:
+
+- dataOutput: the buffer for output data
+- interBuf: the buffer for the intermediate result, which can be used as input for the next processing step
+- numOfOutput: the number of output rows, which can only be 0 or 1 for an aggregate function
+- buf: for the state exchange between the UDF and TDengine
+
+### Example abs_max.c
+
+[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an example of a user defined aggregate function to get the maximum from the absolute values of a column.
+
+The internal processing happens as follows. The results of the select statement are divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate results for each sub table. Then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate and generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc`, i.e. `abs_max_finalize` in this example, to generate the final result, which contains either 0 or 1 row.
+
+Other typical aggregation functions such as covariance, can also be implemented using aggregate UDF.
+
+## UDF Naming Conventions
+
+The naming convention for the 3 kinds of function templates required by UDF is as follows:
+
+- udfNormalFunc, udfMergeFunc, and udfFinalizeFunc are required to have the same prefix, i.e. the actual name of udfNormalFunc. udfNormalFunc itself doesn't need any suffix.
+- udfMergeFunc should be the name of udfNormalFunc followed by `_merge`.
+- udfFinalizeFunc should be the name of udfNormalFunc followed by `_finalize`.
+
+The naming convention is part of TDengine's UDF framework. TDengine follows this convention to invoke the corresponding actual functions.
+
+Depending on whether you are creating a scalar UDF or aggregate UDF, the functions that you need to implement are different.
+
+- Scalar function: udfNormalFunc is required.
+- Aggregate function: udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required.
+
+For clarity, assuming we want to implement a UDF named "foo":
+- If the function is a scalar function, we only need to implement the "normal" function template and it should be named simply `foo`.
+- If the function is an aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. Note that for an aggregate UDF, even if one of the three functions is not actually needed, it must still be implemented, if only as an empty function.
+
+## Compile UDF
+
+The source code of a UDF in C can't be utilized by TDengine directly. A UDF can only be loaded into TDengine after being compiled into a dynamically linked library (DLL).
+
+For example, the example UDF `add_one.c` mentioned earlier can be compiled into a DLL using the command below, in a Linux shell.
+
+```bash
+gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
+```
+
+The generated DLL file `add_one.so` can be used later when creating a UDF. It's recommended to use GCC not older than 7.5.
+
+## Create and Use UDF
+
+When a UDF is created in a TDengine instance, it is available across the databases in that instance.
+
+### Create UDF
+
+The SQL command to create a UDF is executed on a host where the generated UDF DLL resides, to load the UDF DLL into TDengine. This operation cannot be done through the REST interface or web console. Once created, any client of the current TDengine instance can use these UDF functions in its SQL commands. UDFs are stored in the management node of TDengine and remain available after TDengine is restarted.
+
+When creating a UDF, its type, i.e. scalar function or aggregate function, must be specified. If the specified type is wrong, SQL statements using the function will fail with errors. The input type and output type don't need to be the same in a UDF, but the input data type and output data type must be consistent with the UDF definition.
+
+- Create Scalar Function
+
+```sql
+CREATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE outputtype [BUFSIZE B];
+```
+
+- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: the absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
+- outputtype: the output data type, given as the literal name of a supported TDengine data type.
+- B: the size of the intermediate buffer in bytes; it is an optional parameter and its range is [0, 512].
+
+For example, below SQL statement can be used to create a UDF from `add_one.so`.
+
+```sql
+CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT;
+```
+
+- Create Aggregate Function
+
+```sql
+CREATE AGGREGATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE outputtype [BUFSIZE B];
+```
+
+- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: the absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
+- OUTPUTTYPE: the output data type, given as the literal name of a supported TDengine data type.
+- B: the size of the intermediate buffer in bytes; it is an optional parameter and its range is [0, 512].
+
+For details about how to use the intermediate result, please refer to the example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).
+
+For example, below SQL statement can be used to create a UDF from `demo.so`.
+
+```sql
+CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE bufsize 14;
+```
+
+### Manage UDF
+
+- Delete UDF
+
+```sql
+DROP FUNCTION userDefinedFunctionName;
+```
+
+- userDefinedFunctionName: same as that in the `CREATE FUNCTION` statement
+
+For example:
+
+```sql
+DROP FUNCTION add_one;
+```
+
+- Show Available UDF
+
+```sql
+SHOW FUNCTIONS;
+```
+
+### Use UDF
+
+The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions.
+
+```sql
+SELECT X(c) FROM table/STable;
+```
+
+The above SQL statement invokes function X for column c.
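+
+For instance, assuming the `add_one` UDF created earlier and the `meters` table with an INT column `voltage` used elsewhere in this documentation, a query could look like this sketch:
+
+```sql
+-- apply the scalar UDF add_one to every value of the voltage column
+SELECT add_one(voltage) FROM meters;
+```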
+
+## Restrictions for UDF
+
+In the current version there are some restrictions on UDF:
+
+1. Only Linux is supported when creating and invoking UDF for both client side and server side
+2. UDF can't be mixed with builtin functions
+3. Only one UDF can be used in a SQL statement
+4. Only a single column is supported as input for UDF
+5. Once created successfully, a UDF is persisted in the MNode of TDengine
+6. UDF can't be created through REST interface
+7. The function name used when creating UDF in SQL must be consistent with the function name defined in the DLL, i.e. the name defined by `udfNormalFunc`
+8. The name of a UDF should not conflict with any of TDengine's built-in functions
+
+## Examples
+
+### Scalar function example [add_one](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
+
+
+add_one.c
+
+```c
+{{#include tests/script/sh/add_one.c}}
+```
+
+
+
+### Aggregate function example [abs_max](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
+
+
+abs_max.c
+
+```c
+{{#include tests/script/sh/abs_max.c}}
+```
+
+
+
+### Example for using intermediate result [demo](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c)
+
+
+demo.c
+
+```c
+{{#include tests/script/sh/demo.c}}
+```
+
+
diff --git a/docs-en/07-develop/_category_.yml b/docs-en/07-develop/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..6f0d66351a5c326eb2dced998e29e668d11cd1ca
--- /dev/null
+++ b/docs-en/07-develop/_category_.yml
@@ -0,0 +1 @@
+label: Developer Guide
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_c.mdx b/docs-en/07-develop/_sub_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..95fef0042d0a277f9136e6e6f8c15558487232f9
--- /dev/null
+++ b/docs-en/07-develop/_sub_c.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs-examples/c/subscribe_demo.c}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_cs.mdx b/docs-en/07-develop/_sub_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..80934aa4d014a076896dce7f41e520f06ffd735d
--- /dev/null
+++ b/docs-en/07-develop/_sub_cs.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs-examples/csharp/SubscribeDemo.cs}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_go.mdx b/docs-en/07-develop/_sub_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cd908fc12c3a35f49ca108ee56c3951c5388a95f
--- /dev/null
+++ b/docs-en/07-develop/_sub_go.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs-examples/go/sub/main.go}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_java.mdx b/docs-en/07-develop/_sub_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e65bc576ebed030d935ced6a4572289cd367ffac
--- /dev/null
+++ b/docs-en/07-develop/_sub_java.mdx
@@ -0,0 +1,7 @@
+```java
+{{#include docs-examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
+```
+:::note
+For now the Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve a similar purpose.
+
+:::
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_node.mdx b/docs-en/07-develop/_sub_node.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c93ad627ce9a77ca71a014b41d571089e6c1727b
--- /dev/null
+++ b/docs-en/07-develop/_sub_node.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs-examples/node/nativeexample/subscribe_demo.js}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_python.mdx b/docs-en/07-develop/_sub_python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b817deeba6e283a3ba16fee0d580d3823c999536
--- /dev/null
+++ b/docs-en/07-develop/_sub_python.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs-examples/python/subscribe_demo.py}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_rust.mdx b/docs-en/07-develop/_sub_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4750cf7a3b871db48c9e5a26b22ab4b8a03f11be
--- /dev/null
+++ b/docs-en/07-develop/_sub_rust.mdx
@@ -0,0 +1,3 @@
+```rs
+{{#include docs-examples/rust/nativeexample/examples/subscribe_demo.rs}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..e3f55f290753f79ac1708337082ce90bb050b21f
--- /dev/null
+++ b/docs-en/07-develop/index.md
@@ -0,0 +1,25 @@
+---
+title: Developer Guide
+---
+
+To develop an application to process time-series data using TDengine, we recommend taking the following steps:
+
+1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
+2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data.
+3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
+4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
+5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
+6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
+7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
+8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
+
+This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/).
+
+If you encounter any problems during the development process, please click ["Submit an issue"](https://github.com/taosdata/TDengine/issues/new/choose) at the bottom of each page and submit it on GitHub right away.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
new file mode 100644
index 0000000000000000000000000000000000000000..200da1be3f8185818bd21dd3fcdc78c124a36831
--- /dev/null
+++ b/docs-en/10-cluster/01-deploy.md
@@ -0,0 +1,136 @@
+---
+title: Deployment
+---
+
+## Prerequisites
+
+### Step 1
+
+The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host. For example, you can do this by using the `ping` command.
+
+To get the hostname of any host, the command `hostname -f` can be executed. The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other.
+
+:::note
+
+- The host where the client program runs also needs its FQDN configured properly, to make sure all hosts, for client or server, can be accessed from any other. In other words, the hosts where the client runs are also considered part of the cluster.
+
+- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
+
+:::
+
+### Step 2
+
+If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/*` assuming the `dataDir` is configured as `/var/lib/taos`.
+
+:::note
+
+As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).
+
+:::
+
+### Step 3
+
+Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted to input the end point of an existing TDengine cluster, simply press Enter to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
+
+### Step 4
+
+Now each physical node (referred to, hereinafter, as `dnode`, which is an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host: multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicts. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.
+
+```c
+// firstEp is the end point to connect to when any dnode starts
+firstEp h1.taosdata.com:6030
+
+// must be configured to the FQDN of the host where the dnode is launched
+fqdn h1.taosdata.com
+
+// the port used by the dnode, default is 6030
+serverPort 6030
+
+// only necessary when replica is configured to an even number
+#arbitrator ha.taosdata.com:6042
+```
+
+`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
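+
+For example, the `taos.cfg` of a second dnode on the same host could look like the sketch below; the port number and directories are illustrative assumptions and only need to differ from those of the first dnode.
+
+```c
+// same cluster, so firstEp is unchanged
+firstEp h1.taosdata.com:6030
+
+// same host, so the FQDN is unchanged
+fqdn h1.taosdata.com
+
+// a port that does not conflict with the first dnode (illustrative)
+serverPort 7030
+
+// non-conflicting data and log directories (illustrative)
+dataDir /var/lib/taos2
+logDir /var/log/taos2
+```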
+
+For all the dnodes in a TDengine cluster, the parameters below must be configured exactly the same; any node whose configuration differs from the dnodes already in the cluster can't join the cluster.
+
+| **#** | **Parameter** | **Definition** |
+| ----- | ------------------ | --------------------------------------------------------------------------------- |
+| 1 | numOfMnodes | The number of management nodes in the cluster |
+| 2 | mnodeEqualVnodeNum | The ratio of resource consumption of an mnode to that of a vnode |
+| 3 | offlineThreshold | The threshold of dnode offline time; once it is reached, the dnode is considered down |
+| 4 | statusInterval | The interval at which a dnode reports its status to the mnode |
+| 5 | arbitrator | The end point of the arbitrator component in the cluster |
+| 6 | timezone | Timezone |
+| 7 | balance | Whether to enable automatic load balancing |
+| 8 | maxTablesPerVnode | The maximum number of tables that can be created in each vnode |
+| 9 | maxVgroupsPerDb | The maximum number of vgroups that can be used by each DB |
+
+:::note
+Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
+
+:::
+
+## Start Cluster
+
+In the following example we assume that first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.
+
+### Start The First DNODE
+
+The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute the command `show dnodes`; the output looks like the following, for example:
+
+```
+Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
+
+
+Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
+
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time |
+=====================================================================================
+ 1 | h1.taosdata.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
+Query OK, 1 row(s) in set (0.006385s)
+
+taos>
+```
+
+The above output shows that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.
+
+### Start Other DNODEs
+
+There are a few steps necessary to add other dnodes in the cluster.
+
+Let's assume we are starting the second dnode with FQDN, h2.taosdata.com. First we make sure the configuration is correct.
+
+```c
+// firstEp is the end point to connect to when any dnode starts
+firstEp h1.taosdata.com:6030
+
+// must be configured to the FQDN of the host where the dnode is launched
+fqdn h2.taosdata.com
+
+// the port used by the dnode, default is 6030
+serverPort 6030
+
+```
+
+Second, we can start `taosd` as instructed in [Get Started](/get-started/).
+
+Then, on the first dnode i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
+
+```sql
+CREATE DNODE "h2.taos.com:6030";
+```
+
+Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to show whether the second dnode has been added in the cluster successfully or not.
+
+```sql
+SHOW DNODES;
+```
+
+If the status of the newly added dnode is offline, please check:
+
+- Whether the `taosd` process is running properly or not
+- Whether the FQDN and port recorded in the log file `taosdlog.0` are correct
+
+The above process can be repeated to add more dnodes in the cluster.
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
new file mode 100644
index 0000000000000000000000000000000000000000..674c92e2766a4eb304079140af19c8efea72d55e
--- /dev/null
+++ b/docs-en/10-cluster/02-cluster-mgmt.md
@@ -0,0 +1,213 @@
+---
+sidebar_label: Operation
+title: Manage DNODEs
+---
+
+The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.
+
+:::note
+All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
+
+:::
+
+## Show DNODEs
+
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
+
+```sql
+SHOW DNODES;
+```
+
+Below is the example output of this command.
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.008298s)
+```
+
+## Show VGROUPs
+
+To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
+
+Launch TDengine CLI `taos` and execute the command below:
+
+```sql
+USE SOME_DATABASE;
+SHOW VGROUPS;
+```
+
+The example output is below:
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.008298s)
+
+taos> use db;
+Database changed.
+
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
+==========================================================================================
+ 14 | 38000 | ready | 1 | 1 | master | 0 |
+ 15 | 38000 | ready | 1 | 1 | master | 0 |
+ 16 | 38000 | ready | 1 | 1 | master | 0 |
+ 17 | 38000 | ready | 1 | 1 | master | 0 |
+ 18 | 37001 | ready | 1 | 1 | master | 0 |
+ 19 | 37000 | ready | 1 | 1 | master | 0 |
+ 20 | 37000 | ready | 1 | 1 | master | 0 |
+ 21 | 37000 | ready | 1 | 1 | master | 0 |
+Query OK, 8 row(s) in set (0.001154s)
+```
+
+## Add DNODE
+
+Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
+
+```sql
+CREATE DNODE "fqdn:port";
+```
+
+The example output is as below:
+
+```
+taos> create dnode "localhost:7030";
+Query OK, 0 of 0 row(s) in database (0.008203s)
+
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
+Query OK, 2 row(s) in set (0.001017s)
+```
+
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
+Query OK, 2 row(s) in set (0.001316s)
+```
+
+## Drop DNODE
+
+Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be obtained from `show dnodes`.
+
+```sql
+DROP DNODE "fqdn:port";
+```
+
+or
+
+```sql
+DROP DNODE dnodeId;
+```
+
+The example output is below:
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
+Query OK, 2 row(s) in set (0.001017s)
+
+taos> drop dnode 2;
+Query OK, 0 of 0 row(s) in database (0.000518s)
+
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.001137s)
+```
+
+In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
+
+:::note
+
+- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up its data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
+- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
+- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode.
+- dnodeID is allocated automatically and can't be manually modified. dnodeIDs are generated in ascending order without duplication.
+
+:::
+
+## Move VNODE
+
+A vnode can be manually moved from one dnode to another.
+
+Launch TDengine CLI `taos` and execute the command below:
+
+```sql
+ALTER DNODE source-dnodeId BALANCE "VNODE:vgId-DNODE:dest-dnodeId";
+```
+
+In the above command, `source-dnodeId` is the dnodeId of the dnode where the vnode currently resides, `dest-dnodeId` specifies the target dnode, and `vgId` (vgroup ID) can be obtained with `SHOW VGROUPS;`.
+
+First `show vgroups` is executed to show the vgroup distribution.
+
+```
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
+==========================================================================================
+ 14 | 38000 | ready | 1 | 3 | master | 0 |
+ 15 | 38000 | ready | 1 | 3 | master | 0 |
+ 16 | 38000 | ready | 1 | 3 | master | 0 |
+ 17 | 38000 | ready | 1 | 3 | master | 0 |
+ 18 | 37001 | ready | 1 | 3 | master | 0 |
+ 19 | 37000 | ready | 1 | 1 | master | 0 |
+ 20 | 37000 | ready | 1 | 1 | master | 0 |
+ 21 | 37000 | ready | 1 | 1 | master | 0 |
+Query OK, 8 row(s) in set (0.001314s)
+```
+
+It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the command below in `taos`:
+
+```
+taos> alter dnode 3 balance "vnode:18-dnode:1";
+
+DB error: Balance already enabled (0.00755
+```
+
+However, the operation fails with the error message shown above, which means automatic load balancing has been enabled in the current database, so manual load balancing can't be performed.
+
+Shut down the cluster, configure the `balance` parameter to 0 in all the dnodes, then restart the cluster, and execute `alter dnode` and `show vgroups` as below.
+
+```
+taos> alter dnode 3 balance "vnode:18-dnode:1";
+Query OK, 0 row(s) in set (0.000575s)
+
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
+=================================================================================================================
+ 14 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
+ 15 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
+ 16 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
+ 17 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
+ 18 | 37001 | ready | 2 | 1 | slave | 3 | master | 0 |
+ 19 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
+ 20 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
+ 21 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
+Query OK, 8 row(s) in set (0.001242s)
+```
+
+It can be seen from above output that vgId 18 has been moved from dnode 3 to dnode 1.
+
+:::note
+
+- Manual load balancing can only be performed when automatic load balancing is disabled, i.e. `balance` is set to 0.
+- Only a vnode in a normal state, i.e. master or slave, can be moved. A vnode can't be moved when its status is offline, unsynced or syncing.
+- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
+
+:::
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd718eef9f8dc181628132de831dbca2af59d158
--- /dev/null
+++ b/docs-en/10-cluster/03-ha-and-lb.md
@@ -0,0 +1,81 @@
+---
+sidebar_label: HA & LB
+title: High Availability and Load Balancing
+---
+
+## High Availability of Vnode
+
+High availability of vnode and mnode can be achieved through replicas in TDengine.
+
+A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
+
+```sql
+CREATE DATABASE demo replica 3;
+```
+
+The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
+
+There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. While in theory, the cluster will provide data access for reading or inserting data if over half the vnodes in vgroups are online, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly if over half of the dnodes are online.
+
+## High Availability of Mnode
+
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`. The valid range for `numOfMnodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
+
+There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
+
+```sql
+SHOW MNODES;
+```
+
+The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
+
+For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
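+
+For example, the sketch below sets the parameter in `taos.cfg` (the value 2 is just an example within the valid range):
+
+```c
+// run two mnodes for high availability of the management node;
+// quorum is then set to 2 automatically
+numOfMnodes 2
+```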
+
+:::note
+If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
+
+:::
+
+## Load Balancing
+
+Load balancing will be triggered in 3 cases without manual intervention.
+
+- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
+- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
+- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
+
+:::tip
+Automatic load balancing is controlled by the parameter `balance`, 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
+
+:::
+
+## Dnode Offline
+
+When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
+
+- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
+
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
+
+:::note
+If all the vnodes in a vgroup (or all the mnodes in the mnode group) are in offline or unsynced status, a master node can only be voted in after all the vnodes or mnodes in the group have come back online and can exchange status information. Only then is the vgroup (or mnode group) able to provide service.
+
+:::
+
+## Arbitrator
+
+The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2,4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2,4 etc.
+
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnode or mnode, including Arbitrator, are available the vnode group or mnode group can provide data insertion or query services normally.
+
+Normally, it's prudent to configure the replica number for each DB or system parameter `numOfMNodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+
+The arbitrator component is installed with the server package. For details about how to install it, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides its service.
+
+In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. The arbitrator component will be used automatically if the number of replicas is configured to an even number, and will be ignored if it is configured to an odd number.
+
+The arbitrator can be shown by executing the command below in TDengine CLI `taos`; its role is shown as "arb".
+
+```sql
+SHOW DNODES;
+```
diff --git a/docs-en/10-cluster/_category_.yml b/docs-en/10-cluster/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..141fd7832631d69efed214293c69cee336bc854d
--- /dev/null
+++ b/docs-en/10-cluster/_category_.yml
@@ -0,0 +1 @@
+label: Cluster
diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a45a2ce7b08c67322265cf1bbd54ef66cbfc027
--- /dev/null
+++ b/docs-en/10-cluster/index.md
@@ -0,0 +1,15 @@
+---
+title: Cluster
+keywords: ["cluster", "high availability", "load balance", "scale out"]
+---
+
+TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
+
+This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs-en/12-taos-sql/01-data-type.md b/docs-en/12-taos-sql/01-data-type.md
new file mode 100644
index 0000000000000000000000000000000000000000..d038219c8ac66db52416001f7a79c71018e2ca33
--- /dev/null
+++ b/docs-en/12-taos-sql/01-data-type.md
@@ -0,0 +1,69 @@
+---
+title: Data Types
+description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
+---
+
+## TIMESTAMP
+
+When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when inserting data rows, and it must follow the rules below:
+
+- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
+- Internal function `now` can be used to get the current timestamp on the client side
+- The current timestamp of the client side is applied when `now` is used to insert data
+- Epoch time: a timestamp can also be a long integer representing the number of seconds, milliseconds or nanoseconds since 1970-01-01 00:00:00.000 (UTC/GMT), depending on the time precision
+- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
+
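+For example, with the default millisecond precision, a row can be written with a formatted timestamp, with `now`, or with an epoch value (a sketch reusing the table `d1001` and sample values from elsewhere in this documentation):
+
+```sql
+-- a formatted timestamp string
+INSERT INTO d1001 VALUES ("2020-08-15 12:00:00.000", 12, 220, 1);
+-- the current client-side time
+INSERT INTO d1001 VALUES (now, 12, 220, 1);
+-- the epoch value in milliseconds; 1597464000000 denotes the same point in
+-- time as "2020-08-15 12:00:00.000" in the UTC+8 sample data
+INSERT INTO d1001 VALUES (1597464000000, 12, 220, 1);
+```
+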
+Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
+
+```sql
+CREATE DATABASE db_name PRECISION 'ns';
+```
+
+## Data Types
+
+In TDengine, the data types below can be used when specifying a column or tag.
+
+| # | **type** | **Bytes** | **Description** |
+| --- | :-------: | --------- | ------------------------- |
+| 1 | TIMESTAMP | 8 | Default precision is millisecond; microsecond and nanosecond are also supported |
+| 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1] |
+| 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1] |
+| 4 | BIGINT | 8 | Long integer, the value range is [-2^63, 2^63-1] |
+| 5 | BIGINT UNSIGNED | 8 | Unsigned long integer, the value range is [0, 2^64-1] |
+| 6 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
+| 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
+| 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. The length must be specified when defining a column or tag of BINARY type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'` |
+| 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767] |
+| 10 | SMALLINT UNSIGNED | 2 | Unsigned short integer, the value range is [0, 65535] |
+| 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127] |
+| 12 | TINYINT UNSIGNED | 1 | Unsigned single-byte integer, the value range is [0, 255] |
+| 13 | BOOL | 1 | Bool, the value range is {true, false} |
+| 14 | NCHAR | User Defined| Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type; for example, nchar(10) means it can store at most 10 NCHAR characters and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the defined length. |
+| 15 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type |
+| 16 | VARCHAR | User Defined| Alias of BINARY type |
+
+:::note
+- TDengine is case insensitive and treats any unquoted characters in a SQL command as lower case by default; case-sensitive strings must be quoted with single quotes.
+- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
+- Numeric values in SQL statements are treated as integer or float type according to whether there is a decimal point or scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 is considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 is considered a valid float number.
+
+:::
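+
+As a quick sketch combining several of the types above (the table and column names are made up for illustration):
+
+```sql
+CREATE TABLE sensor_sample (
+  ts   TIMESTAMP,     -- the first column must be of TIMESTAMP type
+  v    FLOAT,         -- 4-byte floating point metric
+  cnt  INT UNSIGNED,  -- 4-byte unsigned counter
+  ok   BOOL,          -- boolean flag
+  note BINARY(64),    -- ASCII string of up to 64 bytes
+  name NCHAR(10)      -- multi-byte string of up to 10 characters
+);
+```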
+
+## Constants
+TDengine supports constants of multiple data types.
+
+| # | **Syntax** | **Type** | **Description** |
+| --- | :-------: | --------- | -------------------------------------- |
+| 1 | [{+ \| -}]123 | BIGINT | Numeric constants are treated as BIGINT type. The value will be truncated if it exceeds the range of BIGINT type. |
+| 2 | 123.45 | DOUBLE | Floating number constants are treated as DOUBLE type. TDengine determines whether it's a floating number based on if decimal point or scientific notation is used. |
+| 3 | 1.2E3 | DOUBLE | Constants in scientific notation are treated ad DOUBLE type. |
+| 4 | 'abc' | BINARY | String constants enclosed in single quotes are treated as BINARY type. Its size is determined by the actual length. A single quote itself can be included by preceding it with a backslash, i.e. `\'`, in a string constant. |
+| 5 | "abc" | BINARY | String constants enclosed in double quotes are treated as BINARY type. Its size is determined by the actual length. A double quote itself can be included by preceding it with a backslash, i.e. `\"`, in a string constant. |
+| 6 | TIMESTAMP {'literal' \| "literal"} | TIMESTAMP | A string constant following `TIMESTAMP` keyword is treated as TIMESTAMP type. The string should be in the format of "YYYY-MM-DD HH:mm:ss.MS". Its time precision is same as that of the current database being used. |
+| 7 | {TRUE \| FALSE} | BOOL | BOOL type constant. |
+| 8 | {'' \| "" \| '\t' \| "\t" \| ' ' \| " " \| NULL } | -- | NULL constant, it can be used for any type.|
+
+:::note
+- TDengine determines whether a numeric constant is a floating point number based on whether a decimal point or scientific notation is used. So whether a value is determined as overflow depends on both the value and the determined type. For example, 9999999999999999999 is determined as overflow because it exceeds the upper limit of the BIGINT type, while 9999999999999999999.0 is considered a valid floating point number because it is within the range of the DOUBLE type.
+
+:::
diff --git a/docs-en/12-taos-sql/02-database.md b/docs-en/12-taos-sql/02-database.md
new file mode 100644
index 0000000000000000000000000000000000000000..80581b2f1bc7ce9cd046c18873d3f22b6804d8cf
--- /dev/null
+++ b/docs-en/12-taos-sql/02-database.md
@@ -0,0 +1,127 @@
+---
+sidebar_label: Database
+title: Database
+description: "create and drop database, show or change database parameters"
+---
+
+## Create Database
+
+```
+CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
+```
+
+:::info
+
+1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
+2. UPDATE specifies whether the data can be updated and how the data can be updated.
+ 1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
+ 2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
+ 3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
+3. The maximum length of database name is 33 bytes.
+4. The maximum length of a SQL statement is 65,480 bytes.
+5. Below are the parameters that can be used when creating a database
+ - cache: [Description](/reference/config/#cache)
+ - blocks: [Description](/reference/config/#blocks)
+ - days: [Description](/reference/config/#days)
+ - keep: [Description](/reference/config/#keep)
+ - minRows: [Description](/reference/config/#minrows)
+ - maxRows: [Description](/reference/config/#maxrows)
+ - wal: [Description](/reference/config/#wallevel)
+ - fsync: [Description](/reference/config/#fsync)
+ - update: [Description](/reference/config/#update)
+ - cacheLast: [Description](/reference/config/#cachelast)
+ - replica: [Description](/reference/config/#replica)
+ - quorum: [Description](/reference/config/#quorum)
+ - maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
+ - comp: [Description](/reference/config/#comp)
+ - precision: [Description](/reference/config/#precision)
+6. Please note that all of the parameters mentioned in this section are configured in configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default. To override default parameters, they must be specified in the `create database` statement.
+
+:::
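+
+For example, combining several of the options in the statement above (the database name and values are illustrative, not recommendations):
+
+```sql
+-- keep data for 365 days, group 10 days of data per file, allow whole-row updates
+CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
+```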
+
+## Show Current Configuration
+
+```
+SHOW VARIABLES;
+```
+
+## Specify The Database In Use
+
+```
+USE db_name;
+```
+
+:::note
+This way is not applicable when using a REST connection. With a REST connection, the database name must be specified before a table or STable name. For example, to query the STable "meters" in database "test", the query would be "SELECT count(*) FROM test.meters".
+
+:::
+
+## Drop Database
+
+```
+DROP DATABASE [IF EXISTS] db_name;
+```
+
+:::note
+All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.
+
+:::
+
+## Change Database Configuration
+
+Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
+
+```
+ALTER DATABASE db_name COMP 2;
+```
+
+COMP parameter specifies whether the data is compressed and how the data is compressed.
+
+```
+ALTER DATABASE db_name REPLICA 2;
+```
+
+REPLICA parameter specifies the number of replicas of the database.
+
+```
+ALTER DATABASE db_name KEEP 365;
+```
+
+KEEP parameter specifies the number of days for which the data will be kept.
+
+```
+ALTER DATABASE db_name QUORUM 2;
+```
+
+QUORUM parameter specifies the necessary number of confirmations to determine whether the data is written successfully.
+
+```
+ALTER DATABASE db_name BLOCKS 100;
+```
+
+BLOCKS parameter specifies the number of memory blocks used by each VNODE.
+
+```
+ALTER DATABASE db_name CACHELAST 0;
+```
+
+CACHELAST parameter specifies whether and how the latest data of a sub table is cached.
+
+:::tip
+The above parameters can be changed using `ALTER DATABASE` command without restarting. For more details of all configuration parameters please refer to [Configuration Parameters](/reference/config/).
+
+:::
+
+## Show All Databases
+
+```
+SHOW DATABASES;
+```
+
+## Show The Create Statement of A Database
+
+```
+SHOW CREATE DATABASE db_name;
+```
+
+This command is useful when migrating the data from one TDengine cluster to another. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.
diff --git a/docs-en/12-taos-sql/03-table.md b/docs-en/12-taos-sql/03-table.md
new file mode 100644
index 0000000000000000000000000000000000000000..f065a8e2396583bb7a512446b513ed60056ad55e
--- /dev/null
+++ b/docs-en/12-taos-sql/03-table.md
@@ -0,0 +1,127 @@
+---
+sidebar_label: Table
+title: Table
+description: create super table, normal table and sub table, drop tables and change tables
+---
+
+## Create Table
+
+```
+CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
+```
+
+:::info
+
+1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
+2. The maximum length of the table name is 192 bytes.
+3. The maximum length of each row is 48 KB; please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
+4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
+5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
+6. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords. The above rules are bypassed when the escape character is used on a table name, but the upper limit for the name length still applies. Table names specified using the escape character are case sensitive. Only visible ASCII characters can be used with the escape character.
+   For example, \`aBc\` and \`abc\` are different table names, but `abc` and `aBc` are the same table name because both are converted to `abc` internally.
+
+:::
+
+### Create Subtable Using STable As Template
+
+```
+CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
+```
+
+The above command creates a subtable using the specified super table as a template and the specified tag values.
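+
+For example, a hedged sketch assuming the super table `meters` used elsewhere in this documentation, with tags `location` and `groupId`:
+
+```
+CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ('California.SanFrancisco', 2);
+```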
+
+### Create Subtable Using STable As Template With A Subset of Tags
+
+```
+CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
+```
+
+The tags for which no value is specified will be set to NULL.
+
+### Create Tables in Batch
+
+```
+CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
+```
+
+This can be used to create a large number of tables in a single SQL statement, which makes table creation much faster.
+
+:::info
+
+- Creating tables in batch must use a super table as a template.
+- The length of a single statement is suggested to be between 1,000 and 3,000 bytes for best performance.
+
+:::
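+
+For example, a hedged sketch creating two subtables of the assumed super table `meters` in one statement:
+
+```
+CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ('California.SanFrancisco', 2) IF NOT EXISTS d1002 USING meters TAGS ('California.LosAngeles', 3);
+```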
+
+## Drop Tables
+
+```
+DROP TABLE [IF EXISTS] tb_name;
+```
+
+## Show All Tables In Current Database
+
+```
+SHOW TABLES [LIKE tb_name_wildcard];
+```
+
+## Show Create Statement of A Table
+
+```
+SHOW CREATE TABLE tb_name;
+```
+
+This is useful when migrating the data in one TDengine cluster to another one because it can be used to create the exact same tables in the target database.
+
+## Show Table Definition
+
+```
+DESCRIBE tb_name;
+```
+
+## Change Table Definition
+
+### Add A Column
+
+```
+ALTER TABLE tb_name ADD COLUMN field_name data_type;
+```
+
+:::info
+
+1. The maximum number of columns is 4096; the minimum number of columns is 2.
+2. The maximum length of a column name is 64 bytes.
+
+:::
+
+### Remove A Column
+
+```
+ALTER TABLE tb_name DROP COLUMN field_name;
+```
+
+:::note
+If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
+
+:::
+
+### Change Column Length
+
+```
+ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
+```
+
+If the type of a column is variable length, like BINARY or NCHAR, this command can be used to change the length of the column.
+
+:::note
+If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
+
+:::
+
+### Change Tag Value Of Sub Table
+
+```
+ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
+```
+
+This command can be used to change the tag value if the table is created using a super table as template.
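+
+For example, a hedged sketch assuming a subtable `d1001` created from a super table with a `location` tag:
+
+```
+ALTER TABLE d1001 SET TAG location='California.SanDiego';
+```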
diff --git a/docs-en/12-taos-sql/04-stable.md b/docs-en/12-taos-sql/04-stable.md
new file mode 100644
index 0000000000000000000000000000000000000000..b8a608792ab327a81129d29ddd0ff44d7af6e6c5
--- /dev/null
+++ b/docs-en/12-taos-sql/04-stable.md
@@ -0,0 +1,118 @@
+---
+sidebar_label: STable
+title: Super Table
+---
+
+:::note
+
+Keyword `STable`, short for super table, is supported since version 2.0.15.
+
+:::
+
+## Create STable
+
+```
+CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
+```
+
+The SQL statement of creating a STable is similar to that of creating a table, but a special column set named `TAGS` must be specified with the names and types of the tags.
+
+:::info
+
+1. Since version 2.1.3.0, a tag can be of type timestamp; its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
+2. The tag names specified in TAGS should NOT be the same as other columns.
+3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/).)
+4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.
+
+:::
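+
+As a concrete sketch, a super table like the `meters` table used in other examples of this documentation could be created as below (the column and tag lengths are illustrative):
+
+```
+CREATE STable IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
+```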
+
+## Drop STable
+
+```
+DROP STable [IF EXISTS] stb_name;
+```
+
+All the subtables created using the deleted STable will be deleted automatically.
+
+## Show All STables
+
+```
+SHOW STableS [LIKE tb_name_wildcard];
+```
+
+This command can be used to display the information of all STables in the current database, including name, creation time, number of columns, number of tags, and number of tables created using this STable.
+
+## Show The Create Statement of A STable
+
+```
+SHOW CREATE STable stb_name;
+```
+
+This command is useful in migrating data from one TDengine cluster to another because it can be used to create the exact same STable in the target database.
+
+## Get STable Definition
+
+```
+DESCRIBE stb_name;
+```
+
+## Change Columns Of STable
+
+### Add A Column
+
+```
+ALTER STable stb_name ADD COLUMN field_name data_type;
+```
+
+### Remove A Column
+
+```
+ALTER STable stb_name DROP COLUMN field_name;
+```
+
+### Change Column Length
+
+```
+ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
+```
+
+This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
+
+## Change Tags of A STable
+
+### Add A Tag
+
+```
+ALTER STable stb_name ADD TAG new_tag_name tag_type;
+```
+
+This command is used to add a new tag for a STable and specify the tag type.
+
+### Remove A Tag
+
+```
+ALTER STable stb_name DROP TAG tag_name;
+```
+
+Once a tag is removed from a super table, the tag will be removed automatically from all the subtables created using the super table as template.
+
+### Change A Tag
+
+```
+ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
+```
+
+Once a tag name is changed for a super table, the tag name will be changed automatically for all the subtables created using the super table as template.
+
+### Change Tag Length
+
+```
+ALTER STable stb_name MODIFY TAG tag_name data_type(length);
+```
+
+This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.
+
+:::note
+Changing tag values can be applied only to subtables. All other tag operations, like adding or removing a tag, can be applied only to the STable. If a new tag is added for a STable, the tag will be added with a NULL value for all its subtables.
+
+:::
diff --git a/docs-en/12-taos-sql/05-insert.md b/docs-en/12-taos-sql/05-insert.md
new file mode 100644
index 0000000000000000000000000000000000000000..1336cd7238a19190583ea9d268a64df242ffd3c9
--- /dev/null
+++ b/docs-en/12-taos-sql/05-insert.md
@@ -0,0 +1,164 @@
+---
+title: Insert
+---
+
+## Syntax
+
+```sql
+INSERT INTO
+ tb_name
+ [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
+ [(field1_name, ...)]
+ VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
+ [tb2_name
+ [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
+ [(field1_name, ...)]
+ VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
+ ...];
+```
+
+## Insert Single or Multiple Rows
+
+Single row or multiple rows specified with VALUES can be inserted into a specific table. For example:
+
+A single row is inserted using the below statement.
+
+```sql
+INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
+```
+
+Two rows are inserted using the below statement.
+
+```sql
+INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
+```
+
+:::note
+
+1. In the second example above, different formats are used in the two rows to be inserted. In the first row, the timestamp format is a date and time string, which is interpreted from the string value alone. In the second row, the timestamp format is a long integer, which is interpreted based on the database time precision.
+2. When trying to insert multiple rows in a single statement, only the timestamp of one row can be set to NOW; otherwise there will be duplicate timestamps among the rows and the result may not be as expected, because NOW is interpreted as the time when the statement is executed.
+3. The oldest timestamp that is allowed is the current time minus the KEEP parameter.
+4. The newest timestamp that is allowed is the current time plus the DAYS parameter.
+
+:::
+
+## Insert Into Specific Columns
+
+Data can be inserted into specific columns, for either a single row or multiple rows, while the other columns will be set to NULL.
+
+```
+INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
+```
+
+:::info
+If no columns are explicitly specified, all the columns must be provided with values; this is called "all column mode". The insert performance of all column mode is much better than specifying a subset of columns, so it's encouraged to use "all column mode" while providing NULL values explicitly for the columns for which no actual value is available.
+
+:::
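+
+For example, a hedged sketch that uses all column mode while providing NULL explicitly for the column whose value is unknown (here `voltage`):
+
+```sql
+INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.255', 10.27, NULL, 0.31);
+```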
+
+## Insert Into Multiple Tables
+
+One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns.
+
+```sql
+INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
+ d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
+```
+
+## Automatically Create Table When Inserting
+
+If it's unknown whether the table already exists, the table can be created automatically while inserting using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided.
+
+```sql
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
+```
+
+It's not necessary to provide values for all tags when creating tables automatically; the tags without values provided will be set to NULL.
+
+```sql
+INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
+```
+
+Multiple rows can also be inserted into the same table in a single SQL statement.
+
+```sql
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
+ d21002 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:34.255', 10.15, 217, 0.33)
+ d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
+```
+
+:::info
+Prior to version 2.0.20.5, when using `INSERT` to create tables automatically and specifying the columns, the column names had to follow the table name immediately. From version 2.0.20.5, the column names can either follow the table name immediately or be put between `TAGS` and `VALUES`. In the same SQL statement, however, these two ways of specifying column names can't be mixed.
+:::
+
+## Insert Rows From A File
+
+Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with commas as separators and each field value quoted in single quotes. Table definition is not required in the CSV file. For example, if the file "/tmp/csvfile.csv" contains the data below:
+
+```
+'2021-07-13 14:07:34.630', '10.2', '219', '0.32'
+'2021-07-13 14:07:35.779', '10.15', '217', '0.33'
+```
+
+Then the data in this file can be inserted with the SQL statement below:
+
+```sql
+INSERT INTO d1001 FILE '/tmp/csvfile.csv';
+```
+
+## Create Tables Automatically and Insert Rows From File
+
+From version 2.1.5.0, tables can be automatically created using a super table as template when inserting data from a CSV file, like below:
+
+```sql
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
+```
+
+Multiple tables can be automatically created and inserted in a single SQL statement, like below:
+
+```sql
+INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
+ d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
+```
+
+## More About Insert
+
+For SQL statements like `insert`, a stream parsing strategy is applied: when an error is found and execution is aborted, the part of the statement prior to the error point has already been executed. Below is an experiment to help understand this behavior.
+
+First, a super table is created.
+
+```sql
+CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
+```
+
+Using `SHOW STableS` it can be verified that the super table has been created, while `SHOW TABLES` shows that no table exists yet.
+
+```
+taos> SHOW STableS;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ meters | 2020-08-06 17:50:27.831 | 4 | 2 | 0 |
+Query OK, 1 row(s) in set (0.001029s)
+
+taos> SHOW TABLES;
+Query OK, 0 row(s) in set (0.000946s)
+```
+
+Then, try to create table d1001 automatically when inserting data into it.
+
+```sql
+INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('a');
+```
+
+The output shows the value to be inserted is invalid. But `SHOW TABLES` proves that the table has been created automatically by the `INSERT` statement.
+
+```
+DB error: invalid SQL: 'a' (invalid timestamp) (0.039494s)
+
+taos> SHOW TABLES;
+ table_name | created_time | columns | STable_name |
+======================================================================================================
+ d1001 | 2020-08-06 17:52:02.097 | 4 | meters |
+Query OK, 1 row(s) in set (0.001091s)
+```
+
+From the above experiment, we can see that even though the value to be inserted is invalid, the table is still created.
diff --git a/docs-en/12-taos-sql/06-select.md b/docs-en/12-taos-sql/06-select.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a017cf92e40aa4a854dcd531b7df291a9243515
--- /dev/null
+++ b/docs-en/12-taos-sql/06-select.md
@@ -0,0 +1,449 @@
+---
+title: Select
+---
+
+## Syntax
+
+```SQL
+SELECT select_expr [, select_expr ...]
+ FROM {tb_name_list}
+ [WHERE where_condition]
+ [SESSION(ts_col, tol_val)]
+ [STATE_WINDOW(col)]
+ [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
+ [FILL(fill_mod_and_val)]
+ [GROUP BY col_list]
+ [ORDER BY col_list { DESC | ASC }]
+ [SLIMIT limit_val [SOFFSET offset_val]]
+ [LIMIT limit_val [OFFSET offset_val]]
+ [>> export_file];
+```
+
+## Wildcard
+
+Wildcard \* can be used to specify all columns. The result includes only data columns for normal tables.
+
+```
+taos> SELECT * FROM d1001;
+ ts | current | voltage | phase |
+======================================================================================
+ 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
+ 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
+ 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
+Query OK, 3 row(s) in set (0.001165s)
+```
+
+For a super table, the result includes both data columns and tag columns.
+
+```
+taos> SELECT * FROM meters;
+ ts | current | voltage | phase | location | groupid |
+=====================================================================================================================================
+ 2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | California.LosAngeles | 2 |
+ 2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | California.LosAngeles | 2 |
+ 2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | California.LosAngeles | 3 |
+ 2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | California.LosAngeles | 3 |
+ 2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | California.SanFrancisco | 3 |
+ 2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | California.SanFrancisco | 3 |
+ 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | California.SanFrancisco | 2 |
+ 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | California.SanFrancisco | 2 |
+ 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | California.SanFrancisco | 2 |
+Query OK, 9 row(s) in set (0.002022s)
+```
+
+Wildcard can be used with table name as prefix. Both SQL statements below have the same effect and return all columns.
+
+```SQL
+SELECT * FROM d1001;
+SELECT d1001.* FROM d1001;
+```
+
+In a JOIN query, however, the results are different with or without a table name prefix. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
+
+```
+taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
+ ts | current | voltage | phase | ts | current | voltage | phase |
+==================================================================================================================================
+ 2018-10-03 14:38:05.000 | 10.30000| 219 | 0.31000 | 2018-10-03 14:38:05.000 | 10.80000| 223 | 0.29000 |
+Query OK, 1 row(s) in set (0.017385s)
+```
+
+```
+taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
+ ts | current | voltage | phase |
+======================================================================================
+ 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
+Query OK, 1 row(s) in set (0.020443s)
+```
+
+Wildcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
+
+```
+taos> SELECT COUNT(*) FROM d1001;
+ count(*) |
+========================
+ 3 |
+Query OK, 1 row(s) in set (0.001035s)
+```
+
+```
+taos> SELECT FIRST(*) FROM d1001;
+ first(ts) | first(current) | first(voltage) | first(phase) |
+=========================================================================================
+ 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
+Query OK, 1 row(s) in set (0.000849s)
+```
+
+## Tags
+
+Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note, however, that the wildcard \* cannot be used to represent any tag column. This means that tag columns must be specified explicitly, like in the example below.
+
+```
+taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
+ location | groupid | current |
+======================================================================
+ California.SanFrancisco | 2 | 10.30000 |
+ California.SanFrancisco | 2 | 12.60000 |
+Query OK, 2 row(s) in set (0.003112s)
+```
+
+## Get distinct values
+
+`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table. It can also be used to get all the unique values of data columns from a table or subtable.
+
+```sql
+SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
+SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
+```
+
+:::info
+
+1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, the default value is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
+2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision errors in floating point numbers.
+3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.
+
+:::
+
+## Column Names of Result Set
+
+When using `SELECT`, the column names in the result set will be the same as those in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
+
+```
+taos> SELECT ts, ts AS primary_key_ts FROM d1001;
+ ts | primary_key_ts |
+====================================================
+ 2018-10-03 14:38:05.000 | 2018-10-03 14:38:05.000 |
+ 2018-10-03 14:38:15.000 | 2018-10-03 14:38:15.000 |
+ 2018-10-03 14:38:16.800 | 2018-10-03 14:38:16.800 |
+Query OK, 3 row(s) in set (0.001191s)
+```
+
+`AS` can't be used together with `first(*)`, `last(*)`, or `last_row(*)`.
+
+## Implicit Columns
+
+`Select_exprs` can be column names of a table, or function expressions or arithmetic expressions on columns. The maximum number of allowed column names and expressions is 256. Timestamp and the corresponding tag names will be returned in the result set if `interval` or `group by tags` are used, and the timestamp will always be the first column in the result set.
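+
+For example, in the hedged query below (assuming the `meters` super table from earlier examples) the window start timestamp is returned implicitly as the first column even though only `count(*)` is selected:
+
+```SQL
+SELECT COUNT(*) FROM meters INTERVAL(10s);
+```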
+
+## Table List
+
+`FROM` can be followed by a number of tables or super tables, or can be followed by a sub-query. If no database is specified as current database in use, table names must be preceded with database name, like `power.d1001`.
+
+```SQL
+SELECT * FROM power.d1001;
+```
+
+has the same effect as
+
+```SQL
+USE power;
+SELECT * FROM d1001;
+```
+
+## Special Query
+
+Some special query functions can be invoked without a `FROM` sub-clause. For example, the statement below can be used to get the current database in use.
+
+```
+taos> SELECT DATABASE();
+ database() |
+=================================
+ power |
+Query OK, 1 row(s) in set (0.000079s)
+```
+
+If no database is specified upon logging in and no database is specified with `USE` after login, NULL will be returned by `select database()`.
+
+```
+taos> SELECT DATABASE();
+ database() |
+=================================
+ NULL |
+Query OK, 1 row(s) in set (0.000184s)
+```
+
+The statement below can be used to get the version of client or server.
+
+```
+taos> SELECT CLIENT_VERSION();
+ client_version() |
+===================
+ 2.0.0.0 |
+Query OK, 1 row(s) in set (0.000070s)
+
+taos> SELECT SERVER_VERSION();
+ server_version() |
+===================
+ 2.0.0.0 |
+Query OK, 1 row(s) in set (0.000077s)
+```
+
+The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
+
+```
+taos> SELECT SERVER_STATUS();
+ server_status() |
+==================
+ 1 |
+Query OK, 1 row(s) in set (0.000074s)
+
+taos> SELECT SERVER_STATUS() AS status;
+ status |
+==============
+ 1 |
+Query OK, 1 row(s) in set (0.000081s)
+```
+
+## \_block_dist
+
+**Description**: Get the data block distribution of a table or STable.
+
+```SQL title="Syntax"
+SELECT _block_dist() FROM { tb_name | stb_name }
+```
+
+**Restrictions**: No argument is allowed; the WHERE clause is not allowed.
+
+**Sub Query**: Sub queries or nested queries are not supported.
+
+**Return value**: A string which includes the data block distribution of the specified table or STable, i.e. the histogram of rows stored in the data blocks of the table or STable.
+
+```text title="Result"
+summary:
+5th=[392], 10th=[392], 20th=[392], 30th=[392], 40th=[792], 50th=[792] 60th=[792], 70th=[792], 80th=[792], 90th=[792], 95th=[792], 99th=[792] Min=[392(Rows)] Max=[800(Rows)] Avg=[666(Rows)] Stddev=[2.17] Rows=[2000], Blocks=[3], Size=[5.440(Kb)] Comp=[0.23] RowsInMem=[0] SeekHeaderTime=[1(us)]
+```
+
+**More explanation about the above example**:
+
+- Histogram about the rows stored in the data blocks of the table or STable: the value of rows for 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, and 99%
+- Minimum number of rows stored in a data block, i.e. Min=[392(Rows)]
+- Maximum number of rows stored in a data block, i.e. Max=[800(Rows)]
+- Average number of rows stored in a data block, i.e. Avg=[666(Rows)]
+- stddev of number of rows, i.e. Stddev=[2.17]
+- Total number of rows, i.e. Rows=[2000]
+- Total number of data blocks, i.e. Blocks=[3]
+- Total disk size consumed, i.e. Size=[5.440(Kb)]
+- Compression ratio, which means the compressed size divided by original size, i.e. Comp=[0.23]
+- Total number of rows in memory, i.e. RowsInMem=[0], which means no rows in memory
+- The time spent on reading head file (to retrieve data block information), i.e. SeekHeaderTime=[1(us)], which means 1 microsecond.
+
+## Special Keywords in TAOS SQL
+
+- `TBNAME`: it is treated as a special tag when selecting on a super table, representing the name of subtables in that super table.
+- `_c0`: represents the first column of a table or super table.
+
+## Tips
+
+To get all the subtables and corresponding tag values from a super table:
+
+```SQL
+SELECT TBNAME, location FROM meters;
+```
+
+To get the number of sub tables in a super table:
+
+```SQL
+SELECT COUNT(TBNAME) FROM meters;
+```
+
+Only filters on `TAGS` are allowed in the `where` clause for the above two query statements. For example:
+
+```
+taos> SELECT TBNAME, location FROM meters;
+ tbname | location |
+==================================================================
+ d1004 | California.LosAngeles |
+ d1003 | California.LosAngeles |
+ d1002 | California.SanFrancisco |
+ d1001 | California.SanFrancisco |
+Query OK, 4 row(s) in set (0.000881s)
+
+taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
+ count(tbname) |
+========================
+ 2 |
+Query OK, 1 row(s) in set (0.001091s)
+```
+
+- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operations can be performed on columns of numeric types, and columns can be renamed in the result set.
+- Arithmetic operations on columns can't be used in the where clause. For example, `where a*2>6;` is not allowed but `where a>6/2;` can be used instead for the same purpose.
+- Arithmetic operations on columns can't be used as the object of a select function. For example, `select min(2*a) from t;` is not allowed but `select 2*min(a) from t;` can be used instead.
+- Logical operations can be used in the `WHERE` clause to filter numeric values; wildcards can be used to filter string values.
+- Result sets are arranged in ascending order of the first column, i.e. timestamp, but the output can be sorted in descending order of timestamp with `ORDER BY`. If `order by` is used on other columns, the result may not be as expected. Note that `\_c0` can be used to refer to the first column, i.e. the timestamp.
+- `LIMIT` parameter is used to control the number of rows to output. `OFFSET` parameter is used to specify from which row to output. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. A simple tip is that `LIMIT 5 OFFSET 2` can be abbreviated as `LIMIT 2, 5`.
+- What is controlled by `LIMIT` is the number of rows in each group when `GROUP BY` is used.
+- `SLIMIT` parameter is used to control the number of groups when `GROUP BY` is used. Similar to `LIMIT`, `SLIMIT 5 OFFSET 2` can be abbreviated as `SLIMIT 2, 5`.
+- ">>" can be used to output the result set of `select` statement to the specified file.
+
+## Where
+
+The operators in the table below can be used in the `where` clause to filter the resulting rows.
+
+| **Operation** | **Note** | **Applicable Data Types** |
+| ------------- | ------------------------ | ----------------------------------------- |
+| > | larger than | all types except bool |
+| < | smaller than | all types except bool |
+| >= | larger than or equal to | all types except bool |
+| <= | smaller than or equal to | all types except bool |
+| = | equal to | all types |
+| <\> | not equal to | all types |
+| is [not] null | is null or is not null | all types |
+| between and | within a certain range | all types except bool |
+| in | match any value in a set | all types except first column `timestamp` |
+| like | match a wildcard string | **`binary`** **`nchar`** |
+| match/nmatch | filter regex | **`binary`** **`nchar`** |
+
+**Explanations**:
+
+- Operator `<\>` is equivalent to `!=`; please note that this operator can't be used on the first column of any table, i.e. the timestamp column.
+- Operator `like` is used together with wildcards to match strings
+ - '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
+ - `\_` is used to match the \_ in the string.
+  - The maximum length of a wildcard string is 100 bytes from version 2.1.6.1 (before that the maximum length was 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. A very long wildcard string may slow down the execution performance of the `LIKE` operator.
+- The `AND` keyword can be used to filter multiple columns simultaneously. AND/OR operations can be performed on single or multiple columns from version 2.3.0.0. Before 2.3.0.0, however, `OR` couldn't be used on multiple columns.
+- For timestamp column, only one condition can be used; for other columns or tags, `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
+ - From version 2.3.0.0, multiple conditions can be used on timestamp column, but the result set can only contain single time range.
+- From version 2.0.17.0, operator `BETWEEN AND` can be used in where clause, for example `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25".
+- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating point precision errors. Only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
+- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`. The regular expression is case insensitive.
+
+## Regular Expression
+
+### Syntax
+
+```SQL
+WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
+```
+
+### Specification
+
+The regular expression being used must be compliant with POSIX specification, please refer to [Regular Expressions](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html).
+
+### Restrictions
+
+Regular expressions can be used only against table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.
+
+The maximum length of a regular expression string is 128 bytes. The configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression length. It's a configuration parameter on the client side and takes effect after the client is restarted.
+
+## JOIN
+
+From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, the inner join between table and table, between STable and STable, and between sub query and sub query are supported.
+
+Only the primary key, i.e. timestamp, can be used in the join operation between two tables. For example:
+
+```sql
+SELECT *
+FROM temp_tb_1 t1, pressure_tb_1 t2
+WHERE t1.ts = t2.ts
+```
+
+In the join operation between STable and STable, besides the primary key, i.e. timestamp, tags can also be used. For example:
+
+```sql
+SELECT *
+FROM temp_STable t1, temp_STable t2
+WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
+```
+
+Similarly, join operations can be performed on the result set of multiple sub queries.
+
+:::note
+Restrictions on join operation:
+
+- The number of tables or STables in a single join operation can't exceed 10.
+- `FILL` is not allowed in the query statement that includes JOIN operation.
+- Arithmetic operation is not allowed on the result set of join operation.
+- `GROUP BY` is not allowed on a part of tables that participate in join operation.
+- `OR` can't be used in the conditions for join operations.
+- Join operations can't be performed on data columns; they can only be performed on tags or the primary key, i.e. timestamp.
+
+:::
+
+## Nested Query
+
+Nested query is also called sub query. This means that in a single SQL statement the result of the inner query can be used as the data source of the outer query.
+
+From 2.2.0.0, unassociated sub queries can be used in the `FROM` clause. Unassociated means the sub query doesn't use the parameters of the parent query. More specifically, in the `tb_name_list` of a `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
+
+```SQL
+SELECT ... FROM (SELECT ... FROM ...) ...;
+```
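+
+A concrete sketch, assuming the `meters` super table used in earlier examples:
+
+```SQL
+SELECT AVG(voltage) FROM (SELECT * FROM meters WHERE groupId = 2);
+```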
+
+:::info
+
+- Only one layer of nesting is allowed; that means no sub query is allowed within a sub query.
+- The result set returned by the inner query will be used as a "virtual table" by the outer query. The "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
+- Sub query is not allowed in continuous query.
+- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
+- UNION operation is not allowed in either inner query or outer query.
+- The functions that can be used in the inner query are the same as those that can be used in a non-nested query.
+ - `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
+- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
+ - Functions
+ - If the result set returned by the inner query doesn't contain timestamp column, then functions relying on timestamp can't be used in the outer query, like `TOP`, `BOTTOM`, `FIRST`, `LAST`, `DIFF`.
+ - Functions that need to scan the data twice can't be used in the outer query, like `STDDEV`, `PERCENTILE`.
+ - `IN` operator is not allowed in the outer query but can be used in the inner query.
+ - `GROUP BY` is not supported in the outer query.
+
+:::
+
+## UNION ALL
+
+```SQL title=Syntax
+SELECT ...
+UNION ALL SELECT ...
+[UNION ALL SELECT ...]
+```
+
+`UNION ALL` operator can be used to combine the result sets from multiple select statements, as long as these result sets have exactly the same columns. `UNION ALL` doesn't remove redundant rows from the combined result sets. In a single SQL statement, at most 100 `UNION ALL` operators are supported.
+
+### Examples
+
+Table `tb1` is created using the SQL statement below:
+
+```SQL
+CREATE TABLE tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50));
+```
+
+The rows of the past hour in `tb1` can be selected using the SQL statement below:
+
+```SQL
+SELECT * FROM tb1 WHERE ts >= NOW - 1h;
+```
+
+The rows between 2018-06-01 08:00:00.000 and 2018-06-02 08:00:00.000 whose col3 ends with 'nny' can be selected in descending order of timestamp using the SQL statement below:
+
+```SQL
+SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC;
+```
+
+The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose col2 is bigger than 1.2 can be selected and renamed as "complex", while only 10 rows are output after the first 5 rows are skipped, with the SQL statement below:
+
+```SQL
+SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
+```
+
+The number of rows in the past 10 minutes whose col2 is bigger than 3.14 is selected and output to the result file `/home/testoutput.csv` with the SQL statement below:
+
+```SQL
+SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
+```
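+
+As a hedged `UNION ALL` sketch, assuming a second table `tb2` (hypothetical) with exactly the same schema as `tb1`, the rows of the past hour from both tables can be combined as below:
+
+```SQL
+SELECT * FROM tb1 WHERE ts >= NOW - 1h
+UNION ALL
+SELECT * FROM tb2 WHERE ts >= NOW - 1h;
+```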
diff --git a/docs-en/12-taos-sql/07-function.md b/docs-en/12-taos-sql/07-function.md
new file mode 100644
index 0000000000000000000000000000000000000000..129b7eb0c35b4409e8003855fb4facacb8e0c830
--- /dev/null
+++ b/docs-en/12-taos-sql/07-function.md
@@ -0,0 +1,1253 @@
+---
+title: Functions
+toc_max_heading_level: 4
+---
+
+## Single-Row Functions
+
+Single-Row functions return a result row for each row in the query result.
+
+### Numeric Functions
+
+#### ABS
+
+```sql
+SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The absolute value of a specific column.
+
+**Return value type**: UBIGINT if the input value is integer; DOUBLE if the input value is FLOAT/DOUBLE.
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable.
+
+**Applicable nested query**: Inner query and Outer query.
+
+**More explanations**:
+- Can't be used with aggregate functions.
+
+#### ACOS
+
+```sql
+SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The anti-cosine of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### ASIN
+
+```sql
+SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The anti-sine of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### ATAN
+
+```sql
+SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The anti-tangent of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### CEIL
+
+```
+SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The rounded up value of a specific column
+
+**Return value type**: Same as the column being used
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and outer query
+
+**More explanations**:
+- Arithmetic operation can be performed on the result of `ceil` function
+- Can't be used with aggregate functions
+
+#### COS
+
+```sql
+SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The cosine of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### FLOOR
+
+```
+SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The rounded down value of a specific column
+
+**More explanations**: The restrictions are the same as those of the `CEIL` function.
+
+#### LOG
+
+```sql
+SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The logarithm of a specific column with `base` as the base
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### POW
+
+```sql
+SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The power of a specific column with `power` as the exponent
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### ROUND
+
+```
+SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The rounded value of a specific column.
+
+**More explanations**: The restrictions are the same as those of the `CEIL` function.
+
+#### SIN
+
+```sql
+SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The sine of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### SQRT
+
+```sql
+SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The square root of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+#### TAN
+
+```sql
+SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The tangent of a specific column
+
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
+
+**Applicable data types**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+- Can't be used with aggregate functions
+
+### String Functions
+
+String functions take strings as input and return numbers or strings.
+
+#### CHAR_LENGTH
+
+```
+SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The length in number of characters of a string
+
+**Return value type**: Integer
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+#### CONCAT
+
+```sql
+SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The concatenation result of two or more strings, the number of strings to be concatenated is at least 2 and at most 8
+
+**Return value type**: If all input strings are VARCHAR type, the result is VARCHAR type too. If any one of input strings is NCHAR type, then the result is NCHAR.
+
+**Applicable data types**: VARCHAR, NCHAR. At least 2 input strings are required, and at most 8 input strings are allowed.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+#### CONCAT_WS
+
+```
+SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The concatenation result of two or more strings with separator, the number of strings to be concatenated is at least 3 and at most 9
+
+**Return value type**: If all input strings are VARCHAR type, the result is VARCHAR type too. If any one of input strings is NCHAR type, then the result is NCHAR.
+
+**Applicable data types**: VARCHAR, NCHAR. At least 3 input strings are required, and at most 9 input strings are allowed.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+
+- If the value of `separator` is NULL, the output is NULL. If the value of `separator` is not NULL but the other inputs are all NULL, the output is an empty string.
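+
+For example, a hedged sketch (the table name `d1001` is assumed from earlier examples; string constants are used as input):
+
+```
+SELECT CONCAT_WS('-', 'TDengine', 'SQL') FROM d1001 LIMIT 1;
+```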
+
+#### LENGTH
+
+```
+SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The length in bytes of a string
+
+**Return value type**: Integer
+
+**Applicable data types**: VARCHAR or NCHAR
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+#### LOWER
+
+```
+SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: Convert the input string to lower case
+
+**Return value type**: Same as input
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+#### LTRIM
+
+```
+SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: Remove the leading blanks of a string
+
+**Return value type**: Same as input
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+#### RTRIM
+
+```
+SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: Remove the trailing blanks of a string
+
+**Return value type**: Same as input
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+#### SUBSTR
+
+```
+SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The substring of the original string `str`, starting from `pos` with length `len`
+
+**Return value type**: Same as input
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+
+- If the input is NULL, the output is NULL
+- Parameter `pos` can be a positive or negative integer; if it's positive, the starting position is counted from the beginning of the string; if it's negative, the starting position is counted from the end of the string, as shown in the example below.
+- If `len` is not specified, it means from `pos` to the end.
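+
+For example, a hedged sketch that takes the last 9 characters of the assumed `location` tag column of `meters` (a negative `pos` counts from the end of the string):
+
+```
+SELECT SUBSTR(location, -9) FROM meters LIMIT 1;
+```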
+
+#### UPPER
+
+```
+SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: Convert the input string to upper case
+
+**Return value type**: Same as input
+
+**Applicable data types**: VARCHAR or NCHAR
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**
+
+- If the input value is NULL, the output is NULL too
+
+### Conversion Functions
+
+This kind of functions convert from one data type to another one.
+
+#### CAST
+
+```sql
+SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: It's used for type casting. The input parameter `expression` can be data columns, constants, scalar functions or arithmetic between them.
+
+**Return value type**: The type specified by parameter `type_name`
+
+**Applicable data types**:
+
+- Parameter `expression` can be any data type except for JSON
+- The output data type specified by `type_name` can only be one of BIGINT/VARCHAR(N)/TIMESTAMP/NCHAR(N)/BIGINT UNSIGNED
+
+**More explanations**:
+
+- Error will be reported for unsupported type casting
+- NULL will be returned if the input value is NULL
+- Some values of some supported data types may fail to be cast correctly; below are known issues:
+ 1) When casting VARCHAR/NCHAR to BIGINT/BIGINT UNSIGNED, some characters may be treated as illegal, for example "a" may be converted to 0.
+ 2) There may be overflow when casting a signed integer or TIMESTAMP to unsigned BIGINT.
+ 3) There may be overflow when casting unsigned BIGINT to BIGINT.
+ 4) There may be overflow when casting FLOAT/DOUBLE to BIGINT or UNSIGNED BIGINT.
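+
+For example, hedged sketches (the `meters` schema from earlier examples is assumed) casting a numeric column to a string and a string constant to BIGINT:
+
+```sql
+SELECT CAST(voltage AS VARCHAR(10)) FROM meters LIMIT 1;
+SELECT CAST('123' AS BIGINT) FROM meters LIMIT 1;
+```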
+
+#### TO_ISO8601
+
+```sql
+SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The ISO8601 date/time format converted from a UNIX timestamp, plus the timezone of the client side system
+
+**Return value type**: VARCHAR
+
+**Applicable column types**: TIMESTAMP, constant or a column
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- If the input is a UNIX timestamp constant, the precision of the returned value is determined by the digits of the input timestamp.
+- If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use.
+
+#### TO_JSON
+
+```sql
+SELECT TO_JSON(str_literal) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: Convert a JSON string to a JSON body.
+
+**Return value type**: JSON
+
+**Applicable column types**: JSON string, in a format like '{ "literal" : literal }'. '{}' represents a NULL value. Keys in the string must be string constants; values can be constants of numeric types, bool, string or NULL. Escape characters are not allowed in the JSON string.
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query.
+
+#### TO_UNIXTIMESTAMP
+
+```sql
+SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: UNIX timestamp converted from a string of date/time format
+
+**Return value type**: Long integer
+
+**Applicable column types**: Constant or column of VARCHAR/NCHAR
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- The input string must be compatible with the ISO8601/RFC3339 standard; 0 will be returned if the string can't be converted.
+- The precision of the returned timestamp is the same as the precision set for the current database in use.
+
+### DateTime Functions
+
+These functions operate on timestamp data. NOW(), TODAY() and TIMEZONE() are executed only once even if they occur multiple times in a single SQL statement.
+
+#### NOW
+
+```sql
+SELECT NOW() FROM { tb_name | stb_name } [WHERE clause];
+SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operator NOW();
+INSERT INTO tb_name VALUES (NOW(), ...);
+```
+
+**Description**: The current time of the client side system
+
+**Return value type**: TIMESTAMP
+
+**Applicable column types**: TIMESTAMP only
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- Addition and subtraction can be performed, for example NOW() + 1s. The time unit can be:
+  b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), w(week)
+- The precision of the returned timestamp is the same as the precision set for the current database in use
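+
+For example, a hedged sketch selecting the rows of the past hour from the assumed table `d1001` using timestamp arithmetic:
+
+```sql
+SELECT * FROM d1001 WHERE ts >= NOW() - 1h;
+```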
+
+#### TIMEDIFF
+
+```sql
+SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The difference between two timestamps, rounded to the time unit specified by `time_unit`
+
+**Return value type**: Long Integer
+
+**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of TIMESTAMP type
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- The time unit specified by `time_unit` can be:
+  1u(microsecond), 1a(millisecond), 1s(second), 1m(minute), 1h(hour), 1d(day).
+- The precision of the returned value is the same as the precision set for the current database in use
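+
+For example, a hedged sketch computing the difference between two datetime strings in seconds (the table `d1001` is assumed only to satisfy the FROM clause):
+
+```sql
+SELECT TIMEDIFF('2022-04-28 00:00:00', '2022-04-28 00:01:30', 1s) FROM d1001 LIMIT 1;
+```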
+
+#### TIMETRUNCATE
+
+```sql
+SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: Truncate the input timestamp with unit specified by `time_unit`
+
+**Return value type**: TIMESTAMP
+
+**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of timestamp
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- The time unit specified by `time_unit` can be:
+  1u(microsecond), 1a(millisecond), 1s(second), 1m(minute), 1h(hour), 1d(day).
+- The precision of the returned timestamp is the same as the precision set for the current database in use
+
+#### TIMEZONE
+
+```sql
+SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The timezone of the client side system
+
+**Return value type**: VARCHAR
+
+**Applicable column types**: None
+
+**Applicable table types**: table, STable
+
+#### TODAY
+
+```sql
+SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause];
+SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operator TODAY();
+INSERT INTO tb_name VALUES (TODAY(), ...);
+```
+
+**Description**: The timestamp of 00:00:00 of the client side system
+
+**Return value type**: TIMESTAMP
+
+**Applicable column types**: TIMESTAMP only
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- Addition and subtraction operations can be performed, for example TODAY() + 1s. The time unit can be:
+  b (nanosecond), u (microsecond), a (millisecond), s (second), m (minute), h (hour), d (day), w (week)
+- The precision of the returned timestamp is the same as the precision set for the current database in use
+
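+A usage sketch that selects the rows collected since midnight of the current day (assuming the `meters` example table):
+
+```sql
+SELECT * FROM meters WHERE ts >= TODAY() AND ts < TODAY() + 1d;
+```
+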
+## Aggregate Functions
+
+Aggregate functions return a single result row for each group in the query result set. Groups are determined by the `GROUP BY` clause or the time window clause if either is used; otherwise the whole result set is treated as a single group.
+
+### AVG
+
+```
+SELECT AVG(field_name) FROM tb_name [WHERE clause];
+```
+
+**Description**: Get the average value of a column in a table or STable
+
+**Return value type**: Double precision floating number
+
+**Applicable column types**: Numeric type
+
+**Applicable table types**: table, STable
+
+### COUNT
+
+```
+SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
+```
+
+**Description**: Get the number of rows or the number of non-null values in a table or a super table.
+
+**Return value type**: Long integer INT64
+
+**Applicable column types**: All
+
+**Applicable table types**: table, super table, sub table
+
+**More explanation**:
+
+- Wildcard (\*) is used to represent all columns. The `COUNT` function is used to get the total number of all rows.
+- The number of non-NULL values will be returned if this function is used on a specific column.
+
+### ELAPSED
+
+```mysql
+SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]];
+```
+
+**Description**: The `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with an `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without an `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Note that the return value of `elapsed` is the number of `time_unit` durations contained in the calculated time length.
+
+**Return value type**: Double
+
+**Applicable column type**: Timestamp
+
+**Applicable tables**: table, STable, outer query in nested query
+
+**Explanations**:
+
+- `field_name` parameter can only be the first column of a table, i.e. timestamp primary key.
+- The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default time unit.
+- It can be used with `INTERVAL` to get the valid time length of each time window. Note that the return value is the same as the time window length for all time windows except for the first and the last one.
+- `order by asc/desc` has no effect on the result.
+- `group by tbname` must be used together when `elapsed` is used against a STable.
+- `group by` must NOT be used together when `elapsed` is used against a table or sub table.
+- When used in nested query, it's only applicable when the inner query outputs an implicit timestamp column as the primary key. For example, `select elapsed(ts) from (select diff(value) from sub1)` is legal usage while `select elapsed(ts) from (select * from sub1)` is not.
+- It can't be used with `leastsquares`, `diff`, `derivative`, `top`, `bottom`, `last_row`, `interp`.
+
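+A usage sketch (assuming the subtable `d1001`; the time range is illustrative) that returns the number of seconds covered by valid data in each one-hour window:
+
+```sql
+SELECT ELAPSED(ts, 1s) FROM d1001
+  WHERE ts >= '2022-01-01 00:00:00' AND ts < '2022-01-02 00:00:00'
+  INTERVAL(1h);
+```
+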
+### LEASTSQUARES
+
+```
+SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause];
+```
+
+**Description**: The linear regression function of the specified column and the timestamp column (primary key), `start_val` is the initial value and `step_val` is the step value.
+
+**Return value type**: A string in the format of "(slope, intercept)"
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table only
+
+### MODE
+
+```
+SELECT MODE(field_name) FROM tb_name [WHERE clause];
+```
+
+**Description**: The value which has the highest frequency of occurrence. NULL is returned if there are multiple values with the highest frequency of occurrence. It can't be used on the timestamp column.
+
+**Return value type**: Same as the data type of the column being operated upon
+
+**Applicable column types**: Data types except for timestamp
+
+**More explanations**: Since the number of returned results is unpredictable, it's suggested to limit the number of unique values to 100,000; otherwise an error will be returned.
+
+### SPREAD
+
+```
+SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The difference between the max and the min of a specific column
+
+**Return value type**: Double precision floating point
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**: Can be used on a column of TIMESTAMP type, the result is the time range size.
+
+### STDDEV
+
+```
+SELECT STDDEV(field_name) FROM tb_name [WHERE clause];
+```
+
+**Description**: Standard deviation of a specific column in a table or STable
+
+**Return value type**: Double precision floating number
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+### SUM
+
+```
+SELECT SUM(field_name) FROM tb_name [WHERE clause];
+```
+
+**Description**: The sum of a specific column in a table or STable
+
+**Return value type**: Double precision floating number or long integer
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+### HYPERLOGLOG
+
+```
+SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The cardinality of a specific column is returned using the hyperloglog algorithm.
+
+**Return value type**: Integer
+
+**Applicable column types**: Any data type
+
+**More explanations**: The benefit of using the hyperloglog algorithm is that the memory usage is under control when the data volume is huge. However, when the data volume is very small the result may not be accurate; it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case.
+
+### HISTOGRAM
+
+```
+SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_name [WHERE clause];
+```
+
+**Description**: Returns the count of data points in user-specified ranges.
+
+**Return value type**: Double or INT64, depending on the normalized parameter setting.
+
+**Applicable column type**: Numeric types.
+
+**Applicable table types**: table, STable
+
+**Explanations**:
+
+1. bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin".
+2. bin_description: parameter to describe how to generate buckets, can be in the following JSON formats for each bin_type respectively:
+
+   - "user_input": "[1, 3, 5, 7]": User specified bin values.
+
+   - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
+     "start" - bin starting point.
+     "width" - bin offset.
+     "count" - number of bins generated.
+     "infinity" - whether to add (-inf, inf) as start/end points in the generated set of bins.
+     The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].
+
+   - "log_bin": "{"start": 1.0, "factor": 2.0, "count": 5, "infinity": true}"
+     "start" - bin starting point.
+     "factor" - exponential factor of bin offset.
+     "count" - number of bins generated.
+     "infinity" - whether to add (-inf, inf) as start/end points in the generated range of bins.
+     The above "log_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
+
+3. normalized: set to 1/0 to turn on/off result normalization.
+
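+A usage sketch (assuming the `meters` example table; the bin parameters are illustrative, and the bin descriptor is passed as a string):
+
+```sql
+SELECT HISTOGRAM(voltage, "user_input", "[200, 220, 240]", 0) FROM meters;
+SELECT HISTOGRAM(voltage, "linear_bin", '{"start": 200.0, "width": 10.0, "count": 4, "infinity": false}', 0) FROM meters;
+```
+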
+## Selector Functions
+
+Selector functions choose one or more rows in the query result set to return according to the semantics. You can specify to output the ts column and other columns including tbname and tags, so that you can easily know which rows the selected values belong to.
+
+### APERCENTILE
+
+```
+SELECT APERCENTILE(field_name, P[, algo_type])
+FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: Similar to `PERCENTILE`, but a simulated result is returned
+
+**Return value type**: Double precision floating point
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**
+
+- _P_ is in the range [0,100]; when _P_ is 0 the result is the same as the function MIN, and when _P_ is 100 the result is the same as the function MAX.
+- **algo_type** can only be `default` or `t-digest`. If it's not specified, `default` is used, i.e. `apercentile(column_name, 50)` is the same as `apercentile(column_name, 50, "default")`.
+- When `t-digest` is specified, t-digest sampling is used to calculate the approximate result.
+
+**Nested query**: It can be used in both the outer query and inner query in a nested query.
+
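+A usage sketch computing the approximate median of `current` with both algorithms (assuming the `meters` example table):
+
+```sql
+SELECT APERCENTILE(current, 50) FROM meters;
+SELECT APERCENTILE(current, 50, "t-digest") FROM meters;
+```
+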
+### BOTTOM
+
+```
+SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The least _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column and counting all of them would exceed the upper limit _k_, then a subset of them is returned randomly.
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- _k_ must be in range [1,100]
+- The timestamps associated with the selected values are returned too
+- Can't be used with `FILL`
+
+### FIRST
+
+```
+SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The first non-null value of a specific column in a table or STable
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Any data type
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- FIRST(\*) can be used to get the first non-null value of all columns
+- NULL will be returned if all the values of the specified column are NULL
+- A result will NOT be returned if all the columns in the result set are NULL
+
+### INTERP
+
+```
+SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})];
+```
+
+**Description**: The value that matches the specified time slice is returned, if one exists; otherwise an interpolation value is returned.
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Numeric data types
+
+**Applicable table types**: table, STable, nested query
+
+**More explanations**
+
+- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter.
+- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
+- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. If `RANGE` is not specified, then the timestamp of the first row that matches the filter condition is treated as timestamp1, the timestamp of the last row that matches the filter condition is treated as timestamp2.
+- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. If the `EVERY` parameter is not used, the time window is considered to have no ending timestamp, i.e. there is only one time window starting from timestamp1.
+- Interpolation is performed based on `FILL` parameter. No interpolation is performed if `FILL` is not used, that means either the original data that matches is returned or nothing is returned.
+- `INTERP` can only be used to interpolate in single timeline. So it must be used with `group by tbname` when it's used on a STable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query.
+- The result of `INTERP` is not influenced by `ORDER BY TIMESTAMP`, which impacts the output order only.
+
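+A usage sketch (assuming the subtable `d1001`; the time range is illustrative) that produces one linearly interpolated value of `current` every 10 minutes:
+
+```sql
+SELECT INTERP(current) FROM d1001
+  RANGE('2022-01-01 00:00:00', '2022-01-01 01:00:00')
+  EVERY(10m)
+  FILL(LINEAR);
+```
+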
+### LAST
+
+```
+SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The last non-NULL value of a specific column in a table or STable
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Any data type
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- LAST(\*) can be used to get the last non-NULL value of all columns
+- If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are NULL, no result will be returned.
+- When it's used on a STable, if there are multiple values with the same and largest timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times.
+
+### LAST_ROW
+
+```
+SELECT LAST_ROW(field_name) FROM { tb_name | stb_name };
+```
+
+**Description**: The last row of a table or STable
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Any data type
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- When it's used against a STable, multiple rows with the same and largest timestamp may exist, in this case one of them is returned randomly and it's not guaranteed that the result is same if the query is run multiple times.
+- Can't be used with `INTERVAL`.
+
+### MAX
+
+```
+SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The maximum value of a specific column of a table or STable
+
+**Return value type**: Same as the data type of the column being operated upon
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+### MIN
+
+```
+SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
+```
+
+**Description**: The minimum value of a specific column in a table or STable
+
+**Return value type**: Same as the data type of the column being operated upon
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+### PERCENTILE
+
+```
+SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause];
+```
+
+**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned.
+
+**Return value type**: Double precision floating point
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table
+
+**More explanations**: _P_ is in the range [0,100]; when _P_ is 0 the result is the same as the function MIN, and when _P_ is 100 the result is the same as the function MAX.
+
+### TAIL
+
+```
+SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause];
+```
+
+**Description**: The next _k_ rows are returned after skipping the last `offset_val` rows; NULL values are not ignored. `offset_val` is an optional parameter. When it's not specified, the last _k_ rows are returned. When `offset_val` is used, the effect is the same as `order by ts desc LIMIT k OFFSET offset_val`.
+
+**Parameter value range**: k: [1,100] offset_val: [0,100]
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Any data type except for timestamp, i.e. the primary key
+
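+A usage sketch returning 3 rows after skipping the last 2 (assuming the subtable `d1001`):
+
+```sql
+SELECT TAIL(current, 3, 2) FROM d1001;
+```
+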
+### TOP
+
+```
+SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The greatest _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column and counting all of them would exceed the upper limit _k_, then a subset of them is returned randomly.
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- _k_ must be in range [1,100]
+- The timestamps associated with the selected values are returned too
+- Can't be used with `FILL`
+
+### UNIQUE
+
+```
+SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause];
+```
+
+**Description**: The values that occur for the first time in the specified column. The effect is similar to the `distinct` keyword, but it can also be used to match tags or timestamps.
+
+**Return value type**: Same as the column or tag being operated upon
+
+**Applicable column types**: Any data types except for timestamp
+
+**More explanations**:
+
+- It can be used against table or STable, but can't be used together with time window, like `interval`, `state_window` or `session_window` .
+- Since the number of result rows is unpredictable, it's suggested to limit the number of distinct values to 100,000 to control the memory usage; otherwise an error will be returned.
+
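+A usage sketch (assuming the subtable `d1001`):
+
+```sql
+SELECT UNIQUE(voltage) FROM d1001;
+```
+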
+## Time-Series Specific Functions
+
+TDengine provides a set of time-series specific functions to better meet the requirements of querying time-series data. In general databases, similar functionalities can only be achieved with much more complex syntax and much worse performance. TDengine provides these functionalities as built-in functions so that the burden on the user side is minimized.
+
+### CSUM
+
+```sql
+ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The cumulative sum of each row for a specific column. The number of output rows is same as that of the input rows.
+
+**Return value type**: Long integer for integers; Double for floating points. Timestamp is returned for each row.
+
+**Applicable data types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+
+- Arithmetic operations can't be performed on the result of the `csum` function
+- Can only be used with aggregate functions
+- `Group by tbname` must be used together on a STable to force the result onto a single timeline
+
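+A usage sketch (assuming the subtable `d1001` and the `meters` example STable):
+
+```sql
+SELECT CSUM(current) FROM d1001;
+SELECT CSUM(current) FROM meters GROUP BY tbname;
+```
+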
+### DERIVATIVE
+
+```
+SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause];
+```
+
+**Description**: The derivative of a specific column. The time range can be specified by the parameter `time_interval`, the minimum allowed time range is 1 second (1s); the value of `ignore_negative` can be 0 or 1, 1 means negative values are ignored.
+
+**Return value type**: Double precision floating point
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- The number of result rows is the number of total rows in the time range subtracted by one, no output for the first row.
+- It can be used together with `GROUP BY tbname` against a STable.
+
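+A usage sketch computing the rate of change of `current` per 10 seconds, ignoring negative values (assuming the subtable `d1001`):
+
+```sql
+SELECT DERIVATIVE(current, 10s, 1) FROM d1001;
+```
+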
+### DIFF
+
+```sql
+SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause];
+```
+
+**Description**: The difference of each row from its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored.
+
+**Return value type**: Same as the column being operated upon
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- The number of result rows is the number of rows subtracted by one, no output for the first row
+- It can be used on STable with `GROUP by tbname`
+
+### IRATE
+
+```
+SELECT IRATE(field_name) FROM tb_name WHERE clause;
+```
+
+**Description**: The instantaneous rate of a specific column. The last two samples in the specified time range are used to calculate the instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values.
+
+**Return value type**: Double precision floating number
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- It can be used on a STable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable.
+
+### MAVG
+
+```sql
+ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: The moving average of continuous _k_ values of a specific column. If the number of input rows is less than _k_, nothing is returned. The applicable range of _k_ is [1,1000].
+
+**Return value type**: Double precision floating point
+
+**Applicable data types**: Numeric types
+
+**Applicable nested query**: Inner query and Outer query
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- Arithmetic operations can't be performed on the result of `MAVG`.
+- Can't be used with aggregate functions.
+- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline.
+
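+A usage sketch computing a 10-point moving average (assuming the subtable `d1001`):
+
+```sql
+SELECT MAVG(current, 10) FROM d1001;
+```
+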
+### SAMPLE
+
+```sql
+ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
+```
+
+**Description**: _k_ sampling values of a specific column. The applicable range of _k_ is [1,10000]
+
+**Return value type**: Same as the column being operated plus the associated timestamp
+
+**Applicable data types**: Any data type except for tags of STable
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Inner query and Outer query
+
+**More explanations**:
+
+- Arithmetic operations can't be performed on the result of the `SAMPLE` function
+- Must be used with `Group by tbname` when it's used on a STable to force the result onto each single timeline
+
+### STATECOUNT
+
+```
+SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The number of continuous rows satisfying the specified conditions for a specific column. The result is shown as an extra column for each row. If the specified condition is evaluated as true, the number is increased by 1; otherwise the number is reset to -1. If the input value is NULL, then the corresponding row is skipped.
+
+**Applicable parameter values**:
+
+- oper : Can be one of LT (lower than), GT (greater than), LE (lower than or equal to), GE (greater than or equal to), NE (not equal to), EQ (equal to); the value is case insensitive
+- val : Numeric types
+
+**Return value type**: Integer
+
+**Applicable data types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Outer query only
+
+**More explanations**:
+
+- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
+- Can't be used with window operation, like interval/state_window/session_window
+
+### STATEDURATION
+
+```
+SELECT STATEDURATION(field_name, oper, val, unit) FROM { tb_name | stb_name } [WHERE clause];
+```
+
+**Description**: The length of time range in which all rows satisfy the specified condition for a specific column. The result is shown as an extra column for each row. The length for the first row that satisfies the condition is 0. Next, if the condition is evaluated as true for a row, the time interval between current row and its previous row is added up to the time range; otherwise the time range length is reset to -1. If the value of the column is NULL, the corresponding row is skipped.
+
+**Applicable parameter values**:
+
+- oper : Can be one of LT (lower than), GT (greater than), LE (lower than or equal to), GE (greater than or equal to), NE (not equal to), EQ (equal to); the value is case insensitive
+- val : Numeric types
+- unit: The unit of time interval, can be [1s, 1m, 1h], default is 1s
+
+**Return value type**: Integer
+
+**Applicable data types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**Applicable nested query**: Outer query only
+
+**More explanations**:
+
+- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
+- Can't be used with window operation, like interval/state_window/session_window
+
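+A usage sketch for the two state functions (assuming the subtable `d1001`; the 220 V threshold is illustrative):
+
+```sql
+SELECT STATECOUNT(voltage, "GE", 220) FROM d1001;
+SELECT STATEDURATION(voltage, "GE", 220, 1s) FROM d1001;
+```
+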
+### TWA
+
+```
+SELECT TWA(field_name) FROM tb_name WHERE clause;
+```
+
+**Description**: Time weighted average on a specific column within a time range
+
+**Return value type**: Double precision floating number
+
+**Applicable column types**: Numeric types
+
+**Applicable table types**: table, STable
+
+**More explanations**:
+
+- It can be used on a STable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable.
+
+## System Information Functions
+
+### DATABASE
+
+```
+SELECT DATABASE();
+```
+
+**Description**: Returns the current database being used. If the user didn't specify a database when logging in and hasn't used the `USE` command to switch the database, this function returns NULL.
+
+### CLIENT_VERSION
+
+```
+SELECT CLIENT_VERSION();
+```
+
+**Description**: Returns the client version.
+
+### SERVER_VERSION
+
+```
+SELECT SERVER_VERSION();
+```
+
+**Description**: Returns the server version.
+
+### SERVER_STATUS
+
+```
+SELECT SERVER_STATUS();
+```
+
+**Description**: Returns the server's status.
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
new file mode 100644
index 0000000000000000000000000000000000000000..acfb0de0e1521fd8c6a068497a3df7a17941524c
--- /dev/null
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -0,0 +1,113 @@
+---
+sidebar_label: Interval
+title: Aggregate by Time Window
+---
+
+Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window.
+Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window.
+
+## Time Window
+
+The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
+
+
+
+`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
+
+```
+SELECT * FROM temp_tb_1 INTERVAL(1m);
+```
+
+The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`.
+
+```
+SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
+```
+
+When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. Since version 2.1.5.0, the minimum time range by `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
+
+## Status Window
+
+In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
+
+
+
+`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
+
+```
+SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
+```
+
+## Session Window
+
+```sql
+SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
+```
+
+The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
+
+
+
+If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
+
+## More On Window Aggregate
+
+### Syntax
+
+The full syntax of aggregate by window is as follows:
+
+```sql
+SELECT function_list FROM tb_name
+ [WHERE where_condition]
+ [SESSION(ts_col, tol_val)]
+ [STATE_WINDOW(col)]
+ [INTERVAL(interval [, offset]) [SLIDING sliding]]
+ [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
+
+SELECT function_list FROM stb_name
+ [WHERE where_condition]
+ [INTERVAL(interval [, offset]) [SLIDING sliding]]
+ [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
+ [GROUP BY tags]
+```
+
+### Restrictions
+
+- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used.
+- `LAST_ROW` can't be used together with window aggregate.
+- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
+- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
+- `FILL` clause is used to specify how to fill when there is data missing in any window, including:
+ 1. NONE: No fill (the default fill mode)
+  2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`
+  3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
+  4. NULL: Fill with NULL, `FILL(NULL)`
+  5. LINEAR: Fill by linear interpolation based on the nearest non-NULL values, `FILL(LINEAR)`
+  6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
+
+:::info
+
+1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
+2. The result set is in ascending order of timestamp when you aggregate by time window.
+3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
+
+:::
+
+Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query).
+
+## Examples
+
+A table of intelligent meters can be created by the SQL statement below:
+
+```sql
+CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
+```
+
+The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values.
+
+```
+SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
+ WHERE ts>=NOW-1d and ts<=now
+ INTERVAL(10m)
+ FILL(PREV);
+```
diff --git a/docs-en/12-taos-sql/09-limit.md b/docs-en/12-taos-sql/09-limit.md
new file mode 100644
index 0000000000000000000000000000000000000000..db55cdd69e7bd29ca66ee15b61f28991568d9556
--- /dev/null
+++ b/docs-en/12-taos-sql/09-limit.md
@@ -0,0 +1,77 @@
+---
+title: Limits & Restrictions
+---
+
+## Naming Rules
+
+1. Only characters from the English alphabet, digits and underscore are allowed
+2. Names cannot start with a digit
+3. Case insensitive without escape character "\`"
+4. Identifier with escape character "\`"
+ To support more flexible table or column names, a new escape character "\`" is introduced. For more details please refer to [escape](/taos-sql/escape).
+
+## Password Rule
+
+The legal character set is `[a-zA-Z0-9!?$%^&*()_-+={[}]:;@~#|<,>.?/]`.
+
+## General Limits
+
+- Maximum length of database name is 32 bytes.
+- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator.
+- Maximum length of each data row is 48K bytes since version 2.1.7.0, before which the limit was 16K bytes. Please note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
+- Maximum length of column name is 64.
+- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
+- Maximum length of tag name is 64.
+- Maximum number of tags is 128. There must be at least 1 tag. The total length of tag values should not exceed 16K bytes.
+- Maximum length of a single SQL statement is 1048576, i.e. 1 MB. It can be configured in the parameter `maxSQLLength` on the client side; the applicable range is [65480, 1048576].
+- At most 4096 columns (or 1024 prior to 2.1.7.0) can be returned by `SELECT`. Functions in the query statement constitute columns. An error is returned if the limit is exceeded.
+- Maximum numbers of databases, STables, tables are dependent only on the system resources.
+- Maximum length of database name is 32 bytes, and it can't include "." or special characters.
+- Maximum number of replicas for a database is 3.
+- Maximum length of user name is 23 bytes.
+- Maximum length of password is 15 bytes.
+- Maximum number of rows depends only on the storage space.
+- Maximum number of tables depends only on the number of nodes.
+- Maximum number of databases depends only on the number of nodes.
+- Maximum number of vnodes for a single database is 64.
+
+## Restrictions of `GROUP BY`
+
+`GROUP BY` can be performed on tags and `TBNAME`. It can be performed on data columns too, with the only restriction being it can only be performed on one data column and the number of unique values in that column is lower than 100,000. Please note that `GROUP BY` cannot be performed on float or double types.
+
+## Restrictions of `IS NOT NULL`
+
+`IS NOT NULL` can be used on any type of column. The non-empty string evaluation expression, i.e. `<> ""`, can only be used on non-numeric data types.
+
+## Restrictions of `ORDER BY`
+
+- Only one `order by` is allowed for normal table and subtable.
+- At most two `order by` are allowed for STable, and the second one must be `ts`.
+- `order by tag` must be used with `group by tag` on same tag. This rule is also applicable to `tbname`.
+- `order by column` must be used with `group by column` or `top/bottom` on same column. This rule is applicable to table and STable.
+- `order by ts` is applicable to table and STable.
+- If `order by ts` is used with `group by`, the result set is sorted using `ts` in each group.
+
+## Restrictions of Table/Column Names
+
+### Name Restrictions of Table/Column
+
+The name of a table or column can only be composed of ASCII characters, digits and underscore and it cannot start with a digit. The maximum length is 192 bytes. Names are case insensitive. The name mentioned in this rule doesn't include the database name prefix and the separator.
+
+### Name Restrictions After Escaping
+
+To support more flexible table or column names, new escape character "\`" is introduced in TDengine to avoid the conflict between table name and keywords and break the above restrictions for table names. The escape character is not counted in the length of table name.
+
+With escaping, the string inside escape characters is case sensitive, i.e. it will not be converted to lower case internally.
+
+For example:
+\`aBc\` and \`abc\` are different table or column names, but "abc" and "aBc" are the same name because internally they are both converted to "abc".
+
+:::note
+The characters inside escape characters must be printable characters.
+
+:::
+
+### Applicable Versions
+
+Escape character "\`" is available from version 2.3.0.1.
diff --git a/docs-en/12-taos-sql/10-json.md b/docs-en/12-taos-sql/10-json.md
new file mode 100644
index 0000000000000000000000000000000000000000..7460a5e0ba3ce78ee7744569cda460c477cac19c
--- /dev/null
+++ b/docs-en/12-taos-sql/10-json.md
@@ -0,0 +1,82 @@
+---
+title: JSON Type
+---
+
+## Syntax
+
+1. Tag of type JSON
+
+ ```sql
+ create STable s1 (ts timestamp, v1 int) tags (info json);
+
+ create table s1_1 using s1 tags ('{"k1": "v1"}');
+ ```
+
+2. "->" Operator of JSON
+
+ ```sql
+ select * from s1 where info->'k1' = 'v1';
+
+ select info->'k1' from s1;
+ ```
+
+3. "contains" Operator of JSON
+
+ ```sql
+ select * from s1 where info contains 'k2';
+
+ select * from s1 where info contains 'k1';
+ ```
+
+## Applicable Operations
+
+1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is not null` can be used but `in` can't be used.
+
+ ```sql
+ select * from s1 where info->'k1' match 'v*';
+
+ select * from s1 where info->'k1' like 'v%' and info contains 'k2';
+
+ select * from s1 where info is null;
+
+ select * from s1 where info->'k1' is not null;
+ ```
+
+2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and sub query; for example `group by json->'key'`
+
+3. `Distinct` can be used with a tag of type JSON
+
+ ```sql
+ select distinct info->'k1' from s1;
+ ```
+
+4. Tag Operations
+
+   The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this.
+
+ The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
+
+## Other Restrictions
+
+- JSON type can only be used for a tag. There can be only one tag of JSON type, and it can't be used together with tags of any other type.
+
+- The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes.
+
+- JSON format:
+
+  - The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be a non-string type such as bool or array.
+  - The object can be {}, and the entire JSON is empty if so. The key can be "", and it's ignored if so.
+ - value can be int, double, string, bool or NULL, and it can't be an array. Nesting is not allowed which means that the value of a key can't be JSON.
+ - If one key occurs twice in JSON, only the first one is valid.
+ - Escape characters are not allowed in JSON.
+
+- NULL is returned when querying a key that doesn't exist in JSON.
+
+- If a tag of JSON is the result of inner query, it can't be parsed and queried in the outer query.
+
+For example, the SQL statements below are not supported.
+
+```sql
+select jtag->'key' from (select jtag from STable);
+select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
+```
diff --git a/docs-en/12-taos-sql/11-escape.md b/docs-en/12-taos-sql/11-escape.md
new file mode 100644
index 0000000000000000000000000000000000000000..34ce9f7848a9d60811a23286a6675e8afa4f04fe
--- /dev/null
+++ b/docs-en/12-taos-sql/11-escape.md
@@ -0,0 +1,30 @@
+---
+title: Escape Characters
+---
+
+The table below lists the escape characters used in TDengine.
+
+| Escape Character | **Actual Meaning** |
+| :--------------: | ------------------------ |
+| `\'` | Single quote ' |
+| `\"` | Double quote " |
+| \n | Line Break |
+| \r | Carriage Return |
+| \t | tab |
+| `\\` | Back Slash \ |
+| `\%` | % see below for details |
+| `\_` | \_ see below for details |
+
+:::note
+Escape characters are available from version 2.4.0.4 .
+
+:::
+
+## Restrictions
+
+1. If there are escape characters in identifiers (database name, table name, column name)
+  - Identifier not quoted with "\`": an error will be returned, because an identifier must consist of digits, ASCII letters or underscore and can't start with a digit
+  - Identifier quoted with "\`": the original content is kept, no escaping
+2. If there are escape characters in values
+  - The escape characters will be escaped as described in the table above. If an escape character doesn't match any supported one, the escape character "\" will be ignored.
+  - "%" and "\_" are used as wildcards in `like`. `\%` and `\_` should be used to represent a literal "%" and "\_" in `like`. If `\%` and `\_` are used outside of the `like` context, the evaluation result is "`\%`" and "`\_`", instead of "%" and "\_".
diff --git a/docs-en/12-taos-sql/12-keywords.md b/docs-en/12-taos-sql/12-keywords.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed0c96b4e4d94dd70da1c3778f4129bd34daed62
--- /dev/null
+++ b/docs-en/12-taos-sql/12-keywords.md
@@ -0,0 +1,90 @@
+---
+title: Keywords
+---
+
+There are about 200 keywords reserved by TDengine. They can't be used as the name of a database, STable or table, whether in upper case, lower case or mixed case.
+
+**Keywords List**
+
+| | | | | |
+| ----------- | ---------- | --------- | ---------- | ------------ |
+| ABORT | CREATE | IGNORE | NULL | STAR |
+| ACCOUNT | CTIME | IMMEDIATE | OF | STATE |
+| ACCOUNTS | DATABASE | IMPORT | OFFSET | STATEMENT |
+| ADD | DATABASES | IN | OR | STATE_WINDOW |
+| AFTER | DAYS | INITIALLY | ORDER | STORAGE |
+| ALL | DBS | INSERT | PARTITIONS | STREAM |
+| ALTER | DEFERRED | INSTEAD | PASS | STREAMS |
+| AND | DELIMITERS | INT | PLUS | STRING |
+| AS | DESC | INTEGER | PPS | SYNCDB |
+| ASC | DESCRIBE | INTERVAL | PRECISION | TABLE |
+| ATTACH | DETACH | INTO | PREV | TABLES |
+| BEFORE | DISTINCT | IS | PRIVILEGE | TAG |
+| BEGIN | DIVIDE | ISNULL | QTIME | TAGS |
+| BETWEEN | DNODE | JOIN | QUERIES | TBNAME |
+| BIGINT | DNODES | KEEP | QUERY | TIMES |
+| BINARY | DOT | KEY | QUORUM | TIMESTAMP |
+| BITAND | DOUBLE | KILL | RAISE | TINYINT |
+| BITNOT | DROP | LE | REM | TOPIC |
+| BITOR | EACH | LIKE | REPLACE | TOPICS |
+| BLOCKS | END | LIMIT | REPLICA | TRIGGER |
+| BOOL | EQ | LINEAR | RESET | TSERIES |
+| BY | EXISTS | LOCAL | RESTRICT | UMINUS |
+| CACHE | EXPLAIN | LP | ROW | UNION |
+| CACHELAST | FAIL | LSHIFT | RP | UNSIGNED |
+| CASCADE | FILE | LT | RSHIFT | UPDATE |
+| CHANGE | FILL | MATCH | SCORES | UPLUS |
+| CLUSTER | FLOAT | MAXROWS | SELECT | USE |
+| COLON | FOR | MINROWS | SEMI | USER |
+| COLUMN | FROM | MINUS | SESSION | USERS |
+| COMMA | FSYNC | MNODES | SET | USING |
+| COMP | GE | MODIFY | SHOW | VALUES |
+| COMPACT | GLOB | MODULES | SLASH | VARIABLE |
+| CONCAT | GRANTS | NCHAR | SLIDING | VARIABLES |
+| CONFLICT | GROUP | NE | SLIMIT | VGROUPS |
+| CONNECTION | GT | NONE | SMALLINT | VIEW |
+| CONNECTIONS | HAVING | NOT | SOFFSET | VNODES |
+| CONNS | ID | NOTNULL | STable | WAL |
+| COPY | IF | NOW | STableS | WHERE |
+| _C0 | _QSTART | _QSTOP | _QDURATION | _WSTART |
+| _WSTOP      | _WDURATION | _ROWTS    |            |              |
+
+## Explanations
+### TBNAME
+`TBNAME` can be considered as a special tag in a STable; it represents the name of the subtable.
+
+Get the table name and tag values of all subtables in a STable.
+```mysql
+SELECT TBNAME, location FROM meters;
+```
+
+Count the number of subtables in a STable.
+```mysql
+SELECT COUNT(TBNAME) FROM meters;
+```
+
+Only filters on tags can be used in the WHERE clause in the above two query statements.
+```mysql
+taos> SELECT TBNAME, location FROM meters;
+ tbname | location |
+==================================================================
+ d1004 | California.SanFrancisco |
+ d1003 | California.SanFrancisco |
+ d1002 | California.LosAngeles |
+ d1001 | California.LosAngeles |
+Query OK, 4 row(s) in set (0.000881s)
+
+taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
+ count(tbname) |
+========================
+ 2 |
+Query OK, 1 row(s) in set (0.001091s)
+```
+### _QSTART/_QSTOP/_QDURATION
+The start, stop and duration of a query time window.
+
+### _WSTART/_WSTOP/_WDURATION
+The start, stop and duration of an aggregate query by time window, like interval, session window, state window.
+
+### _c0/_ROWTS
+_c0 is equal to _ROWTS; both represent the first column of a table or STable.
diff --git a/docs-en/12-taos-sql/13-operators.md b/docs-en/12-taos-sql/13-operators.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ca9ec49430a66384400bc41cd08562b3d5d28c7
--- /dev/null
+++ b/docs-en/12-taos-sql/13-operators.md
@@ -0,0 +1,66 @@
+---
+sidebar_label: Operators
+title: Operators
+---
+
+## Arithmetic Operators
+
+| # | **Operator** | **Data Types** | **Description** |
+| --- | :----------: | -------------- | --------------------------------------------------------- |
+| 1 | +, - | Numeric Types | Representing positive or negative numbers, unary operator |
+| 2   | +, -         | Numeric Types  | Addition and subtraction, binary operator                 |
+| 3   | \*, /        | Numeric Types  | Multiplication and division, binary operator              |
+| 4 | % | Numeric Types | Taking the remainder, binary operator |
+
+## Bitwise Operators
+
+| # | **Operator** | **Data Types** | **Description** |
+| --- | :----------: | -------------- | ----------------------------- |
+| 1   | &            | Numeric Types  | Bitwise AND, binary operator  |
+| 2   | \|           | Numeric Types  | Bitwise OR, binary operator   |
+
+## JSON Operator
+
+The `->` operator can be used to get the value of a key in a column of JSON type; the left operand is the column name, the right operand is a string constant. For example, `col->'name'` returns the value of the key `'name'`.
+
+## Set Operator
+
+Set operators are used to combine the results of two queries into a single result. A query including set operators is called a combined query. The number of columns in each result of a combined query must be the same, and the result type is determined by the first query's result; the types of the following queries' results must be convertible to the type of the first query's result, with the same conversion rules as the `CAST` function.
+
+TDengine provides 2 set operators: `UNION ALL` and `UNION`. `UNION ALL` combines the results without removing duplicate data. `UNION` combines the results and removes duplicate data rows. In a single SQL statement, at most 100 set operators can be used.
+
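+A usage sketch (assuming subtables `d1001` and `d1002` of the `meters` example STable):
+
+```sql
+SELECT ts, current FROM d1001 UNION ALL SELECT ts, current FROM d1002;
+```
+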
+## Comparison Operator
+
+| # | **Operator** | **Data Types** | **Description** |
+| --- | :---------------: | ------------------------------------------------------------------- | ----------------------------------------------- |
+| 1 | = | Except for BLOB, MEDIUMBLOB and JSON | Equal |
+| 2 | <\>, != | Except for BLOB, MEDIUMBLOB, JSON and primary key of timestamp type | Not equal |
+| 3 | \>, < | Except for BLOB, MEDIUMBLOB and JSON | Greater than, less than |
+| 4 | \>=, <= | Except for BLOB, MEDIUMBLOB and JSON | Greater than or equal to, less than or equal to |
+| 5 | IS [NOT] NULL | Any types | Is NULL or NOT |
+| 6 | [NOT] BETWEEN AND | Except for BLOB, MEDIUMBLOB and JSON | In a value range or not |
+| 7 | IN | Except for BLOB, MEDIUMBLOB, JSON and primary key of timestamp type | In a list of values or not |
+| 8 | LIKE | BINARY, NCHAR and VARCHAR | Wildcard matching |
+| 9 | MATCH, NMATCH | BINARY, NCHAR and VARCHAR | Regular expression matching |
+| 10  | CONTAINS          | JSON                                                                | Whether a key exists in JSON                    |
+
+The `LIKE` operator uses wildcards to match a string. The rules are:
+
+- '%' matches 0 to any number of characters; '\_' matches any single ASCII character.
+- `\_` can be used to match a literal `_` in the string, i.e. the backslash `\` is used as the escape character.
+- A wildcard string is at most 100 bytes. The longer the wildcard string is, the worse the performance of the LIKE operator.
+
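+A usage sketch (assuming the `meters` example STable with its `location` tag):
+
+```sql
+SELECT * FROM meters WHERE location LIKE 'California%';
+```
+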
+The `MATCH` and `NMATCH` operators use regular expressions to match a string. The rules are:
+
+- Regular expressions of POSIX standard are supported.
+- Only `tbname`, i.e. table name of sub tables, and tag columns of string types can be matched with regular expression, data columns are not supported.
+- The regular expression string is 128 bytes at most, and can be adjusted by setting the parameter `maxRegexStringLen`, which is a client-side configuration and requires a client restart to take effect.
+
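+A usage sketch matching subtable names against a regular expression (assuming the `meters` example STable with subtables d1001 to d1004):
+
+```sql
+SELECT tbname FROM meters WHERE tbname MATCH '^d100[1-4]$';
+```
+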
+## Logical Operators
+
+| # | **Operator** | **Data Types** | **Description** |
+| --- | :----------: | -------------- | ---------------------------------------------------------------------------------------- |
+| 1 | AND | BOOL | Logical AND, return TRUE if both conditions are TRUE; return FALSE if any one is FALSE. |
+| 2 | OR | BOOL | Logical OR, return TRUE if any condition is TRUE; return FALSE if both are FALSE |
+
+TDengine uses short-circuit optimization when performing logical operations. For the AND operator, if the first condition is evaluated as FALSE, then the second one is not evaluated. For the OR operator, if the first condition is evaluated as TRUE, then the second one is not evaluated.
diff --git a/docs-en/12-taos-sql/_category_.yml b/docs-en/12-taos-sql/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..74a3b6309e0a4ad35feb674f544c689ae1992299
--- /dev/null
+++ b/docs-en/12-taos-sql/_category_.yml
@@ -0,0 +1 @@
+label: TDengine SQL
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..33656338a7bba38dc55cf536bdba8e95309c5acf
--- /dev/null
+++ b/docs-en/12-taos-sql/index.md
@@ -0,0 +1,31 @@
+---
+title: TDengine SQL
+description: "The syntax supported by TDengine SQL "
+---
+
+This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
+
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
+
+Syntax Specifications used in this chapter:
+
+- The content inside <\> needs to be input by the user, excluding <\> itself.
+- \[ \] means optional input, excluding [] itself.
+- | means one of a few options, excluding | itself.
+- … means the item prior to it can be repeated multiple times.
+
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
+
+```sql
+taos> DESCRIBE meters;
+ Field | Type | Length | Note |
+=================================================================================
+ ts | TIMESTAMP | 8 | |
+ current | FLOAT | 4 | |
+ voltage | INT | 4 | |
+ phase | FLOAT | 4 | |
+ location | BINARY | 64 | TAG |
+ groupid | INT | 4 | TAG |
+```
+
+The data set includes the data collected by 4 meters; the corresponding table names are d1001, d1002, d1003 and d1004, based on the data model of TDengine.
diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp
new file mode 100644
index 0000000000000000000000000000000000000000..82747558e96df752a0010d85be79a4af07e4a1df
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-1.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp
new file mode 100644
index 0000000000000000000000000000000000000000..8f1314ae34f7f5c5cca1d3cb80455f555fad38c3
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-2.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp
new file mode 100644
index 0000000000000000000000000000000000000000..5bd16e68e7fd5da6805551e9765975277cd5d4d9
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-3.webp differ
diff --git a/docs-en/13-operation/01-pkg-install.md b/docs-en/13-operation/01-pkg-install.md
new file mode 100644
index 0000000000000000000000000000000000000000..c098002962d62aa0acc7a94462c052303cb2ed90
--- /dev/null
+++ b/docs-en/13-operation/01-pkg-install.md
@@ -0,0 +1,284 @@
+---
+title: Install & Uninstall
+description: Install, Uninstall, Start, Stop and Upgrade
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+TDengine community version provides deb and rpm packages for users to choose from, based on their system environment. The deb package supports Debian, Ubuntu and derivative systems. The rpm package supports CentOS, RHEL, SUSE and derivative systems. Furthermore, a tar.gz package is provided for TDengine Enterprise customers.
+
+## Install
+
+
+
+
+1. Download deb package from official website, for example TDengine-server-2.4.0.7-Linux-x64.deb
+2. In the directory where the package is located, execute the command below
+
+```bash
+$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
+(Reading database ... 137504 files and directories currently installed.)
+Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
+TDengine is removed successfully!
+Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
+Setting up tdengine (2.4.0.7) ...
+Start to install TDengine...
+
+System hostname is: ubuntu-1804
+
+Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
+OR leave it blank to build one:
+
+Enter your email address for priority support or enter empty to skip:
+Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
+
+To configure TDengine : edit /etc/taos/taos.cfg
+To start TDengine : sudo systemctl start taosd
+To access TDengine : taos -h ubuntu-1804 to login into TDengine server
+
+
+TDengine is installed successfully!
+```
+
+
+
+
+
+1. Download rpm package from official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm;
+2. In the directory where the package is located, execute the command below
+
+```
+$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
+Preparing... ################################# [100%]
+Updating / installing...
+ 1:tdengine-2.4.0.7-3 ################################# [100%]
+Start to install TDengine...
+
+System hostname is: centos7
+
+Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
+OR leave it blank to build one:
+
+Enter your email address for priority support or enter empty to skip:
+
+Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
+
+To configure TDengine : edit /etc/taos/taos.cfg
+To start TDengine : sudo systemctl start taosd
+To access TDengine : taos -h centos7 to login into TDengine server
+
+
+TDengine is installed successfully!
+```
+
+
+
+
+
+1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
+2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
+
+```bash
+$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
+TDengine-enterprise-server-2.4.0.7/
+TDengine-enterprise-server-2.4.0.7/driver/
+TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
+TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
+TDengine-enterprise-server-2.4.0.7/install.sh
+TDengine-enterprise-server-2.4.0.7/examples/
+...
+
+$ ll
+total 43816
+drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
+drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
+drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
+-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
+
+$ cd TDengine-enterprise-server-2.4.0.7/
+
+$ ll
+total 40784
+drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
+drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
+drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
+drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
+-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
+-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
+
+$ sudo ./install.sh
+
+Start to update TDengine...
+Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
+Nginx for TDengine is updated successfully!
+
+To configure TDengine : edit /etc/taos/taos.cfg
+To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
+To start TDengine : sudo systemctl start taosd
+To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
+
+TDengine is updated successfully!
+Install taoskeeper as a standalone service
+taoskeeper is installed, enable it by `systemctl enable taoskeeper`
+```
+
+:::info
+Users are prompted to enter some configuration information while `install.sh` is executing. The interactive mode can be disabled by executing `./install.sh -e no`. Run `./install.sh -h` to show all parameters with detailed explanations.
+
+:::
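+
+For example, a fully non-interactive installation, using the flag described above, can be run as below:
+
+```bash
+sudo ./install.sh -e no
+```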
+
+</TabItem>
+
+</Tabs>
+:::note
+When installing on the first node in the cluster, at the "Enter FQDN:" prompt, nothing needs to be provided. When installing on subsequent nodes, at the "Enter FQDN:" prompt, you must enter the end point of the first dnode in the cluster if it is already up. You can also just ignore it and configure it later after installation is finished.
+
+:::
+
+## Uninstall
+
+<Tabs>
+
+<TabItem label="Deb" value="debuninst">
+Deb package of TDengine can be uninstalled as below:
+
+```bash
+$ sudo dpkg -r tdengine
+(Reading database ... 137504 files and directories currently installed.)
+Removing tdengine (2.4.0.7) ...
+TDengine is removed successfully!
+
+```
+
+</TabItem>
+
+<TabItem label="RPM" value="rpmuninst">
+
+RPM package of TDengine can be uninstalled as below:
+
+```bash
+$ sudo rpm -e tdengine
+TDengine is removed successfully!
+```
+
+</TabItem>
+
+<TabItem label="tar.gz" value="taruninst">
+
+tar.gz package of TDengine can be uninstalled as below:
+
+```bash
+$ rmtaos
+Nginx for TDengine is running, stopping it...
+TDengine is removed successfully!
+
+taosKeeper is removed successfully!
+```
+
+</TabItem>
+
+</Tabs>
+:::note
+
+- We strongly recommend against using multiple kinds of installation packages for TDengine on a single host.
+- After the deb package is installed, if the installation directory is removed manually, uninstalling or reinstalling will not work. This issue can be resolved by using the command below, which cleans up the TDengine package information. You can then reinstall if needed.
+
+```bash
+$ sudo rm -f /var/lib/dpkg/info/tdengine*
+```
+
+- After the rpm package is installed, if the installation directory is removed manually, uninstalling or reinstalling will not work. This issue can be resolved by using the command below, which cleans up the TDengine package information. You can then reinstall if needed.
+
+```bash
+$ sudo rpm -e --noscripts tdengine
+```
+
+:::
+
+## Installation Directory
+
+TDengine is installed under /usr/local/taos if the installation is successful.
+
+```bash
+$ cd /usr/local/taos
+$ ll
+total 28
+drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
+drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
+drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
+```
+
+During the installation process:
+
+- Configuration directory, data directory, and log directory are created automatically if they don't exist
+- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg
+- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data
+- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log
+- The executables at /usr/local/taos/bin are linked to /usr/bin
+- The DLL files at /usr/local/taos/driver are linked to /usr/lib
+- The header files at /usr/local/taos/include are linked to /usr/include
+
+:::note
+
+- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, the data directory /var/lib/taos, and the log directory /var/log/taos are kept. They can be deleted manually, but with caution, because deleted data cannot be recovered. Please follow data integrity, security, backup, and other relevant SOPs before deleting any data.
+- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept, and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as a configuration sample. Otherwise, the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
+
+:::
+
+## Start and Stop
+
+On Linux systems, `systemd` (via `systemctl`) or `service` can be used to start, stop, and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system boots.
+
+For example, if using `systemctl` , the commands to start, stop, restart and check TDengine server are below:
+
+- Start server: `systemctl start taosd`
+
+- Stop server: `systemctl stop taosd`
+
+- Restart server: `systemctl restart taosd`
+
+- Check server status: `systemctl status taosd`
+
+From version 2.4.0.0, a new independent component named `taosAdapter` has been included in TDengine. `taosAdapter` should also be started and stopped using `systemctl`.
+
+If the server process is running normally, the output of `systemctl status` contains a line like the below:
+
+```
+Active: active (running)
+```
+
+Otherwise, the output is as below:
+
+```
+Active: inactive (dead)
+```
+
+## Upgrade
+
+There are two aspects to an upgrade: upgrading the installation package and upgrading a running server.
+
+To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
+
+Upgrading a running server is much more complex. First, check the version numbers of the old and new versions. The version number of TDengine consists of 4 sections; only if the first 3 sections match can the old version be upgraded to the new version. The steps for upgrading a running server are as below:
+
+- Stop inserting data
+- Make sure all data is persisted to disk
+- Make some simple queries, such as counting the total rows in STables and tables (see the example after this list); note down the values, and follow best practices and relevant SOPs
+- Stop the cluster of TDengine
+- Uninstall old version and install new version
+- Start the cluster of TDengine
+- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
+- Run some simple data insertion statements to make sure the cluster works well
+- Restore business services
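+
+For example, a minimal before-and-after check could count the rows of a STable; the STable name `meters` here is hypothetical:
+
+```sql
+SELECT COUNT(*) FROM meters;
+```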
+
+:::warning
+
+TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
+
+:::
diff --git a/docs-en/13-operation/02-planning.mdx b/docs-en/13-operation/02-planning.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c1baf92dbfa8d93f83174c05c2ea631d1a469739
--- /dev/null
+++ b/docs-en/13-operation/02-planning.mdx
@@ -0,0 +1,82 @@
+---
+title: Resource Planning
+---
+
+It is important to plan computing and storage resources when using TDengine to build an IoT, time-series, or Big Data platform. This chapter describes how to plan the required CPU, memory, and disk resources.
+
+## Memory Requirement of Server Side
+
+By default, the number of vgroups created for each database is the same as the number of CPU cores. This can be configured by the parameter `maxVgroupsPerDb`. Each vnode in a vgroup stores one replica. Each vnode consumes a fixed amount of memory, i.e. `blocks` \* `cache`. In addition, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below:
+
+```
+Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
+```
+
+For example, assuming the default value of `maxVgroupsPerDb` is 64, the default value of `cache` is 16 MB, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, and the total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6859 (MB).
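+
+As a quick sanity check, the example above can be reproduced with a one-liner. This is only a back-of-envelope sketch using the example values from the text, not an official sizing tool:
+
+```bash
+awk 'BEGIN {
+  vgroups = 64; replica = 1; blocks = 6; cache = 16   # cache is in MB
+  tables = 100000; tag_kb = 0.25                      # 256 bytes of tag values per table
+  mb = vgroups * replica * (blocks * cache + 10) + tables * (tag_kb + 0.5) / 1000
+  printf "Estimated DB memory: %.0f MB\n", mb         # prints 6859 MB
+}'
+```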
+
+In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
+
+```
+ taosd_memory = vnode_memory + mnode_memory + query_memory
+```
+
+In the above formula:
+
+1. "vnode_memory" of a `taosd` process is the memory used by all the vnodes hosted by this `taosd` process. It can be roughly estimated by adding up the memory of all DBs (derived using the Database Memory Size formula above), dividing by the number of dnodes, and multiplying by the number of replicas.
+
+```
+ vnode_memory = (sum(Database Memory Size) / number_of_dnodes) * replica
+```
+
+2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster".
+
+3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables".
+
+Please note that the above formulas can only be used to estimate the minimum memory requirement, not the maximum memory usage. In a real production environment, it's better to reserve some headroom beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of the parameter `blocks` to speed up data insertion and querying.
+
+## Memory Requirement of Client Side
+
+Client programs that use the TDengine client driver `taosc` to connect to the server side have memory requirements as well.
+
+The memory consumed by a client program mainly comes from the SQL statements used for data insertion, the cache for table metadata, and some internal use. Assuming the maximum number of tables is N (the metadata of each table consumes 256 bytes), the maximum number of threads for parallel insertion is T, and the maximum length of a SQL statement is S (normally 1 MB), the memory (in MB) required by a client program can be estimated using the formula below:
+
+```
+M = (T * S * 3 + (N / 4096) + 100)
+```
+
+For example, if the number of parallel data insertion threads is 100 and the total number of tables is 10,000,000, then the minimum memory requirement of a client program is:
+
+```
+100 * 1 * 3 + (10000000 / 4096) + 100 ≈ 2841 (MBytes)
+```
+
+So, at least 3GB needs to be reserved for such a client.
+
+## CPU Requirement
+
+The CPU resources required depend on two aspects:
+
+- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests per second, and each insertion request can contain multiple rows. The difference in computing resources consumed between inserting 1 row at a time and inserting 10 rows at a time is very small, so the more rows inserted per request, the higher the efficiency. Inserting in batches also imposes a requirement on the client side: it needs to cache rows and insert them in a batch once the number of cached rows reaches a threshold.
+- **Data Query** TDengine provides efficient queries, but the CPU resources they require are hard to estimate, because the queries used and their frequency vary significantly across use cases. This can only be verified against the query statements, query frequency, data size to be queried, and other requirements provided by users.
+
+In short, the CPU resources required for data insertion can be estimated, but it's hard to do so for query use cases. In real operation, it's suggested to keep CPU usage below 50%. If this threshold is exceeded, the system operator should add more nodes to the cluster to expand resources.
+
+## Disk Requirement
+
+The compression ratio in TDengine is much higher than that in RDBMS. In most cases, the compression ratio in TDengine is greater than 5, and in some cases even 10, depending on the characteristics of the original data. The data size before compression can be calculated with the formula below:
+
+```
+Raw DataSize = numOfTables * rowSizePerTable * rowsPerTable
+```
+
+For example, suppose there are 10,000,000 meters, each meter collects data every 15 minutes, and the data size of each collection is 128 bytes. Then the raw data size of one year is: 10000000 \* 128 \* 24 \* 60 / 15 \* 365 = 44.8512 (TB). Assuming a compression ratio of 5, the actual disk size is: 44.8512 / 5 = 8.97024 (TB).
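+
+As an illustration, the same arithmetic can be checked with a short one-liner; the figures are the example values from the paragraph above, and 1 TB is taken as 10^12 bytes:
+
+```bash
+awk 'BEGIN {
+  raw_tb = 10000000 * 128 * (24 * 60 / 15) * 365 / 1e12   # raw data size in TB
+  printf "Raw: %.4f TB, on disk at 5x compression: %.5f TB\n", raw_tb, raw_tb / 5
+}'
+```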
+
+Parameter `keep` can be used to set how long the data will be kept on disk. To further reduce storage cost, multiple storage levels can be enabled in TDengine, with the coldest data stored on the cheapest storage device. This is completely transparent to application programs.
+
+To increase performance, multiple disks can be set up for parallel data reading or writing. Please note that an expensive disk array is not necessary, because replication is used in TDengine to provide high availability.
+
+## Number of Hosts
+
+A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
+
+**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html).
diff --git a/docs-en/13-operation/03-tolerance.md b/docs-en/13-operation/03-tolerance.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4d48d7fcdc2c990b6ea0821e2347c70a809ed79
--- /dev/null
+++ b/docs-en/13-operation/03-tolerance.md
@@ -0,0 +1,32 @@
+---
+sidebar_label: Fault Tolerance
+title: Fault Tolerance & Disaster Recovery
+---
+
+## Fault Tolerance
+
+TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
+
+When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally for any reason and then restarted.
+
+There are 2 configuration parameters related to WAL:
+
+- walLevel:
+  - 0: WAL is disabled
+  - 1: WAL is enabled without fsync
+  - 2: WAL is enabled with fsync
+- fsync: This parameter is only valid when walLevel is set to 2. It specifies the interval, in milliseconds, at which fsync is invoked. If set to 0, fsync is invoked immediately whenever the WAL is written.
+
+To achieve absolutely no data loss, walLevel should be set to 2 and fsync to 0, i.e. fsync on every write, per the definition above. This imposes a performance penalty on the data ingestion rate; however, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the ingestion performance will still be good enough. Our verification shows that the drop is only 30% when fsync is set to 3,000 milliseconds.
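+
+As an illustration, the corresponding section of `/etc/taos/taos.cfg` for the 3,000-millisecond case mentioned above might look as below. This is a sketch; adjust the values to your durability requirements:
+
+```
+# WAL enabled, fsync invoked every 3 seconds
+walLevel 2
+fsync 3000
+```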
+
+## Disaster Recovery
+
+TDengine uses replication to provide high availability and disaster recovery capability.
+
+A TDengine cluster is managed by mnode. To ensure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
+
+The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster, and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to set the number of replicas. To achieve high availability, `replica` needs to be higher than 1.
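+
+For example, a database with 3 replicas could be created as below; the database name `power` is just a placeholder:
+
+```sql
+CREATE DATABASE power REPLICA 3;
+```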
+
+The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas of any database; otherwise creating a table will fail.
+
+As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
diff --git a/docs-en/13-operation/06-admin.md b/docs-en/13-operation/06-admin.md
new file mode 100644
index 0000000000000000000000000000000000000000..458a91b88c6d8319fe8b84c2b34d8ff968957910
--- /dev/null
+++ b/docs-en/13-operation/06-admin.md
@@ -0,0 +1,50 @@
+---
+title: User Management
+---
+
+A system operator can use TDengine CLI `taos` to create or remove users or change passwords. The SQL commands are documented below:
+
+## Create User
+
+```sql
+CREATE USER <user_name> PASS <'password'>;
+```
+
+When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
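+
+For example, a user `user1` with a hypothetical password could be created as below:
+
+```sql
+CREATE USER user1 PASS 'secretpass1';
+```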
+
+## Drop User
+
+```sql
+DROP USER <user_name>;
+```
+
+Dropping a user can only be performed by root.
+
+## Change Password
+
+```sql
+ALTER USER <user_name> PASS <'password'>;
+```
+
+To keep the case of the password when changing password, the password needs to be quoted using single quotes.
+
+## Change Privilege
+
+```sql
+ALTER USER <user_name> PRIVILEGE <write|read>;
+```
+
+The privilege can be set to either `read` or `write`, without single quotes.
+
+Note: there is another privilege, `super`, which cannot be granted to any user.
+
+## Show Users
+
+```sql
+SHOW USERS;
+```
+
+:::note
+In SQL syntax, `< >` means the part that needs to be input by the user, excluding the `< >` itself.
+
+:::
diff --git a/docs-en/13-operation/07-import.md b/docs-en/13-operation/07-import.md
new file mode 100644
index 0000000000000000000000000000000000000000..8362cec1ab3072866018678b42a679d0c19b49de
--- /dev/null
+++ b/docs-en/13-operation/07-import.md
@@ -0,0 +1,61 @@
+---
+title: Data Import
+---
+
+TDengine provides multiple ways of importing data: importing with a script, importing from a data file, and importing using `taosdump`.
+
+## Import Using Script
+
+TDengine CLI `taos` supports the `source <filename>` command for executing SQL statements from a file in batch. The SQL statements for creating databases, creating tables, and inserting rows can be written in a single file, one statement per line; the file can then be executed using the `source` command in TDengine CLI to run the statements in order and in batch. In the script file, any line beginning with "#" is treated as a comment and ignored silently.
+
+## Import from Data File
+
+In TDengine CLI, data can be imported from a CSV file into an existing table. The data in a single CSV file must belong to the same table and must be consistent with its schema. The SQL statement is as below:
+
+```sql
+insert into tb1 file 'path/data.csv';
+```
+
+:::note
+If the first line of the CSV file is a description (header) line, please remove it before importing. If there is no value for a column, please use `NULL` without quotes.
+
+:::
+
+For example, there is a subtable d1001 whose schema is as below:
+
+```sql
+taos> DESCRIBE d1001
+ Field | Type | Length | Note |
+=================================================================================
+ ts | TIMESTAMP | 8 | |
+ current | FLOAT | 4 | |
+ voltage | INT | 4 | |
+ phase | FLOAT | 4 | |
+ location | BINARY | 64 | TAG |
+ groupid | INT | 4 | TAG |
+```
+
+The format of the CSV file to be imported, data.csv, is as below:
+
+```csv
+'2018-10-04 06:38:05.000',10.30000,219,0.31000
+'2018-10-05 06:38:15.000',12.60000,218,0.33000
+'2018-10-06 06:38:16.800',13.30000,221,0.32000
+'2018-10-07 06:38:05.000',13.30000,219,0.33000
+'2018-10-08 06:38:05.000',14.30000,219,0.34000
+'2018-10-09 06:38:05.000',15.30000,219,0.35000
+'2018-10-10 06:38:05.000',16.30000,219,0.31000
+'2018-10-11 06:38:05.000',17.30000,219,0.32000
+'2018-10-12 06:38:05.000',18.30000,219,0.31000
+```
+
+Then, the below SQL statement can be used to import data from file "data.csv", assuming the file is located under the home directory of the current Linux user.
+
+```sql
+taos> insert into d1001 file '~/data.csv';
+Query OK, 9 row(s) affected (0.004763s)
+```
+
+## Import using taosdump
+
+TDengine provides a convenient tool, `taosdump`, for importing and exporting data. It can export data from one TDengine cluster and import it into another. For details on using `taosdump`, please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
diff --git a/docs-en/13-operation/08-export.md b/docs-en/13-operation/08-export.md
new file mode 100644
index 0000000000000000000000000000000000000000..5780de42faeaedbc1c985ad2aa2f52fe56c76971
--- /dev/null
+++ b/docs-en/13-operation/08-export.md
@@ -0,0 +1,21 @@
+---
+title: Data Export
+---
+
+There are two ways of exporting data from a TDengine cluster:
+- Using a SQL statement in TDengine CLI
+- Using the `taosdump` tool
+
+## Export Using SQL
+
+If you want to export the data of a table or a STable, please execute the SQL statement below in the TDengine CLI.
+
+```sql
+select * from <tb_name> >> data.csv;
+```
+
+The data of the table or STable specified by `tb_name` will be exported into a file named `data.csv` in CSV format.
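+
+For example, assuming the subtable `d1001` used elsewhere in this documentation exists, its data could be exported as below:
+
+```sql
+select * from d1001 >> data.csv;
+```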
+
+## Export Using taosdump
+
+With `taosdump`, you can choose to export the data of all databases, a single database, a table, or a STable. You can also export only the data within a time range, or even just the schema definition of a table. For details on using `taosdump`, please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
diff --git a/docs-en/13-operation/09-status.md b/docs-en/13-operation/09-status.md
new file mode 100644
index 0000000000000000000000000000000000000000..51396524ea281ae665c9fdf61d2e6e6202995537
--- /dev/null
+++ b/docs-en/13-operation/09-status.md
@@ -0,0 +1,54 @@
+---
+sidebar_label: Connections & Tasks
+title: Manage Connections and Query Tasks
+---
+
+A system operator can use the TDengine CLI to show connections, ongoing queries, stream computing, and can close connections or stop ongoing query tasks or stream computing.
+
+## Show Connections
+
+```sql
+SHOW CONNECTIONS;
+```
+
+One column of the output of the above SQL command is "ip:port", which is the end point of the client.
+
+## Force Close Connections
+
+```sql
+KILL CONNECTION <connection-id>;
+```
+
+In the above SQL command, `connection-id` is from the first column of the output of `SHOW CONNECTIONS`.
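+
+For example, if `SHOW CONNECTIONS` lists a connection whose ID is 1, it can be closed as below; the ID is illustrative:
+
+```sql
+KILL CONNECTION 1;
+```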
+
+## Show Ongoing Queries
+
+```sql
+SHOW QUERIES;
+```
+
+The first column of the output is query ID, which is composed of the corresponding connection ID and the sequence number of the current query task started on this connection. The format is "connection-id:query-no".
+
+## Force Close Queries
+
+```sql
+KILL QUERY <query-id>;
+```
+
+In the above SQL command, `query-id` is from the first column of the output of `SHOW QUERIES`.
+
+## Show Continuous Query
+
+```sql
+SHOW STREAMS;
+```
+
+The first column of the output is stream ID, which is composed of the connection ID and the sequence number of the current stream started on this connection. The format is "connection-id:stream-no".
+
+## Force Close Continuous Query
+
+```sql
+KILL STREAM <stream-id>;
+```
+
+In the above SQL command, `stream-id` is from the first column of the output of `SHOW STREAMS`.
diff --git a/docs-en/13-operation/10-monitor.md b/docs-en/13-operation/10-monitor.md
new file mode 100644
index 0000000000000000000000000000000000000000..a4679983f2bc77bb4e438f5d43fa1b8beb39b120
--- /dev/null
+++ b/docs-en/13-operation/10-monitor.md
@@ -0,0 +1,60 @@
+---
+title: TDengine Monitoring
+---
+
+After TDengine is started, a database named `log` is created automatically to help with monitoring. Information including CPU, memory, and disk usage, bandwidth, number of requests, disk I/O speed, and slow queries is written into the `log` database at a predefined interval. Additionally, some important system operations (like logon, create user, and drop database) and the alerts and warnings generated by TDengine are written into the `log` database as well. A system operator can view the data in the `log` database from TDengine CLI or from a web console.
+
+The collection of monitoring information is enabled by default, but it can be disabled with the parameter `monitor` in the configuration file.
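+
+For example, to turn the collection off, the `monitor` parameter mentioned above can be set in `/etc/taos/taos.cfg` as below (a sketch; 0 disables collection and 1 enables it):
+
+```
+monitor 0
+```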
+
+## TDinsight
+
+TDinsight is a complete solution which uses the monitoring database `log` mentioned previously, and Grafana, to monitor a TDengine cluster.
+
+From version 2.3.3.0, more monitoring data has been added in the `log` database. Please refer to [TDinsight Grafana Dashboard](https://grafana.com/grafana/dashboards/15167) to learn more details about using TDinsight to monitor TDengine.
+
+A script `TDinsight.sh` is provided to deploy TDinsight automatically.
+
+Download `TDinsight.sh` with the below command:
+
+```bash
+wget https://github.com/taosdata/grafanaplugin/raw/master/dashboards/TDinsight.sh
+chmod +x TDinsight.sh
+```
+
+Prepare:
+
+1. TDengine Server
+
+   - The URL of the REST service: for example `http://localhost:6041` if TDengine is deployed locally
+ - User name and password
+
+2. Grafana Alert Notification
+
+There are two ways to setup Grafana alert notification.
+
+- An existing Grafana Notification Channel can be specified with the parameter `-E`; the notifier uid of the channel can be obtained with `curl -u admin:admin localhost:3000/api/alert-notifications |jq`
+
+ ```bash
+  sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
+ ```
+
+- The AliCloud SMS alert built into the TDengine data source plugin can be enabled with the parameter `-s`; the parameters for enabling this plugin are listed below:
+
+ - `-I`: AliCloud SMS Key ID
+ - `-K`: AliCloud SMS Key Secret
+ - `-S`: AliCloud SMS Signature
+ - `-C`: SMS notification template
+   - `-T`: Input parameters in JSON format for the SMS notification template, for example `{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}`
+ - `-B`: List of mobile numbers to be notified
+
+ Below is an example of the full command using the AliCloud SMS alert.
+
+ ```bash
+ sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -s \
+ -I XXXXXXX -K XXXXXXXX -S taosdata -C SMS_1111111 -B 18900000000 \
+ -T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
+ ```
+
+Launch `TDinsight.sh` with the command above, restart Grafana, and then open the dashboard at `http://localhost:3000/d/tdinsight`.
+
+For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/).
diff --git a/docs-en/13-operation/17-diagnose.md b/docs-en/13-operation/17-diagnose.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b474fddba4af5ba0c29103cd8ab1249d10d055b
--- /dev/null
+++ b/docs-en/13-operation/17-diagnose.md
@@ -0,0 +1,122 @@
+---
+title: Problem Diagnostics
+---
+
+## Network Connection Diagnostics
+
+When a TDengine client is unable to access a TDengine server, the network connection between the client side and the server side must be checked to find the root cause and resolve problems.
+
+Diagnostics for network connections can be executed between Linux and Linux or between Linux and Windows.
+
+Diagnostic steps:
+
+1. If the port range to be diagnosed is occupied by a `taosd` server process, please stop `taosd` first.
+2. On the server side, execute command `taos -n server -P