-
-1. The application sends a query with its filter conditions to the system;
-2. taosc sends the name of the supertable to the meta node (management node);
-3. The management node sends the list of vnodes owned by the supertable back to taosc;
-4. taosc sends the computation request, together with the tag filter conditions, to the multiple data nodes hosting these vnodes;
-5. Each vnode first looks up, in memory, the set of tables on its node that match the tag filter conditions, then scans the stored time-series data, performs the requested aggregation, and returns the result to taosc;
-6. taosc performs the final aggregation over the results returned by the data nodes and returns it to the application.
-
-Because TDengine stores tag data separately from time-series data inside each vnode, filtering the tags in memory first yields the set of tables that must participate in the aggregation, which drastically shrinks the data set to be scanned and greatly speeds up the aggregation. And because the data is distributed across multiple vnodes/dnodes, the aggregation runs concurrently on multiple vnodes, accelerating it further. The aggregate functions and most other operations on regular tables also work on supertables with exactly the same syntax; see TAOS SQL for details.
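-
-As a sketch of the flow described above, assuming a supertable `meters` with a tag `location` (as in the TDengine sample data set), a query such as the following is filtered by tag on each vnode, aggregated per vnode, and merged by taosc:
-
-```sql
--- Tag filtering happens in memory on each vnode; only matching subtables are scanned.
-SELECT AVG(current), MAX(voltage) FROM meters
-WHERE location = 'California.SanFrancisco'
-INTERVAL(1h);
-```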
-
-### Precomputation
-
-To speed up query processing, and exploiting the fact that IoT data is immutable once written, the header of each data block records statistics about the data stored in that block: the maximum, the minimum, and the sum. We call this the precomputation unit. If a query covers all of the data in a block, the precomputed results are used directly and the block contents need not be read at all. Since the precomputed data is far smaller than the data blocks stored on disk, using it greatly reduces read I/O pressure and speeds up query processing when disk I/O is the bottleneck. The precomputation mechanism is similar in spirit to PostgreSQL's BRIN (block range index).
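-
-For instance, an aggregate like the one below (again assuming the `meters` supertable) can be answered largely from block headers whenever the time range covers whole blocks, reading block contents only for the partially covered blocks at the edges of the range:
-
-```sql
-SELECT MAX(current), MIN(current), SUM(current) FROM meters
-WHERE ts >= '2022-01-01 00:00:00' AND ts < '2022-02-01 00:00:00';
-```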
diff --git a/docs-cn/21-tdinternal/03-taosd.md b/docs-cn/21-tdinternal/03-taosd.md
deleted file mode 100644
index 0cf0a1aaa222e82f7ca6cc4f0314aa5a50442924..0000000000000000000000000000000000000000
--- a/docs-cn/21-tdinternal/03-taosd.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-sidebar_label: Design of taosd
-title: Design of taosd
----
-
-Logically, a TDengine system consists of dnodes, taosc, and apps. A dnode is a running instance of taosd, the server-side executable, so taosd is the core of TDengine. This document briefly introduces the design of taosd; for implementation details inside each module, see the other documents.
-
-## System Module Diagram
-
-taosd contains the rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, and monitor modules, as shown in the figure below:
-
-
-
-The entry point of taosd is the dnode module, which then starts the other modules, including the optional http and monitor modules. Messages exchanged between taosc and dnodes, or among dnodes, all go through the rpc module. Depending on the type of a received message, the dnode module dispatches it to the message queue of a vnode or the mnode, or consumes it itself. dnode worker threads consume the messages in the queues and hand them to the mnode or a vnode for processing. The modules are briefly described below.
-
-## RPC Module
-
-This module handles communication between taosd and taosc, and among data nodes. Rather than adopting standard third-party tooling such as HTTP or gRPC, TDengine implements its own communication module, RPC.
-
-Considering that write packets are generally small in IoT scenarios, RPC supports UDP connections in addition to TCP. When a packet is smaller than 15 KB, RPC connects over UDP; otherwise it uses TCP. For query-type messages, RPC always uses TCP regardless of packet size. For UDP connections, RPC implements its own timeout, retransmission, and sequence-checking mechanisms to guarantee reliable data delivery.
-
-The RPC module also provides data compression: if a packet exceeds the system configuration parameter compressMsgSize in bytes, RPC automatically compresses it in transit to save bandwidth.
-
-To guarantee data security and integrity, the RPC module uses MD5 digital signatures to authenticate the authenticity and integrity of the data.
-
-## DNODE Module
-
-This module is the entry point of taosd as a whole. Its specific responsibilities are:
-
-- System initialization, including
-  - reading the system configuration parameters from the file taos.cfg, and the data node configuration from the file dnodeCfg.json;
-  - starting the RPC module and establishing the server connections for communicating with taosc and with other data nodes;
-  - starting and initializing the dnode's internal management, which scans the vnodes already present on this data node and opens them;
-  - initializing the optional modules, such as mnode, http, and monitor.
-- Data node management, including
-  - periodically sending status messages to the mnode to report its own state;
-  - creating, changing, and deleting vnodes as instructed by the mnode;
-  - modifying its own configuration parameters as instructed by the mnode.
-- Message dispatch and consumption, including
-  - creating and maintaining one read queue and one write queue for each vnode and for the mnode;
-  - dispatching messages arriving from taosc or other data nodes directly to the appropriate message queue according to message type, or consuming them directly in its own management module;
-  - maintaining a read thread pool that consumes messages from the read queues and hands them to a vnode or the mnode for processing. To support high concurrency, one read worker may consume messages from multiple queues, and one read queue may be consumed by multiple workers;
-  - maintaining a write thread pool that consumes messages from the write queues and hands them to a vnode or the mnode for processing. To serialize write operations, each write queue is owned by exactly one write thread, although one write thread may own multiple write queues.
-
-Message consumption in taosd is controlled by the dnode through its read and write thread pools, making it the hub of the system. The structure of this module is shown below:
-
-
-
-## VNODE Module
-
-A vnode is an independent unit of data storage and query logic. Because one vnode can host only one DB, it has no notion of account, DB, or user internally. For better modularity, encapsulation, and future extensibility, it contains many submodules: TSDB for storage, query for queries, sync for data replication, WAL for the database log, cq (continuous query) for continuous queries, event for event-triggered stream computing, and so on. These submodules interact only with the vnode module and have no call relationships with any other module. The module diagram is shown below:
-
-
-
-
-Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its submodules. Its main functions are:
-
-- Coordinating the interaction of the submodules. Submodules never call one another directly; everything goes through the vnode module;
-- Decomposing write operations arriving from taosc or the mnode into operations on the WAL (logging), sync (forwarding), and TSDB (local storage) submodules;
-- Dispatching query operations to the query module.
-
-A data node hosts multiple vnodes, so the vnode module has multiple running instances, each completely independent of the others.
-
-A vnode invokes its submodules directly through APIs rather than via message queues, and the submodules interact only with the vnode module, never directly with the dnode, rpc, or other modules.
-
-## MNODE Module
-
-The mnode is the brain of the whole system: it schedules the system's resources and manages and stores the metadata.
-
-A running system has exactly one mnode, but with multiple replicas (controlled by the system configuration parameter numOfMnodes). The replicas reside on different dnodes to keep the system highly available. Replication between the replicas is synchronous rather than asynchronous, ensuring data consistency and preventing data loss. The replicas automatically elect a master; the other replicas are slaves. All data-modifying operations can only be performed on the master, while queries may also be served by the slave nodes. In the code, the sync module is shared with vnodes, but the mnode is assigned the special vgroup ID 1, with a quorum greater than 1. Since the cluster consists of multiple dnodes, the number of running mnode replicas cannot exceed the number of dnodes, nor can it exceed the configured replica count. If an mnode replica is down for some time, then as long as more than half of the mnode replicas are still running, a running mnode will automatically start another mnode replica on some other dnode, based on the overall resource situation, to restore the replica count.
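-
-The mnode replicas and their current roles can be inspected from the taos shell (the output layout varies by version):
-
-```sql
-SHOW MNODES;
-```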
-
-Through mutual information exchange, each dnode keeps the End Point list of all mnode replicas and periodically (at an interval controlled by the system configuration parameter statusInterval) sends a status message to the master among them. The message body carries the dnode's CPU, memory, remaining storage space, and vnode count, plus the state of each vnode (storage space, raw data size, record count, role, etc.). The mnode thus knows the resource situation of the whole system: when a user creates a new table, it can decide which dnode should host it; and when dnodes are added or removed, or a dnode is detected to be running too hot or has been offline too long, it can decide which vnodes to move to rebalance the load.
-
-The mnode is also responsible for creating, deleting, and updating accounts, users, DBs, stables, tables, vgroups, and dnodes. It keeps the metadata of these entities not only in memory but also in persistent storage. To save memory, however, tag values are not kept on the mnode (they are stored on the vnodes), and subtables do not maintain their own schema but share it with their stable. To reduce query pressure on the mnode, taosc caches the schemas of tables and stables, and query-type operations can also be served by the slave mnodes to relieve the master.
-
-## TSDB Module
-
-The TSDB module is the engine inside a vnode responsible for fast, highly concurrent storage and retrieval of the metadata and the collected time-series data of the tables belonging to that vnode. It also supports modifying table schemas and tag values, and exposes APIs for the vnode, query, and other modules to call. TSDB stores two kinds of data: 1) metadata; 2) time-series data.
-
-### Metadata
-
-The metadata stored in TSDB includes the types and schema definitions of the tables in its vnode; for supertables and their subtables, it also includes the tag schema and the subtables' tag values. For metadata, TSDB behaves like a fully in-memory KV database: all table objects belonging to the vnode are held in memory for fast lookup of table information. In addition, TSDB maintains a fully in-memory index over the subtables keyed by the value of the first tag column, which greatly speeds up tag-filtering queries. When the latest state of the metadata is flushed, it is written to the meta file in append-only form; the meta file is only ever appended to, so even a metadata deletion is written to the end of the file as a record. TSDB also supports metadata modifications, such as changes to a table schema, a tag schema, or tag values.
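-
-Because of the index on the first tag column, a filter such as the following (assuming `location` is the first tag of the `meters` supertable) is resolved entirely in memory before any time-series data is touched:
-
-```sql
-SELECT COUNT(*) FROM meters WHERE location = 'California.SanFrancisco';
-```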
-
-### Time-Series Data
-
-Each TSDB allocates a memory buffer when it is created; the buffer size is configurable and modifiable. Time-series data collected for the tables is first appended to this buffer when written into TSDB, while a timestamp-based in-memory index is built at the same time for fast lookup. When the buffered data accumulates to a threshold (one third of the total buffer size), a flush is triggered and the buffered data is persisted to disk files. In the memory buffer, time-series data is stored in row format.
-
-When written into TSDB's data files, however, time-series data is stored in column format. The data files consist of multiple data file groups, each containing three files: .head, .data, and .last (e.g., the group v2f1801.head, v2f1801.data, v2f1801.last). File groups are sharded by time span, by default 10 days per group, configurable via the configuration file and database-creation options. The sharded file groups are ordered by increasing number, which makes it fast to locate the time-series data of a given time range and to pinpoint the right file group efficiently. Inside TSDB data files, time-series data is stored column-wise in blocks; each block contains data of exactly one table, ordered by ascending timestamp within the block. Within a file group, the .head file stores the index and statistics of the data blocks, such as each block's position, compression algorithm, and timestamp range. A table's index entries in the .head file are ordered by the time of the data stored in its blocks, enabling binary search and similar techniques. The .data and .last files store the actual data blocks: once a block accumulates enough data it is written to the .data file; otherwise it goes to the .last file, waiting to be merged into the .data file at the next flush. This greatly reduces the number of blocks per file and avoids excessive data fragmentation.
-
-## Query Module
-
-This module is responsible for query processing across the system. The client invokes it to parse SQL and to send query or write requests to the vnodes; it is also responsible for the second-stage aggregation of supertable queries. On the vnode side, the module calls the TSDB module to read the stored data for query processing. The query module also defines all of the query functions the system supports; the implementation of the query functions is decoupled from the query framework, so query functions can be added dynamically without modifying the query flow. For the detailed design, see "TDengine 2.0 Query Module Design".
-
-## SYNC Module
-
-This module implements multi-replica data replication for both vnodes and the mnode, supporting both asynchronous and synchronous replication to satisfy the different replication requirements of metadata versus time-series data. Because it is shared by the mnode and the vnodes, the system reserves the special vgroup ID 1 for the mnode replicas; vnode group IDs therefore start from 2.
-
-Each vnode/mnode module instance has a corresponding sync module instance; they correspond one to one. For the detailed design, see [TDengine 2.0 Data Replication Module Design](/tdinternal/replica/)
-
-## WAL Module
-
-This module writes newly inserted data into the write-ahead log (WAL) and is shared by the vnodes and the mnode, guaranteeing that data can be recovered from the WAL after a server crash or other failure.
-
-Each vnode/mnode module instance has exactly one corresponding WAL module instance. WAL flushing is controlled by the two parameters walLevel and fsync, and should be set according to the scenario: to guarantee 100% that no data is lost, walLevel must be set to 2 and fsync to 0, in which case every insert request is acknowledged to the application only after it has been flushed to disk.
-
-## HTTP Module
-
-This module handles the system's external RESTful interface and can be started or stopped by the dnode via configuration. (It exists only in versions 2.2 and earlier.)
-
-After performing various validity checks on an incoming RESTful request, the module turns it into a standard SQL statement and sends it, via taosc's asynchronous interface, to any dnode in the system. Once the processed result comes back, it is translated into the HTTP protocol and returned to the application.
-
-Starting the HTTP module means starting an instance of taosc. Any dnode can start this module, enabling distributed processing of RESTful requests.
-
-## Monitor Module
-
-This module monitors the running state of a dnode and can be started or stopped by the dnode via configuration. In principle, every dnode should run a monitor instance.
-
-Monitor captures key operations in TDengine (such as creating, deleting, or updating accounts, tables, and databases) and periodically collects CPU, memory, network, and other resource usage (the collection period is controlled by the system configuration parameter monitorInterval). It writes the collected data into the system's log database (whose name is controlled by the system configuration parameter monitorDbName).
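-
-The collected metrics can then be examined with ordinary SQL; for example (the tables inside the log database vary by version):
-
-```sql
-USE log;
-SHOW TABLES;
-```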
-
-The Monitor module uses taosc to write the collected data into the system, so every monitor instance has a running taosc instance.
diff --git a/docs-cn/25-application/03-immigrate.md b/docs-cn/25-application/03-immigrate.md
deleted file mode 100644
index 9d8946bc4a69639c5327ac1ffb6c0539ddbd0e63..0000000000000000000000000000000000000000
--- a/docs-cn/25-application/03-immigrate.md
+++ /dev/null
@@ -1,423 +0,0 @@
----
-sidebar_label: Migrating from OpenTSDB to TDengine
-title: Best Practices for Migrating OpenTSDB Applications to TDengine
----
-
-As a distributed, scalable, HBase-based time-series database, OpenTSDB benefited from its first-mover advantage: it was adopted by DevOps practitioners and widely applied to operations monitoring. In recent years, however, as new technologies such as cloud computing, microservices, and containerization have rapidly taken hold, enterprise services have multiplied, architectures have grown more complex, and application runtime environments have become increasingly diverse, putting ever more pressure on systems and monitoring. Against this backdrop, using OpenTSDB as the DevOps monitoring backend increasingly suffers from its performance problems and slow feature evolution, along with the resulting rise in deployment cost and drop in operating efficiency, and these problems worsen as systems scale up.
-
-In this context, to satisfy the fast-growing IoT big-data market and its technical demands, and after absorbing the strengths of many traditional relational databases, NoSQL databases, stream computing engines, and message queues, TAOS Data independently developed the innovative big-data processing product TDengine. TDengine has its own unique advantages in time-series big-data processing, and it can effectively solve the problems OpenTSDB currently runs into.
-
-Compared with OpenTSDB, TDengine has the following distinctive features:
-
-- Data writing and query performance far exceeds that of OpenTSDB;
-- An efficient compression mechanism for time-series data, compressing data to less than 1/5 of its size on disk;
-- Very simple installation and deployment: a single package completes the installation, with no third-party software dependencies, and the whole deployment finishes in seconds;
-- The built-in functions cover all of OpenTSDB's query functions, plus more time-series query functions, scalar functions, and aggregate functions, and advanced capabilities such as multiple time-window aggregations, join queries, expression evaluation, multiple group-by aggregations, user-defined sorting, and user-defined functions. With SQL-like syntax, it is simple to learn, with essentially zero learning cost.
-- Support for up to 128 tags, with a total tag length of up to 16 KB;
-- Besides the REST interface, it offers interfaces for C/C++, Java, Python, Go, Rust, Node.js, C#, Lua (community-contributed), PHP (community-contributed), and other languages, and supports enterprise-standard connector protocols such as JDBC.
-
-Migrating applications originally running on OpenTSDB to TDengine not only cuts compute and storage resource consumption and shrinks the deployment footprint, it also greatly reduces operations and maintenance costs, making operations simpler and easier and substantially lowering total cost of ownership. Like OpenTSDB, TDengine is open source, but unlike OpenTSDB, in addition to the standalone edition its cluster edition is open source as well, so the worry of vendor lock-in is gone.
-
-Below we use the most typical and widely used scenario, DevOps monitoring, to explain how to migrate an OpenTSDB application to TDengine quickly, safely, and reliably without writing any code. Later sections go deeper, to help with migrations in non-DevOps scenarios.
-
-## Quick Migration of DevOps Applications
-
-### 1. Typical Application Scenario
-
-The overall system architecture of a typical DevOps application scenario is shown below (Figure 1).
-
-**Figure 1. Typical architecture in a DevOps scenario**
-
-
-This scenario includes Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; a data collector that aggregates the information gathered by the agents; a system for persistent data storage and management; and a tool for visualizing the monitoring data (e.g., Grafana).
-
-The agents deployed on the application nodes supply运行 metrics from different sources to collectd/StatsD; collectd/StatsD pushes the aggregated data to the OpenTSDB cluster; and Grafana then presents the data visually in dashboards.
-
-### 2. Migration Services
-
-- **Installing and deploying TDengine**
-
-First install TDengine: download the latest stable version from the official website and install it. For help with the various installation packages, see the blog post ["Installing and Uninstalling TDengine Packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
-
-Note: after installation, do not start the `taosd` service immediately; start it only after the parameters have been configured correctly.
-
-- **Adjusting the data collector configuration**
-
-TDengine 2.4 ships with a component called taosAdapter. taosAdapter is a stateless component that can scale elastically and quickly. It is compatible with the InfluxDB Line Protocol and OpenTSDB's telnet/JSON write protocols, providing rich data-ingestion capabilities that effectively reduce migration cost and difficulty.
-
-Users can deploy taosAdapter instances elastically as needed to quickly raise data-write throughput and secure data ingestion in different application scenarios.
-
-Through taosAdapter, data collected by collectd or StatsD can be pushed directly into TDengine, achieving a seamless, effortless migration of the application scenario. taosAdapter also supports ingestion from Telegraf, Icinga, TCollector, and node_exporter; see [taosAdapter](/reference/taosadapter/) for details.
-
-If you use collectd, edit its configuration file (default location `/etc/collectd/collectd.conf`) to point to the IP address and port of the node where taosAdapter is deployed. Assuming taosAdapter's IP address is 192.168.1.130 and its port is 6046, configure it as follows:
-
-```html
-LoadPlugin write_tsdb
-<Plugin write_tsdb>
-  <Node>
-    Host "192.168.1.130"
-    Port "6046"
-    HostTags "status=production"
-    StoreRates false
-    AlwaysAppendDS false
-  </Node>
-</Plugin>
-```
-
-collectd will then push data to taosAdapter using its OpenTSDB write plugin, and taosAdapter will call the API to write the data into TDengine, completing the write path. If you use StatsD, adjust its configuration file accordingly.
-
-- **Adjusting the dashboard system**
-
-Once data is being written into TDengine normally, you can adapt Grafana to visualize the data written into TDengine. For obtaining and using the Grafana plugin provided by TDengine, see [Connecting with other tools](/third-party/grafana).
-
-TDengine provides two default dashboard templates; to activate them, simply import the templates from the Grafana directory into Grafana.
-
-**Figure 2. Importing a Grafana template**
-
-
-After these steps, the migration from OpenTSDB to TDengine is complete. As you can see, the whole process is very simple: no code needs to be written, and only a few configuration files need adjusting to finish the entire migration.
-
-### 3. Post-Migration Architecture
-
-After the migration, the overall system architecture is as shown below (Figure 3). The collection side, the data-write side, and the monitoring/presentation side all remain stable throughout the process; apart from a few minor configuration adjustments, nothing important changes. The vast majority of OpenTSDB deployments are DevOps scenarios, where simple parameter settings complete the migration from OpenTSDB to TDengine and let you enjoy TDengine's stronger processing power and query performance.
-
-In most DevOps scenarios, if you run a small OpenTSDB cluster (3 nodes or fewer) as the DevOps storage backend, relying on OpenTSDB for persistent data storage and querying, you can safely replace it with TDengine and save considerable compute and storage resources. With equivalent hardware, a single TDengine server can match the service capacity of 3 to 5 OpenTSDB nodes. At larger scale, a TDengine cluster is required.
-
-If your application is particularly complex, or your domain is not DevOps, read the following sections for a more comprehensive, in-depth look at the advanced topics of migrating OpenTSDB applications to TDengine.
-
-**Figure 3. System architecture after migration**
-
-
-## Migration Evaluation and Strategy for Other Scenarios
-
-### 1. Differences Between TDengine and OpenTSDB
-
-This section details the functional differences between OpenTSDB and TDengine. After reading it, you will be able to fully assess whether a complex OpenTSDB-based application can be migrated to TDengine, and what to watch out for after the migration.
-
-TDengine currently supports visualization dashboards only through Grafana. If your application uses front-end dashboards other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash) or [Status Wolf](https://github.com/box/StatusWolf)), they cannot be migrated to TDengine directly; they must be re-adapted to Grafana to work.
-
-As of version 2.3.0.x, TDengine supports only collectd and StatsD as data collection and aggregation software; support for more aggregation software will be added over time. If your collection side uses a different aggregator, your application must be adapted to these two systems before data can be written normally. Besides those two aggregation protocols, TDengine also accepts data written directly via the InfluxDB line protocol, the OpenTSDB write protocol, and JSON format, so you can rewrite the logic on the data-pushing side to write data using the line protocols TDengine supports.
-
-In addition, if your application uses any of the following OpenTSDB features, note the following before migrating to TDengine:
-
-1. `/api/stats`: if your application uses this feature to monitor the service state of OpenTSDB and has built logic around it, that state-reading logic must be re-adapted to TDengine. TDengine provides a new cluster-state monitoring mechanism to satisfy your application's monitoring and maintenance needs.
-2. `/api/tree`: if you rely on this OpenTSDB feature for hierarchical organization and maintenance of timelines, it cannot be migrated to TDengine directly. TDengine organizes timelines in a database -> supertable -> subtable hierarchy: all timelines belonging to the same supertable sit at the same level in the system, but a logical multi-level structure of the application can be simulated by constructing tag values appropriately.
-3. `Rollup And PreAggregates`: with rollups and pre-aggregates, the application must decide where to access the rollup results and when to fall back to the raw results. The opacity of this structure makes application logic extremely complex and entirely non-portable. We regard this strategy as a compromise made when a time-series database cannot deliver high-performance aggregation. TDengine does not yet support automatic downsampling of multiple timelines or (time-range) pre-aggregation; thanks to its high-performance query engine, however, it can deliver very fast query responses without relying on rollups or (time-range) pre-aggregated results, and it keeps your application's query logic much simpler.
-4. `Rate`: TDengine provides two functions for computing the rate of change of values: Derivative (whose results are consistent with InfluxDB's Derivative) and IRate (whose results are consistent with the IRate function in Prometheus). Their results differ slightly from Rate, but they are more powerful overall. Furthermore, **every computation function OpenTSDB provides has a corresponding TDengine query function, and TDengine's query functions go far beyond what OpenTSDB supports,** which can greatly simplify your application logic.
-
-With this information, you should be able to understand what changes migrating to TDengine will bring, and to judge correctly whether moving your application onto TDengine is acceptable, so you can experience TDengine's powerful time-series data processing and its convenience firsthand.
-
-### 2. Migration Strategy
-
-First, carry out the migration work for the OpenTSDB-based system: data schema design, system sizing, adaptation of the data-write side, data forwarding, and application adaptation. Then run the two systems in parallel for a while, and finally migrate the historical data into TDengine. If parts of your application strongly depend on the OpenTSDB features above and you do not want to stop using them, you may keep the original OpenTSDB system running while starting TDengine to provide the main service.
-
-## Data Model Design
-
-On the one hand, TDengine requires that ingested data have a strict schema definition. On the other hand, TDengine's data model is richer than OpenTSDB's: the multi-value model covers all single-value modeling needs.
-
-Now assume a DevOps scenario in which we use collectd to gather base metrics of devices, including memory, swap, and disk. Their schema in OpenTSDB is as follows:
-
-| No. | Metric | Value Name | Type | tag1 | tag2 | tag3 | tag4 | tag5 |
-| ---- | -------------- | ------ | ------ | ---- | ----------- | -------------------- | --------- | ------ |
-| 1 | memory | value | double | host | memory_type | memory_type_instance | source | n/a |
-| 2 | swap | value | double | host | swap_type | swap_type_instance | source | n/a |
-| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
-
-TDengine requires stored data to have a schema: a supertable must be created and its schema specified before data is written. For building the data schema, you have two options. 1) Take full advantage of TDengine's native support for OpenTSDB-format writes: call the API TDengine provides to write data (in text-line or JSON format), which automatically creates single-value models. This approach requires no major changes to the data-writing application and no format conversion of the written data.
-
-At the C level, TDengine provides the function `taos_schemaless_insert()` to write OpenTSDB-format data directly (in earlier versions the function was named `taos_insert_lines()`). For reference code, see the sample schemaless.c in the installation package directory.
-
-2) With a thorough understanding of TDengine's data model, and considering how your data is generated, manually establish a mapping from OpenTSDB to an adjusted TDengine data model. TDengine supports both the multi-value model and the single-value model; since OpenTSDB maps everything to the single-value model, modeling with the single-value model in TDengine is recommended here.
-
-- **Single-value model**.
-
-The specific steps are as follows: use the metric name as the name of a TDengine supertable, which is created with two base data columns, the timestamp and the value; the supertable's tags are equivalent to the metric's tag set, and the number of tags equals the metric's tag count. Subtable names follow a fixed rule: `metric + '_' + tag1_value + '_' + tag2_value + '_' + tag3_value ...` is used as the subtable name.
-
-Create 3 supertables in TDengine:
-
-```sql
-create stable memory(ts timestamp, val float) tags(host binary(12),memory_type binary(20), memory_type_instance binary(20), source binary(20));
-create stable swap(ts timestamp, val double) tags(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
-create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
-```
-
-Subtables are created dynamically on insert, as shown below:
-
-```sql
-insert into memory_vm130_memory_buffered_collectd using memory tags('vm130', 'memory', 'buffer', 'collectd') values(1632979445, 3.0656);
-```
-
-The system will end up with about 340 subtables and 3 supertables. Note that if concatenating tag values makes a subtable name exceed the system limit (191 bytes), some encoding (e.g., MD5) must be applied to bring it down to an acceptable length.
-
-- **Multi-value model**
-
-To take advantage of TDengine's multi-value model, the following must hold: the different collected quantities share the same collection frequency and can arrive at the data-write side **simultaneously** via a message queue, so that a single SQL statement can write multiple metrics at once. Again, the metric name is used as the supertable name, creating a multi-column model for data with the same collection frequency that can arrive together; subtable names follow a fixed naming rule. Each of the metrics above carries only one measurement value, so they cannot be converted to the multi-value model.
-
-## Data Forwarding and Application Adaptation
-
-Subscribe to the data from the message queue and start the adapted writer program to write the data.
-
-After writes have been running for a while, you can use SQL to check whether the volume of written data meets the expected write load. Use the following SQL to count the data volume:
-
-```sql
-select count(*) from memory
-```
-
-If the written data matches expectations and the writer program itself reports no errors, you can confirm that the written data is complete and valid.
-
-TDengine does not support querying or fetching data with OpenTSDB's query syntax, but it provides a counterpart for every kind of OpenTSDB query. See Appendix 1 for how to adjust the corresponding queries and how applications should use them; for a complete picture of the query types TDengine supports, consult the TDengine user manual.
-
-TDengine supports the standard JDBC 3.0 interface for manipulating the database, and you can also use connectors for other high-level languages to query and read data, to suit your application. See the user manual for specific operation and usage help.
-
-## Historical Data Migration
-
-### 1. Migrating Data Automatically with Tools
-
-To ease the migration of historical data, we provide a plugin for the data synchronization tool DataX that writes data into TDengine automatically. Note that DataX's automated migration supports only the single-value model.
-
-For how to use DataX and how to write data into TDengine with it, see [TDengine data migration tool based on DataX](https://www.taosdata.com/blog/2021/10/26/3156.html).
-
-In practice with DataX, we found that migrating multiple metrics in parallel by starting multiple processes substantially improves the throughput of historical-data migration. Below are some records from a migration, offered as a reference for your own migration work.
-
-| Number of DataX instances (concurrent processes) | Migration speed (records/second) |
-| ----------------------------- | --------------------- |
-| 1 | ~139,000 |
-| 2 | ~218,000 |
-| 3 | ~249,000 |
-| 5 | ~295,000 |
-| 10 | ~330,000 |
-
-(Note: test data comes from a single-node Intel(R) Core(TM) i7-10700 CPU@2.90GHz machine with 16 cores and 64 GB RAM; channel and batchSize were 8 and 1000 respectively; each record carries 10 tags.)
-
-### 2. Migrating Data Manually
-
-If you need to write data using the multi-value model, you must develop your own tool to export data from OpenTSDB, determine which timelines can be merged and imported into the same timeline, and then write the timelines that can be imported together into the database via SQL statements.
-
-Manual migration requires attention to two issues:
-
-1) When storing the exported data on disk, the disk needs enough space to hold the exported data files. To avoid running short of disk space after a full export, you can work incrementally: export the timelines belonging to one supertable first, import those data files into the TDengine system, and then move on.
-
-2) Under full system load, if there is enough spare compute and I/O capacity, build a multithreaded import mechanism to maximize migration throughput. Given the heavy CPU load of data parsing, cap the maximum number of parallel tasks to avoid overloading the overall system while importing historical data.
-
-Thanks to TDengine's operational simplicity, there is no index maintenance or data-format conversion to deal with anywhere in the process; it can simply be executed step by step.
-
-Once the historical data has been fully imported into TDengine, the two systems are running simultaneously, and you can then switch query traffic over to TDengine, achieving a seamless application cutover.
-
-## Appendix 1: OpenTSDB Query Function Mapping
-
-### Avg
-
-Equivalent function: avg
-
-Example:
-
-```sql
-SELECT avg(val) FROM (SELECT first(val) FROM super_table WHERE ts >= startTime and ts <= endTime INTERVAL(20s) Fill(linear)) INTERVAL(20s)
-```
-
-Notes:
-
-1. The value inside INTERVAL must be the same as the interval value of the outer query.
-2. In TDengine, interpolation must be done with the help of a subquery, as shown above; it suffices to specify the interpolation type in the inner query. Since OpenTSDB uses linear interpolation for values, fill(linear) is used in the fill clause to declare the interpolation type. The functions below with the same interpolation requirement are all handled this way.
-3. The 20s parameter of INTERVAL means the inner query produces results in 20-second time windows. In a real query, it must be adjusted to the time interval between records, to ensure the interpolated results are equivalent to the original data.
-4. Due to OpenTSDB's special interpolation strategy and mechanism, its approach of interpolating before computing in aggregate queries means its results can never match TDengine's exactly. In the downsampling case, however, TDengine and OpenTSDB produce identical results (because OpenTSDB uses completely different interpolation strategies for aggregate queries and downsampling queries).
-
-### Count
-
-Equivalent function: count
-
-Example:
-
-```sql
-select count(*) from super_table_name;
-```
-
-### Dev
-
-Equivalent function: stddev
-
-Example:
-
-```sql
-Select stddev(val) from table_name
-```
-
-### Estimated percentiles
-
-Equivalent function: apercentile
-
-Example:
-
-```sql
-Select apercentile(col1, 50, "t-digest") from table_name
-```
-
-Notes:
-
-1. During approximate query processing, OpenTSDB uses the t-digest algorithm by default, so to obtain identical results the algorithm must be specified in the apercentile function. TDengine supports two different approximation algorithms, declared via "default" and "t-digest" respectively.
-### First
-
-Equivalent function: first
-
-Example:
-
-```sql
-Select first(col1) from table_name
-```
-
-### Last
-
-Equivalent function: last
-
-Example:
-
-```sql
-Select last(col1) from table_name
-```
-
-### Max
-
-Equivalent function: max
-
-Example:
-
-```sql
-Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
-```
-
-Note: Max requires interpolation, for the reasons given above.
-
-### Min
-
-Equivalent function: min
-
-Example:
-
-```sql
-Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
-```
-
-### MimMax
-
-Equivalent function: max
-
-```sql
-Select max(val) from table_name
-```
-
-Note: this function needs no interpolation, so it can be computed directly.
-
-### MimMin
-
-Equivalent function: min
-
-```sql
-Select min(val) from table_name
-```
-
-Note: this function needs no interpolation, so it can be computed directly.
-
-### Percentile
-
-Equivalent function: percentile
-
-Notes:
-
-### Sum
-
-Equivalent function: sum
-
-```sql
-Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
-```
-
-Note: Sum requires interpolation, for the reasons given above.
-
-### Zimsum
-
-Equivalent function: sum
-
-```sql
-Select sum(val) from table_name
-```
-
-Note: this function needs no interpolation, so it can be computed directly.
-
-Complete example:
-
-```json
-// OpenTSDB query JSON
-query = {
-  "start": 1510560000,
-  "end": 1515000009,
-  "queries": [{
-    "aggregator": "count",
-    "metric": "cpu.usage_user"
-  }]
-}
-
-// Equivalent SQL:
-SELECT count(*)
-FROM `cpu.usage_user`
-WHERE ts >= 1510560000 AND ts <= 1515000009
-```
-
-## Appendix 2: Resource Estimation Method
-
-### Data Generation Environment
-
-We keep the environment assumed in Chapter 4, with three measurements. Temperature and humidity are each written at one record every 5 seconds, with 100,000 timelines; air quality is written at one record every 10 seconds, with 10,000 timelines; and the query request rate is 500 QPS.
-
-### Storage Resource Estimation
-
-Suppose the number of sensor devices generating data that must be stored is `n`, the data generation rate is `t` records per second, and each record is `L` bytes long; then the data generated per day is `86400×n×t×L` bytes. With a compression ratio of C, the daily data volume becomes `(86400×n×t×L)/C` bytes. Storage is sized to hold 1.5 years of data; in production, TDengine's compression ratio C is generally between 5 and 7. Adding 20% redundancy to the final result, the required storage is:
-
-```matlab
-(86400×n×t×L)×(365×1.5)×(1+20%)/C
-```
-
-Plugging the parameters into this formula, and ignoring tag information, the raw data generated per year is 11.8 TB. Note that since tag information is associated with each timeline in TDengine rather than with every record, the volume of data that must be recorded is somewhat lower than the volume generated, and this tag data can essentially be ignored overall. Assuming a compression ratio of 5, the retained data ends up at about 2.56 TB.
-
-### Storage Device Selection Considerations
-
-Choose disks with good random read performance, and use SSDs whenever possible. Disks with good random read performance greatly help system query performance and improve the system's overall query responsiveness. To obtain good query performance, the single-threaded random read IOPS of the disk device should be no lower than 1000, and 5000 IOPS or more is preferable. To assess the random read I/O capability of your current devices, use `fio` to benchmark them (see Appendix 1 for usage) and confirm they meet the random read requirements for large files.
-
-Disk write performance matters little to TDengine. TDengine writes in append-only fashion, so good sequential write performance is all that is needed; ordinary SAS disks and SSDs both satisfy TDengine's disk write requirements easily.
-
-### CPU Resource Estimation
-
-Because of the particular nature of IoT data, once the data generation rate is fixed, TDengine's write path consumes a relatively fixed amount of (compute and storage) resources. As described in the [TDengine Operation Guide](/operation/), the system handles 22,000 writes per second using less than one CPU core.
-
-In estimating the CPU resources needed for queries, suppose the application requires the database to provide 10,000 QPS and each query consumes about 1 ms of CPU time; then each core provides 1,000 QPS per second, and satisfying 10,000 QPS requires at least 10 cores. For the system's overall CPU load to stay below 50%, the cluster needs twice that, i.e., 20 cores.
-
-### Memory Resource Estimation
-
-By default the database allocates a buffer of 16 MB × 3 for each vnode. With a cluster of 22 CPU cores, 22 vnodes are created by default; at 1,000 tables per vnode, that accommodates all the tables. It then takes about an hour and a half to fill a block and trigger a flush, so no adjustment is needed. The 22 vnodes require about 1 GB of memory cache in total. Adding the memory needed for queries, and assuming about 50 MB per query, 500 concurrent queries need about 25 GB.
-
-In summary, you can use a single 16-core, 32 GB machine, or a cluster of two 8-core, 16 GB machines.
-
-## Appendix 3: Cluster Deployment and Startup
-
-TDengine provides plenty of documentation covering many aspects of cluster installation and deployment; the relevant documents are listed here for your reference.
-
-### Cluster Deployment
-
-First install TDengine: download the latest stable version from the official website, extract it, and run install.sh to install. For help with the various installation packages, see the blog post ["Installing and Uninstalling TDengine Packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
-
-Note: after installation, do not start the `taosd` service immediately; start `taosd` only after the parameters have been configured correctly.
-
-### Setting Runtime Parameters and Starting the Service
-
-To ensure the system can obtain the information it needs to run, set the following key parameters correctly on the server side:
-
-FQDN, firstEp, secondEP, dataDir, logDir, tmpDir, serverPort. For the meaning of each parameter and how to set it, see the document ["TDengine Cluster Installation and Management"](/cluster/)
-
-Follow the same steps on every node that should run: set the parameters, start the `taosd` service, and then add the dnodes into the cluster.
-
-Finally start the `taos` command-line program and execute the command `show dnodes`; if you can see all the nodes that joined the cluster, the cluster was set up successfully. For the detailed procedure and caveats, see the document ["TDengine Cluster Installation and Management"](/cluster/)
-
-## Appendix 4: Supertable Names
-
-OpenTSDB metric names contain dots ("."), as in a metric like "cpu.usage_user", but the dot has a special meaning in TDengine: it is the separator between database and table names. TDengine provides an escape character so that users may use keywords or special separators (such as the dot) in (super)table names: wrap the table name in the escape character, so that, for example, `` `cpu.usage_user` `` becomes a legal (super)table name.
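-
-As a brief illustration (schema hypothetical), the escaped name can then be used anywhere a table name is expected:
-
-```sql
-CREATE STABLE `cpu.usage_user` (ts TIMESTAMP, val DOUBLE) TAGS (host BINARY(64));
-SELECT count(*) FROM `cpu.usage_user`;
-```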
-
-## Appendix 5: References
-
-1. [Quickly build an IT DevOps monitoring system with TDengine + collectd/StatsD + Grafana](/application/collectd/)
-2. [Write collected data directly into TDengine through collectd](/third-party/collectd/)
diff --git a/docs-cn/27-train-faq/01-faq.md b/docs-cn/27-train-faq/01-faq.md
deleted file mode 100644
index e8a106d5d682948d97029cf36b7a47677a491804..0000000000000000000000000000000000000000
--- a/docs-cn/27-train-faq/01-faq.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-title: Frequently Asked Questions and Feedback
----
-
-## Reporting Problems
-
-If the information in this FAQ does not help and you need technical support and assistance from the TDengine team, please package the contents of the following two directories:
-
-1. /var/log/taos (if the default path has not been changed)
-2. /etc/taos
-
-Attach the necessary problem description, including the TDengine version in use, the platform and environment, the operations that triggered the problem, the symptoms, and the approximate time, and submit an issue on [GitHub](https://github.com/taosdata/TDengine).
-
-To make sure there is enough debug information, if the problem is reproducible, please edit the /etc/taos/taos.cfg file, append the line "debugFlag 135" (without the quotes) at the end, restart taosd, reproduce the problem, and then submit it. You can also set taosd's log level temporarily with the following SQL statement.
-
-```
- alter dnode debugFlag 135;
-```
-
-During normal operation, however, be sure to set debugFlag back to 131; otherwise large volumes of log output will degrade system performance.
-
-## FAQ List
-
-### 1. What should I watch out for when upgrading from a pre-2.0 version of TDengine to 2.0 or later? ☆☆☆
-
-Version 2.0 is a complete rewrite on top of the previous versions, and its configuration files and data files are not compatible. Be sure to do the following before upgrading:
-
-1. Delete the configuration file: `sudo rm -rf /etc/taos/taos.cfg`
-2. Delete the log files: `sudo rm -rf /var/log/taos/`
-3. Provided the data is definitely no longer needed, delete the data files: `sudo rm -rf /var/lib/taos/`
-4. Install the latest stable version of TDengine
-5. If data must be migrated, or if the data files are damaged, contact the official TAOS Data technical support team for assistance
-
-### 2. On Windows, the JDBC driver cannot find the dynamic link library. What should I do?
-
-See the [technical blog post](https://www.taosdata.com/blog/2019/12/03/950.html) written for this question.
-
-### 3. Creating a data table fails with "more dnodes are needed"
-
-See the [technical blog post](https://www.taosdata.com/blog/2019/12/03/965.html) written for this question.
-
-### 4. How do I make TDengine generate a core file when it crashes?
-
-See the [technical blog post](https://www.taosdata.com/blog/2019/12/06/974.html) written for this question.
-
-### 5. What should I do about the error "Unable to establish connection"?
-
-When the client hits a connection failure, check the following step by step:
-
-1. Check the network environment
-
-   - Cloud server: check whether the cloud server's security group opens access to TCP/UDP ports 6030-6042
-   - Local virtual machine: check whether the host can be pinged; avoid using `localhost` as the hostname
-   - Company server: in a NAT network environment, make sure the server can return messages to the client
-
-2. Make sure the client and server versions are exactly the same; the open-source community edition and the enterprise edition cannot be mixed either
-
-3. On the server, run `systemctl status taosd` to check the state of *taosd*. If it is not running, start *taosd*
-
-4. Confirm that the client connects with the correct server FQDN (Fully Qualified Domain Name; run the Linux command hostname -f on the server to get it). For FQDN configuration, see: [One article to explain TDengine's FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html).
-
-5. Ping the server FQDN. If there is no response, check your network, the DNS settings, or the system hosts file on the client machine. If a TDengine cluster is deployed, the client must be able to ping the FQDN of every cluster node.
-
-6. Check the firewall settings (ufw status on Ubuntu, firewall-cmd --list-port on CentOS) and make sure TCP/UDP on ports 6030-6042 is open between all hosts of the cluster.
-
-7. For JDBC on Linux (similarly for ODBC, Python, Go, and other interfaces), make sure *libtaos.so* is in the directory */usr/local/taos/driver* and that */usr/local/taos/driver* is in the system library search path *LD_LIBRARY_PATH*
-
-8. For JDBC, ODBC, Python, Go, etc. on Windows, make sure *C:\TDengine\driver\taos.dll* is in the system library search path (placing *taos.dll* in _C:\Windows\System32_ is recommended)
-
-9. If the connection failure still cannot be isolated
-
-   - On Linux, use the command-line tool nc to check whether the TCP and UDP connections on the given port work:
-     check whether the UDP port connection works: `nc -vuz {hostIP} {port}`
-     check whether the server-side TCP port accepts connections: `nc -l {port}`
-     check whether the client-side TCP port connection works: `nc {hostIP} {port}`
-
-   - On Windows, use the PowerShell command Test-NetConnection -ComputerName {fqdn} -Port {port} to check whether the server-side port is reachable
-
-10. You can also use the network connectivity test built into the taos program to verify whether the specified port (both TCP and UDP) between server and client is open: [Guide to TDengine's built-in network connectivity test tool](https://www.taosdata.com/blog/2020/09/08/1816.html).
-
-### 6. What about the errors "Unexpected generic error in RPC" or "Unable to resolve FQDN"?
-
-This error occurs because the client or a data node cannot resolve the FQDN (Fully Qualified Domain Name). For the TAOS shell or a client application, check the following:
-
-1. Check that the FQDN of the server being connected to is correct. For FQDN configuration, see: [One article to explain TDengine's FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html)
-2. If the network has a DNS server configured, check that it works properly
-3. If the network has no DNS server, check the hosts file on the client machine to see whether the FQDN is configured and maps to the correct IP address
-4. If the network configuration is fine, the client machine must be able to ping the FQDN being connected to; otherwise the client cannot connect to the server
-5. If the server has run TDengine before and the hostname was changed, check whether dnodeEps.json in the data directory matches the currently configured EP; the default path is /var/lib/taos/dnode. Normally it is advisable to switch to a new data directory, or to back up and then delete the old one, to avoid this problem.
-6. Check whether /etc/hosts and /etc/hostname contain the pre-configured FQDN
-
-### 7. Why do I still get an "Invalid SQL" error even though the syntax is correct?
-
-If you are sure the syntax is correct: in versions before 2.0, check whether the SQL statement is longer than 64K; exceeding that limit also returns this error.
-
-### 8. Are validation queries supported?
-
-TDengine does not yet have a dedicated set of validation queries. For this purpose, however, it is recommended to use the system-monitoring database "log".
-
-
-
-### 9. Can I delete or update a record?
-
-The deletion feature of TDengine is available only in the enterprise edition, from version 2.6.0.0 onward.
-
-Starting with 2.0.8.0, TDengine supports updating data that has already been written. Using the update feature requires the UPDATE 1 parameter when the database is created, after which the INSERT INTO command can be used to update data already written at the same timestamp. The UPDATE parameter cannot be changed with the ALTER DATABASE command. In a database created without UPDATE 1, writing data at an existing timestamp neither modifies the earlier data nor reports an error.
-
-Also note that with UPDATE set to 0, data sent later with an identical timestamp is silently discarded without any error, yet it still counts toward affected rows (so the return value of an INSERT cannot be used to check for duplicate timestamps). This design reflects how TDengine treats written data as a data stream: whether or not timestamps collide, TDengine assumes the originating device genuinely produced the data. The UPDATE parameter only controls how such stream data is handled at persistence time: with UPDATE 0, data written first overrides data written later; with UPDATE 1, data written later overrides data written first. Which to choose depends on whether the earlier or the later data should be authoritative in downstream use and statistics.
-
-Moreover, starting with version 2.1.7.0, UPDATE can be set to 2, meaning "partial-column update": with UPDATE 1, if an updating row omits values for some columns, those columns are set to NULL; with UPDATE 2, columns with no value provided keep the corresponding values of the existing row.
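-
-A minimal sketch of the UPDATE 1 behavior (database and table names hypothetical):
-
-```sql
-CREATE DATABASE demo UPDATE 1;
-USE demo;
-CREATE TABLE t1 (ts TIMESTAMP, val INT);
-INSERT INTO t1 VALUES ('2022-01-01 00:00:00.000', 1);
-INSERT INTO t1 VALUES ('2022-01-01 00:00:00.000', 2); -- same timestamp: val becomes 2
-```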
-
-### 10. How do I create a table with more than 1024 columns?
-
-With version 2.0 and above, 1024 columns are supported by default; before 2.0, TDengine allowed at most 250 columns per table. If you really exceed the limit, it is advisable to split the wide table logically into several smaller tables according to the characteristics of the data. (Starting with version 2.1.7.0, the maximum number of columns per table was raised to 4096.)
-
-### 11. What is the most efficient way to write data?
-
-Batch insertion. Each write statement can insert multiple records into one table at once, or multiple records into multiple tables at once, as sketched below.
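-
-For example (tables d1001/d1002 hypothetical, each with a timestamp and one value column):
-
-```sql
-INSERT INTO d1001 VALUES (now, 10.2) (now + 1s, 10.3)
-            d1002 VALUES (now, 10.5) (now + 1s, 10.6);
-```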
-
-### 12. Chinese characters in nchar data inserted on Windows turn into garbage. How do I fix this?
-
-If nchar data inserted on Windows contains Chinese characters, first make sure the system region is set to China (configurable in the Control Panel); the `taos` client in cmd should then work correctly. If you develop a Java application in an IDE such as Eclipse or IntelliJ, make sure the file encoding in the IDE is set to GBK (Java's default encoding type), then initialize the client configuration when creating the Connection, as follows:
-
-```JAVA
-Class.forName("com.taosdata.jdbc.TSDBDriver");
-Properties properties = new Properties();
-properties.setProperty(TSDBDriver.LOCALE_KEY, "UTF-8");
-Connection connection = DriverManager.getConnection(url, properties);
-```
-
-### 13. Why does the Windows client fail to display Chinese characters correctly?
-
-Windows generally stores Chinese characters in GBK/GB18030, while TDengine's default character set is UTF-8. When using the TDengine client on Windows, the client driver converts all characters to UTF-8 before sending them to the server for storage, so during application development it suffices to configure the current Chinese character set correctly when calling the interfaces.
-
-[v2.2.1.5 and later] When running the TDengine CLI tool taos on Windows 10, if Chinese cannot be entered or displayed correctly, add the following to the client's taos.cfg:
-
-```
-locale C
-charset UTF-8
-```
-
-### 14. JDBC error: the executed SQL is not a DML or a DDL?
-
-Please upgrade to the latest JDBC driver; see the [Java Connector](/reference/connector/java)
-
-### 15. taos connect failed, reason: invalid timestamp
-
-The usual cause is that the server's and client's clocks are not synchronized. Synchronize with a time server (on Linux use the ntpdate command; on Windows select automatic synchronization in the system time settings) to fix it.
-
-### 16. Table names are displayed incompletely
-
-Because the display width of the taos shell in the terminal is limited, longer table names may be shown incompletely, and operating on the truncated names causes Table does not exist errors. This can be fixed by modifying the maxBinaryDisplayWidth setting in taos.cfg, by typing the command set max_binary_display_width 100 directly, or by appending \G to the end of the statement to change how results are displayed.
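-
-For example, inside the taos shell (table name hypothetical):
-
-```sql
-SET MAX_BINARY_DISPLAY_WIDTH 100;
-SELECT * FROM d1001 LIMIT 10\G;
-```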
-
-### 17. How do I migrate data?
-
-TDengine uniquely identifies a machine by its hostname. When moving data files from machine A to machine B, pay attention to the following:
-
- - For versions 2.0.0.0 through 2.0.6.x, reconfigure machine B's hostname to machine A's hostname.
- - For 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN corresponding to the dnodeId in dnodeEps.json, and restart. Make sure this file is completely identical on all machines.
- - The storage formats of versions 1.x and 2.x are incompatible; use the migration tool, or develop your own application to export and import the data.
-
-### 18. How do I temporarily adjust the log level in the command-line program taos?
-
-For convenient debugging, the command-line program taos gained two new log-related commands starting with version 2.0.16:
-
-```sql
-ALTER LOCAL flag_name flag_value;
-```
-
-This changes the log level of a particular module in the current command-line program only (it is not persisted; if the taos command-line program is restarted, it must be set again):
-
- - flag_name can be: debugFlag, cDebugFlag, tmrDebugFlag, uDebugFlag, rpcDebugFlag
- - flag_value can be: 131 (output error and warning logs), 135 (output error, warning, and debug logs), 143 (output error, warning, debug, and trace logs)
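-
-For example, to turn on full trace logging for the RPC module in the current session:
-
-```sql
-ALTER LOCAL rpcDebugFlag 143;
-```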
-
-```sql
-ALTER LOCAL RESETLOG;
-```
-
-This clears all log files generated by the client on this machine.
-
-
-
-### 19. How do I fix compilation failures of the components written in Go?
-
-TDengine 2.3.0.0 and later include taosAdapter, an independent component written in Go that must run separately. It replaces the httpd previously built into taosd, providing the original httpd functionality plus data ingestion for various other software (Prometheus, Telegraf, collectd, StatsD, etc.).
-To build the latest develop branch, first run `git submodule update --init --recursive` to download the taosAdapter repository code, then compile.
-
-The default build compiles taosAdapter automatically and requires Go 1.14 or later. Go build errors are often caused by problems accessing go modules from within China and can be solved by setting the go environment variables:
-
-```sh
-go env -w GO111MODULE=on
-go env -w GOPROXY=https://goproxy.cn,direct
-```
-
-If you prefer to keep using the previously built-in httpd, disable the taosAdapter build with
-`cmake .. -DBUILD_HTTP=true` to use the original built-in httpd.
-
-### 20. How do I check how much storage space the data occupies?
-
-By default, TDengine's data files are stored in /var/lib/taos and its log files in /var/log/taos.
-
-To see the total size occupied by all data files, run the shell command `du -sh /var/lib/taos/vnode --exclude='wal'`. The WAL directory is excluded here because under continuous writes its size is nearly constant, and it is emptied every time TDengine is shut down normally and the data is flushed to disk.
-
-To see the size occupied by a single database, select the database to inspect in the command-line program taos and run `show vgroups;`, then use the resulting VGroup ids to check the sizes of the corresponding folders under /var/lib/taos/vnode.
-
-To inspect just the data-block distribution and size of a given (super)table, see the [_block_dist function](https://docs.taosdata.com/taos-sql/select/#_block_dist-%E5%87%BD%E6%95%B0)
-
-### 21. How do I make the client connection string highly available?
-
-See the [technical blog post](https://www.taosdata.com/blog/2021/04/16/2287.html) written for this question
-
-### 22. How is the timezone information of timestamps handled?
-
-In TDengine, the timezone of timestamps is always handled on the client side and has nothing to do with the server. Specifically, the client converts the timestamps in SQL statements to the UTC timezone (i.e., Unix timestamps) before handing them to the server for writing and querying; when reading data, the server likewise serves the raw data in UTC, and the client converts the received timestamps to the timezone required by the local system for display.
-
-When handling timestamp strings, the client applies the following logic:
-
-1. Unless configured otherwise, the client defaults to the timezone setting of its operating system.
-2. If the timezone parameter is set in taos.cfg, the client follows that configuration file setting.
-3. If a timezone is explicitly specified when establishing the database connection in the connector driver of C/C++/Java/Python or another language, that specified timezone prevails; for instance, the Java connector's JDBC URL has a timezone parameter.
-4. When writing SQL, you may also use Unix timestamps directly (e.g., `1554984068000`) or timestamp strings with a timezone, i.e., RFC 3339 format (e.g., `2013-04-12T15:52:01.123+08:00`) or ISO 8601 format (e.g., `2013-04-12T15:52:01.123+0800`); the values of such timestamps are then unaffected by any other timezone setting.
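-
-As a small illustration of point 4 (table name hypothetical), both inserts below pin their instants independently of the client's timezone setting:
-
-```sql
-INSERT INTO t1 VALUES (1554984068000, 1);
-INSERT INTO t1 VALUES ('2013-04-12T15:52:01.123+08:00', 2);
-```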
-
-### 23. Which network ports does TDengine 2.0 use?
-
-For the network ports used, see the documentation: [serverport](/reference/config/#serverport)
-
-Note that the ports listed in the documentation assume the default port 6030; if the setting in the configuration file is changed, the listed ports shift accordingly. Administrators can adjust firewall settings based on this information.
-
-### 24. Why is the RESTful interface unresponsive, why can't Grafana add TDengine as a data source, and why can't TDengineGUI connect even with port 6041 selected?
-
-Starting with TDengine 2.4.0.0, taosAdapter became part of the TDengine server software, acting as the bridge and adapter between the TDengine cluster and applications. Previously, the RESTful interface and related functionality were provided by the HTTP service built into taosd; now you must run ```systemctl start taosadapter``` to start the taosAdapter service for that functionality.
-
-Note that taosAdapter's log path must be configured separately; the default path is /var/log/taos. There are 8 logLevel levels, the default being info; setting it to panic disables log output. Mind the space available on the operating system's / directory; the configuration can be changed via command-line parameters, environment variables, or the configuration file, which by default is /etc/taos/taosadapter.toml.
-
-For a detailed introduction to the taosAdapter component, see the documentation: [taosAdapter](https://docs.taosdata.com/reference/taosadapter/)
-
-### 25. What should I do when an OOM occurs?
-
-OOM is a protection mechanism of the operating system: when OS memory (including SWAP) runs short, the OS kills some processes to keep itself stable. Memory shortage usually has two main causes: remaining memory below vm.min_free_kbytes, or a program requesting more memory than remains. There is also the case where memory is sufficient but the program occupies special memory addresses, which can also trigger an OOM.
-
-TDengine preallocates memory for each vnode; the number of vnodes per database is influenced by maxVgroupsPerDb, and the memory occupied by each vnode by Blocks and Cache. To prevent OOM, plan memory properly at project setup time and configure SWAP sensibly. In addition, querying excessive amounts of data may also cause memory to spike, depending on the specific query. TDengine Enterprise optimizes memory management with a new memory allocator; users with higher stability requirements may consider choosing the enterprise edition.
diff --git a/docs-cn/27-train-faq/03-docker.md b/docs-cn/27-train-faq/03-docker.md
deleted file mode 100644
index 7791569b25e102b4634f0fb899fc0973cacc0aa1..0000000000000000000000000000000000000000
--- a/docs-cn/27-train-faq/03-docker.md
+++ /dev/null
@@ -1,330 +0,0 @@
----
-title: Experience TDengine Quickly with Docker
----
-
-Although deploying TDengine services via Docker in production is not recommended, Docker shields the environmental differences of the underlying operating system nicely, making it a fine toolset for installing and running TDengine during development, testing, or a first trial. In particular, Docker makes it fairly easy to try TDengine on macOS and Windows without installing a virtual machine or renting a Linux server. Moreover, starting with version 2.0.14.0, TDengine's images support the X86-64, X86, arm64, and arm32 platforms, so unconventional machines that can run docker, such as NAS devices, Raspberry Pis, and embedded development boards, can also easily try TDengine based on this document.
-
-The step-by-step walkthrough below explains how to quickly set up a single-node TDengine environment via Docker to support development and testing.
-
-## Installing Docker
-
-To download Docker itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).
-
-After installation, you can check the Docker version in a command-line terminal. If the version number prints normally, the Docker environment was installed successfully.
-
-```bash
-$ docker -v
-Docker version 20.10.3, build 48d30b5
-```
-
-## Running TDengine in a Container with Docker
-
-### Running the TDengine server in a Docker container
-
-```bash
-$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
-```
-
-This command starts a docker container running the TDengine server and maps the container's ports 6030 through 6049 to the same ports on the host. If the host is already running a TDengine server that occupies these ports, map the container's ports to a different unused port range instead (see the [TDengine 2.0 port explanation](/train-faq/faq#port) for details). Both the TCP and UDP ports need to be open so that TDengine clients can operate the TDengine server.
-
-- **docker run**: runs a container via Docker
-- **-d**: runs the container in the background
-- **-p**: specifies the port mappings. Note: without port mappings you can still enter the Docker container and use the TDengine service or develop applications; you just cannot serve clients outside the container
-- **tdengine/tdengine**: the official TDengine application image being pulled
-- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the long string returned is the container ID, which can also be used to refer to the container
-
-Going further, you can start the TDengine server container with the `--name` option to name the container `tdengine`, use `--hostname` to set the hostname to `tdengine-server`, and use `-v` to mount local directories into the container, keeping data synchronized between host and container and preventing data loss when the container is deleted.
-
-```bash
-docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-- **--name tdengine**: sets the container name; the container can then be reached by this name
-- **--hostname=tdengine-server**: sets the hostname of the Linux system inside the container; mapping the hostname to the IP avoids problems caused by the container IP changing.
-- **-v**: mounts host directories into the container, so data survives container deletion.
-
-### Using docker ps to confirm the container is running correctly
-
-```bash
-docker ps
-```
-
-Example output:
-
-```
-CONTAINER ID IMAGE COMMAND CREATED STATUS ···
-c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
-```
-
-- **docker ps**: lists information about all containers in the running state.
-- **CONTAINER ID**: the container ID.
-- **IMAGE**: the image in use.
-- **COMMAND**: the command run when the container started.
-- **CREATED**: when the container was created.
-- **STATUS**: the container state. UP means running.
-
-### Entering the docker container for development via docker exec
-
-```bash
-$ docker exec -it tdengine /bin/bash
-root@tdengine-server:~/TDengine-server-2.4.0.4#
-```
-
-- **docker exec**: enters the container via docker exec; the container does not stop when you exit.
-- **-i**: interactive mode.
-- **-t**: allocates a terminal.
-- **tdengine**: the container name; adjust it according to what docker ps returns.
-- **/bin/bash**: runs bash for interaction after entering the container.
-
-Inside the container, run the taos shell client program:
-
-```bash
-root@tdengine-server:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The TDengine terminal connects to the server and prints a welcome message and version information. On failure, an error message is printed instead.
-
-In the TDengine terminal you can create/delete databases, tables, and supertables via SQL commands, and run inserts and queries. For details see the [TAOS SQL documentation](/taos-sql/).
-
-### Accessing the TDengine server inside the Docker container from the host
-
-After starting the TDengine Docker container with the correct ports mapped via the -p option, you can access the TDengine server running in the container simply by using the taos shell on the host.
-
-```
-$ taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-You can also use curl on the host to access the TDengine server inside the Docker container through the RESTful port.
-
-```
-curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
-```
-
-Example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
-```
-
-This command accesses the TDengine server through the REST API, connecting to port 6041 on the local machine, which confirms the connection succeeds.
-
-For details of the TDengine REST API, see the [official documentation](/reference/rest-api/).
-
-### Running the TDengine server and taosAdapter in Docker containers
-
-Docker containers for TDengine 2.4.0.0 and later provide taosAdapter, an independently running component that replaces the http server built into the taosd process of earlier TDengine versions. taosAdapter supports writing to and querying the TDengine server through the RESTful interface, and provides ingestion interfaces compatible with InfluxDB/OpenTSDB, allowing InfluxDB/OpenTSDB applications to be ported to TDengine seamlessly. In the new Docker images, taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command, and taosAdapter can also be run on its own in a docker run command, without running taosd.
-
-Note: if taosAdapter runs inside the container, you may need to map additional ports as required; for the default port configuration and how to change it, see the [taosAdapter documentation](/reference/taosadapter/).
-
-Run the TDengine 2.4.0.4 image with docker (taosd + taosAdapter):
-
-```bash
-docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
-```
-
-Run the TDengine 2.4.0.4 image with docker (taosAdapter only; the firstEp configuration item or the TAOS_FIRST_EP environment variable must be set):
-
-```bash
-docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
-```
-
-Run the TDengine 2.4.0.4 image with docker (taosd only):
-
-```bash
-docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
-```
-
-Verify that the RESTful interface works, using curl:
-
-```bash
-curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
-```
-
-Example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
-```
-
-### Application example: writing data into the containerized TDengine server from the host with taosBenchmark
-
-1. Run taosBenchmark (formerly named taosdemo) on the host command line to write data into the TDengine server inside the Docker container
-
- ```bash
- $ taosBenchmark
-
- taosBenchmark is simulating data generated by power equipments monitoring...
-
- host: 127.0.0.1:6030
- user: root
- password: taosdata
- configDir:
- resultFile: ./output.txt
- thread num of insert data: 10
- thread num of create table: 10
- top insert interval: 0
- number of records per req: 30000
- max sql length: 1048576
- database count: 1
- database[0]:
- database[0] name: test
- drop: yes
- replica: 1
- precision: ms
- super table count: 1
- super table[0]:
- stbName: meters
- autoCreateTable: no
- childTblExists: no
- childTblCount: 10000
- childTblPrefix: d
- dataSource: rand
- iface: taosc
- insertRows: 10000
- interlaceRows: 0
- disorderRange: 1000
- disorderRatio: 0
- maxSqlLen: 1048576
- timeStampStep: 1
- startTimestamp: 2017-07-14 10:40:00.000
- sampleFormat:
- sampleFile:
- tagsFile:
- columnCount: 3
- column[0]:FLOAT column[1]:INT column[2]:FLOAT
- tagCount: 2
- tag[0]:INT tag[1]:BINARY(16)
-
- Press enter key to continue or Ctrl-C to stop
- ```
-
- After pressing Enter, the command automatically creates a supertable meters under the database test, with 10,000 subtables named "d0" through "d9999", each holding 10,000 records. Each record has four fields (ts, current, voltage, phase), with timestamps running from "2017-07-14 10:40:00.000" to "2017-07-14 10:40:09.999". Each table carries the tags location and groupId; groupId is set to 1 through 10 and location to "California.SanFrancisco" or "California.SanDiego".
-
- In total, 100 million records are inserted.
-
-2. Enter the TDengine terminal and inspect the data generated by taosBenchmark.
-
- - **Enter the command line.**
-
- ```bash
- $ root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
-
- Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
- Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
- taos>
- ```
-
- - **List the databases.**
-
- ```bash
- $ taos> show databases;
- name | created_time | ntables | vgroups | ···
- test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
- log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
-
- ```
-
- - **List the supertables.**
-
- ```bash
- $ taos> use test;
- Database changed.
-
- $ taos> show stables;
- name | created_time | columns | tags | tables |
- ============================================================================================
- meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
- Query OK, 1 row(s) in set (0.003259s)
-
- ```
-
- - **Query a table, limiting the output to ten rows.**
-
- ```bash
- $ taos> select * from test.t0 limit 10;
-
- DB error: Table does not exist (0.002857s)
- taos> select * from test.d0 limit 10;
- ts | current | voltage | phase |
- ======================================================================================
- 2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
- 2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
- 2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
- 2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
- 2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
- 2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
- 2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
- 2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
- 2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
- 2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
- Query OK, 10 row(s) in set (0.016791s)
-
- ```
-
- - **Check the tag values of table d0.**
-
- ```bash
- $ taos> select groupid, location from test.d0;
- groupid | location |
- =================================
- 0 | California.SanDieo |
- Query OK, 1 row(s) in set (0.003490s)
- ```
-
-### Application example: writing into TDengine with data collection agent software
-
-taosAdapter supports multiple data collection agents (such as Telegraf, StatsD, and collectd). Here we simulate StatsD writing data by running the following command on the host:
-
-```
-echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
-```
-
-Then you can use the taos shell to query the contents of the database statsd and the supertable foo that taosAdapter created automatically:
-
-```
-taos> show databases;
- name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
-====================================================================================================================================================================================================================================================================================
- log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
- statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
-Query OK, 2 row(s) in set (0.002112s)
-
-taos> use statsd;
-Database changed.
-
-taos> show stables;
- name | created_time | columns | tags | tables |
-============================================================================================
- foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
-Query OK, 1 row(s) in set (0.001160s)
-
-taos> select * from foo;
- ts | value | metric_type |
-=======================================================================================
- 2021-12-28 09:21:48.840820836 | 1 | counter |
-Query OK, 1 row(s) in set (0.001639s)
-
-taos>
-```
-
-You can see that the simulated data has been written into TDengine.
-
-## Stopping the TDengine service running in Docker
-
-```bash
-docker stop tdengine
-```
-
-- **docker stop**: stops the specified running docker container.
diff --git a/docs-cn/30-release/01-2.6.md b/docs-cn/30-release/01-2.6.md
deleted file mode 100644
index 85b76d9999e211336b5859beab3fdfc7988f4fda..0000000000000000000000000000000000000000
--- a/docs-cn/30-release/01-2.6.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: 2.6
----
-
-[2.6.0.4](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.4)
-
-[2.6.0.1](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.1)
-
-[2.6.0.0](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.0)
diff --git a/docs-cn/30-release/02-2.4.md b/docs-cn/30-release/02-2.4.md
deleted file mode 100644
index 62580b327a3bd5098e1b7f1162a1c398ac2a5eff..0000000000000000000000000000000000000000
--- a/docs-cn/30-release/02-2.4.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: 2.4
----
-
-[2.4.0.26](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.26)
-
-[2.4.0.25](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.25)
-
-[2.4.0.24](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.24)
-
-[2.4.0.20](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.20)
-
-[2.4.0.18](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.18)
-
-[2.4.0.16](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.16)
-
-[2.4.0.14](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.14)
-
-[2.4.0.12](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.12)
-
-[2.4.0.10](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.10)
-
-[2.4.0.7](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.7)
-
-[2.4.0.5](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.5)
-
-[2.4.0.4](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.4)
-
-[2.4.0.0](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.0)
diff --git a/docs-en/01-index.md b/docs-en/01-index.md
deleted file mode 100644
index d76c12e10fce24dff9f916945f5b6236857ebb8d..0000000000000000000000000000000000000000
--- a/docs-en/01-index.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title: TDengine Documentation
-sidebar_label: Documentation Home
-slug: /
----
-
-TDengine is a [high-performance](https://tdengine.com/fast), [scalable](https://tdengine.com/scalable) time series database with [SQL support](https://tdengine.com/sql-support). This document is the TDengine user manual. It introduces the basic as well as novel concepts of TDengine, and also discusses installation, features, SQL, APIs, operation, maintenance, kernel design and other topics in detail. It’s written mainly for architects, developers and system administrators.
-
-To get a global view about TDengine, like feature list, benchmarks, and competitive advantages, please browse through section [Introduction](./intro).
-
-TDengine greatly improves the efficiency of data ingestion, querying and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [“Concepts”](./concept) thoroughly.
-
-If you are a developer, please read the [“Developer Guide”](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
-
-We live in the era of big data, and scale-up is unable to meet the growing business needs. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to ["cluster"](./cluster).
-
-TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll up, interpolation and time weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
-
-If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read the ["Administration"](./operation) section.
-
-If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the ["Reference"](./reference) chapter.
-
-If you are very interested in the internal design of TDengine, please read the chapter ["Inside TDengine”](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
-
-TDengine is an open source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation, or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
-
-Together, we make a difference.
diff --git a/docs-en/02-intro/index.md b/docs-en/02-intro/index.md
deleted file mode 100644
index f6766f910f4d7560b782bf02ffa97922523e6167..0000000000000000000000000000000000000000
--- a/docs-en/02-intro/index.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-title: Introduction
-toc_max_heading_level: 2
----
-
-TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
-
-This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
-
-## Major Features
-
-The major features are listed below:
-
-1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) among others.
-2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
-3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
-4. Support for [user defined functions](/develop/udf).
-5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
-6. Support for [continuous query](/develop/continuous-query).
-7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
-8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
-9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
-10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
-11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
-12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
-13. Provides a [REST API](/reference/rest-api/).
-14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
-15. Supports seamless integration with Google Data Studio.
-
-For more details on features, please read through the entire documentation.
-
-## Competitive Advantages
-
-Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
-
-- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
-
-- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
-
-- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible, schemaless data ingestion.
-
-- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
-
-- **Seamless Integration**: Without a single line of code, TDengine provides seamless, configurable integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More third-party tools are being integrated.
-
-- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
-
-- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
-
-- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
-
-With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced: (1) with its superior performance, computing and storage resources are reduced significantly; (2) with SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly; (3) with its simple architecture and zero management, operation and maintenance costs are reduced.
-
-## Technical Ecosystem
-
-This is how TDengine would be situated in a typical time-series data processing platform:
-
-
-
-
-Figure 1. TDengine Technical Ecosystem
-
-On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
-
-## Typical Use Cases
-
-As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
-
-### Characteristics and Requirements of Data Sources
-
-| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
-| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry.|
-| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
-| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
-
-### System Architecture Requirements
-
-| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
-| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
-| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
-
-### System Function Requirements
-
-| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.|
-| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
-
-### System Performance Requirements
-
-| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
-| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products.|
-| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
-
-### System Maintenance Requirements
-
-| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
-| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
-| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
-
-## Comparison with Other Databases
-
-- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/23/4975.html)
-- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
-- [TDengine vs InfluxDB, OpenTSDB, Cassandra, MySQL, ClickHouse](https://www.tdengine.com/downloads/TDengine_Testing_Report_en.pdf)
-- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
-- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
-- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
diff --git a/docs-en/05-get-started/_pkg_install.mdx b/docs-en/05-get-started/_pkg_install.mdx
deleted file mode 100644
index cf10497c96ba1d777e45340b0312d97c127b6fcb..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/_pkg_install.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-import PkgList from "/components/PkgList";
-
-It's very easy to install TDengine; it takes only a few minutes from download to finished installation.
-
-For the convenience of users, from version 2.4.0.10 the standard server-side installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark` and sample code. If only the `taosd` server and C/C++ connector are required, you can also choose to download the lite package.
-
-Three kinds of packages are provided: tar.gz, rpm and deb. In particular, the tar.gz package is provided for the convenience of enterprise customers on different kinds of operating systems; it includes `taosdump` and the TDinsight installation script, which are normally only provided in the taos-tools rpm and deb packages.
-
-Between two major release versions, some beta versions may be delivered for users to try some new features.
-
-
-
-For details, please refer to [Install and Uninstall](/operation/pkg-install).
-
-To see the details of versions, please refer to [Download List](https://tdengine.com/all-downloads) and [Release Notes](https://github.com/taosdata/TDengine/releases).
-
-
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
deleted file mode 100644
index 56958ef3ec1c206ee0cff45c67fd3c3a6fa6753a..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/index.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Get Started
-description: 'Install TDengine from Docker image, apt-get or package, and run TAOS CLI and taosBenchmark to experience the features'
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import PkgInstall from "./\_pkg_install.mdx";
-import AptGetInstall from "./\_apt_get_install.mdx";
-
-## Quick Install
-
-The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future, taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
-
-TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
-
-
-
-If Docker is already installed on your computer, execute the following command:
-
-```shell
-docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-Make sure the container is running:
-
-```shell
-docker ps
-```
-
-Enter the container and execute bash:
-
-```shell
-docker exec -it <container name> bash
-```
-
-Then you can execute the Linux commands and access TDengine.
-
-For detailed steps, please visit [Experience TDengine via Docker](/train-faq/docker).
-
-:::info
-Starting from 2.4.0.10, besides taosd, the TDengine Docker image includes taos, taosAdapter, taosdump, taosBenchmark, TDinsight, scripts and sample code. Once the TDengine container is started, it will start both taosAdapter and taosd automatically to support the RESTful interface.
-
-:::
-
-
-
-
-
-
-
-
-
-
-If you would like to check the source code, build the package yourself or contribute to the project, please check the [TDengine GitHub Repository](https://github.com/taosdata/TDengine).
-
-
-
-
-## Quick Launch
-
-After installation, you can launch the TDengine service with the `systemctl` command to start `taosd`:
-
-```bash
-systemctl start taosd
-```
-
-Check if taosd is running:
-
-```bash
-systemctl status taosd
-```
-
-If everything is fine, you can run the TDengine command-line interface `taos` to access TDengine and test it out yourself.
-
-:::info
-
-- systemctl requires _root_ privileges. If you are not _root_, please add sudo before the command.
-- To get feedback and improve the product, TDengine collects some basic usage information, but you can turn this off by setting telemetryReporting to 0 in the configuration file taos.cfg.
-- TDengine uses FQDN (usually the hostname) as the ID of a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs, to ensure that the FQDN can be resolved.
-- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory has been flushed to disk. This may take time depending on the cache size.
-
-TDengine supports installation on systems that run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management. Use `which systemctl` to check whether `systemd` is installed:
-
-```bash
-which systemctl
-```
-
-If the system does not have `systemd`, you can start TDengine manually by executing `/usr/local/taos/bin/taosd`.
-
-:::
-
-## Command Line Interface
-
-To manage the running TDengine instance, or to execute ad hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as the TDengine CLI), taos. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
-
-```bash
-taos
-```
-
-If it connects to the TDengine server successfully, it will print out the version and a welcome message. If it fails, it will print out an error message; please check the [FAQ](/train-faq/faq) for troubleshooting connection issues. The TDengine CLI's prompt is:
-
-```cmd
-taos>
-```
-
-Inside the TDengine CLI, you can execute SQL commands to create/drop databases and tables, and run queries. Each SQL command must end with a semicolon. For example:
-
-```sql
-create database demo;
-use demo;
-create table t (ts timestamp, speed int);
-insert into t values ('2019-07-15 00:00:00', 10);
-insert into t values ('2019-07-15 01:00:00', 20);
-select * from t;
- ts | speed |
-========================================
- 2019-07-15 00:00:00.000 | 10 |
- 2019-07-15 01:00:00.000 | 20 |
-Query OK, 2 row(s) in set (0.003128s)
-```
-
-Besides executing SQL commands, system administrators can check running status, add and drop user accounts, and manage running instances. The TDengine CLI with the client driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../reference/taos-shell/).
-
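-For instance, a few common administration commands can be run inside the CLI. This is a minimal sketch (the user name and password are hypothetical, and the exact privileges required depend on your deployment):
-
-```sql
-SHOW DATABASES;
-SHOW DNODES;
-CREATE USER ops PASS 'Ops_Pass1';
-SHOW USERS;
-DROP USER ops;
-```
-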
-## Experience the blazing fast speed
-
-After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
-
-```bash
-taosBenchmark
-```
-
-This command creates a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has the tags "location" and "groupId": groupId is set randomly from 1 to 10, and location is set to "California.SanFrancisco" or "California.SanDiego".
-
-This command inserts 100 million rows into the database quickly. The time to insert depends on the hardware configuration; it takes only a dozen seconds on a regular PC server.
-
-taosBenchmark provides command-line options and a configuration file to customize the scenario, such as the number of tables, the number of rows per table, the number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check the [reference for taosBenchmark](/reference/taosbenchmark).
-
-## Experience query speed
-
-After using taosBenchmark to insert data, you can execute queries from the TDengine CLI to experience the lightning-fast query speed.
-
-Query the total number of rows under super table "meters":
-
-```sql
-taos> select count(*) from test.meters;
-```
-
-Query the average, maximum and minimum values of 100 million rows:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters;
-```
-
-Query the total number of rows with location="California.SanFrancisco":
-
-```sql
-taos> select count(*) from test.meters where location="California.SanFrancisco";
-```
-
-Query the average, maximum and minimum values of all rows with groupId=10:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
-```
-
-Query the average, maximum and minimum values for table d10 in 10-second intervals:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
-```
diff --git a/docs-en/07-develop/01-connect/_connect_c.mdx b/docs-en/07-develop/01-connect/_connect_c.mdx
deleted file mode 100644
index 174bf45c4e2f26bab8f57c098f9f8f00d2f5064d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c title="Native Connection"
-{{#include docs-examples/c/connect_example.c}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_cs.mdx b/docs-en/07-develop/01-connect/_connect_cs.mdx
deleted file mode 100644
index 52ea2d437123a26bd87e6f3fdc05a17141f9f835..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_cs.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```csharp title="Native Connection"
-{{#include docs-examples/csharp/ConnectExample.cs}}
-```
-
-:::info
-C# connector supports only native connection for now.
-
-:::
diff --git a/docs-en/07-develop/01-connect/_connect_go.mdx b/docs-en/07-develop/01-connect/_connect_go.mdx
deleted file mode 100644
index 1dd5d67e3533bba21960269e49e3d843b026efc8..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_go.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-#### Unified Database Access Interface
-
-```go title="Native Connection"
-{{#include docs-examples/go/connect/cgoexample/main.go}}
-```
-
-```go title="REST Connection"
-{{#include docs-examples/go/connect/restexample/main.go}}
-```
-
-#### Advanced Features
-
-The af package of driver-go can also be used to establish a connection. With it, some advanced features of TDengine, like parameter binding and subscription, can be used.
-
-```go title="Establish native connection using af package"
-{{#include docs-examples/go/connect/afconn/main.go}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_java.mdx b/docs-en/07-develop/01-connect/_connect_java.mdx
deleted file mode 100644
index 1c3e9326bf2ae597ffba683250dd43986e670469..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_java.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-```java title="Native Connection"
-{{#include docs-examples/java/src/main/java/com/taos/example/JNIConnectExample.java}}
-```
-
-```java title="REST Connection"
-{{#include docs-examples/java/src/main/java/com/taos/example/RESTConnectExample.java:main}}
-```
-
-When using a REST connection, the bulk pulling feature can be enabled if the resulting data set is huge.
-
-```java title="Enable Bulk Pulling" {4}
-{{#include docs-examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
-```
-
-For more connection configuration, please refer to [Java Connector](/reference/connector/java).
diff --git a/docs-en/07-develop/01-connect/_connect_node.mdx b/docs-en/07-develop/01-connect/_connect_node.mdx
deleted file mode 100644
index 489b0386e991ee1e8ddd173205637b75ae5a0c95..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_node.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-```js title="Native Connection"
-{{#include docs-examples/node/nativeexample/connect.js}}
-```
-
-```js title="REST Connection"
-{{#include docs-examples/node/restexample/connect.js}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_python.mdx b/docs-en/07-develop/01-connect/_connect_python.mdx
deleted file mode 100644
index 44b7586fadbf618231fce7753d3b4b68853a7f57..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_python.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```python title="Native Connection"
-{{#include docs-examples/python/connect_example.py}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_r.mdx b/docs-en/07-develop/01-connect/_connect_r.mdx
deleted file mode 100644
index 09c3d71ac35b1134d3089247daea9a13db4129e2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_r.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```r title="Native Connection"
-{{#include docs-examples/R/connect_native.r:demo}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_rust.mdx b/docs-en/07-develop/01-connect/_connect_rust.mdx
deleted file mode 100644
index aa19f58de6c9bab69df0663e5369402ab1a8f899..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_rust.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```rust title="Native Connection/REST Connection"
-{{#include docs-examples/rust/nativeexample/examples/connect.rs}}
-```
-
-:::note
-For the Rust connector, the connection depends on the feature being used. If the "rest" feature is enabled, only the REST implementation is compiled and packaged.
-
-:::
diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
deleted file mode 100644
index 720f8e2384c565d5494ce7d84d531188dae96fe0..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/index.md
+++ /dev/null
@@ -1,280 +0,0 @@
----
-sidebar_label: Connect
-title: Connect
-description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import ConnJava from "./\_connect_java.mdx";
-import ConnGo from "./\_connect_go.mdx";
-import ConnRust from "./\_connect_rust.mdx";
-import ConnNode from "./\_connect_node.mdx";
-import ConnPythonNative from "./\_connect_python.mdx";
-import ConnCSNative from "./\_connect_cs.mdx";
-import ConnC from "./\_connect_c.mdx";
-import ConnR from "./\_connect_r.mdx";
-import InstallOnLinux from "../../14-reference/03-connector/\_linux_install.mdx";
-import InstallOnWindows from "../../14-reference/03-connector/\_windows_install.mdx";
-import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
-import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
-
-Application programs running on any platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use connectors for multiple programming languages, including C/C++, Java, Python, Go, Node.js, C# and Rust, to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. The TDengine community also provides connectors for Lua and PHP. For details about the connectors, please refer to [Connectors](/reference/connector/).
-
-## Establish Connection
-
-There are two ways for a connector to establish connections to TDengine:
-
-1. Connection through the REST API provided by the taosAdapter component; this is called a "REST connection" hereinafter.
-2. Connection through the TDengine client driver (taosc); this is called a "native connection" hereinafter.
-
-Key differences:
-
-1. The TDengine client driver (taosc) has the highest performance and supports all the features of TDengine, like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
-2. The TDengine client driver (taosc) is not supported on all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
-3. The REST connection is more accessible, with cross-platform support; however, it results in a 30% performance downgrade.
-
-## Install Client Driver taosc
-
-If you choose to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If you choose to use the REST connection, or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
-
-### Install
-
-
-
-
-
-
-
-
-
-
-### Verify
-
-After the above installation and configuration are done, and after making sure the TDengine service has been started and is in service, the TDengine command-line interface `taos` can be launched to access TDengine.
-
-
-
-
-
-
-
-
-
-
-## Install Connectors
-
-
-
-
-If Maven is used to manage the project, just add the dependency below to `pom.xml`.
-
-```xml
-<dependency>
-  <groupId>com.taosdata.jdbc</groupId>
-  <artifactId>taos-jdbcdriver</artifactId>
-  <version>2.0.38</version>
-</dependency>
-```
-
-
-
-
-Install from PyPI using `pip`:
-
-```
-pip install taospy
-```
-
-Install from Git URL:
-
-```
-pip install git+https://github.com/taosdata/taos-connector-python.git
-```
-
-
-
-
-Just add the `driver-go` dependency to `go.mod`.
-
-```go-mod title=go.mod
-module goexample
-
-go 1.17
-
-require github.com/taosdata/driver-go/v2 develop
-```
-
-:::note
-`driver-go` uses `cgo` to wrap the APIs provided by taosc, and `cgo` needs `gcc` to compile C source code, so please make sure `gcc` is properly installed on your system.
-
-:::
-
-
-
-
-Just add the `libtaos` dependency to `Cargo.toml`.
-
-```toml title=Cargo.toml
-[dependencies]
-libtaos = { version = "0.4.2"}
-```
-
-:::info
-The Rust connector uses Cargo features to distinguish how a connection is established. To establish a REST connection, please enable the `rest` feature.
-
-```toml
-libtaos = { version = "*", features = ["rest"] }
-```
-
-:::
-
-
-
-
-The Node.js connector provides different packages for different ways of establishing a connection.
-
-1. Install Node.js Native Connector
-
-```
-npm i td2.0-connector
-```
-
-:::note
-It's recommended to use a Node.js version between `node-v12.8.0` and `node-v13.0.0`.
-:::
-
-2. Install Node.js REST Connector
-
-```
-npm i td2.0-rest-connector
-```
-
-
-
-
-Just add a reference to [TDengine.Connector](https://www.nuget.org/packages/TDengine.Connector/) in the project configuration file.
-
-```xml title=csharp.csproj {12}
-<Project Sdk="Microsoft.NET.Sdk">
-
-  <PropertyGroup>
-    <OutputType>Exe</OutputType>
-    <TargetFramework>net6.0</TargetFramework>
-    <ImplicitUsings>enable</ImplicitUsings>
-    <Nullable>enable</Nullable>
-    <StartupObject>TDengineExample.AsyncQueryExample</StartupObject>
-  </PropertyGroup>
-
-  <ItemGroup>
-    <PackageReference Include="TDengine.Connector" Version="1.0.6" />
-  </ItemGroup>
-
-</Project>
-```
-
-Or add it with the `dotnet` command.
-
-```
-dotnet add package TDengine.Connector
-```
-
-:::note
-The sample code below is based on .NET 6.0; it may need to be adjusted if your .NET version is not exactly the same.
-
-:::
-
-
-
-
-1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/2.0.38/).
-2. Install the dependency package `RJDBC`:
-
-```R
-install.packages("RJDBC")
-```
-
-
-
-
-If the client driver (taosc) is already installed, then the C connector is already available.
-
-
-
-
-
-**Download Source Code Package and Unzip:**
-
-```shell
-curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
-&& mkdir php-tdengine \
-&& tar -xzf php-tdengine.tar.gz -C php-tdengine --strip-components=1
-```
-
-> Version number `v1.0.2` is only an example; it can be replaced with any newer version. Please check the available versions at [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
-
-**Non-Swoole Environment:**
-
-```shell
-phpize && ./configure && make -j && make install
-```
-
-**Specify TDengine Location:**
-
-```shell
-phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
-```
-
-> `--with-tdengine-dir=` is followed by the TDengine installation location.
-> This is useful when the TDengine location can't be found automatically, or on macOS.
-
-**Swoole Environment:**
-
-```shell
-phpize && ./configure --enable-swoole && make -j && make install
-```
-
-**Enable The Extension:**
-
-Option One: Add `extension=tdengine` in `php.ini`
-
-Option Two: Specify the extension on CLI `php -d extension=tdengine test.php`
-
-
-
-
-## Establish Connection
-
-Prior to establishing a connection, please make sure TDengine is running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with the FQDN configured to "localhost" and serverPort configured to "6030".
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-:::tip
-If the connection fails, in most cases it's caused by an improper FQDN or firewall configuration. Please refer to the section "Unable to establish connection" in the [FAQ](https://docs.taosdata.com/train-faq/faq).
-
-:::
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
deleted file mode 100644
index 86853aaaa3f7285fe042a892e2ec903d57894111..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/02-model/index.mdx
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: Data Model
----
-
-The data model employed by TDengine is similar to that of a relational database: you have to create databases and tables, and you must design the data model based on your own business and application requirements. In particular, you should design the STable (an abbreviation for super table) schema to fit your data. This chapter explains the big picture without getting into syntactical details.
-
-## Create Database
-
-The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different, including collection frequency, retention policy and others, and these characteristics determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, and whether data updates are allowed are all configurable parameters that should be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, many parameters can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, the time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file, and so on. Below is an example of an SQL statement that creates a database.
-
-```sql
-CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
-```
-
-In the above SQL statement:
-- a database named "power" will be created
-- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
-- a new data file will be created every 10 days
-- the number of memory blocks is 6
-- data is allowed to be updated
-
-For more details please refer to [Database](/taos-sql/database).
-
-After creating a database, the current database in use can be switched using the SQL command `USE`. For example, the SQL statement below switches the current database to `power`. Without a current database specified, table names must be preceded by the corresponding database name.
-
-```sql
-USE power;
-```
-
-:::note
-
-- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
-- JOIN operations can't be performed on tables from two different databases.
-- Timestamp needs to be specified when inserting rows or querying historical rows.
-
-:::
-
-## Create STable
-
-In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the SQL statement below can be used to create the super table.
-
-```sql
-CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
-```
-
-:::note
-If you are using versions prior to 2.0.15, the `STable` keyword needs to be replaced with `TABLE`.
-
-:::
-
-Similar to creating a regular table, when creating a STable the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the collected data; their [data types](/taos-sql/data-type/) can be integer, float, double, string, etc. In addition, the schema for tags, like location and groupId in the example, must be provided. A tag's type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point; for example, the location, device type, device group ID and manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
-
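-For example, tag columns can be added, renamed or dropped after the STable is created. Below is a sketch with a hypothetical tag name; in TDengine 2.x these operations are issued with `ALTER TABLE` against the STable:
-
-```sql
-ALTER TABLE meters ADD TAG deviceType binary(32);
-ALTER TABLE meters CHANGE TAG deviceType devType;
-ALTER TABLE meters DROP TAG devType;
-```
-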
-For each kind of data collection point, a corresponding STable must be created, so there may be many STables in an application. For an electrical power system, we need to create STables for meters, transformers, busbars and switches respectively. There may also be multiple kinds of data collection points on a single device: for example, one data collection point for electrical data like current and voltage, and another for environmental data like temperature, humidity and wind direction. Multiple STables are required for such devices.
-
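-As a sketch, a device that reports both electrical and environmental data would get two STables; the `environment` schema below is hypothetical:
-
-```sql
-CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
-CREATE STable environment (ts timestamp, temperature float, humidity float, wind float) TAGS (location binary(64), groupId int);
-```
-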
-At most 4096 columns (or 1024 prior to version 2.1.7.0) are allowed in a STable. If more than 4096 metrics need to be collected for a data collection point, multiple STables are required. A system can have multiple databases, and a database can have one or more STables.
-
-## Create Table
-
-A specific table needs to be created for each data collection point. Similar to an RDBMS, a table name and schema are required. Additionally, a STable needs to be used as the template and values need to be specified for its tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
-
-```sql
-CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
-```
-
-In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
-
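-For example, if meter d1001 is relocated, its location tag can be updated in place (the new value is hypothetical):
-
-```sql
-ALTER TABLE d1001 SET TAG location="California.LosAngeles";
-```
-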
-In the TDengine system, it's recommended to create a table for each data collection point via a STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands applicable to regular tables also apply to subtables.
-
-:::warning
-It's not recommended to create a table in one database while using a STable from another database as the template.
-
-:::
-
-:::tip
-It's suggested to use the globally unique ID of a data collection point as the table name. For example, the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form one. It's not recommended to use a globally unique ID as a tag value.
-:::
-
-## Create Table Automatically
-
-In some circumstances, it's unknown whether a table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing happens if the table already exists.
-
-```sql
-INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
-```
-
-In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` is inserted into table "d1001". If table "d1001" doesn't exist, it is created automatically, using STable "meters" as the template with tag values `"California.SanFrancisco", 2`.
-
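-The same auto-creation syntax extends to multiple tables in a single statement; the table names and values below are hypothetical:
-
-```sql
-INSERT INTO d1002 USING meters TAGS ("California.SanFrancisco", 3) VALUES (now, 10.1, 220, 0.29)
-            d1003 USING meters TAGS ("California.SanDiego", 2) VALUES (now, 11.2, 221, 0.30);
-```
-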
-For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting).
-
-## Single Column vs Multiple Column
-
-A multi-column data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.
-
-However, there is another kind of design: the single-column data model, in which a table is created for each metric. This means a STable is required for each kind of metric. For example, in a single-column model, 3 STables would be required for current, voltage and phase.
-
-It's recommended to use the multi-column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently, so the corresponding STable schema would need to be changed frequently too. In such cases, it's more convenient to use the single-column data model.
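-
-As a sketch of the trade-off, the first statement below uses the multi-column model, while the following three sketch a single-column alternative (the single-column STable names are hypothetical):
-
-```sql
-CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
-
-CREATE STable meters_current (ts timestamp, current float) TAGS (location binary(64), groupId int);
-CREATE STable meters_voltage (ts timestamp, voltage int) TAGS (location binary(64), groupId int);
-CREATE STable meters_phase (ts timestamp, phase float) TAGS (location binary(64), groupId int);
-```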
diff --git a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
deleted file mode 100644
index 397b1a14fd76c1372c79eb88575f2bf21cb62050..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
+++ /dev/null
@@ -1,130 +0,0 @@
----
-sidebar_label: Insert Using SQL
-title: Insert Using SQL
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaSQL from "./_java_sql.mdx";
-import JavaStmt from "./_java_stmt.mdx";
-import PySQL from "./_py_sql.mdx";
-import PyStmt from "./_py_stmt.mdx";
-import GoSQL from "./_go_sql.mdx";
-import GoStmt from "./_go_stmt.mdx";
-import RustSQL from "./_rust_sql.mdx";
-import RustStmt from "./_rust_stmt.mdx";
-import NodeSQL from "./_js_sql.mdx";
-import NodeStmt from "./_js_stmt.mdx";
-import CsSQL from "./_cs_sql.mdx";
-import CsStmt from "./_cs_stmt.mdx";
-import CSQL from "./_c_sql.mdx";
-import CStmt from "./_c_stmt.mdx";
-
-## Introduction
-
-Application programs can execute `INSERT` statements through connectors to insert rows. The TDengine CLI can also be used to manually insert data.
-
-### Insert Single Row
-
-The SQL statement below is used to insert one row into table "d1001".
-
-```sql
-INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
-```
-
-### Insert Multiple Rows
-
-Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
-
-```sql
-INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
-```
-
-### Insert into Multiple Tables
-
-Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
-
-```sql
-INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
-```
-
-For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
-
-:::info
-
-- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
-- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
-
-:::
-
-:::warning
-
-- If the timestamp of a row to be inserted already exists in the table, the behavior depends on the value of the `UPDATE` parameter: if it's set to 0 (the default value), the row will be discarded; if it's set to 1, the new values will override the old values for the same row (see the sketch after this block).
-- The timestamp of a row to be inserted can't be older than the current time minus the `KEEP` parameter: if `KEEP` is set to 3650 days, data older than 3650 days can't be inserted. It also can't be newer than the current time plus the `DAYS` parameter: if `DAYS` is set to 2, data with timestamps more than 2 days in the future can't be inserted.
-
-:::
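-
-As a sketch of the first point, with `UPDATE 1` re-inserting a row with an existing timestamp overwrites the old values instead of being discarded (the database, table and values below are hypothetical):
-
-```sql
-CREATE DATABASE power KEEP 3650 DAYS 10 UPDATE 1;
-USE power;
-CREATE TABLE d1001 (ts timestamp, current float, voltage int, phase float);
-INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
-INSERT INTO d1001 VALUES (1538548685000, 10.5, 220, 0.32);
-```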
-
-## Examples
-
-### Insert Using SQL
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-:::note
-
-1. The above samples work with either a native connection or a REST connection.
-2. Please note that `use db` can't be used with a REST connection because REST connections are stateless; in the samples, `dbName.tbName` is used to specify the table name (see the sketch after this note).
-
-:::
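-
-For instance, with a REST connection the database name is spelled out in the statement itself; the names below are hypothetical:
-
-```sql
-INSERT INTO power.d1001 VALUES (1538548686000, 10.4, 220, 0.30);
-SELECT * FROM power.d1001;
-```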
-
-### Insert with Parameter Binding
-
-TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly, increasing insert performance by avoiding the cost of parsing SQL statements.
-
-Parameter binding is available only with a native connection.
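-
-For example, a statement template passed to the parameter-binding APIs marks each value to bind with `?` (the table layout follows the hypothetical meters example used elsewhere in these docs):
-
-```sql
-INSERT INTO d1001 VALUES (?, ?, ?, ?);
-```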
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs-en/07-develop/03-insert-data/_c_line.mdx b/docs-en/07-develop/03-insert-data/_c_line.mdx
deleted file mode 100644
index 5ef2e9af774c54e9f090357286f83d2280c2ab11..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/line_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_json.mdx b/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
deleted file mode 100644
index 22ad2e0122797248a372734aac0f3a16a1356530..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/json_protocol_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
deleted file mode 100644
index 508d7bc98a149f49766bcd0a474ffe226cbe30bb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/telnet_line_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_sql.mdx b/docs-en/07-develop/03-insert-data/_c_sql.mdx
deleted file mode 100644
index f4153fd2c427677a338d0c377663d0335f2672f0..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/insert_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_stmt.mdx b/docs-en/07-develop/03-insert-data/_c_stmt.mdx
deleted file mode 100644
index 7f5ef23a849689c36e732b6fd374a131695c9090..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_stmt.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-```c title=Single Row Binding
-{{#include docs-examples/c/stmt_example.c}}
-```
-```c title=Multiple Row Binding 72:117
-{{#include docs-examples/c/multi_bind_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_cs_line.mdx b/docs-en/07-develop/03-insert-data/_cs_line.mdx
deleted file mode 100644
index 9c275ee3d7c7a1e52fbb34dbae922004543ee3ce..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/InfluxDBLineExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
deleted file mode 100644
index 3d538b8506b298241faecd8098f89571359135c9..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/OptsJsonExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
deleted file mode 100644
index c53bf3d7233115351e5af03b7d9e6318aa4a0da6..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/OptsTelnetExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_sql.mdx b/docs-en/07-develop/03-insert-data/_cs_sql.mdx
deleted file mode 100644
index c7688bfbe77a1135424d829fe9b29fbb1bc93ae2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/SQLInsertExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_stmt.mdx b/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
deleted file mode 100644
index 97c3b910ffeb9e0c88fc143a02014115e819c147..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/StmtInsertExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_line.mdx b/docs-en/07-develop/03-insert-data/_go_line.mdx
deleted file mode 100644
index cd225945b70e28bef2ca7fdaf0d9be0ad7ffc18c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/line/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_json.mdx b/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
deleted file mode 100644
index 0c0d3e5b6330e046988cdd02234285ec67e92f01..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/json/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
deleted file mode 100644
index d5ca40cc146e62412476289853e8e2739e0e9e4b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/telnet/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_sql.mdx b/docs-en/07-develop/03-insert-data/_go_sql.mdx
deleted file mode 100644
index 613a65add1741eb763a4b24e65d180d05f7d670f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/sql/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_stmt.mdx b/docs-en/07-develop/03-insert-data/_go_stmt.mdx
deleted file mode 100644
index c32bc21fb9bcaf45059e4f47df73fb57f047ed1c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_stmt.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```go
-{{#include docs-examples/go/insert/stmt/main.go}}
-```
-
-:::tip
-The `github.com/taosdata/driver-go/v2/wrapper` module in driver-go is the wrapper for the C API; it can be used to insert data with parameter binding.
-
-:::
diff --git a/docs-en/07-develop/03-insert-data/_java_line.mdx b/docs-en/07-develop/03-insert-data/_java_line.mdx
deleted file mode 100644
index 2e59a5d4701b2a2ab04ec5711845dc5c80067a1e..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_json.mdx b/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
deleted file mode 100644
index 826a1a07d9405cb193849f9d21e5444f68517914..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
deleted file mode 100644
index 954dcc1a482a150dea0b190e1e0593adbfbde796..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_sql.mdx b/docs-en/07-develop/03-insert-data/_java_sql.mdx
deleted file mode 100644
index a863378defe43b1f22c1f98087a34f053a7d6619..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java:insert}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_java_stmt.mdx b/docs-en/07-develop/03-insert-data/_java_stmt.mdx
deleted file mode 100644
index 54443e535fa84bdf8dc9161ed4ad00f50b26266c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_line.mdx b/docs-en/07-develop/03-insert-data/_js_line.mdx
deleted file mode 100644
index 172c9bc17b8cff8b2620720b235a9c8e69bd4197..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/influxdb_line_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_json.mdx b/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
deleted file mode 100644
index 20ac9ec91e8dc6675828b16d7da0acb09afd3b5f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/opentsdb_json_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
deleted file mode 100644
index c3c8c40bd642f4f443de88e3db006ad50724d514..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/opentsdb_telnet_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_sql.mdx b/docs-en/07-develop/03-insert-data/_js_sql.mdx
deleted file mode 100644
index f5e17c76892a57a94192a95451b508b1c176c984..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/insert_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_stmt.mdx b/docs-en/07-develop/03-insert-data/_js_stmt.mdx
deleted file mode 100644
index 964d7ddc11b90031b70936efb85fbaabe873ddbb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_stmt.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-```js title=Single Row Binding
-{{#include docs-examples/node/nativeexample/param_bind_example.js}}
-```
-
-```js title=Multiple Row Binding
-{{#include docs-examples/node/nativeexample/multi_bind_example.js:insertData}}
-```
-
-:::info
-Multiple row binding performs better than single row binding, but it can only be used with `INSERT` statements, while single row binding can also be used for SQL statements other than `INSERT`.
-
-:::
diff --git a/docs-en/07-develop/03-insert-data/_py_line.mdx b/docs-en/07-develop/03-insert-data/_py_line.mdx
deleted file mode 100644
index d3bb1ebb3403b53fa43bfc9d5d1a0de9764d7583..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/line_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_json.mdx b/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
deleted file mode 100644
index cfbfe13ccfdb4f3f34b77300812863fdf70d0f59..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/json_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
deleted file mode 100644
index 14bc65a7a3da815abadf7f25c8deffeac666c8d7..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/telnet_line_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_sql.mdx b/docs-en/07-develop/03-insert-data/_py_sql.mdx
deleted file mode 100644
index c0e15b8ec115b9244d50a47c9eafec04bcfdd70c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/native_insert_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_stmt.mdx b/docs-en/07-develop/03-insert-data/_py_stmt.mdx
deleted file mode 100644
index 16d98f54329ad0d3dfb463392f5c1d41c9aab25b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_stmt.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-```py title=Single Row Binding
-{{#include docs-examples/python/bind_param_example.py}}
-```
-
-```py title=Multiple Row Binding
-{{#include docs-examples/python/multi_bind_example.py:bind_batch}}
-```
-
-:::info
-Multiple row binding performs better than single row binding, but it can only be used with `INSERT` statements, while single row binding can also be used for SQL statements other than `INSERT`.
-
-:::
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_rust_line.mdx b/docs-en/07-develop/03-insert-data/_rust_line.mdx
deleted file mode 100644
index 696ddb7b854751b8dee01047066f97f74212933f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
deleted file mode 100644
index 97d9052dacd1894cc7548a59951ecfaad9caee87..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
deleted file mode 100644
index 14021f43d8aff30c35dc30c5d278d4e51f375024..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_sql.mdx b/docs-en/07-develop/03-insert-data/_rust_sql.mdx
deleted file mode 100644
index 8e8013e4ad734efcc262ea2f750b82210a538e49..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/restexample/examples/insert_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_stmt.mdx b/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
deleted file mode 100644
index 590a7a0e717426ed0235331c49dfc578bc55b2f7..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/nativeexample/examples/stmt_example.rs}}
-```
diff --git a/docs-en/07-develop/04-query-data/_c.mdx b/docs-en/07-develop/04-query-data/_c.mdx
deleted file mode 100644
index 76c9067e2f6af19465cf7c52c3e9b48bb868547d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/query_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_c_async.mdx b/docs-en/07-develop/04-query-data/_c_async.mdx
deleted file mode 100644
index 09f3d3b3ff6d6644f837642ef41db459ba7c5753..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_c_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/async_query_example.c:demo}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_cs.mdx b/docs-en/07-develop/04-query-data/_cs.mdx
deleted file mode 100644
index 2ab52feb564eff0fe251bc9900ea2539171e5dba..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_cs.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/QueryExample.cs}}
-```
diff --git a/docs-en/07-develop/04-query-data/_cs_async.mdx b/docs-en/07-develop/04-query-data/_cs_async.mdx
deleted file mode 100644
index f868994b303e62016b5e2f9304275135855c6ae5..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_cs_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/AsyncQueryExample.cs}}
-```
diff --git a/docs-en/07-develop/04-query-data/_go.mdx b/docs-en/07-develop/04-query-data/_go.mdx
deleted file mode 100644
index 417c12315c06517e2f3de850ac9a379b7714b519..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_go.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/query/sync/main.go}}
-```
diff --git a/docs-en/07-develop/04-query-data/_go_async.mdx b/docs-en/07-develop/04-query-data/_go_async.mdx
deleted file mode 100644
index 72fff411b980a0dcbdcaf4274722c63e0351db6f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_go_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/query/async/main.go}}
-```
diff --git a/docs-en/07-develop/04-query-data/_java.mdx b/docs-en/07-develop/04-query-data/_java.mdx
deleted file mode 100644
index 519b9266144486231caf3ee593e973d438941ee4..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_java.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java}}
-```
diff --git a/docs-en/07-develop/04-query-data/_js.mdx b/docs-en/07-develop/04-query-data/_js.mdx
deleted file mode 100644
index c5e4c4f3fc20d3940a2bc6e13e6a5dea8a15ff13..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_js.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/query_example.js}}
-```
diff --git a/docs-en/07-develop/04-query-data/_js_async.mdx b/docs-en/07-develop/04-query-data/_js_async.mdx
deleted file mode 100644
index c65d54ed12f6c4bbeb333e0de0ba9ca4638bff84..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_js_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/async_query_example.js}}
-```
diff --git a/docs-en/07-develop/04-query-data/_py.mdx b/docs-en/07-develop/04-query-data/_py.mdx
deleted file mode 100644
index aeae42a15e5c39b7e9d227afc424e77658109705..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_py.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
-The result set is iterated row by row.
-
-```py
-{{#include docs-examples/python/query_example.py:iter}}
-```
-
-The result set is retrieved as a whole; each row is converted to a dict and returned.
-
-```py
-{{#include docs-examples/python/query_example.py:fetch_all}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_py_async.mdx b/docs-en/07-develop/04-query-data/_py_async.mdx
deleted file mode 100644
index ed6880ae64e59a860e7dc75a5d3c1ad5d2614d01..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_py_async.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```py
-{{#include docs-examples/python/async_query_example.py}}
-```
-
-:::note
-This sample code can't be run on Windows systems for now.
-
-:::
diff --git a/docs-en/07-develop/04-query-data/_rust.mdx b/docs-en/07-develop/04-query-data/_rust.mdx
deleted file mode 100644
index 742d70fd025ff44b573eedf78441c9d73defad45..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_rust.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/restexample/examples/query_example.rs}}
-```
diff --git a/docs-en/07-develop/04-query-data/index.mdx b/docs-en/07-develop/04-query-data/index.mdx
deleted file mode 100644
index a212fa9529215fc24c55c95a166cfc1a407359b2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/index.mdx
+++ /dev/null
@@ -1,186 +0,0 @@
----
-sidebar_label: Query data
-title: Query data
-description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaQuery from "./_java.mdx";
-import PyQuery from "./_py.mdx";
-import GoQuery from "./_go.mdx";
-import RustQuery from "./_rust.mdx";
-import NodeQuery from "./_js.mdx";
-import CsQuery from "./_cs.mdx";
-import CQuery from "./_c.mdx";
-import PyAsync from "./_py_async.mdx";
-import NodeAsync from "./_js_async.mdx";
-import CsAsync from "./_cs_async.mdx";
-import CAsync from "./_c_async.mdx";
-
-## Introduction
-
-SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through the REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
-
-- Query on single column or multiple columns
-- Filter on tags or data columns: >, <, =, <\>, like
-- Grouping of results: `Group By`
-- Sorting of results: `Order By`
-- Limit the number of results: `Limit/Offset`
-- Arithmetic on columns of numeric types or aggregate results
-- Join query with timestamp alignment
-- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
-
-For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215, sorted in descending order of timestamp, and limited to 2 rows of output.
-
-```sql
-select * from d1001 where voltage > 215 order by ts desc limit 2;
-```
-
-```title=Output
-taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
- ts | current | voltage | phase |
-======================================================================================
- 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
- 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
-Query OK, 2 row(s) in set (0.001100s)
-```
-
-To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (time weighted average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
-
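-For instance, the sketches below show how these functions might be invoked on the subtable `d1001` used above (illustrative statements only; `TWA` needs a time range in the `WHERE` clause):
-
-```sql
-SELECT TWA(current) FROM d1001 WHERE ts >= now - 1h AND ts <= now;
-SELECT SPREAD(voltage) FROM d1001;
-SELECT LAST_ROW(*) FROM d1001;
-```
-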
-For detailed query syntax please refer to [Select](/taos-sql/select).
-
-## Aggregation among Tables
-
-In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
-
-In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated.
-
-### Example 1
-
-In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
-
-```
-taos> SELECT AVG(voltage) FROM meters GROUP BY location;
- avg(voltage) | location |
-=============================================================
- 222.000000000 | California.LosAngeles |
- 219.200000000 | California.SanFrancisco |
-Query OK, 2 row(s) in set (0.002136s)
-```
-
-### Example 2
-
-In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
-
-```
-taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
- count(*) | max(current) |
-==================================
- 5 | 13.4 |
-Query OK, 1 row(s) in set (0.002136s)
-```
-
-Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
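-
-For example, a sketch of such a join between `d1001` and a second hypothetical subtable `d1002` of the same STable, aligned on timestamp:
-
-```sql
-SELECT d1001.ts, d1001.current, d1002.current FROM d1001, d1002 WHERE d1001.ts = d1002.ts;
-```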
-
-## Down Sampling and Interpolation
-
-In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from table d1001.
-
-```
-taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:00.000 | 10.300000191 |
- 2018-10-03 14:38:10.000 | 24.900000572 |
-Query OK, 2 row(s) in set (0.000883s)
-```
-
-Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
-
-```
-taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:04.000 | 10.199999809 |
- 2018-10-03 14:38:05.000 | 32.900000572 |
- 2018-10-03 14:38:06.000 | 11.500000000 |
- 2018-10-03 14:38:15.000 | 12.600000381 |
- 2018-10-03 14:38:16.000 | 36.000000000 |
-Query OK, 5 row(s) in set (0.001538s)
-```
-
-Down sampling also supports a time offset. For example, the SQL statement below can be used to get the sum of current from all meters, with each time window offset by 500 milliseconds (`500a`).
-
-```
-taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:04.500 | 11.189999809 |
- 2018-10-03 14:38:05.500 | 31.900000572 |
- 2018-10-03 14:38:06.500 | 11.600000000 |
- 2018-10-03 14:38:15.500 | 12.300000381 |
- 2018-10-03 14:38:16.500 | 35.000000000 |
-Query OK, 5 row(s) in set (0.001521s)
-```
-
-In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, many algorithms like FFT require the data to be aligned with the same time interval, and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
-
-Interpolation can be performed in TDengine if there is no data in a time range.
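-
-For example, a sketch of filling empty 10-second windows with the previous non-NULL value (the time range is illustrative):
-
-```sql
-SELECT AVG(current) FROM d1001
-  WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:39:00.000'
-  INTERVAL(10s)
-  FILL(PREV);
-```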
-
-For more details please refer to [Aggregate by Window](/taos-sql/interval).
-
-## Examples
-
-### Query
-
-In the section describing [Insert](/develop/insert-data/sql-writing), a database named `power` is created and some data are inserted into STable `meters`. The sample code below demonstrates how to query the data in this STable.
-
-{/* Tabs: per-language query examples (Java, Python, Go, Rust, Node.js, C#, C) */}
-
-:::note
-
-1. The above sample code works with both REST and native connections.
-2. Please note that `use db` can't be used with a REST connection because it's stateless; prefix table names with the database name instead, as sketched below.
-
-:::
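-
-A minimal sketch of the fully qualified form that works over a REST connection, using the database and STable from the insert chapter:
-
-```sql
-SELECT COUNT(*) FROM power.meters;
-```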
-
-### Asynchronous Query
-
-Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2 to 4 times faster than the sync API. The async API works in non-blocking mode: a call returns before the operation finishes, so the calling thread can switch to other work, which improves the performance of the whole application system. Async APIs perform especially well over slow networks.
-
-Please note that async query can only be used with a native connection.
-
-{/* Tabs: per-language asynchronous query examples (Python, Node.js, C#, C) */}
diff --git a/docs-en/07-develop/07-subscribe.mdx b/docs-en/07-develop/07-subscribe.mdx
deleted file mode 100644
index 782fcdbaf221419dd231bd10958e26b8f4f856e5..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/07-subscribe.mdx
+++ /dev/null
@@ -1,259 +0,0 @@
----
-sidebar_label: Data Subscription
-description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
-title: Data Subscription
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import Java from "./_sub_java.mdx";
-import Python from "./_sub_python.mdx";
-import Go from "./_sub_go.mdx";
-import Rust from "./_sub_rust.mdx";
-import Node from "./_sub_node.mdx";
-import CSharp from "./_sub_cs.mdx";
-import CDemo from "./_sub_c.mdx";
-
-## Introduction
-
-Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
-
-A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
-
-There are 3 major APIs related to subscription provided in the TDengine client driver.
-
-```c
-taos_subscribe
-taos_consume
-taos_unsubscribe
-```
-
-For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
-
-If we want to get notified and take action when the current of some meters exceeds a threshold, like 10A, there are two ways:
-
-The first way is to query each subtable and record the last timestamp matching the criteria. Then after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this approach are shown below.
-
-```sql
-select * from D1001 where ts > {last_timestamp1} and current > 10;
-select * from D1002 where ts > {last_timestamp2} and current > 10;
-...
-```
-
-This approach works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large.
-
-A better way is to query on the STable: only one `select` is needed regardless of the number of meters, as shown below:
-
-```sql
-select * from meters where ts > {last_timestamp} and current > 10;
-```
-
-However, this presents a new problem: how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, and sometimes the difference between them may be significant. Second, the time when the data from different meters arrives at the database may differ too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
-
-All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
-
-The first step is to create a subscription using `taos_subscribe`.
-
-```c
-TAOS_SUB* tsub = NULL;
-if (async) {
- // create an asynchronous subscription, the callback function will be called every 1s
- tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
-} else {
-  // create a synchronous subscription; 'taos_consume' must be called manually
- tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
-}
-```
-
-The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of the variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, while async subscription means another thread, created by `taos_subscribe` internally, invokes `taos_consume` to retrieve data and passes it to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time-consuming operations in the callback function.
-
-The parameter `taos` is an established connection. Nothing special needs to be done for thread safety with synchronous subscription. For asynchronous subscription, `taos_subscribe` should be called exclusively by the current thread to avoid unpredictable errors.
-
-The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
-
-```sql
-select * from meters where current > 10;
-```
-
-Please note that, all the data will be processed because no start time is specified. If we only want to process data for the past day, a time related condition can be added:
-
-```sql
-select * from meters where ts > now - 1d and current > 10;
-```
-
-The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
-
-If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, then when the client program restarts with the same `topic`, the parameter `restart` determines whether to retrieve data from the beginning or from the point where the subscription last stopped.
-
-If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
-
-The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified for `taos_subscribe`, `taos_consume` will block until the interval has elapsed. In async mode, this interval is the minimum interval between two invocations of the callback function.
-
-The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter; it simply passes it through to the callback function. This parameter is ignored in sync mode.
-
-After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
-
-```c
-if (async) {
- getchar();
-} else while(1) {
- TAOS_RES* res = taos_consume(tsub);
- if (res == NULL) {
- printf("failed to consume data.");
- break;
- } else {
- print_result(res, blockFetch);
- getchar();
- }
-}
-```
-
-In the above sample code, in the else condition, there is an infinite loop. Each time a carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
-
-```c
-void print_result(TAOS_RES* res, int blockFetch) {
- TAOS_ROW row = NULL;
- int num_fields = taos_num_fields(res);
- TAOS_FIELD* fields = taos_fetch_fields(res);
- int nRows = 0;
- if (blockFetch) {
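-    // block mode: fetch the whole result set as one block of rows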
- nRows = taos_fetch_block(res, &row);
- for (int i = 0; i < nRows; i++) {
- char temp[256];
- taos_print_row(temp, row + i, fields, num_fields);
- puts(temp);
- }
- } else {
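-    // row mode: fetch and print the result set one row at a time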
- while ((row = taos_fetch_row(res))) {
- char temp[256];
- taos_print_row(temp, row, fields, num_fields);
- puts(temp);
- nRows++;
- }
- }
- printf("%d rows consumed.\n", nRows);
-}
-```
-
-In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
-
-In async mode, consuming data is simpler as shown below.
-
-```c
-void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
- print_result(res, *(int*)param);
-}
-```
-
-`taos_unsubscribe` can be invoked to terminate a subscription.
-
-```c
-taos_unsubscribe(tsub, keep);
-```
-
-The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription. (Note: the default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on Windows servers, so you need to change `DataDir` to an existing directory.) The subscription will be restarted from the beginning if the corresponding progress file is removed.
-
-Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
-
-- The sample code has been downloaded to the local system
-- TDengine has been installed and launched properly on the same system
-- The database, STable, and subtables required in the sample code are ready
-
-Launch the command below in the directory where the sample code resides to compile and start the program.
-
-```bash
-make
-./subscribe -sql='select * from meters where current > 10;'
-```
-
-After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
-
-```sql
-use test;
-insert into D1001 values(now, 12, 220, 1);
-```
-
-Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.
-
-## Examples
-
-The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
-
-### Prepare Data
-
-```bash
-# create database "power"
-taos> create database power;
-# use "power" as the database in following operations
-taos> use power;
-# create super table "meters"
-taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
-# create tables using the schema defined by super table "meters"
-taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
-taos> create table d1002 using meters tags ("California.LoSangeles", 2);
-# insert some rows
-taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
-taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
-# filter out the rows in which current is bigger than 10A
-taos> select * from meters where current > 10;
- ts | current | voltage | phase | location | groupid |
-===========================================================================================================
- 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LoSangeles | 2 |
- 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LoSangeles | 2 |
- 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
- 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
- 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
-Query OK, 5 row(s) in set (0.004896s)
-```
-
-### Example Programs
-
-{/* Tabs: per-language subscription example programs (Java, Python, Go, Rust, Node.js, C#, C) */}
-
-### Run the Examples
-
-The example programs first consume all historical data matching the criteria.
-
-```bash
-ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
-ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
-ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
-ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LoSangeles groupid : 2
-ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LoSangeles groupid : 2
-```
-
-Next, use TDengine CLI to insert a new row.
-
-```
-# taos
-taos> use power;
-taos> insert into d1001 values(now, 12.4, 220, 1);
-```
-
-Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
-
-```
-ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
-```
diff --git a/docs-en/07-develop/_sub_c.mdx b/docs-en/07-develop/_sub_c.mdx
deleted file mode 100644
index 95fef0042d0a277f9136e6e6f8c15558487232f9..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/subscribe_demo.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_cs.mdx b/docs-en/07-develop/_sub_cs.mdx
deleted file mode 100644
index 80934aa4d014a076896dce7f41e520f06ffd735d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_cs.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/SubscribeDemo.cs}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_go.mdx b/docs-en/07-develop/_sub_go.mdx
deleted file mode 100644
index cd908fc12c3a35f49ca108ee56c3951c5388a95f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_go.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/sub/main.go}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_java.mdx b/docs-en/07-develop/_sub_java.mdx
deleted file mode 100644
index e65bc576ebed030d935ced6a4572289cd367ffac..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_java.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
-```
-:::note
-For now the Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve a similar purpose.
-
-:::
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_node.mdx b/docs-en/07-develop/_sub_node.mdx
deleted file mode 100644
index c93ad627ce9a77ca71a014b41d571089e6c1727b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_node.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/subscribe_demo.js}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_python.mdx b/docs-en/07-develop/_sub_python.mdx
deleted file mode 100644
index b817deeba6e283a3ba16fee0d580d3823c999536..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_python.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/subscribe_demo.py}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_rust.mdx b/docs-en/07-develop/_sub_rust.mdx
deleted file mode 100644
index 4750cf7a3b871db48c9e5a26b22ab4b8a03f11be..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_rust.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rs
-{{#include docs-examples/rust/nativeexample/examples/subscribe_demo.rs}}
-```
\ No newline at end of file
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
deleted file mode 100644
index 674c92e2766a4eb304079140af19c8efea72d55e..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ /dev/null
@@ -1,213 +0,0 @@
----
-sidebar_label: Operation
-title: Manage DNODEs
----
-
-The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of the dnodes in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnodes can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.
-
-:::note
-All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
-
-:::
-
-## Show DNODEs
-
-The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
-
-```sql
-SHOW DNODES;
-```
-
-Below is the example output of this command.
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.008298s)
-```
-
-## Show VGROUPs
-
-To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnodes is scheduled automatically by the mnode based on the system resources of the dnodes.
-
-Launch TDengine CLI `taos` and execute the commands below:
-
-```sql
-USE SOME_DATABASE;
-SHOW VGROUPS;
-```
-
-The example output is below:
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.008298s)
-
-taos> use db;
-Database changed.
-
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
-==========================================================================================
- 14 | 38000 | ready | 1 | 1 | master | 0 |
- 15 | 38000 | ready | 1 | 1 | master | 0 |
- 16 | 38000 | ready | 1 | 1 | master | 0 |
- 17 | 38000 | ready | 1 | 1 | master | 0 |
- 18 | 37001 | ready | 1 | 1 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
-Query OK, 8 row(s) in set (0.001154s)
-```
-
-## Add DNODE
-
-Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
-
-```sql
-CREATE DNODE "fqdn:port";
-```
-
-The example output is as below:
-
-```
-taos> create dnode "localhost:7030";
-Query OK, 0 of 0 row(s) in database (0.008203s)
-
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
-Query OK, 2 row(s) in set (0.001017s)
-```
-
-It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
-Query OK, 2 row(s) in set (0.001316s)
-```
-
-## Drop DNODE
-
-Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. The `dnodeId` can be obtained from `show dnodes`.
-
-```sql
-DROP DNODE "fqdn:port";
-```
-
-or
-
-```sql
-DROP DNODE dnodeId;
-```
-
-The example output is below:
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
-Query OK, 2 row(s) in set (0.001017s)
-
-taos> drop dnode 2;
-Query OK, 0 of 0 row(s) in database (0.000518s)
-
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.001137s)
-```
-
-In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
-
-:::note
-
-- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
-- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
-- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
-- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
-
-:::
-
-## Move VNODE
-
-A vnode can be manually moved from one dnode to another.
-
-Launch TDengine CLI `taos` and execute the command below:
-
-```sql
-ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
-```
-
-In the above command, `source-dnodeId` is the ID of the dnode where the vnode currently resides, and `dest-dnodeId` is the ID of the target dnode. `vgId` (vgroup ID) can be shown by `SHOW VGROUPS`.
-
-First `show vgroups` is executed to show the vgroup distribution.
-
-```
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
-==========================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 |
- 18 | 37001 | ready | 1 | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
-Query OK, 8 row(s) in set (0.001314s)
-```
-
-It can be seen that there are 5 vgroups on dnode 3 and 3 vgroups on dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the command below in `taos`:
-
-```
-taos> alter dnode 3 balance "vnode:18-dnode:1";
-
-DB error: Balance already enabled (0.00755s)
-```
-
-However, the operation fails with the error message shown above, which means automatic load balancing is enabled in the current database, so manual load balancing can't be performed.
-
-Shut down the cluster, set the `balance` parameter to 0 in `taos.cfg` on all the dnodes, restart the cluster, and then execute `alter dnode` and `show vgroups` as below.
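-
-A minimal sketch of the relevant line in each dnode's `taos.cfg`:
-
-```
-# disable automatic load balancing so that vnodes can be moved manually
-balance 0
-```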
-
-```
-taos> alter dnode 3 balance "vnode:18-dnode:1";
-Query OK, 0 row(s) in set (0.000575s)
-
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
-=================================================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 18 | 37001 | ready | 2 | 1 | slave | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
-Query OK, 8 row(s) in set (0.001242s)
-```
-
-It can be seen from above output that vgId 18 has been moved from dnode 3 to dnode 1.
-
-:::note
-
-- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it's in offline, unsynced or syncing state.
-- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
-
-:::
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
deleted file mode 100644
index bd718eef9f8dc181628132de831dbca2af59d158..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-sidebar_label: HA & LB
-title: High Availability and Load Balancing
----
-
-## High Availability of Vnode
-
-High availability of vnode and mnode can be achieved through replicas in TDengine.
-
-A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
-
-```sql
-CREATE DATABASE demo replica 3;
-```
-
-The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
-
-There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. In theory, the cluster can continue to provide data access for reading or inserting as long as over half of the vnodes in each vgroup are online; however, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly merely because over half of the dnodes are online.
-
-## High Availability of Mnode
-
-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using the system parameter `numOfMnodes`. The valid range for `numOfMnodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
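-
-A minimal sketch of the relevant line in `taos.cfg` (the value 2 is illustrative):
-
-```
-# number of mnodes in the cluster
-numOfMnodes 2
-```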
-
-There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine CLI `taos` to show the mnodes in the cluster.
-
-```sql
-SHOW MNODES;
-```
-
-The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMnodes` is configured to 2, another mnode will be started when the second dnode is launched.
-
-For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMnodes` is set to 2 or higher.
-
-:::note
-If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
-
-:::
-
-## Load Balancing
-
-Load balancing will be triggered in 3 cases without manual intervention.
-
-- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
-- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
-- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
-
-:::tip
-Automatic load balancing is controlled by the parameter `balance`: 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
-
-:::
-
-## Dnode Offline
-
-When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
-
-- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
-
-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
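-
-For example, a sketch of the relevant line in `taos.cfg` (the value is in seconds and is illustrative only):
-
-```
-# drop a dnode from the cluster after it has been offline for 10 days
-offlineThreshold 864000
-```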
-
-:::note
-If all the vnodes in a vgroup (or mnodes in the mnode group) are in offline or unsynced status, a master node can only be elected after all the vnodes or mnodes in the group become online and can exchange status information. Following this, the vgroup (or mnode group) is able to provide service.
-
-:::
-
-## Arbitrator
-
-The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2 or 4. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2 or 4.
-
-To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but is only responsible for network communication; it doesn't handle any actual data access. As long as more than half of the vnodes or mnodes in a group, including the arbitrator, are available, the vgroup or mnode group can provide data insertion or query services normally.
-
-Normally, it's prudent to configure the replica number of each DB or the system parameter `numOfMnodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower storage cost and high availability.
-
-The arbitrator component is installed with the server package. For details about how to install it, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides its service.
-
-In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. The arbitrator component will be used automatically if the replica number is configured to an even number, and will be ignored if the replica number is configured to an odd number.
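-
-A minimal sketch of the relevant line in each dnode's `taos.cfg` (the host name is hypothetical; 6042 is the default port of `tarbitrator`):
-
-```
-# end point of the tarbitrator process
-arbitrator arb_host:6042
-```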
-
-The arbitrator can be seen by executing the command below in TDengine CLI `taos`; its role is shown as "arb".
-
-```sql
-SHOW DNODES;
-```
diff --git a/docs-en/12-taos-sql/02-database.md b/docs-en/12-taos-sql/02-database.md
deleted file mode 100644
index 80581b2f1bc7ce9cd046c18873d3f22b6804d8cf..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/02-database.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-sidebar_label: Database
-title: Database
-description: "create and drop database, show or change database parameters"
----
-
-## Create Database
-
-```
-CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
-```
-
-:::info
-
-1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
-2. UPDATE specifies whether the data can be updated and how the data can be updated.
- 1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
- 2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
- 3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
-3. The maximum length of database name is 33 bytes.
-4. The maximum length of a SQL statement is 65,480 bytes.
-5. Below are the parameters that can be used when creating a database
- - cache: [Description](/reference/config/#cache)
- - blocks: [Description](/reference/config/#blocks)
- - days: [Description](/reference/config/#days)
- - keep: [Description](/reference/config/#keep)
- - minRows: [Description](/reference/config/#minrows)
- - maxRows: [Description](/reference/config/#maxrows)
- - wal: [Description](/reference/config/#wallevel)
- - fsync: [Description](/reference/config/#fsync)
- - update: [Description](/reference/config/#update)
- - cacheLast: [Description](/reference/config/#cachelast)
- - replica: [Description](/reference/config/#replica)
- - quorum: [Description](/reference/config/#quorum)
- - maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- - comp: [Description](/reference/config/#comp)
- - precision: [Description](/reference/config/#precision)
-6. Please note that all of the parameters mentioned in this section are configured in configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default. To override default parameters, they must be specified in the `create database` statement.
-
-:::
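-
-For example, a sketch combining the options shown in the syntax above (the database name `power` is illustrative):
-
-```
-CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
-```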
-
-## Show Current Configuration
-
-```
-SHOW VARIABLES;
-```
-
-## Specify The Database In Use
-
-```
-USE db_name;
-```
-
-:::note
-This way is not applicable when using a REST connection. In a REST connection the database name must be specified before a table or STable name. For example, to query the STable "meters" in database "test", the query would be `SELECT count(*) FROM test.meters`.
-
-:::
-
-## Drop Database
-
-```
-DROP DATABASE [IF EXISTS] db_name;
-```
-
-:::note
-All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.
-
-:::
-
-## Change Database Configuration
-
-Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
-
-```
-ALTER DATABASE db_name COMP 2;
-```
-
-COMP parameter specifies whether the data is compressed and how the data is compressed.
-
-```
-ALTER DATABASE db_name REPLICA 2;
-```
-
-REPLICA parameter specifies the number of replicas of the database.
-
-```
-ALTER DATABASE db_name KEEP 365;
-```
-
-KEEP parameter specifies the number of days for which the data will be kept.
-
-```
-ALTER DATABASE db_name QUORUM 2;
-```
-
-QUORUM parameter specifies the necessary number of confirmations to determine whether the data is written successfully.
-
-```
-ALTER DATABASE db_name BLOCKS 100;
-```
-
-BLOCKS parameter specifies the number of memory blocks used by each VNODE.
-
-```
-ALTER DATABASE db_name CACHELAST 0;
-```
-
-CACHELAST parameter specifies whether and how the latest data of a subtable is cached.
-
-:::tip
-The above parameters can be changed using `ALTER DATABASE` command without restarting. For more details of all configuration parameters please refer to [Configuration Parameters](/reference/config/).
-
-:::
-
-## Show All Databases
-
-```
-SHOW DATABASES;
-```
-
-## Show The Create Statement of A Database
-
-```
-SHOW CREATE DATABASE db_name;
-```
-
-This command is useful when migrating data from one TDengine cluster to another: it returns the CREATE statement, which can be used in another TDengine instance to create the exact same database.
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
deleted file mode 100644
index acfb0de0e1521fd8c6a068497a3df7a17941524c..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/08-interval.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-sidebar_label: Interval
-title: Aggregate by Time Window
----
-
-Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window.
-Window-related clauses are used to divide the data set to be queried into subsets, over which aggregation is then performed. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window.
-
-## Time Window
-
-The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step by which the time window moves forward. The query is performed on one time window at a time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step by which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
-
-
-
-`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
-
-```
-SELECT * FROM temp_tb_1 INTERVAL(1m);
-```
-
-The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`.
-
-```
-SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
-```
-
-When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. Since version 2.1.5.0, the minimum time range specified by `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to the same value in the `taos.cfg` configuration file on both the client side and the server side.
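-
-For example, a legal sliding-window query might look like the sketch below (the column name `temperature` is assumed for the hypothetical table `temp_tb_1`):
-
-```
-SELECT AVG(temperature) FROM temp_tb_1 INTERVAL(10m) SLIDING(5m);
-```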
-
-## Status Window
-
-In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
-
-
-
-`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
-
-```
-SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
-```
-
-## Session Window
-
-```sql
-SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
-```
-
-The primary key, i.e. the timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 session windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the limit of 12 seconds.
-
-
-
-If the time interval between two consecutive rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
-
-## More On Window Aggregate
-
-### Syntax
-
-The full syntax of aggregate by window is as follows:
-
-```sql
-SELECT function_list FROM tb_name
- [WHERE where_condition]
- [SESSION(ts_col, tol_val)]
- [STATE_WINDOW(col)]
- [INTERVAL(interval [, offset]) [SLIDING sliding]]
- [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
-
-SELECT function_list FROM stb_name
- [WHERE where_condition]
- [INTERVAL(interval [, offset]) [SLIDING sliding]]
- [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
- [GROUP BY tags]
-```
-
-### Restrictions
-
-- Aggregate functions and select functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF, or arithmetic operations can't be used.
-- `LAST_ROW` can't be used together with window aggregate.
-- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
-- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
-- `FILL` clause is used to specify how to fill when there is data missing in any window. The available fill modes are:
-  1. NONE: no fill (the default fill mode)
-  2. VALUE: fill with a fixed value, which must be specified together with the mode, for example `FILL(VALUE, 1.23)`
-  3. PREV: fill with the previous non-NULL value, `FILL(PREV)`
-  4. NULL: fill with NULL, `FILL(NULL)`
-  5. LINEAR: fill with linear interpolation based on the closest non-NULL values before and after, `FILL(LINEAR)`
-  6. NEXT: fill with the next non-NULL value, `FILL(NEXT)`
-
-:::info
-
-1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
-2. The result set is in ascending order of timestamp when you aggregate by time window.
-3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
-
-:::
-
-Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query).
-
-## Examples
-
-A table of intelligent meters can be created by the SQL statement below:
-
-```sql
-CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-```
-
-The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values.
-
-```
-SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
- WHERE ts>=NOW-1d and ts<=now
- INTERVAL(10m)
- FILL(PREV);
-```
diff --git a/docs-en/12-taos-sql/10-json.md b/docs-en/12-taos-sql/10-json.md
deleted file mode 100644
index 7460a5e0ba3ce78ee7744569cda460c477cac19c..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/10-json.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: JSON Type
----
-
-## Syntax
-
-1. Tag of type JSON
-
- ```sql
- create STable s1 (ts timestamp, v1 int) tags (info json);
-
- create table s1_1 using s1 tags ('{"k1": "v1"}');
- ```
-
-2. "->" Operator of JSON
-
- ```sql
- select * from s1 where info->'k1' = 'v1';
-
- select info->'k1' from s1;
- ```
-
-3. "contains" Operator of JSON
-
- ```sql
- select * from s1 where info contains 'k2';
-
- select * from s1 where info contains 'k1';
- ```
-
-## Applicable Operations
-
-1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is not null` can be used but `in` can't be used.
-
- ```sql
- select * from s1 where info->'k1' match 'v*';
-
- select * from s1 where info->'k1' like 'v%' and info contains 'k2';
-
- select * from s1 where info is null;
-
- select * from s1 where info->'k1' is not null;
- ```
-
-2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and subqueries, for example `group by json->'key'`; a sketch follows below.
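-
-   A minimal sketch, reusing the tag key from the examples above:
-
-   ```sql
-   select count(*) from s1 group by info->'k1';
-   ```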
-
-3. `Distinct` can be used with a tag of type JSON
-
- ```sql
- select distinct info->'k1' from s1;
- ```
-
-4. Tag Operations
-
-   The value of a JSON tag can be altered. Please note that the full JSON value is overwritten when doing this.
-
- The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
-
-## Other Restrictions
-
-- JSON type can only be used for a tag. There can be only one tag of JSON type, and it can't coexist with tags of any other type.
-
-- The maximum length of keys in JSON is 256 bytes, and keys must consist of printable ASCII characters. The maximum total length of a JSON value is 4,096 bytes.
-
-- JSON format:
-
-  - The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be a non-JSON string, bool or array.
-  - The object can be {}, in which case the entire JSON is treated as empty. A key can be "", in which case it is ignored.
-  - A value can be int, double, string, bool or NULL, but not an array. Nesting is not allowed, which means that the value of a key can't be a JSON object.
-  - If a key occurs twice in the JSON, only the first occurrence is valid.
-  - Escape characters are not allowed in JSON.
-
-- NULL is returned when querying a key that doesn't exist in JSON.
-
-- If a JSON tag is produced by an inner query, it can't be parsed and queried in the outer query.
-
-For example, the SQL statements below are not supported.
-
-```sql
-select jtag->'key' from (select jtag from STable);
-select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
-```
diff --git a/docs-en/12-taos-sql/12-keywords.md b/docs-en/12-taos-sql/12-keywords.md
deleted file mode 100644
index 56a82a02a1fada712141f3572b761e0cd18576c6..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/12-keywords.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: Keywords
----
-
-There are about 200 keywords reserved by TDengine. They can't be used as the name of a database, STable or table, whether in upper case, lower case or mixed case.
-
-**Keywords List**
-
-| | | | | |
-| ----------- | ---------- | --------- | ---------- | ------------ |
-| ABORT | CREATE | IGNORE | NULL | STAR |
-| ACCOUNT | CTIME | IMMEDIATE | OF | STATE |
-| ACCOUNTS | DATABASE | IMPORT | OFFSET | STATEMENT |
-| ADD | DATABASES | IN | OR | STATE_WINDOW |
-| AFTER | DAYS | INITIALLY | ORDER | STORAGE |
-| ALL | DBS | INSERT | PARTITIONS | STREAM |
-| ALTER | DEFERRED | INSTEAD | PASS | STREAMS |
-| AND | DELIMITERS | INT | PLUS | STRING |
-| AS | DESC | INTEGER | PPS | SYNCDB |
-| ASC | DESCRIBE | INTERVAL | PRECISION | TABLE |
-| ATTACH | DETACH | INTO | PREV | TABLES |
-| BEFORE | DISTINCT | IS | PRIVILEGE | TAG |
-| BEGIN | DIVIDE | ISNULL | QTIME | TAGS |
-| BETWEEN | DNODE | JOIN | QUERIES | TBNAME |
-| BIGINT | DNODES | KEEP | QUERY | TIMES |
-| BINARY | DOT | KEY | QUORUM | TIMESTAMP |
-| BITAND | DOUBLE | KILL | RAISE | TINYINT |
-| BITNOT | DROP | LE | REM | TOPIC |
-| BITOR | EACH | LIKE | REPLACE | TOPICS |
-| BLOCKS | END | LIMIT | REPLICA | TRIGGER |
-| BOOL | EQ | LINEAR | RESET | TSERIES |
-| BY | EXISTS | LOCAL | RESTRICT | UMINUS |
-| CACHE | EXPLAIN | LP | ROW | UNION |
-| CACHELAST | FAIL | LSHIFT | RP | UNSIGNED |
-| CASCADE | FILE | LT | RSHIFT | UPDATE |
-| CHANGE | FILL | MATCH | SCORES | UPLUS |
-| CLUSTER | FLOAT | MAXROWS | SELECT | USE |
-| COLON | FOR | MINROWS | SEMI | USER |
-| COLUMN | FROM | MINUS | SESSION | USERS |
-| COMMA | FSYNC | MNODES | SET | USING |
-| COMP | GE | MODIFY | SHOW | VALUES |
-| COMPACT | GLOB | MODULES | SLASH | VARIABLE |
-| CONCAT | GRANTS | NCHAR | SLIDING | VARIABLES |
-| CONFLICT | GROUP | NE | SLIMIT | VGROUPS |
-| CONNECTION | GT | NONE | SMALLINT | VIEW |
-| CONNECTIONS | HAVING | NOT | SOFFSET | VNODES |
-| CONNS | ID | NOTNULL | STable | WAL |
-| COPY | IF | NOW | STableS | WHERE |
-| _C0         | _QSTART    | _QSTOP    | _QDURATION | _WSTART      |
-| _WSTOP      | _WDURATION |           |            |              |
-
-## Explanations
-### TBNAME
-`TBNAME` can be considered a special tag in a STable, representing the name of the subtable.
-
-Get the table name and tag values of all subtables in a STable:
-```mysql
-SELECT TBNAME, location FROM meters;
-```
-
-Count the number of subtables in a STable:
-```mysql
-SELECT COUNT(TBNAME) FROM meters;
-```
-
-In the above two queries, only filters on TAGS can be used in the WHERE clause. For example:
-```mysql
-taos> SELECT TBNAME, location FROM meters;
- tbname | location |
-==================================================================
- d1004 | California.SanFrancisco |
- d1003 | California.SanFrancisco |
- d1002 | California.LosAngeles |
- d1001 | California.LosAngeles |
-Query OK, 4 row(s) in set (0.000881s)
-
-taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
- count(tbname) |
-========================
- 2 |
-Query OK, 1 row(s) in set (0.001091s)
-```
-### _QSTART/_QSTOP/_QDURATION
-The start, stop and duration of a query time window (Since version 2.6.0.0).
-
-### _WSTART/_WSTOP/_WDURATION
-The start, stop and duration of an aggregate query by time window, such as interval, session window and state window (Since version 2.6.0.0).
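-
-A hedged usage sketch; it assumes the `meters` table from the examples above:
-
-```mysql
-SELECT _WSTART, _WSTOP, _WDURATION, COUNT(*) FROM meters INTERVAL(10m);
-```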
-
-### _c0
-The first column of a table or STable.
\ No newline at end of file
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
deleted file mode 100644
index 33656338a7bba38dc55cf536bdba8e95309c5acf..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/index.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: TDengine SQL
-description: "The syntax supported by TDengine SQL "
----
-
-This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
-
-TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
-
-Syntax Specifications used in this chapter:
-
-- The content inside <\> needs to be input by the user, excluding <\> itself.
-- \[ \] means optional input, excluding [] itself.
-- | means one of a few options, excluding | itself.
-- … means the item prior to it can be repeated multiple times.
-
-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set collected from electric meters. Each meter collects 3 measurements: current, voltage and phase. The data model is shown below:
-
-```sql
-taos> DESCRIBE meters;
- Field | Type | Length | Note |
-=================================================================================
- ts | TIMESTAMP | 8 | |
- current | FLOAT | 4 | |
- voltage | INT | 4 | |
- phase | FLOAT | 4 | |
- location | BINARY | 64 | TAG |
- groupid | INT | 4 | TAG |
-```
-
-The data set includes data collected by 4 meters; based on the data model of TDengine, the corresponding table names are d1001, d1002, d1003 and d1004.
diff --git a/docs-en/14-reference/02-rest-api/02-rest-api.mdx b/docs-en/14-reference/02-rest-api/02-rest-api.mdx
deleted file mode 100644
index 990af861961e9daf4ac775462e21d6d9852d17c1..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/02-rest-api/02-rest-api.mdx
+++ /dev/null
@@ -1,307 +0,0 @@
----
-title: REST API
----
-
-To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
-
-:::note
-One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. (Since version 2.2.0.0, TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, REST service is provided by taosAdapter by default and it requires that the `db_name` must be specified in the URL.)
-:::
-
-## Installation
-
-The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
-
-## Verification
-
-If the TDengine server is already installed, it can be verified as follows:
-
-The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment.
-
-The following example lists all databases on the host h1.taosdata.com. To use it in your environment, replace `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
-
-```bash
-curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
-```
-
-The following return value results indicate that the verification passed.
-
-```json
-{
- "status": "succ",
- "head": [
- "name",
- "created_time",
- "ntables",
- "vgroups",
- "replica",
- "quorum",
- "days",
- "keep1,keep2,keep(D)",
- "cache(MB)",
- "blocks",
- "minrows",
- "maxrows",
- "wallevel",
- "fsync",
- "comp",
- "precision",
- "status"
- ],
- "data": [
- [
- "log",
- "2020-09-02 17:23:00.039",
- 4,
- 1,
- 1,
- 1,
- 10,
- "30,30,30",
- 1,
- 3,
- 100,
- 4096,
- 1,
- 3000,
- 2,
- "us",
- "ready"
- ]
- ],
- "rows": 1
-}
-```
-
-## HTTP request URL format
-
-```
-http://<fqdn>:<port>/rest/sql/[db_name]
-```
-
-Parameter Description:
-
-- fqdn: FQDN or IP address of any host in the cluster
-- port: httpPort configuration item in the configuration file, default is 6041
-- db_name: Optional parameter that specifies the default database name for the executed SQL command. (supported since version 2.2.0.0)
-
-For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
-
-TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
-
-- The custom authentication information is as follows. More details about the token are given later.
-
- ```
-  Authorization: Taosd <TOKEN>
- ```
-
-- Basic authentication information is shown below
-
- ```
-  Authorization: Basic <TOKEN>
- ```
-
-The HTTP request's BODY is a complete SQL command, and the data table in the SQL statement should be provided with a database prefix, e.g., `db_name.tb_name`. If the table name does not have a database prefix and the database name is not specified in the URL, the system will respond with an error, because the HTTP module is a simple forwarder and has no awareness of the current DB.
-
-Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
-
-```bash
-curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <fqdn>:<port>/rest/sql/[db_name]
-```
-
-Or
-
-```bash
-curl -u username:password -d '<SQL>' <fqdn>:<port>/rest/sql/[db_name]
-```
-
-where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
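-
-If you need to generate the token programmatically, a minimal Go sketch follows (the credentials are the defaults shown above):
-
-```go
-package main
-
-import (
-	"encoding/base64"
-	"fmt"
-)
-
-func main() {
-	// Base64-encode "{username}:{password}" to obtain the Basic auth token.
-	token := base64.StdEncoding.EncodeToString([]byte("root:taosdata"))
-	fmt.Println(token) // prints cm9vdDp0YW9zZGF0YQ==
-}
-```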
-
-## HTTP Return Format
-
-The return result is in JSON format, as follows:
-
-```json
-{
- "status": "succ",
- "head": ["ts", "current", ...],
- "column_meta": [["ts",9,8],["current",6,4], ...],
- "data": [
- ["2018-10-03 14:38:05.000", 10.3, ...],
- ["2018-10-03 14:38:15.000", 12.6, ...]
- ],
- "rows": 2
-}
-```
-
-Description:
-
-- status: tells you whether the operation result is success or failure.
-- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
-- column_meta: this item is added to the return value to indicate the data type of each column in the data with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6 (the float type), and the type length is 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
-- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
-- rows: Indicates how many rows of data there are.
-
-The column types in column_meta are described as follows:
-
-- 1: BOOL
-- 2: TINYINT
-- 3: SMALLINT
-- 4: INT
-- 5: BIGINT
-- 6: FLOAT
-- 7: DOUBLE
-- 8: BINARY
-- 9: TIMESTAMP
-- 10: NCHAR
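-
-As an illustration, a hedged Go sketch that decodes this return format; the struct mirrors the JSON fields documented above and is not an official client type:
-
-```go
-package main
-
-import (
-	"encoding/json"
-	"fmt"
-)
-
-// restResult mirrors the documented return format.
-type restResult struct {
-	Status     string          `json:"status"`
-	Head       []string        `json:"head"`
-	ColumnMeta [][]interface{} `json:"column_meta"` // [column name, type code, type length]
-	Data       [][]interface{} `json:"data"`
-	Rows       int             `json:"rows"`
-}
-
-func main() {
-	body := []byte(`{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[1]],"rows":1}`)
-	var r restResult
-	if err := json.Unmarshal(body, &r); err != nil {
-		fmt.Println("decode error:", err)
-		return
-	}
-	fmt.Println(r.Status, r.Rows) // succ 1
-}
-```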
-
-## Custom Authorization Code
-
-HTTP requests require an authorization code `<TOKEN>` for identification purposes. The administrator usually provides the authorization code, and it can be obtained simply by sending an `HTTP GET` request as follows:
-
-```bash
-curl http://<fqdn>:<port>/rest/login/<username>/<password>
-```
-
-Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
-
-- status: flag bit of the request result
-
-- code: return value code
-
-- desc: authorization code
-
-Example of getting authorization code.
-
-```bash
-curl http://192.168.0.1:6041/rest/login/root/taosdata
-```
-
-Response body:
-
-```json
-{
- "status": "succ",
- "code": 0,
- "desc": "/KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04"
-}
-```
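-
-A hedged Go sketch of calling this REST API; the endpoint, credentials and SQL are taken from the curl examples, and error handling is minimal:
-
-```go
-package main
-
-import (
-	"fmt"
-	"io/ioutil"
-	"net/http"
-	"strings"
-)
-
-func main() {
-	req, err := http.NewRequest("POST", "http://192.168.0.1:6041/rest/sql", strings.NewReader("show databases;"))
-	if err != nil {
-		fmt.Println(err)
-		return
-	}
-	// Same Basic token as in the curl examples: base64 of "root:taosdata".
-	req.Header.Set("Authorization", "Basic cm9vdDp0YW9zZGF0YQ==")
-	resp, err := http.DefaultClient.Do(req)
-	if err != nil {
-		fmt.Println(err)
-		return
-	}
-	defer resp.Body.Close()
-	body, _ := ioutil.ReadAll(resp.Body)
-	fmt.Println(string(body))
-}
-```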
-
-## Examples
-
-- Query all records from table d1001 of database demo:
-
- ```bash
- curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql
- ```
-
- Response body:
-
- ```json
- {
- "status": "succ",
- "head": ["ts", "current", "voltage", "phase"],
- "column_meta": [
- ["ts", 9, 8],
- ["current", 6, 4],
- ["voltage", 4, 4],
- ["phase", 6, 4]
- ],
- "data": [
- ["2018-10-03 14:38:05.000", 10.3, 219, 0.31],
- ["2018-10-03 14:38:15.000", 12.6, 218, 0.33]
- ],
- "rows": 2
- }
- ```
-
-- Create database demo:
-
- ```bash
- curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6041/rest/sql
- ```
-
- Response body:
-
- ```json
- {
- "status": "succ",
- "head": ["affected_rows"],
- "column_meta": [["affected_rows", 4, 4]],
- "data": [[1]],
- "rows": 1
- }
- ```
-
-## Other Uses
-
-### Unix timestamps for result sets
-
-When the HTTP request URL uses `/rest/sqlt`, the returned result set's timestamp value will be in Unix timestamp format, for example:
-
-```bash
-curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
-```
-
-Response body:
-
-```json
-{
- "status": "succ",
- "head": ["ts", "current", "voltage", "phase"],
- "column_meta": [
- ["ts", 9, 8],
- ["current", 6, 4],
- ["voltage", 4, 4],
- ["phase", 6, 4]
- ],
- "data": [
- [1538548685000, 10.3, 219, 0.31],
- [1538548695000, 12.6, 218, 0.33]
- ],
- "rows": 2
-}
-```
-
-### UTC format for the result set
-
-When the HTTP request URL uses `/rest/sqlutc`, the timestamps in the returned result set will be expressed in UTC format, for example:
-
-```bash
- curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
-```
-
-Response body:
-
-```json
-{
- "status": "succ",
- "head": ["ts", "current", "voltage", "phase"],
- "column_meta": [
- ["ts", 9, 8],
- ["current", 6, 4],
- ["voltage", 4, 4],
- ["phase", 6, 4]
- ],
- "data": [
- ["2018-10-03T14:38:05.000+0800", 10.3, 219, 0.31],
- ["2018-10-03T14:38:15.000+0800", 12.6, 218, 0.33]
- ],
- "rows": 2
-}
-```
-
-## Important configuration items
-
-Only some configuration parameters related to the RESTful interface are listed below. Please see the description in the configuration file for other system parameters.
-
-- httpPort: the port number on which the RESTful service listens, 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the serverPort parameter).
-- httpMaxThreads: the number of threads to start, default is 2 (the default value is rounded down to half of the CPU cores with version 2.0.17.0 and later versions).
-- restfulRowLimit: the maximum number of rows to return in a result set (in JSON format). The default value is 10240.
-- httpEnableCompress: whether to support compression, the default is not supported. Currently, TDengine only supports the gzip compression format.
-- httpDebugFlag: logging switch, default is 131. 131: error and alarm messages only, 135: debug messages, 143: very detailed debug messages.
-- httpDbNameMandatory: users must specify the default database name in the RESTful URL. The default is 0, which turns off this check. If set to 1, users must put a default database name in every RESTful URL. Otherwise, it will return an execution error and reject this SQL statement, regardless of whether the SQL statement executed at this time requires a specified database.
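-
-A hedged sketch of how these might appear in taosd's taos.cfg; the parameter names come from the list above and the values are illustrative only:
-
-```text
-# maximum number of rows returned per RESTful query
-restfulRowLimit 10240
-
-# enable gzip compression of RESTful responses
-httpEnableCompress 1
-
-# require a database name in every RESTful URL
-httpDbNameMandatory 1
-```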
-
-:::note
-If you are using the REST API provided by taosd, you should write the above configuration in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, refer to the taosAdapter [corresponding configuration method](/reference/taosadapter/).
-:::
diff --git a/docs-en/14-reference/03-connector/_verify_windows.mdx b/docs-en/14-reference/03-connector/_verify_windows.mdx
deleted file mode 100644
index c3d6af84d8e8cdf8b75c8efc5bb36955df4884bd..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/_verify_windows.mdx
+++ /dev/null
@@ -1,14 +0,0 @@
-Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
-
-```text
- C:\TDengine>taos
- Welcome to the TDengine shell from Linux, Client Version:2.0.5.0
- Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
- taos> show databases;
- name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | precision | status |
- ===================================================================================================================================================================================================================================================================
- test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready |
- log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready |
- Query OK, 2 row(s) in set (0.045000s)
- taos>
-```
diff --git a/docs-en/14-reference/03-connector/go.mdx b/docs-en/14-reference/03-connector/go.mdx
deleted file mode 100644
index 8a05f2d841bbcdbab2bdb7471691ca0ae49a4f6b..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/go.mdx
+++ /dev/null
@@ -1,415 +0,0 @@
----
-toc_max_heading_level: 4
-sidebar_position: 4
-sidebar_label: Go
-title: TDengine Go Connector
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-import Preparation from "./_preparation.mdx"
-import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx"
-import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx"
-import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx"
-import GoOpenTSDBJson from "../../07-develop/03-insert-data/_go_opts_json.mdx"
-import GoQuery from "../../07-develop/04-query-data/_go.mdx"
-
-`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.
-
-`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from those implemented by the native connection.
-
-This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
-
-The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
-
-## Supported Platforms
-
-Native connections are supported on the same platforms as the TDengine client driver.
-REST connections are supported on all platforms that can run Go.
-
-## Version support
-
-Please refer to [version support list](/reference/connector#version-support)
-
-## Supported features
-
-### Native connections
-
-A "native connection" is established by the connector directly to the TDengine instance via the TDengine client driver (taosc). The supported functional features are:
-
-* Normal queries
-* Continuous queries
-* Subscriptions
-* schemaless interface
-* parameter binding interface
-
-### REST connection
-
-A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
-
-* General queries
-* Continuous queries
-
-## Installation steps
-
-### Pre-installation
-
-- Install Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
-- If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps
-
-Configure the environment variables and verify the toolchain with the following commands:
-
-* `go env`
-* `gcc -v`
-
-### Use go get to install
-
-```
-go get -u github.com/taosdata/driver-go/v2@develop
-```
-
-### Manage with go mod
-
-1. Initialize the project with the `go mod` command.
-
- ```text
- go mod init taos-demo
- ```
-
-2. Introduce taosSql
-
- ```go
- import (
- "database/sql"
- _ "github.com/taosdata/driver-go/v2/taosSql"
- )
- ```
-
-3. Update the dependency packages with `go mod tidy`.
-
- ```text
- go mod tidy
- ```
-
-4. Run the program with `go run taos-demo` or compile the binary with the `go build` command.
-
- ```text
- go run taos-demo
- go build
- ```
-
-## Create a connection
-
-### Data source name (DSN)
-
-Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but without a type prefix (square brackets indicate optional parts):
-
-``` text
-[username[:password]@][protocol[(address)]]/[dbname][?param1=value1&...&paramN=valueN]
-```
-
-The full form of the DSN:
-
-```text
-username:password@protocol(address)/dbname?param=value
-```
-
-### Connecting via connector
-
-
-
-
-_taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
-
-Use `taosSql` as `driverName` and a correct [DSN](#DSN) as `dataSourceName`. The DSN supports the following parameter:
-
-* configPath specifies the `taos.cfg` directory
-
-Example.
-
-```go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/taosdata/driver-go/v2/taosSql"
-)
-
-func main() {
-    var taosUri = "root:taosdata@tcp(localhost:6030)/"
-    taos, err := sql.Open("taosSql", taosUri)
-    if err != nil {
-        fmt.Println("failed to connect TDengine, err:", err)
-        return
-    }
-    // close the connection when done; this also keeps taos from being unused
-    defer taos.Close()
-}
-```
-
-
-
-
-_taosRestful_ implements Go's `database/sql/driver` interface via an HTTP client. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
-
-Use `taosRestful` as `driverName` and a correct [DSN](#DSN) as `dataSourceName`. The DSN supports the following parameters:
-
-* `disableCompression` whether to disable accepting compressed data; the default is true, i.e. compressed data is not accepted. Set it to false if the data is transferred using gzip compression.
-* `readBufferSize` the size of the buffer for reading data, 4 KB (4096) by default. It can be increased when the query results contain a lot of data.
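-
-A hedged DSN sketch combining both parameters; the host, port and database are illustrative:
-
-```go
-// accept gzip-compressed transfers and enlarge the read buffer for big result sets
-var dsn = "root:taosdata@http(localhost:6041)/test?disableCompression=false&readBufferSize=102400"
-```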
-
-Example.
-
-```go
-package main
-
-import (
- "database/sql"
- "fmt"
-
- _ "github.com/taosdata/driver-go/v2/taosRestful"
-)
-
-func main() {
-    var taosUri = "root:taosdata@http(localhost:6041)/"
-    taos, err := sql.Open("taosRestful", taosUri)
-    if err != nil {
-        fmt.Println("failed to connect TDengine, err:", err)
-        return
-    }
-    // close the connection when done; this also keeps taos from being unused
-    defer taos.Close()
-}
-```
-
-
-
-## Usage examples
-
-### Write data
-
-#### SQL Write
-
-
-
-#### InfluxDB line protocol write
-
-
-
-#### OpenTSDB Telnet line protocol write
-
-
-
-#### OpenTSDB JSON line protocol write
-
-
-
-### Query data
-
-
-
-### More sample programs
-
-* [sample program](https://github.com/taosdata/TDengine/tree/develop/examples/go)
-* [Video tutorial](https://www.taosdata.com/blog/2020/11/11/1951.html).
-
-## Usage limitations
-
-Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. change `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)`; otherwise it will report the error `[0x217] Database not specified or available`.
-
-You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error, while executing other queries or writes against that db will report an error.
-
-The complete example is as follows.
-
-```go
-package main
-
-import (
- "database/sql"
- "fmt"
- "time"
-
- _ "github.com/taosdata/driver-go/v2/taosRestful"
-)
-
-func main() {
- var taosDSN = "root:taosdata@http(localhost:6041)/test"
- taos, err := sql.Open("taosRestful", taosDSN)
- if err != nil {
- fmt.Println("failed to connect TDengine, err:", err)
- return
- }
- defer taos.Close()
- taos.Exec("create database if not exists test")
- taos.Exec("create table if not exists tb1 (ts timestamp, a int)")
- _, err = taos.Exec("insert into tb1 values(now, 0)(now+1s,1)(now+2s,2)(now+3s,3)")
- if err != nil {
- fmt.Println("failed to insert, err:", err)
- return
- }
- rows, err := taos.Query("select * from tb1")
- if err != nil {
- fmt.Println("failed to select from table, err:", err)
- return
- }
-
- defer rows.Close()
- for rows.Next() {
- var r struct {
- ts time.Time
- a int
- }
- err := rows.Scan(&r.ts, &r.a)
- if err != nil {
- fmt.Println("scan error:\n", err)
- return
- }
- fmt.Println(r.ts, r.a)
- }
-}
-```
-
-## Frequently Asked Questions
-
-1. Cannot find the package `github.com/taosdata/driver-go/v2/taosRestful`
-
- Change the `github.com/taosdata/driver-go/v2` line in the require block of the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
-
-2. bind interface in database/sql crashes
-
-   The REST connection does not support the parameter binding interface. It is recommended to use `db.Exec` and `db.Query`.
-
-3. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
-
- The execution of SQL command in the REST interface is not contextual, so using `use db` statement will not work, see the usage restrictions section above.
-
-4. use `taosSql` without error but use `taosRestful` with error `[0x217] Database not specified or available`
-
- Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
-
-5. Upgrade `github.com/taosdata/driver-go/v2/taosRestful`
-
- Change the `github.com/taosdata/driver-go/v2` line in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
-
-6. `readBufferSize` parameter has no significant effect after being increased
-
- Increasing `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is smaller, modifying this parameter will not improve performance significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value based on the actual situation to achieve the best query performance.
-
-7. Query efficiency is reduced when the `disableCompression` parameter is set to `false`
-
-   When the `disableCompression` parameter is set to `false`, the query result is compressed with `gzip` before transmission, so you have to decompress the data with `gzip` after receiving it.
-
-8. `go get` command can't get the package, or times out getting the package
-
- Set Go proxy `go env -w GOPROXY=https://goproxy.cn,direct`.
-
-## Common APIs
-
-### database/sql API
-
-* `sql.Open(DRIVER_NAME string, dataSourceName string) *DB`
-
-  Use this API to open a DB; it returns an object of type \*DB.
-
-:::info
-This API opens the DB without checking permissions; user/password/host/port are validated only when you actually execute a Query or Exec.
-
-:::
-
-* `func (db *DB) Exec(query string, args ...interface{}) (Result, error)`
-
- `sql.Open` built-in method to execute non-query related SQL.
-
-* `func (db *DB) Query(query string, args ...interface{}) (*Rows, error)`
-
-  `sql.Open` built-in method to execute query statements.
-
-### Advanced functions (af) API
-
-The `af` package encapsulates TDengine advanced functions such as connection management, subscriptions, schemaless, parameter binding, etc.
-
-#### Connection management
-
-* `af.Open(host, user, pass, db string, port int) (*Connector, error)`
-
- This API creates a connection to taosd via cgo.
-
-* `func (conn *Connector) Close() error`
-
- Closes the connection.
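-
-A minimal sketch using only the two calls above; the import path and the default host/credentials are assumptions based on this documentation set:
-
-```go
-package main
-
-import (
-	"fmt"
-
-	"github.com/taosdata/driver-go/v2/af"
-)
-
-func main() {
-	// af.Open(host, user, pass, db string, port int) as documented above;
-	// an empty db selects no default database.
-	conn, err := af.Open("localhost", "root", "taosdata", "", 6030)
-	if err != nil {
-		fmt.Println("failed to connect:", err)
-		return
-	}
-	defer conn.Close()
-}
-```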
-
-#### Subscription
-
-* `func (conn *Connector) Subscribe(restart bool, topic string, sql string, interval time.Duration) (Subscriber, error)`
-
- Subscribe to data.
-
-* `func (s *taosSubscriber) Consume() (driver.Rows, error)`
-
- Consume the subscription data, returning the `Rows` structure of the `database/sql/driver` package.
-
-* `func (s *taosSubscriber) Unsubscribe(keepProgress bool)`
-
- Unsubscribe from data.
-
-#### schemaless
-
-* `func (conn *Connector) InfluxDBInsertLines(lines []string, precision string) error`
-
-  Write data using the InfluxDB line protocol.
-
-* `func (conn *Connector) OpenTSDBInsertTelnetLines(lines []string) error`
-
-  Write OpenTSDB telnet protocol data.
-
-* `func (conn *Connector) OpenTSDBInsertJsonPayload(payload string) error`
-
- Writes OpenTSDB JSON protocol data.
-
-#### parameter binding
-
-* `func (conn *Connector) StmtExecute(sql string, params *param.Param) (res driver.Result, err error)`
-
- Parameter bound single row insert.
-
-* `func (conn *Connector) StmtQuery(sql string, params *param.Param) (rows driver.Rows, err error)`
-
- Parameter bound query that returns the `Rows` structure of the `database/sql/driver` package.
-
-* `func (conn *Connector) InsertStmt() *insertstmt.InsertStmt`
-
-  Initialize a parameter-binding insert statement.
-
-* `func (stmt *InsertStmt) Prepare(sql string) error`
-
- Parameter binding preprocessing SQL statement.
-
-* `func (stmt *InsertStmt) SetTableName(name string) error`
-
- Bind the set table name parameter.
-
-* `func (stmt *InsertStmt) SetSubTableName(name string) error`
-
- Parameter binding to set the sub table name.
-
-* `func (stmt *InsertStmt) BindParam(params []*param.Param, bindType *param.ColumnType) error`
-
- Parameter bind multiple rows of data.
-
-* `func (stmt *InsertStmt) AddBatch() error`
-
- Add to a parameter-bound batch.
-
-* `func (stmt *InsertStmt) Execute() error`
-
- Execute a parameter binding.
-
-* `func (stmt *InsertStmt) GetAffectedRows() int`
-
- Gets the number of affected rows inserted by the parameter binding.
-
-* `func (stmt *InsertStmt) Close() error`
-
- Closes the parameter binding.
-
-## API Reference
-
-For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2).
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
deleted file mode 100644
index ff15acf1a9c5dbfd74e6f3101459cfc7bdeda515..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/java.mdx
+++ /dev/null
@@ -1,845 +0,0 @@
----
-toc_max_heading_level: 4
-sidebar_position: 2
-sidebar_label: Java
-title: TDengine Java Connector
-description: TDengine Java connector based on the JDBC API, providing both native and REST connections
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The implementation of the REST connection and those of the native connections have slight differences in features.
-
-
-
-The preceding diagram shows two ways for a Java app to access TDengine via connector:
-
-- JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to call the client driver (`libtaos.so` or `taos.dll`) APIs directly, sending writing and query requests to taosd instances located on physical node 2 (pnode2).
-- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server (taosAdapter) on physical node 2. taosAdapter forwards the request to TDengine server and returns the result.
-
-The REST connection, which does not rely on TDengine client drivers, is more convenient and flexible, in addition to being cross-platform. However, the performance is about 30% lower than that of the native connection.
-
-:::info
-TDengine's JDBC driver implementation is as consistent as possible with relational database drivers. Still, there are differences in the use scenarios and technical characteristics of TDengine and relational databases. So 'taos-jdbcdriver' also has some differences from traditional JDBC drivers. It is important to keep the following points in mind:
-
-- TDengine does not currently support delete operations for individual data records.
-- Transactional operations are not currently supported.
-
-:::
-
-## Supported platforms
-
-Native connections are supported on the same platforms as the TDengine client driver.
-REST connections are supported on all platforms that can run Java.
-
-## Version support
-
-Please refer to [Version Support List](/reference/connector#version-support).
-
-## TDengine DataType vs. Java DataType
-
-TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows:
-
-| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
-| ----------------- | ---------------------------------- | ------------------------------------ |
-| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
-| INT | java.lang.Integer | java.lang.Integer |
-| BIGINT | java.lang.Long | java.lang.Long |
-| FLOAT | java.lang.Float | java.lang.Float |
-| DOUBLE | java.lang.Double | java.lang.Double |
-| SMALLINT | java.lang.Short | java.lang.Short |
-| TINYINT | java.lang.Byte | java.lang.Byte |
-| BOOL | java.lang.Boolean | java.lang.Boolean |
-| BINARY | java.lang.String | byte array |
-| NCHAR | java.lang.String | java.lang.String |
-| JSON | - | java.lang.String |
-
-**Note**: Only tags support the JSON type.
-
-## Installation steps
-
-### Pre-installation preparation
-
-Before using Java Connector to connect to the database, the following conditions are required.
-
-- Java 1.8 or above runtime environment and Maven 3.6 or above installed
-- TDengine client driver installed (required for native connections, not required for REST connections), please refer to [Installing Client Driver](/reference/connector#Install-Client-Driver)
-
-### Install the connectors
-
-
-
-
-- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
-- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
-- [maven.aliyun](https://maven.aliyun.com/mvn/search)
-
-Add the following dependency in the `pom.xml` file of your Maven project:
-
-```xml
-<dependency>
-  <groupId>com.taosdata.jdbc</groupId>
-  <artifactId>taos-jdbcdriver</artifactId>
-  <version>2.0.**</version>
-</dependency>
-```
-
-
-
-
-You can build the Java connector from source code after cloning its repository:
-
-```
-git clone https://github.com/taosdata/taos-connector-jdbc.git
-cd taos-connector-jdbc
-mvn clean install -Dmaven.test.skip=true
-```
-
-After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
-
-
-
-
-## Establish a connection
-
-TDengine's JDBC URL specification format is:
-`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
-
-For establishing connections, native connections differ slightly from REST connections.
-
-
-
-
-```java
-Class.forName("com.taosdata.jdbc.TSDBDriver");
-String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
-Connection conn = DriverManager.getConnection(jdbcUrl);
-```
-
-In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname `taosdemo.com`, port `6030` (the default port for TDengine), and a database named `test`. In this URL, the user name `user` is specified as `root`, and the `password` is `taosdata`.
-
-Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (`libtaos.so` on Linux; `taos.dll` on Windows).
-
-The configuration parameters in the URL are as follows:
-
-- user: Log in to the TDengine username. The default value is 'root'.
-- password: User login password, the default value is 'taosdata'.
-- cfgdir: client configuration file directory path, default '/etc/taos' on Linux OS, 'C:/TDengine/cfg' on Windows OS.
-- charset: The character set used by the client, the default value is the system character set.
-- locale: Client locale, by default, use the system's current locale.
-- timezone: The time zone used by the client, the default value is the system's current time zone.
-- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
-- batchErrorIgnore: true: When executing statement executeBatch, if there is a SQL execution failure in the middle, the following SQL will continue to be executed. false: No more statements after the failed SQL are executed. The default value is: false.
-
-For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
-
-**Connect using the TDengine client driver configuration file**
-
-When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
-
-1. Do not specify hostname and port in Java applications.
-
- ```java
- public Connection getConn() throws Exception{
- Class.forName("com.taosdata.jdbc.TSDBDriver");
- String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
- Properties connProps = new Properties();
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
- Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
- return conn;
- }
- ```
-
-2. Specify the firstEp and secondEp parameters in the configuration file taos.cfg
-
- ```shell
- # first fully qualified domain name (FQDN) for TDengine system
- firstEp cluster_node1:6030
-
- # second fully qualified domain name (FQDN) for TDengine system, for cluster only
- secondEp cluster_node2:6030
-
- # default system charset
- # charset UTF-8
-
- # system locale
- # locale en_US.UTF-8
- ```
-
-In the above example, JDBC uses the client's configuration file to establish a connection to a hostname `cluster_node1`, port 6030, and a database named `test`. When the firstEp node in the cluster fails, JDBC attempts to connect to the cluster using secondEp.
-
-In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
-
-:::note
-The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located; the default path is `/etc/taos/taos.cfg` on Linux and `C:\TDengine\cfg\taos.cfg` on Windows.
-
-:::
-
-
-
-
-```java
-Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
-String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
-Connection conn = DriverManager.getConnection(jdbcUrl);
-```
-
-In the above example, a RestfulDriver with a JDBC REST connection is used to establish a connection to a database named `test` with hostname `taosdemo.com` on port `6041`. The URL specifies the user name as `root` and the password as `taosdata`.
-
-There is no dependency on the client driver when using a JDBC REST connection. Compared to a JDBC native connection, only the following are required:
-
-1. driverClass specified as "com.taosdata.jdbc.rs.RestfulDriver".
-2. jdbcUrl starting with "jdbc:TAOS-RS://".
-3. use 6041 as the connection port.
-
-The configuration parameters in the URL are as follows.
-
-- user: Login TDengine user name, default value 'root'.
-- password: user login password, default value 'taosdata'.
-- batchfetch: true: pull the result set in batch when executing the query; false: pull the result set row by row. The default value is false. batchfetch uses HTTP for data transfer. The JDBC REST connection supports bulk data pulling function in taos-jdbcdriver-2.0.38 and TDengine 2.4.0.12 and later versions. taos-jdbcdriver and TDengine transfer data via WebSocket connection. Compared with HTTP, WebSocket enables JDBC REST connection to support large data volume querying and improve query performance.
-- charset: specify the charset to parse the string, this parameter is valid only when set batchfetch to true.
-- batchErrorIgnore: true: when executing executeBatch of Statement, if one SQL execution fails in the middle, continue to execute the following SQL. false: no longer execute any statement after the failed SQL. The default value is: false.
-
-**Note**: Some configuration items (e.g., locale, timezone) do not work in the REST connection.
-
-:::note
-
-- Unlike the native connection method, the REST interface is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example:
-
-```sql
-INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
-```
-
-- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then this SQL can be executed: `insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);`
-
-:::
-
-
-
-
-### Specify the URL and Properties to get the connection
-
-In addition to getting the connection from the specified URL, you can use Properties to specify parameters when the connection is established.
-
-**Note**:
-
-- The client parameter set in the application is process-level. If you want to update the parameters of the client, you need to restart the application. This is because the client parameter is a global parameter that takes effect only the first time the application is set.
-- The following sample code is based on taos-jdbcdriver-2.0.36.
-
-```java
-public Connection getConn() throws Exception{
- Class.forName("com.taosdata.jdbc.TSDBDriver");
- String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
- Properties connProps = new Properties();
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
- connProps.setProperty("debugFlag", "135");
- connProps.setProperty("maxSQLLength", "1048576");
- Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
- return conn;
-}
-
-public Connection getRestConn() throws Exception{
- Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
- String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
- Properties connProps = new Properties();
- connProps.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
- Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
- return conn;
-}
-```
-
-In the above example, a connection is established to `taosdemo.com`, port is 6030/6041, and database named `test`. The connection specifies the user name as `root` and the password as `taosdata` in the URL and specifies the character set, language environment, time zone, and whether to enable bulk fetching in the connProps.
-
-The configuration parameters in properties are as follows.
-
-- TSDBDriver.PROPERTY_KEY_USER: Login TDengine user name, default value 'root'.
-- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
-- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
-- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if one SQL execution fails in the middle, continue to execute the following SQL. false: no longer execute any statement after the failed SQL. The default value is: false.
-- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: Only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
-- TSDBDriver.PROPERTY_KEY_CHARSET: the character set used by the client; the default value is the system character set.
-- TSDBDriver.PROPERTY_KEY_LOCALE: only takes effect when using JDBC native connection. The client locale; the default value is the system's current locale.
-- TSDBDriver.PROPERTY_KEY_TIME_ZONE: only takes effect when using JDBC native connection. The time zone used by the client; the default value is the system's current time zone.
-
-For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client-Only).
-
-### Priority of configuration parameters
-
-If the configuration parameters are duplicated in the URL, Properties, or client configuration file, the `priority` of the parameters, from highest to lowest, are as follows:
-
-1. JDBC URL parameters, as described above, can be specified in the parameters of the JDBC URL.
-2. Properties connProps
-3. the configuration file taos.cfg of the TDengine client driver when using a native connection
-
-For example, if you specify the password as `taosdata` in the URL and specify the password as `taosdemo` in the Properties simultaneously, JDBC will use the password in the URL to establish the connection.
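-
-A hedged Java illustration of this precedence; the class name and connection details are illustrative only:
-
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.util.Properties;
-
-import com.taosdata.jdbc.TSDBDriver;
-
-public class PriorityDemo {
-    public static void main(String[] args) throws Exception {
-        // password=taosdata in the URL takes precedence over the Properties value
-        String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
-        Properties props = new Properties();
-        props.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdemo"); // overridden by the URL
-        Connection conn = DriverManager.getConnection(jdbcUrl, props);
-        conn.close();
-    }
-}
-```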
-
-## Usage examples
-
-### Create database and tables
-
-```java
-Statement stmt = conn.createStatement();
-
-// create database
-stmt.executeUpdate("create database if not exists db");
-
-// use database
-stmt.executeUpdate("use db");
-
-// create table
-stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int, humidity float)");
-```
-
-> **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
-
-### Insert data
-
-```java
-// insert data
-int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now + 1s, 20, 9.3)");
-
-System.out.println("insert " + affectedRows + " rows.");
-```
-
-> now is an internal function. The default is the current time of the client's computer.
-> `now + 1s` represents the current time of the client plus 1 second, followed by the number representing the unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
-
-### Querying data
-
-```java
-// query data
-ResultSet resultSet = stmt.executeQuery("select * from tb");
-
-Timestamp ts = null;
-int temperature = 0;
-float humidity = 0;
-while(resultSet.next()){
-
- ts = resultSet.getTimestamp(1);
- temperature = resultSet.getInt(2);
- humidity = resultSet.getFloat("humidity");
-
- System.out.printf("%s, %d, %s\n", ts, temperature, humidity);
-}
-```
-
-> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.
-
-### Handling exceptions
-
-After an error is reported, the error message and error code can be obtained through SQLException.
-
-```java
-try (Statement statement = connection.createStatement()) {
- // executeQuery
- ResultSet resultSet = statement.executeQuery(sql);
- // print result
- printResult(resultSet);
-} catch (SQLException e) {
- System.out.println("ERROR Message: " + e.getMessage());
- System.out.println("ERROR Code: " + e.getErrorCode());
- e.printStackTrace();
-}
-```
-
-There are three types of error codes that the JDBC connector can report:
-
-- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350)
-- Error code of the native connection method (error code between 0x2351 and 0x2400)
-- Error code of other TDengine function modules
-
-For specific error codes, please refer to:
-
-- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
-- [TDengine_ERROR_CODE](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h)
-
-### Writing data via parameter binding
-
-TDengine's native JDBC connection implementation has significantly improved its support for data writing (INSERT) scenarios via bind interface with version 2.1.2.0 and later versions. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
-
-**Note**:
-
-- JDBC REST connections do not currently support bind interface
-- The following sample code is based on taos-jdbcdriver-2.0.36
-- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
-- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
-
-```java
-public class ParameterBindingDemo {
-
- private static final String host = "127.0.0.1";
- private static final Random random = new Random(System.currentTimeMillis());
- private static final int BINARY_COLUMN_SIZE = 20;
- private static final String[] schemaList = {
- "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
- "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
- "create table stable3(ts timestamp, f1 bool) tags(t1 bool)",
- "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))",
- "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))"
- };
- private static final int numOfSubTable = 10, numOfRow = 10;
-
- public static void main(String[] args) throws SQLException {
-
- String jdbcUrl = "jdbc:TAOS://" + host + ":6030/";
- Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata");
-
- init(conn);
-
- bindInteger(conn);
-
- bindFloat(conn);
-
- bindBoolean(conn);
-
- bindBytes(conn);
-
- bindString(conn);
-
- conn.close();
- }
-
- private static void init(Connection conn) throws SQLException {
- try (Statement stmt = conn.createStatement()) {
- stmt.execute("drop database if exists test_parabind");
- stmt.execute("create database if not exists test_parabind");
- stmt.execute("use test_parabind");
- for (int i = 0; i < schemaList.length; i++) {
- stmt.execute(schemaList[i]);
- }
- }
- }
-
- private static void bindInteger(Connection conn) throws SQLException {
- String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)";
-
- try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
-
- for (int i = 1; i <= numOfSubTable; i++) {
- // set table name
- pstmt.setTableName("t1_" + i);
- // set tags
- pstmt.setTagByte(0, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
- pstmt.setTagShort(1, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
- pstmt.setTagInt(2, random.nextInt(Integer.MAX_VALUE));
- pstmt.setTagLong(3, random.nextLong());
- // set columns
-                ArrayList<Long> tsList = new ArrayList<>();
- long current = System.currentTimeMillis();
- for (int j = 0; j < numOfRow; j++)
- tsList.add(current + j);
- pstmt.setTimestamp(0, tsList);
-
-                ArrayList<Byte> f1List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f1List.add(Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
- pstmt.setByte(1, f1List);
-
-                ArrayList<Short> f2List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f2List.add(Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
- pstmt.setShort(2, f2List);
-
-                ArrayList<Integer> f3List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f3List.add(random.nextInt(Integer.MAX_VALUE));
- pstmt.setInt(3, f3List);
-
-                ArrayList<Long> f4List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f4List.add(random.nextLong());
- pstmt.setLong(4, f4List);
-
- // add column
- pstmt.columnDataAddBatch();
- }
- // execute column
- pstmt.columnDataExecuteBatch();
- }
- }
-
- private static void bindFloat(Connection conn) throws SQLException {
- String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)";
-
- TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class);
-
- for (int i = 1; i <= numOfSubTable; i++) {
- // set table name
- pstmt.setTableName("t2_" + i);
- // set tags
- pstmt.setTagFloat(0, random.nextFloat());
- pstmt.setTagDouble(1, random.nextDouble());
- // set columns
-            ArrayList<Long> tsList = new ArrayList<>();
- long current = System.currentTimeMillis();
- for (int j = 0; j < numOfRow; j++)
- tsList.add(current + j);
- pstmt.setTimestamp(0, tsList);
-
- ArrayList f1List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f1List.add(random.nextFloat());
- pstmt.setFloat(1, f1List);
-
- ArrayList f2List = new ArrayList<>();
- for (int j = 0; j < numOfRow; j++)
- f2List.add(random.nextDouble());
- pstmt.setDouble(2, f2List);
-
- // add column
- pstmt.columnDataAddBatch();
- }
- // execute
- pstmt.columnDataExecuteBatch();
- // close if no try-with-catch statement is used
- pstmt.close();
- }
-
- private static void bindBoolean(Connection conn) throws SQLException {
- String sql = "insert into ? using stable3 tags(?) values(?,?)";
-
- try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
-            for (int i = 1; i <= numOfSubTable; i++) {
-                // set table name
-                pstmt.setTableName("t3_" + i);
-                // set tags
-                pstmt.setTagBoolean(0, random.nextBoolean());
-                // set columns
-                ArrayList<Long> tsList = new ArrayList<>();
-                long current = System.currentTimeMillis();
-                for (int j = 0; j < numOfRow; j++)
-                    tsList.add(current + j);
-                pstmt.setTimestamp(0, tsList);
-
-                ArrayList<Boolean> f1List = new ArrayList<>();
-                for (int j = 0; j < numOfRow; j++)
-                    f1List.add(random.nextBoolean());
-                pstmt.setBoolean(1, f1List);
-
-                // add batch
-                pstmt.columnDataAddBatch();
-            }
-            // execute batch
-            pstmt.columnDataExecuteBatch();
- }
- }
-
- private static void bindBytes(Connection conn) throws SQLException {
- String sql = "insert into ? using stable4 tags(?) values(?,?)";
-
- try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
-
-            for (int i = 1; i <= numOfSubTable; i++) {
-                // set table name
-                pstmt.setTableName("t4_" + i);
-                // set tags
-                pstmt.setTagString(0, "abc");
-
-                // set columns
-                ArrayList<Long> tsList = new ArrayList<>();
-                long current = System.currentTimeMillis();
-                for (int j = 0; j < numOfRow; j++)
-                    tsList.add(current + j);
-                pstmt.setTimestamp(0, tsList);
-
-                ArrayList<String> f1List = new ArrayList<>();
-                for (int j = 0; j < numOfRow; j++) {
-                    f1List.add("abc");
-                }
-                pstmt.setString(1, f1List, BINARY_COLUMN_SIZE);
-
-                // add batch
-                pstmt.columnDataAddBatch();
-            }
-            // execute batch
-            pstmt.columnDataExecuteBatch();
- }
- }
-
- private static void bindString(Connection conn) throws SQLException {
- String sql = "insert into ? using stable5 tags(?) values(?,?)";
-
- try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
-
-            for (int i = 1; i <= numOfSubTable; i++) {
-                // set table name
-                pstmt.setTableName("t5_" + i);
-                // set tags
-                pstmt.setTagNString(0, "California-abc");
-
-                // set columns
-                ArrayList<Long> tsList = new ArrayList<>();
-                long current = System.currentTimeMillis();
-                for (int j = 0; j < numOfRow; j++)
-                    tsList.add(current + j);
-                pstmt.setTimestamp(0, tsList);
-
-                ArrayList<String> f1List = new ArrayList<>();
-                for (int j = 0; j < numOfRow; j++) {
-                    f1List.add("California-abc");
-                }
-                pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
-
-                // add batch
-                pstmt.columnDataAddBatch();
-            }
-            // execute batch
-            pstmt.columnDataExecuteBatch();
- }
- }
-}
-```
-
-The methods to set TAGS values:
-
-```java
-public void setTagNull(int index, int type)
-public void setTagBoolean(int index, boolean value)
-public void setTagInt(int index, int value)
-public void setTagByte(int index, byte value)
-public void setTagShort(int index, short value)
-public void setTagLong(int index, long value)
-public void setTagTimestamp(int index, long value)
-public void setTagFloat(int index, float value)
-public void setTagDouble(int index, double value)
-public void setTagString(int index, String value)
-public void setTagNString(int index, String value)
-```
-
-The methods to set VALUES columns:
-
-```java
-public void setInt(int columnIndex, ArrayList<Integer> list) throws SQLException
-public void setFloat(int columnIndex, ArrayList<Float> list) throws SQLException
-public void setTimestamp(int columnIndex, ArrayList<Long> list) throws SQLException
-public void setLong(int columnIndex, ArrayList<Long> list) throws SQLException
-public void setDouble(int columnIndex, ArrayList<Double> list) throws SQLException
-public void setBoolean(int columnIndex, ArrayList<Boolean> list) throws SQLException
-public void setByte(int columnIndex, ArrayList<Byte> list) throws SQLException
-public void setShort(int columnIndex, ArrayList<Short> list) throws SQLException
-public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException
-public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
-```
-
-### Schemaless Writing
-
-Starting with version 2.2.0.0, TDengine has added the ability to perform schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
-
-**Note:**
-
-- JDBC REST connections do not currently support schemaless writes
-- The following sample code is based on taos-jdbcdriver-2.0.36
-
-```java
-public class SchemalessInsertTest {
- private static final String host = "127.0.0.1";
- private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
- private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
- private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}";
-
- public static void main(String[] args) throws SQLException {
- final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
- try (Connection connection = DriverManager.getConnection(url)) {
- init(connection);
-
- SchemalessWriter writer = new SchemalessWriter(connection);
- writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
- writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS);
- writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.NOT_CONFIGURED);
- }
- }
-
- private static void init(Connection connection) throws SQLException {
- try (Statement stmt = connection.createStatement()) {
- stmt.executeUpdate("drop database if exists test_schemaless");
- stmt.executeUpdate("create database if not exists test_schemaless");
- stmt.executeUpdate("use test_schemaless");
- }
- }
-}
-```
-
-### Subscriptions
-
-The TDengine Java Connector supports subscription functionality with the following application API.
-
-#### Create subscriptions
-
-```java
-TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topicname", "select * from meters", false);
-```
-
-The three parameters of the `subscribe()` method have the following meanings.
-
-- topicname: the name of the subscribed topic. This parameter is the unique identifier of the subscription.
-- sql: the query statement of the subscription. This statement can only be a `select` statement; only raw data can be queried, and the data can only be queried in temporal order.
-- restart: if the subscription already exists, whether to restart or continue the previous subscription
-
-The above example will use the SQL command `select * from meters` to create a subscription named `topicname`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
-
-#### Subscribe to consume data
-
-```java
-int total = 0;
-while(true) {
- TSDBResultSet rs = sub.consume();
- int count = 0;
- while(rs.next()) {
- count++;
- }
- total += count;
- System.out.printf("%d rows consumed, total %d\n", count, total);
-    try {
-        Thread.sleep(1000);
-    } catch (InterruptedException e) {
-        Thread.currentThread().interrupt();
-        break;
-    }
-}
-```
-
-The `consume()` method returns a result set containing all new data since the last call to `consume()`. Be sure to call `consume()` at a reasonable frequency as needed (e.g., `Thread.sleep(1000)` in the example); otherwise it will put unnecessary stress on the server side.
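-
-Instead of sleeping inside a `while` loop, polling can also be driven by a scheduled executor. A minimal sketch using `java.util.concurrent` against the `sub` object created above (the one-second period is illustrative, and error handling is reduced to a simple catch):
-
-```java
-ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
-poller.scheduleAtFixedRate(() -> {
-    try {
-        TSDBResultSet rs = sub.consume();
-        while (rs.next()) {
-            // process each newly arrived row here
-        }
-    } catch (SQLException e) {
-        e.printStackTrace();
-    }
-}, 0, 1, TimeUnit.SECONDS);
-```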
-
-#### Close subscriptions
-
-```java
-sub.close(true);
-```
-
-The `close()` method closes a subscription. If its argument is `true` it means that the subscription progress information is retained, and the subscription with the same name can be created to continue consuming data; if it is `false` it does not retain the subscription progress.
-
-### Closing resources
-
-```java
-resultSet.close();
-stmt.close();
-conn.close();
-```
-
-> **Be sure to close the connection**, otherwise, there will be a connection leak.
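-
-As a sketch of the same cleanup with less boilerplate, try-with-resources closes all three objects automatically in reverse order, even when an exception is thrown (the URL, credentials and query below are illustrative):
-
-```java
-try (Connection conn = DriverManager.getConnection("jdbc:TAOS://127.0.0.1:6030/", "root", "taosdata");
-     Statement stmt = conn.createStatement();
-     ResultSet resultSet = stmt.executeQuery("select server_status()")) {
-    while (resultSet.next()) {
-        System.out.println(resultSet.getInt(1));
-    }
-} // resultSet, stmt and conn are closed here automatically
-```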
-
-### Use with connection pool
-
-#### HikariCP
-
-Example usage is as follows.
-
-```java
-public static void main(String[] args) throws SQLException {
-    HikariConfig config = new HikariConfig();
-    // jdbc properties
-    config.setJdbcUrl("jdbc:TAOS://127.0.0.1:6030/log");
-    config.setUsername("root");
-    config.setPassword("taosdata");
-    // connection pool configurations
-    config.setMinimumIdle(10);          // minimum number of idle connections
-    config.setMaximumPoolSize(10);      // maximum number of connections in the pool
-    config.setConnectionTimeout(30000); // maximum wait in milliseconds for a connection from the pool
-    config.setMaxLifetime(0);           // maximum lifetime of each connection
-    config.setIdleTimeout(0);           // maximum idle time before an idle connection is recycled
-    config.setConnectionTestQuery("select server_status()"); // validation query
-
-    HikariDataSource ds = new HikariDataSource(config); // create the data source
-
-    Connection connection = ds.getConnection(); // get a connection
-    Statement statement = connection.createStatement(); // get a statement
-
-    // query or insert
-    // ...
-
-    statement.close();
-    connection.close(); // return the connection to the pool
-}
-```
-
-> After calling `getConnection()`, you need to call `close()` once you finish using the connection. `close()` does not close the underlying connection; it just returns it to the connection pool.
-> For more questions about using HikariCP, please see the [official instructions](https://github.com/brettwooldridge/HikariCP).
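-
-As a usage sketch, a borrowed connection can also be managed with try-with-resources so that it is always returned to the pool, reusing the `ds` data source created above:
-
-```java
-try (Connection connection = ds.getConnection();
-     Statement statement = connection.createStatement()) {
-    // query or insert
-    // ...
-} // close() is called automatically, returning the connection to the pool
-```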
-
-#### Druid
-
-Example usage is as follows.
-
-```java
-public static void main(String[] args) throws Exception {
-
- DruidDataSource dataSource = new DruidDataSource();
- // jdbc properties
- dataSource.setDriverClassName("com.taosdata.jdbc.TSDBDriver");
-    dataSource.setUrl("jdbc:TAOS://127.0.0.1:6030/log");
- dataSource.setUsername("root");
- dataSource.setPassword("taosdata");
- // pool configurations
- dataSource.setInitialSize(10);
- dataSource.setMinIdle(10);
- dataSource.setMaxActive(10);
- dataSource.setMaxWait(30000);
- dataSource.setValidationQuery("select server_status()");
-
- Connection connection = dataSource.getConnection(); // get connection
- Statement statement = connection.createStatement(); // get statement
- //query or insert
- // ...
-
-    statement.close();
-    connection.close(); // return the connection to the pool
-}
-```
-
-> For more questions about using druid, please see [Official Instructions](https://github.com/alibaba/druid).
-
-**Caution:**
-
-- TDengine `v1.6.4.1` provides a special function `select server_status()` for heartbeat detection, so we recommend using `select server_status()` as the validation query when using connection pooling.
-
-As you can see below, `select server_status()` returns `1` on successful execution.
-
-```sql
-taos> select server_status();
-server_status()|
-================
-1 |
-Query OK, 1 row(s) in set (0.000141s)
-```
-
-### More sample programs
-
-The source code of the sample application is under `TDengine/examples/JDBC`:
-
-- JDBCDemo: JDBC sample source code.
-- JDBCConnectorChecker: JDBC installation checker source and jar package.
-- connectionPools: using taos-jdbcdriver in connection pools such as HikariCP, Druid, dbcp, c3p0, etc.
-- SpringJdbcTemplate: using taos-jdbcdriver in Spring JdbcTemplate.
-- mybatisplus-demo: using taos-jdbcdriver in Springboot + Mybatis.
-
-Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
-
-## Recent update logs
-
-| taos-jdbcdriver version | major changes |
-| :---------------------: | :------------------------------------------: |
-| 2.0.38 | JDBC REST connections add bulk pull function |
-| 2.0.37 | Added support for json tags |
-| 2.0.36 | Add support for schemaless writing |
-
-## Frequently Asked Questions
-
-1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` for batch data writing/updating?
-
-   **Cause**: In TDengine's JDBC implementation, SQL statements submitted via the `addBatch()` method are executed sequentially in the order they are added, which does not reduce the number of interactions with the server and therefore brings no performance improvement.
-
-   **Solution**: 1. write multiple rows in a single INSERT statement; 2. use multi-threaded concurrent insertion; 3. use parameter binding, as shown in the sketch below.
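-
-   For example, splicing several rows into one INSERT sends them to the server in a single interaction. A minimal sketch, assuming a table `t(ts timestamp, current float)` already exists and `conn` is an open connection:
-
-   ```java
-   // one statement, one interaction with the server, three rows
-   String sql = "insert into t values (now, 10.3) (now + 1s, 10.4) (now + 2s, 10.5)";
-   try (Statement stmt = conn.createStatement()) {
-       stmt.executeUpdate(sql);
-   }
-   ```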
-
-2. java.lang.UnsatisfiedLinkError: no taos in java.library.path
-
- **Cause**: The program did not find the dependent native library `taos`.
-
-   **Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory; on Linux, creating the following soft link will work: `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
-
-3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on an IA 32-bit platform
-
- **Cause**: Currently, TDengine only supports 64-bit JDK.
-
-   **Solution**: Reinstall the 64-bit JDK.
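-
-   To confirm which architecture the running JVM is, printing a couple of system properties may help; note that `sun.arch.data.model` is a HotSpot-specific property and may be absent on other JVMs (a sketch):
-
-   ```java
-   System.out.println(System.getProperty("os.arch"));             // e.g. amd64
-   System.out.println(System.getProperty("sun.arch.data.model")); // "64" on a 64-bit HotSpot JVM
-   ```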
-
-For other questions, please refer to [FAQ](/train-faq/faq)
-
-## API Reference
-
-[taos-jdbcdriver doc](https://docs.taosdata.com/api/taos-jdbcdriver)
diff --git a/docs-en/14-reference/03-connector/php.mdx b/docs-en/14-reference/03-connector/php.mdx
deleted file mode 100644
index 839a5c8c3cd27f39b234b51aab4d41ad05e93fbc..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/php.mdx
+++ /dev/null
@@ -1,150 +0,0 @@
----
-sidebar_position: 1
-sidebar_label: PHP
-title: PHP Connector
----
-
-`php-tdengine` is the TDengine PHP connector provided by the TDengine community. In particular, it supports Swoole coroutines.
-
-The PHP connector relies on the TDengine client driver.
-
-Project Repository: <https://github.com/Yurunsoft/php-tdengine>
-
-After TDengine client or server is installed, `taos.h` is located at:
-
-- Linux: `/usr/local/taos/include`
-- Windows: `C:\TDengine\include`
-
-TDengine client driver is located at:
-
-- Linux: `/usr/local/taos/driver/libtaos.so`
-- Windows: `C:\TDengine\taos.dll`
-
-## Supported Platforms
-
-- Windows, Linux, and macOS
-
-- PHP >= 7.4
-
-- TDengine >= 2.0
-
-- Swoole >= 4.8 (Optional)
-
-## Supported Versions
-
-Because the version of the TDengine client driver is tightly coupled to that of the TDengine server, it is strongly suggested to use a client driver of the same version as the TDengine server, even though a client driver can work with a server as long as the first three sections of their version numbers match.
-
-## Installation
-
-### Install TDengine Client Driver
-
-Regarding how to install TDengine client driver please refer to [Install Client Driver](/reference/connector#installation-steps)
-
-### Install php-tdengine
-
-**Download Source Code Package and Unzip:**
-
-```shell
-curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
-&& mkdir php-tdengine \
-&& tar -xzf php-tdengine.tar.gz -C php-tdengine --strip-components=1
-```
-
-> Version `v1.0.2` is only an example; it can be replaced with any newer version. Available versions are listed in [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
-
-**Non-Swoole Environment:**
-
-```shell
-phpize && ./configure && make -j && make install
-```
-
-**Specify TDengine location:**
-
-```shell
-phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
-```
-
-> `--with-tdengine-dir=` is followed by the TDengine installation location.
-> It is useful when the TDengine installation location cannot be found automatically, or on macOS.
-
-**Swoole Environment:**
-
-```shell
-phpize && ./configure --enable-swoole && make -j && make install
-```
-
-**Enable Extension:**
-
-Option One: Add `extension=tdengine` in `php.ini`.
-
-Option Two: Use CLI `php -dextension=tdengine test.php`.
-
-## Sample Programs
-
-This section demonstrates a few sample programs that use the TDengine PHP connector to access a TDengine cluster.
-
-> Any error throws a `TDengine\Exception\TDengineException` exception.
-
-### Establish Connection
-
-
-
-```php
-{{#include docs-examples/php/connect.php}}
-```
-
-
-
-### Insert Data
-
-
-
-```php
-{{#include docs-examples/php/insert.php}}
-```
-
-
-
-### Synchronous Query
-
-
-
-```php
-{{#include docs-examples/php/query.php}}
-```
-
-
-
-### Parameter Binding
-
-
-
-```php
-{{#include docs-examples/php/insert_stmt.php}}
-```
-
-
-
-## Constants
-
-| Constant | Description |
-| ----------------------------------- | ----------- |
-| `TDengine\TSDB_DATA_TYPE_NULL` | null |
-| `TDengine\TSDB_DATA_TYPE_BOOL` | bool |
-| `TDengine\TSDB_DATA_TYPE_TINYINT` | tinyint |
-| `TDengine\TSDB_DATA_TYPE_SMALLINT` | smallint |
-| `TDengine\TSDB_DATA_TYPE_INT` | int |
-| `TDengine\TSDB_DATA_TYPE_BIGINT` | bigint |
-| `TDengine\TSDB_DATA_TYPE_FLOAT` | float |
-| `TDengine\TSDB_DATA_TYPE_DOUBLE` | double |
-| `TDengine\TSDB_DATA_TYPE_BINARY` | binary |
-| `TDengine\TSDB_DATA_TYPE_TIMESTAMP` | timestamp |
-| `TDengine\TSDB_DATA_TYPE_NCHAR` | nchar |
-| `TDengine\TSDB_DATA_TYPE_UTINYINT` | utinyint |
-| `TDengine\TSDB_DATA_TYPE_USMALLINT` | usmallint |
-| `TDengine\TSDB_DATA_TYPE_UINT` | uint |
-| `TDengine\TSDB_DATA_TYPE_UBIGINT` | ubigint |
diff --git a/docs-en/14-reference/03-connector/python.mdx b/docs-en/14-reference/03-connector/python.mdx
deleted file mode 100644
index 58b94f13ae0f08404cef328834ef1c925c307816..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/python.mdx
+++ /dev/null
@@ -1,345 +0,0 @@
----
-sidebar_position: 3
-sidebar_label: Python
-title: TDengine Python Connector
-description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. tasopy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of tasopy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-`taospy` is the official Python connector for TDengine. `taospy` provides a rich set of APIs that makes it easy for Python applications to access TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
-In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
-
-The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
-
-The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
-
-## Supported Platforms
-
-- The [supported platforms](/reference/connector/#supported-platforms) for the native connection are the same as the ones supported by the TDengine client.
-- REST connections are supported on all platforms that can run Python.
-
-## Version selection
-
-We recommend using the latest version of `taospy`, regardless of the version of TDengine.
-
-## Supported features
-
-- Native connections support all the core features of TDengine, including connection management, SQL execution, bind interface, subscriptions, and schemaless writing.
-- REST connections support features such as connection management and SQL execution. (SQL execution allows you to: manage databases, tables, and supertables, write data, query data, create continuous queries, etc.).
-
-## Installation
-
-### Preparation
-
-1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
-2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
-
-If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
-
-### Install via pip
-
-#### Uninstalling an older version
-
-If you have installed an older version of the Python Connector, please uninstall it beforehand.
-
-```
-pip3 uninstall taos taospy
-```
-
-:::note
-Earlier TDengine client software includes the Python connector. If the Python connector is installed from the client package's installation directory, the corresponding Python package name is `taos`. So the uninstall command above includes `taos`; it is harmless if that package is not installed.
-
-:::
-
-#### To install `taospy`
-
-
-
-
-Install the latest version of:
-
-```
-pip3 install taospy
-```
-
-You can also specify a specific version to install:
-
-```
-pip3 install taospy==2.3.0
-```
-
-
-
-
-```
-pip3 install git+https://github.com/taosdata/taos-connector-python.git
-```
-
-
-
-
-### Installation verification
-
-
-
-
-For a native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. They are installed properly if you can successfully import the `taos` module. You can test this in the Python interactive shell by typing:
-
-```python
-import taos
-```
-
-
-
-
-For REST connections, verify that the `taosrest` module can be imported successfully by typing the following in the Python interactive shell:
-
-```python
-import taosrest
-```
-
-
-
-
-:::tip
-If you have multiple versions of Python on your system, you may have multiple `pip` commands. Be sure to use the correct path for the `pip` command. Above, we used the `pip3` command, which rules out the `pip` that corresponds to Python 2.x. However, if you have more than one version of Python 3.x on your system, you still need to check that the installation path is correct. The easiest way to verify this is to run `pip3 install taospy` again; it will print out the exact location of `taospy`, for example, on Windows:
-
-```
-C:\> pip3 install taospy
-Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
-Requirement already satisfied: taospy in c:\users\username\appdata\local\programs\python\python310\lib\site-packages (2.3.0)
-```
-
-:::
-
-## Establish connection
-
-### Connectivity testing
-
-Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
-
-
-
-
-Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to the hostname if you are running a standalone instance) can be resolved locally by testing with the `ping` command:
-
-```
-ping <FQDN>
-```
-
-Then test if the cluster can be appropriately connected with TDengine CLI:
-
-```
-taos -h <FQDN> -P <PORT>
-```
-
-The FQDN above can be the FQDN of any dnode in the cluster, and the PORT is the serverPort corresponding to this dnode.
-
-
-
-
-For REST connections, make sure the cluster and the taosAdapter component are running. This can be tested using the following `curl` command.
-
-```
-curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
-```
-
-The FQDN above is the FQDN of the machine running taosAdapter; PORT is the port taosAdapter listens on, which defaults to `6041`.
-If the test is successful, it will output the server version information, e.g.
-
-```json
-{
- "status": "succ",
- "head": ["server_version()"],
- "column_meta": [["server_version()", 8, 8]],
- "data": [["2.4.0.16"]],
- "rows": 1
-}
-```
-
-
-
-
-### Using connectors to establish connections
-
-The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
-
-
-
-
-```python
-{{#include docs-examples/python/connect_native_reference.py}}
-```
-
-All arguments of the `connect()` function are optional keyword arguments. The commonly used connection parameters are as follows:
-
-- `host` : The FQDN of the node to connect to. There is no default value. If this parameter is not provided, the firstEP in the client configuration file will be connected.
-- `user` : The TDengine user name. The default value is `root`.
-- `password` : TDengine user password. The default value is `taosdata`.
-- `port` : The starting port of the data node to connect to, i.e., the serverPort configuration. The default value is 6030, which will only take effect if the host parameter is provided.
-- `config` : The path to the client configuration file. On Windows systems, the default is `C:\TDengine\cfg`. The default is `/etc/taos/` on Linux systems.
-- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
-
-:::warning
-`config` and `timezone` are both process-level configurations. We recommend that all connections made by a process use the same parameter values. Otherwise, unpredictable errors may occur.
-:::
-
-:::tip
-The `connect()` function returns a `taos.TaosConnection` instance. In client-side multi-threaded scenarios, we recommend that each thread request a separate connection instance rather than sharing a connection between multiple threads.
-
-:::
-
-
-
-
-```python
-{{#include docs-examples/python/connect_rest_examples.py:connect}}
-```
-
-All arguments to the `connect()` function are optional keyword arguments. The commonly used connection parameters are as follows:
-
-- `url`: The URL of the taosAdapter REST service. The default is `http://localhost:6041`.
-- `user`: TDengine user name. The default is `root`.
-- `password`: TDengine user password. The default is `taosdata`.
-- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
-
-
-
-
-## Sample program
-
-### Basic Usage
-
-
-
-
-##### TaosConnection class
-
-The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
-
-```python title="execute method"
-{{#include docs-examples/python/connection_usage_native_reference.py:insert}}
-```
-
-```python title="query method"
-{{#include docs-examples/python/connection_usage_native_reference.py:query}}
-```
-
-:::tip
-The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
-:::
-
-##### Use of TaosResult class
-
-In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
-
-```python title="blocks_iter method"
-{{#include docs-examples/python/result_set_examples.py}}
-```
-##### Use of the TaosCursor class
-
-The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
-
-```python title="Use of TaosCursor"
-{{#include docs-examples/python/cursor_usage_native_reference.py}}
-```
-
-:::note
-The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, a cursor instance must remain exclusive to one thread and cannot be shared across threads; otherwise, the returned results may be incorrect.
-
-:::
-
-
-
-
-##### Use of TaosRestCursor class
-
-The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
-
-```python title="Use of TaosRestCursor"
-{{#include docs-examples/python/connect_rest_examples.py:basic}}
-```
-- `cursor.execute` : Used to execute arbitrary SQL statements.
-- `cursor.rowcount` : For write operations, returns the number of successful rows written. For query operations, returns the number of rows in the result set.
-- `cursor.description` : Returns the description of the field. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
-
-##### Use of the RestClient class
-
-The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
-
-```python title="Use of RestClient"
-{{#include docs-examples/python/rest_client_example.py}}
-```
-
-For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
-
-
-
-
-### Used with pandas
-
-
-
-
-```python
-{{#include docs-examples/python/conn_native_pandas.py}}
-```
-
-
-
-
-```python
-{{#include docs-examples/python/conn_rest_pandas.py}}
-```
-
-
-
-
-### Other sample programs
-
-| Example program links                                                                                          | Example program content                       |
-| -------------------------------------------------------------------------------------------------------------- | --------------------------------------------- |
-| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py)            | parameter binding, bind multiple rows at once |
-| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py)                | parameter binding, bind one row at a time     |
-| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py)        | InfluxDB line protocol writing                |
-| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py)                | use of JSON type tags                         |
-| [subscribe-async.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-async.py)  | asynchronous subscription                     |
-| [subscribe-sync.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-sync.py)    | synchronous subscription                      |
-
-## Other notes
-
-### Exception handling
-
-All errors from database operations are thrown directly as exceptions and the error message from the database is passed up the exception stack. The application is responsible for exception handling. For example:
-
-```python
-{{#include docs-examples/python/handle_exception.py}}
-```
-
-### About nanoseconds
-
-Due to the current imperfection of Python's nanosecond support (see the links below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced for `ms` and `us` precision, and application developers need to handle the conversion themselves; pandas' `to_datetime()` is recommended for this. The Python connector may modify the interface in the future once Python officially supports nanoseconds in full.
-
-1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
-2. https://www.python.org/dev/peps/pep-0564/
-
-
-## Frequently Asked Questions
-
-Welcome to [ask questions or report issues](https://github.com/taosdata/taos-connector-python/issues).
-
-## Important Update
-
-| Connector version | Important Update | Release date |
-| ---------- | --------------------------------------------------------------------------------- | ---------- |
-| 2.3.1      | 1. support TDengine REST API; 2. remove support for Python versions below 3.6       | 2022-04-28 |
-| 2.2.5      | support timezone option when connecting                                             | 2022-04-13 |
-| 2.2.2 | support sqlalchemy dialect plugin | 2022-03-28 |
-
-[**Release Notes**](https://github.com/taosdata/taos-connector-python/releases)
-
-## API Reference
-
-- [taos](https://docs.taosdata.com/api/taospy/taos/)
-- [taosrest](https://docs.taosdata.com/api/taospy/taosrest)
diff --git a/docs-en/14-reference/03-connector/rust.mdx b/docs-en/14-reference/03-connector/rust.mdx
deleted file mode 100644
index a5cbaeac8077cda42690d9cc232062a685a51f41..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/03-connector/rust.mdx
+++ /dev/null
@@ -1,384 +0,0 @@
----
-toc_max_heading_level: 4
-sidebar_position: 5
-sidebar_label: Rust
-title: TDengine Rust Connector
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-import Preparation from "./_preparation.mdx"
-import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
-import RustInfluxLine from "../../07-develop/03-insert-data/_rust_line.mdx"
-import RustOpenTSDBTelnet from "../../07-develop/03-insert-data/_rust_opts_telnet.mdx"
-import RustOpenTSDBJson from "../../07-develop/03-insert-data/_rust_opts_json.mdx"
-import RustQuery from "../../07-develop/04-query-data/_rust.mdx"
-
-`libtaos` is the official Rust language connector for TDengine. Rust developers can use it to develop applications that access TDengine instances.
-
-`libtaos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is **REST connection**, which connects to TDengine instances via taosAdapter's REST interface.
-
-The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/libtaos-rs).
-
-## Supported platforms
-
-The platforms supported by native connections are the same as those supported by the TDengine client driver.
-REST connections are supported on all platforms that can run Rust.
-
-## Version support
-
-Please refer to [version support list](/reference/connector#version-support).
-
-The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 2.4 or higher to avoid known issues.
-
-## Installation
-
-### Pre-installation
-* Install the Rust development toolchain
-* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
-
-### Adding libtaos dependencies
-
-Add the [libtaos][libtaos] dependency to the [Rust](https://rust-lang.org) project as follows, depending on the connection method selected.
-
-
-
-
-Add [libtaos][libtaos] to the `Cargo.toml` file.
-
-```toml
-[dependencies]
-# use default feature
-libtaos = "*"
-```
-
-
-
-
-Add [libtaos][libtaos] to the `Cargo.toml` file and enable the `rest` feature.
-
-```toml
-[dependencies]
-# use rest feature
-libtaos = { version = "*", features = ["rest"]}
-```
-
-
-
-
-
-### Using connection pools
-
-Please enable the `r2d2` feature in `Cargo.toml`.
-
-```toml
-[dependencies]
-# with taosc
-libtaos = { version = "*", features = ["r2d2"] }
-# or rest
-libtaos = { version = "*", features = ["rest", "r2d2"] }
-```
-
-## Create a connection
-
-The [TaosCfgBuilder] provides the user with an API in the form of a constructor for the subsequent creation of connections or use of connection pools.
-
-```rust
-let cfg: TaosCfg = TaosCfgBuilder::default()
-    .ip("127.0.0.1")
-    .user("root")
-    .pass("taosdata")
-    .db("log") // do not set this if no default database is required
-    .port(6030u16)
-    .build()
-    .expect("TaosCfg builder error");
-```
-
-You can now use this object to create the connection.
-
-```rust
-let conn = cfg.connect()?;
-```
-
-More than one connection object can be created:
-
-```rust
-let conn = cfg.connect()?;
-let conn2 = cfg.connect()?;
-```
-
-You can use connection pools in applications.
-
-```rust
-let pool = r2d2::Pool::builder()
- .max_size(10000) // max connections
-    .build(cfg)?;
-
-// ...
-// use the pool to get a connection
-let conn = pool.get()?;
-```
-
-After that, you can perform the following operations on the database.
-
-```rust
-async fn demo() -> Result<(), Error> {
-    // get connection ...
-
-    // create database
-    conn.exec("create database if not exists demo").await?;
-    // change database context
-    conn.exec("use demo").await?;
-    // create table
-    conn.exec("create table if not exists tb1 (ts timestamp, v int)").await?;
-    // insert
-    conn.exec("insert into tb1 values(now, 1)").await?;
-    // query
-    let rows = conn.query("select * from tb1").await?;
-    for row in rows.rows {
-        // join() on an iterator is provided by the itertools crate
-        println!("{}", row.into_iter().join(","));
-    }
-    Ok(())
-}
-```
-
-## Usage examples
-
-### Write data
-
-#### SQL Write
-
-
-
-#### InfluxDB line protocol write
-
-
-
-#### OpenTSDB Telnet line protocol write
-
-
-
-#### OpenTSDB JSON line protocol write
-
-
-
-### Query data
-
-
-
-### More sample programs
-
-| Program Path | Program Description |
-| -------------- | ----------------------------------------------------------------------------- |
-| [demo.rs] | Basic API Usage Examples |
-| [bailongma-rs] | Using TDengine as the Prometheus remote storage API adapter for the storage backend, using the r2d2 connection pool |
-
-## API Reference
-
-### Connection constructor API
-
-The [Builder Pattern](https://doc.rust-lang.org/1.0.0/style/ownership/builders.html) constructor pattern is Rust's solution for handling complex data types or optional configuration types. The [libtaos] implementation uses the connection constructor [TaosCfgBuilder] as the entry point for the TDengine Rust connector. The [TaosCfgBuilder] provides optional configuration of servers, ports, databases, usernames, passwords, etc.
-
-Using the `default()` method, you can construct a [TaosCfg] with default parameters for subsequent connections to the database or establishing connection pools.
-
-```rust
-let cfg = TaosCfgBuilder::default().build()?;
-```
-
-Using the constructor pattern, the user can set options on demand.
-
-```rust
-let cfg = TaosCfgBuilder::default()
- .ip("127.0.0.1")
- .user("root")
- .pass("taosdata")
- .db("log")
- .port(6030u16)
-    .build()?;
-```
-
-Create a TDengine connection using the [TaosCfg] object.
-
-```rust
-let conn: Taos = cfg.connect()?;
-```
-
-### Connection pooling
-
-In complex applications, we recommend enabling connection pools. Connection pool for [libtaos] is implemented using [r2d2].
-
-As follows, a connection pool with default parameters can be generated.
-
-```rust
-let pool = r2d2::Pool::new(cfg)?;
-```
-
-You can set the same connection pool parameters using the connection pool's constructor.
-
-```rust
-use std::time::Duration;
-let pool = r2d2::Pool::builder()
-    .max_size(5000) // max connections
-    .max_lifetime(Some(Duration::from_secs(100 * 60))) // lifetime of each connection
-    .min_idle(Some(1000)) // minimum number of idle connections
-    .connection_timeout(Duration::from_secs(2 * 60))
-    .build(cfg);
-```
-
-In the application code, use `pool.get()?` to get a connection object [Taos].
-
-```rust
-let taos = pool.get()?;
-```
-
-The [Taos] structure is the connection manager in [libtaos] and provides two main APIs.
-
-1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
-
-    ```rust
-    taos.exec(sql).await?;
-    ```
-
-2. `query`: Execute the query statement and return the [TaosQueryData] object.
-
-    ```rust
-    let q = taos.query("select * from log.logs").await?;
-    ```
-
- The [TaosQueryData] object stores the query result data and basic information about the returned columns (column name, type, length).
-
- Column information is stored using [ColumnMeta].
-
-    ```rust
- let cols = &q.column_meta;
- for col in cols {
- println!("name: {}, type: {:?} , bytes: {}", col.name, col.type_, col.bytes);
- }
- ```
-
- It fetches data line by line.
-
- ```rust
- for (i, row) in q.rows.iter().enumerate() {
- for (j, cell) in row.iter().enumerate() {
- println!("cell({}, {}) data: {}", i, j, cell);
- }
- }
- ```
-
-Note that Rust asynchronous functions and an asynchronous runtime are required.
-
-[Taos] provides a few Rust methods that encapsulate SQL to reduce the frequency of `format!` code blocks.
-
-- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
-- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
-- `.use_database(database: &str)`: Executes the `USE` statement.
-
-In addition, this structure is also the entry point for the [Bind Interface](#bind-interface) and the [Line Protocol Interface](#line-protocol-interface). Please refer to the specific API descriptions for usage.
-
-### Bind Interface
-
-Similar to the C interface, Rust provides a wrapper around the bind interface. First, create a bind object [Stmt] for a SQL statement from the [Taos] object.
-
-```rust
-let mut stmt: Stmt = taos.stmt("insert into ? values(?, ?)")?;
-```
-
-The bind object provides a set of interfaces for implementing parameter binding.
-
-##### `.set_tbname(tbname: impl ToCString)`
-
-To bind table names.
-
-##### `.set_tbname_tags(tbname: impl ToCString, tags: impl IntoParams)`
-
-Binds the subtable name and tag values when the SQL statement uses a super table.
-
-```rust
-let mut stmt = taos.stmt("insert into ? using stb0 tags(?) values(?, ?)")?;
-// tags can be created with any supported type; here is an example using JSON
-let v = Field::Json(serde_json::from_str("{\"tag1\":\"one, two, three, four, five, six, seven, eight, nine, ten\"}").unwrap());
-stmt.set_tbname_tags("tb0", [&v])?;
-```
-
-##### `.bind(params: impl IntoParams)`
-
-Bind value types. Use the [Field] structure to construct the desired type and bind.
-
-```rust
-let ts = Field::Timestamp(Timestamp::now());
-let value = Field::Float(0.0);
-stmt.bind(vec![ts, value].iter())?;
-```
-
-##### `.execute()`
-
-Execute the SQL statement. [Stmt] objects can be reused, re-bound, and executed again after execution.
-
-```rust
-stmt.execute()?;
-
-// next bind cycle.
-// stmt.set_tbname()?;
-// stmt.bind()?;
-// stmt.execute()?;
-```
-
-### Line protocol interface
-
-The line protocol interface supports multiple protocols and timestamp precisions, and requires importing the corresponding constants from the schemaless module:
-
-```rust
-use libtaos::*;
-use libtaos::schemaless::*;
-```
-
-- InfluxDB line protocol
-
-  ```rust
-  let lines = [
-      "st,t1=abc,t2=def,t3=anything c1=3i64,c3=L\"pass\",c2=false 1626006833639000000",
-      "st,t1=abc,t2=def,t3=anything c1=3i64,c3=L\"abc\",c4=4f64 1626006833639000000",
-  ];
-  taos.schemaless_insert(&lines, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_NANOSECONDS)?;
-  ```
-
-- OpenTSDB Telnet Protocol
-
-  ```rust
-  let lines = ["sys.if.bytes.out 1479496100 1.3E3 host=web01 interface=eth0"];
-  taos.schemaless_insert(&lines, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_SECONDS)?;
-  ```
-
-- OpenTSDB JSON protocol
-
-  ```rust
-  let lines = [r#"
-    {
-        "metric": "st",
-        "timestamp": 1626006833,
-        "value": 10,
-        "tags": {
-            "t1": true,
-            "t2": false,
-            "t3": 10,
-            "t4": "123_abc_.! @#$%^&*:;,. /? |+-=()[]{}<>"
-        }
-    }"#];
-  taos.schemaless_insert(&lines, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_SECONDS)?;
-  ```
-
-For usage instructions of other related structures, please refer to the Rust documentation hosting page: <https://docs.rs/libtaos>.
-
-[libtaos]: https://github.com/taosdata/libtaos-rs
-[tdengine]: https://github.com/taosdata/TDengine
-[bailongma-rs]: https://github.com/taosdata/bailongma-rs
-[r2d2]: https://crates.io/crates/r2d2
-[demo.rs]: https://github.com/taosdata/libtaos-rs/blob/main/examples/demo.rs
-[TaosCfgBuilder]: https://docs.rs/libtaos/latest/libtaos/struct.TaosCfgBuilder.html
-[TaosCfg]: https://docs.rs/libtaos/latest/libtaos/struct.TaosCfg.html
-[Taos]: https://docs.rs/libtaos/latest/libtaos/struct.Taos.html
-[TaosQueryData]: https://docs.rs/libtaos/latest/libtaos/field/struct.TaosQueryData.html
-[Field]: https://docs.rs/libtaos/latest/libtaos/field/enum.Field.html
-[Stmt]: https://docs.rs/libtaos/latest/libtaos/stmt/struct.Stmt.html
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
deleted file mode 100644
index 3264124655e7040e1d94b43500a0b582d95cb5a1..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/04-taosadapter.md
+++ /dev/null
@@ -1,337 +0,0 @@
----
-title: "taosAdapter"
-description: "taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine."
-sidebar_label: "taosAdapter"
----
-
-import Prometheus from "./_prometheus.mdx"
-import CollectD from "./_collectd.mdx"
-import StatsD from "./_statsd.mdx"
-import Icinga2 from "./_icinga2.mdx"
-import TCollector from "./_tcollector.mdx"
-
-taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface that allows InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
-
-taosAdapter provides the following features.
-
-- RESTful interface
-- InfluxDB v1 compliant write interface
-- OpenTSDB JSON and telnet format writes compatible
-- Seamless connection to Telegraf
-- Seamless connection to collectd
-- Seamless connection to StatsD
-- Supports Prometheus remote_read and remote_write
-
-## taosAdapter architecture diagram
-
-
-
-## taosAdapter Deployment Method
-
-### Install taosAdapter
-
-taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter: download the TDengine server installation package from the [TDengine official website](https://tdengine.com/all-downloads/) (taosAdapter is included in v2.4.0.0 and later versions). If you need to deploy taosAdapter separately, on a server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
-
-### Start/Stop taosAdapter
-
-On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
-
-### Remove taosAdapter
-
-Use the command `rmtaos` to remove the TDengine server software, including taosAdapter, if you installed it from a tar.gz package. If you installed using a .deb or .rpm package, use the corresponding command for your package manager, such as apt or rpm, to remove the TDengine server, including taosAdapter.
-
-### Upgrade taosAdapter
-
-taosAdapter and the TDengine server must use the same version. Upgrade taosAdapter by upgrading the TDengine server.
-If taosAdapter is deployed separately from the TDengine server, upgrade it by upgrading the TDengine server package on the server where it is deployed.
-
-## taosAdapter parameter list
-
-taosAdapter is configurable via command-line arguments, environment variables and configuration files. The default configuration file is `/etc/taos/taosadapter.toml` on Linux.
-
-Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command-line usage is `arg=val`, e.g., `taosadapter -P=30000 --debug=true`. The detailed list is as follows:
-
-```shell
-Usage of taosAdapter:
- --collectd.db string collectd db name. Env "TAOS_ADAPTER_COLLECTD_DB" (default "collectd")
- --collectd.enable enable collectd. Env "TAOS_ADAPTER_COLLECTD_ENABLE" (default true)
- --collectd.password string collectd password. Env "TAOS_ADAPTER_COLLECTD_PASSWORD" (default "taosdata")
- --collectd.port int collectd server port. Env "TAOS_ADAPTER_COLLECTD_PORT" (default 6045)
- --collectd.user string collectd user. Env "TAOS_ADAPTER_COLLECTD_USER" (default "root")
- --collectd.worker int collectd write worker. Env "TAOS_ADAPTER_COLLECTD_WORKER" (default 10)
- -c, --config string config path default /etc/taos/taosadapter.toml
- --cors.allowAllOrigins cors allow all origins. Env "TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS" (default true)
- --cors.allowCredentials cors allow credentials. Env "TAOS_ADAPTER_CORS_ALLOW_Credentials"
- --cors.allowHeaders stringArray cors allow HEADERS. Env "TAOS_ADAPTER_ALLOW_HEADERS"
- --cors.allowOrigins stringArray cors allow origins. Env "TAOS_ADAPTER_ALLOW_ORIGINS"
- --cors.allowWebSockets cors allow WebSockets. Env "TAOS_ADAPTER_CORS_ALLOW_WebSockets"
- --cors.exposeHeaders stringArray cors expose headers. Env "TAOS_ADAPTER_Expose_Headers"
- --debug enable debug mode. Env "TAOS_ADAPTER_DEBUG"
- --help Print this help message and exit
- --influxdb.enable enable influxdb. Env "TAOS_ADAPTER_INFLUXDB_ENABLE" (default true)
- --log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
- --log.rotationCount uint log rotation count. Env "TAOS_ADAPTER_LOG_ROTATION_COUNT" (default 30)
- --log.rotationSize string log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_ROTATION_SIZE" (default "1GB")
- --log.rotationTime duration log rotation time. Env "TAOS_ADAPTER_LOG_ROTATION_TIME" (default 24h0m0s)
- --logLevel string log level (panic fatal error warn warning info debug trace). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
- --monitor.collectDuration duration Set monitor duration. Env "TAOS_MONITOR_COLLECT_DURATION" (default 3s)
- --monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_MONITOR_IDENTITY"
- --monitor.incgroup Whether running in cgroup. Env "TAOS_MONITOR_INCGROUP"
- --monitor.password string TDengine password. Env "TAOS_MONITOR_PASSWORD" (default "taosdata")
- --monitor.pauseAllMemoryThreshold float Memory percentage threshold for pause all. Env "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
- --monitor.pauseQueryMemoryThreshold float Memory percentage threshold for pause query. Env "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
- --monitor.user string TDengine user. Env "TAOS_MONITOR_USER" (default "root")
- --monitor.writeInterval duration Set write to TDengine interval. Env "TAOS_MONITOR_WRITE_INTERVAL" (default 30s)
- --monitor.writeToTD Whether write metrics to TDengine. Env "TAOS_MONITOR_WRITE_TO_TD" (default true)
- --node_exporter.caCertFile string node_exporter ca cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CA_CERT_FILE"
- --node_exporter.certFile string node_exporter cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CERT_FILE"
- --node_exporter.db string node_exporter db name. Env "TAOS_ADAPTER_NODE_EXPORTER_DB" (default "node_exporter")
- --node_exporter.enable enable node_exporter. Env "TAOS_ADAPTER_NODE_EXPORTER_ENABLE"
- --node_exporter.gatherDuration duration node_exporter gather duration. Env "TAOS_ADAPTER_NODE_EXPORTER_GATHER_DURATION" (default 5s)
- --node_exporter.httpBearerTokenString string node_exporter http bearer token. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_BEARER_TOKEN_STRING"
- --node_exporter.httpPassword string node_exporter http password. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_PASSWORD"
- --node_exporter.httpUsername string node_exporter http username. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_USERNAME"
- --node_exporter.insecureSkipVerify node_exporter skip ssl check. Env "TAOS_ADAPTER_NODE_EXPORTER_INSECURE_SKIP_VERIFY" (default true)
- --node_exporter.keyFile string node_exporter cert key file path. Env "TAOS_ADAPTER_NODE_EXPORTER_KEY_FILE"
- --node_exporter.password string node_exporter password. Env "TAOS_ADAPTER_NODE_EXPORTER_PASSWORD" (default "taosdata")
- --node_exporter.responseTimeout duration node_exporter response timeout. Env "TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT" (default 5s)
- --node_exporter.urls strings node_exporter urls. Env "TAOS_ADAPTER_NODE_EXPORTER_URLS" (default [http://localhost:9100])
- --node_exporter.user string node_exporter user. Env "TAOS_ADAPTER_NODE_EXPORTER_USER" (default "root")
- --opentsdb.enable enable opentsdb. Env "TAOS_ADAPTER_OPENTSDB_ENABLE" (default true)
- --opentsdb_telnet.dbs strings opentsdb_telnet db names. Env "TAOS_ADAPTER_OPENTSDB_TELNET_DBS" (default [opentsdb_telnet,collectd_tsdb,icinga2_tsdb,tcollector_tsdb])
- --opentsdb_telnet.enable enable opentsdb telnet,warning: without auth info(default false). Env "TAOS_ADAPTER_OPENTSDB_TELNET_ENABLE"
- --opentsdb_telnet.maxTCPConnections int max tcp connections. Env "TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS" (default 250)
- --opentsdb_telnet.password string opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
- --opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
- --opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
- --opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
- --pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT" (default 1h0m0s)
- --pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT" (default 4000)
- --pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE" (default 4000)
- -P, --port int http port. Env "TAOS_ADAPTER_PORT" (default 6041)
- --prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
- --restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
- --ssl.certFile string ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
- --ssl.enable enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
- --ssl.keyFile string ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
- --statsd.allowPendingMessages int statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
- --statsd.db string statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
- --statsd.deleteCounters statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
- --statsd.deleteGauges statsd delete gauge cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_GAUGES" (default true)
- --statsd.deleteSets statsd delete set cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_SETS" (default true)
- --statsd.deleteTimings statsd delete timing cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_TIMINGS" (default true)
- --statsd.enable enable statsd. Env "TAOS_ADAPTER_STATSD_ENABLE" (default true)
- --statsd.gatherInterval duration statsd gather interval. Env "TAOS_ADAPTER_STATSD_GATHER_INTERVAL" (default 5s)
- --statsd.maxTCPConnections int statsd max tcp connections. Env "TAOS_ADAPTER_STATSD_MAX_TCP_CONNECTIONS" (default 250)
- --statsd.password string statsd password. Env "TAOS_ADAPTER_STATSD_PASSWORD" (default "taosdata")
- --statsd.port int statsd server port. Env "TAOS_ADAPTER_STATSD_PORT" (default 6044)
- --statsd.protocol string statsd protocol [tcp or udp]. Env "TAOS_ADAPTER_STATSD_PROTOCOL" (default "udp")
- --statsd.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE"
- --statsd.user string statsd user. Env "TAOS_ADAPTER_STATSD_USER" (default "root")
- --statsd.worker int statsd write worker. Env "TAOS_ADAPTER_STATSD_WORKER" (default 10)
- --taosConfigDir string load taos client config path. Env "TAOS_ADAPTER_TAOS_CONFIG_FILE"
- --version Print the version and exit
-```
-
-Note:
-Please set the following Cross-Origin Resource Sharing (CORS) parameters according to the actual situation when using a browser for interface calls.
-
-```text
-AllowAllOrigins
-AllowOrigins
-AllowHeaders
-ExposeHeaders
-AllowCredentials
-AllowWebSockets
-```
-
-You do not need to care about these configurations if you do not make interface calls through the browser.
-
-For details on the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS).
-
-See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/develop/example/config/taosadapter.toml) for sample configuration files.
-
-## Feature List
-
-- Compatible with RESTful interfaces [REST API](/reference/rest-api/)
-- Compatible with InfluxDB v1 write interface
- [https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
-- Compatible with OpenTSDB JSON and telnet format writes
-- Seamless connection to collectd
- collectd is a system statistics collection daemon. Please visit [https://collectd.org/](https://collectd.org/) for more information.
-- Seamless connection with StatsD
- StatsD is a simple yet powerful daemon for aggregating statistical information. Please visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
-- Seamless connection with icinga2
- icinga2 is software that collects check result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
-- Seamless connection to TCollector
- TCollector is a client process that collects data from a local collector and pushes the data to OpenTSDB. Please visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
-- Seamless connection to node_exporter
- node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
-- Support for Prometheus remote_read and remote_write
- remote_read and remote_write are interfaces for Prometheus to read and write data from/to other data storage solutions. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
-
-## Interfaces
-
-### TDengine RESTful interface
-
-You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the REST interface address `http://<fqdn>:6041/`. See the [official documentation](/reference/connector#restful) for details. The following endpoints are supported.
-
-```text
-/rest/sql
-/rest/sqlt
-/rest/sqlutc
-```
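-
-As a quick check, you can issue a query with `curl` (a minimal sketch; it assumes taosAdapter is reachable at `localhost:6041` with the default `root`/`taosdata` credentials):
-
-```bash
-# send a SQL statement to the RESTful endpoint using HTTP Basic authentication
-curl -u root:taosdata -d "select server_version();" http://localhost:6041/rest/sql
-```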
-
-### InfluxDB
-
-You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/` to write data in InfluxDB compatible format to TDengine. The endpoint is as follows:
-
-```text
-/influxdb/v1/write
-```
-
-The following InfluxDB query parameters are supported:
-
-- `db` Specifies the database name used by TDengine
-- `precision` The time precision used by TDengine
-- `u` TDengine user name
-- `p` TDengine password
-
-Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
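-
-For example, a write in InfluxDB line protocol can be issued with `curl` (a minimal sketch; the host, credentials, and the `demo` database are assumptions, and the trailing timestamp is in nanoseconds):
-
-```bash
-# write one line-protocol record into database "demo"
-curl -i "http://localhost:6041/influxdb/v1/write?db=demo&u=root&p=taosdata" \
-  --data-binary "meters,location=beijing current=10.3,voltage=219 1666000000000000000"
-```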
-
-### OpenTSDB
-
-You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/` to write data in OpenTSDB compatible format to TDengine. The following endpoints are supported:
-
-```text
-/opentsdb/v1/put/json/:db
-/opentsdb/v1/put/telnet/:db
-```
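-
-For example, a data point in OpenTSDB JSON format can be written with `curl` (a minimal sketch; the host, credentials, and the `demo` database are assumptions):
-
-```bash
-# POST one OpenTSDB-JSON data point into database "demo"
-curl -u root:taosdata -X POST http://localhost:6041/opentsdb/v1/put/json/demo \
-  -d '{"metric":"sys.cpu.usage","timestamp":1666000000,"value":18.5,"tags":{"host":"web01"}}'
-```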
-
-### collectd
-
-
-
-### StatsD
-
-
-
-### icinga2 OpenTSDB writer
-
-
-
-### TCollector
-
-
-
-### node_exporter
-
-node_exporter is an exporter of hardware and OS metrics exposed by \*NIX kernels and used by Prometheus
-
-- Enable the taosAdapter configuration `node_exporter.enable`
-- Set the configuration of the node_exporter
-- Restart taosAdapter
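-
-As a sketch, collection can be switched on through environment variables when launching taosAdapter (`TAOS_ADAPTER_NODE_EXPORTER_ENABLE` follows the flag naming scheme shown earlier; `TAOS_ADAPTER_NODE_EXPORTER_URLS` and its value format are assumptions to verify against `taosadapter --help`):
-
-```bash
-# enable node_exporter collection and point taosAdapter at a local node_exporter
-TAOS_ADAPTER_NODE_EXPORTER_ENABLE=true \
-TAOS_ADAPTER_NODE_EXPORTER_URLS='["http://localhost:9100"]' \
-taosadapter
-```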
-
-### Prometheus
-
-
-
-## Memory usage optimization methods
-
-taosAdapter monitors its memory usage during operation and adjusts its behavior based on two thresholds. Valid values are integers between 1 and 100, representing a percentage of the system's physical memory.
-
-- pauseQueryMemoryThreshold
-- pauseAllMemoryThreshold
-
-taosAdapter stops processing query requests when the `pauseQueryMemoryThreshold` threshold is exceeded.
-
-HTTP response content:
-
-- code 503
-- body "query memory exceeds threshold"
-
-taosAdapter stops processing all write and query requests when the `pauseAllMemoryThreshold` threshold is exceeded.
-
-HTTP response content:
-
-- code 503
-- body "memory exceeds threshold"
-
-The corresponding functions are resumed when memory usage falls back below the threshold.
-
-Status check interface: `http://<fqdn>:6041/-/ping`
-
-- Returns `code 200` in the normal case
-- Without parameters: returns `code 503` if memory exceeds `pauseAllMemoryThreshold`
-- With the request parameter `action=query`: returns `code 503` if memory exceeds `pauseQueryMemoryThreshold` or `pauseAllMemoryThreshold`
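-
-For example (a minimal sketch assuming taosAdapter runs on `localhost:6041`):
-
-```bash
-# overall health: returns 503 once pauseAllMemoryThreshold is exceeded
-curl -i http://localhost:6041/-/ping
-# query health: returns 503 once pauseQueryMemoryThreshold or pauseAllMemoryThreshold is exceeded
-curl -i "http://localhost:6041/-/ping?action=query"
-```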
-
-The corresponding configuration parameters are:
-
-```text
-  monitor.collectDuration            monitoring interval                                                  environment variable `TAOS_MONITOR_COLLECT_DURATION` (default value 3s)
-  monitor.incgroup                   whether to run in cgroup (set to true when running in a container)  environment variable `TAOS_MONITOR_INCGROUP`
-  monitor.pauseAllMemoryThreshold    memory threshold for no more inserts and queries                    environment variable `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD` (default 80)
-  monitor.pauseQueryMemoryThreshold  memory threshold for no more queries                                environment variable `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD` (default 70)
-```
-
-You should adjust these parameters based on your specific application scenario and operation strategy. We recommend using monitoring software to monitor the system's memory status. The load balancer can also check the taosAdapter running status through this interface.
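-
-For example, the thresholds can be tightened through the environment variables listed above when launching taosAdapter (a sketch; the values are illustrative):
-
-```bash
-# pause queries at 60% and all requests at 70% of physical memory
-TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD=60 \
-TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD=70 \
-taosadapter
-```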
-
-## taosAdapter Monitoring Metrics
-
-taosAdapter collects HTTP-related metrics, CPU percentage, and memory percentage.
-
-### HTTP interface
-
-Provides an interface conforming to [OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md).
-
-```text
-http://<fqdn>:6041/metrics
-```
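-
-For example, the metrics can be pulled with `curl` (a sketch assuming `localhost`):
-
-```bash
-# fetch the OpenMetrics-format output and show the first lines
-curl -s http://localhost:6041/metrics | head -n 20
-```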
-
-### Write to TDengine
-
-taosAdapter supports writing the metrics of HTTP monitoring, CPU percentage, and memory percentage to TDengine.
-
-The related configuration parameters are as follows:
-
-| **Configuration items** | **Description** | **Default values** |
-| ----------------------- | --------------------------------------------------------- | ---------- |
-| monitor.collectDuration | CPU and memory collection interval | 3s |
-| monitor.identity | The identifier of the current taosAdapter instance; `hostname:port` is used if not set | |
-| monitor.incgroup | whether it is running in a cgroup (set to true for running in a container) | false |
-| monitor.writeToTD | Whether to write to TDengine | true |
-| monitor.user | TDengine connection username | root |
-| monitor.password | TDengine connection password | taosdata |
-| monitor.writeInterval | Write to TDengine interval | 30s |
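-
-As a sketch, these items can also be set through environment variables when launching taosAdapter (the names follow the `TAOS_MONITOR_*` scheme shown earlier and are assumptions to verify against `taosadapter --help`):
-
-```bash
-# write CPU/memory metrics into TDengine every 10 seconds
-TAOS_MONITOR_WRITE_TO_TD=true \
-TAOS_MONITOR_WRITE_INTERVAL=10s \
-taosadapter
-```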
-
-## Limit the number of results returned
-
-taosAdapter limits the number of result rows returned through the parameter `restfulRowLimit`; -1 means no limit, and the default is -1 (no limit).
-
-This parameter controls the number of results returned by the following interfaces:
-
-- `http://<fqdn>:6041/rest/sql`
-- `http://<fqdn>:6041/rest/sqlt`
-- `http://<fqdn>:6041/rest/sqlutc`
-- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
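-
-For example (a sketch; the flag is listed in the parameter table at the top of this document):
-
-```bash
-# cap RESTful responses at 1000 rows
-taosadapter --restfulRowLimit 1000
-```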
-
-## Troubleshooting
-
-You can check the taosAdapter running status with the `systemctl status taosadapter` command.
-
-You can also adjust the level of the taosAdapter log output by setting the `--logLevel` parameter or the environment variable `TAOS_ADAPTER_LOG_LEVEL`. Valid values are: panic, fatal, error, warn, warning, info, debug, and trace.
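-
-For example (a sketch using the environment variable named above):
-
-```bash
-# run taosAdapter in the foreground with verbose logging for troubleshooting
-TAOS_ADAPTER_LOG_LEVEL=debug taosadapter
-```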
-
-## How to migrate from older TDengine versions to taosAdapter
-
-In TDengine server 2.2.x.x or earlier, the TDengine server process (taosd) contains an embedded HTTP service. As mentioned earlier, taosAdapter is a standalone software managed using `systemd` and has its own process ID. There are some configuration parameters and behaviors that are different between the two. See the following table for details.
-
-| **#** | **embedded httpd** | **taosAdapter** | **comment** |
-| ----- | ------------------- | ------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
-| 1 | httpEnableRecordSql | --logLevel=debug | |
-| 2 | httpMaxThreads | n/a | taosAdapter automatically manages thread pools; this parameter is not needed |
-| 3 | telegrafUseFieldNum | See the taosAdapter telegraf configuration method | |
-| 4 | restfulRowLimit | restfulRowLimit | Embedded httpd outputs 10240 rows of data by default, with a maximum of 102400. taosAdapter also provides restfulRowLimit but does not limit it by default. You can configure it according to the actual scenario. |
-| 5 | httpDebugFlag | Not applicable | httpDebugFlag does not work for taosAdapter |
-| 6 | httpDBNameMandatory | N/A | taosAdapter requires the database name to be specified in the URL |
diff --git a/docs-en/14-reference/05-taosbenchmark.md b/docs-en/14-reference/05-taosbenchmark.md
deleted file mode 100644
index 7cf1f95eb116b5f87b3bc1e05b647b9b0da3c544..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/05-taosbenchmark.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-title: taosBenchmark
-sidebar_label: taosBenchmark
-toc_max_heading_level: 4
-description: "taosBenchmark (once called taosdemo ) is a tool for testing the performance of TDengine."
----
-
-## Introduction
-
-taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and can simulate large amounts of data generated by many devices. taosBenchmark can flexibly control the number and types of databases, super tables, tag columns, and data columns, the number of sub-tables, the amount of data per sub-table, the time interval for inserting data, the number of working threads, whether and how to insert disordered data, and so on. The installer provides taosdemo as a soft link to taosBenchmark for compatibility and for the convenience of past users.
-
-## Installation
-
-There are two ways to install taosBenchmark:
-
-- Installing the official TDengine installer will automatically install taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
-
-- Compile taos-tools separately and install them. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
-
-## Run
-
-### Configuration and running methods
-
-taosBenchmark needs to be executed in a terminal of the operating system. It supports two configuration methods: [command-line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f` to specify a configuration file; when running taosBenchmark with command-line arguments to control its behavior, the `-f` parameter must not be used. In addition, taosBenchmark offers a special way of running without any parameters.
-
-taosBenchmark supports complete performance testing of TDengine across three categories of functions: write, query, and subscribe. These three functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. It is important to note that the type of functionality to be tested is not configurable when using the command-line method, which can only test write performance. To test the query and subscription performance of TDengine, you must use the configuration file method and specify the function type to test via the parameter `filetype` in the configuration file.
-
-**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
-
-### Run without command-line arguments
-
-Execute the following commands to quickly experience taosBenchmark's default configuration-based write performance testing of TDengine.
-
-```bash
-taosBenchmark
-```
-
-When run without parameters, taosBenchmark connects to the TDengine cluster specified in `/etc/taos` by default and creates a database named `test`, a super table named `meters` under the test database, and 10,000 tables under the super table with 10,000 records written to each table. Note that if there is already a database named "test" this command will delete it first and create a new database.
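-
-After the run finishes, the generated data can be inspected with the TDengine CLI (a sketch; `taos -s` executes one statement and exits):
-
-```bash
-# expect 100,000,000 rows: 10,000 tables x 10,000 records each
-taos -s "select count(*) from test.meters;"
-```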
-
-### Run with command-line configuration parameters
-
-The `-f` argument cannot be used when running taosBenchmark with command-line parameters to control its behavior. Users must specify all configuration parameters on the command-line. The following is an example of testing taosBenchmark write performance using the command-line approach.
-
-```bash
-taosBenchmark -I stmt -n 200 -t 100
-```
-
-Using the above command, `taosBenchmark` will create a database named `test`, create a super table `meters` in it, create 100 sub-tables in the super table and insert 200 records for each sub-table using parameter binding.
-
-### Run with the configuration file
-
-A sample configuration file is provided in the taosBenchmark installation package under `/examples/taosbenchmark-json`.
-
-Use the following command-line to run taosBenchmark and control its behavior via a configuration file.
-
-```bash
-taosBenchmark -f <json file>
-```
-
-**Here are a few examples of configuration files:**
-
-#### Example of an insert scenario JSON configuration file
-
-
-insert.json
-
-```json
-{{#include /taos-tools/example/insert.json}}
-```
-
-
-
-#### Example of a query scenario JSON configuration file
-
-
-query.json
-
-```json
-{{#include /taos-tools/example/query.json}}
-```
-
-
-
-#### Example of a subscription scenario JSON configuration file
-
-
-subscribe.json
-
-```json
-{{#include /taos-tools/example/subscribe.json}}
-```
-
-
-
-## Command-line arguments in detail
-
- **-f/--file** :
- specify the configuration file to use. This file includes all parameters. Users should not use this parameter together with other parameters on the command-line. There is no default value.
-
- **-c/--config-dir** :
- specify the directory where the TDengine cluster configuration file is located. The default path is `/etc/taos`.
-
- **-h/--host** :
- Specify the FQDN of the TDengine server to connect to. The default value is localhost.
-
- **-P/--port** :
- The port number of the TDengine server to connect to, the default value is 6030.
-
- **-I/--interface** :
- Insert mode. Options are taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface writing, parameter binding interface writing, schemaless interface writing, RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
-
- **-u/--user** :
- User name to connect to the TDengine server. Default is root.
-
- **-p/--password** :
- The password to connect to the TDengine server. The default is `taosdata`.
-
- **-o/--output** :
- specify the path of the result output file. The default value is `./output.txt`.
-
- **-T/--thread** :
- The number of threads to insert data. Default is 8.
-
- **-B/--interlace-rows** :
- Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
-
- **-i/--insert-interval** :
- Specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. That means that after inserting interlaced rows for each child table, the data insertion with multiple threads will wait for the interval specified by this value before proceeding to the next round of writes.
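-
-For example, the two parameters can be combined as follows (a sketch: interleave 100 rows per sub-table per round and wait 50 ms between rounds):
-
-```
-taosBenchmark -t 100 -n 10000 -B 100 -i 50
-```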
-
- **-r/--rec-per-req** :
- The number of record rows written per request to TDengine. The default value is 30000.
-
- **-t/--tables** :
- Specify the number of sub-tables. The default is 10000.
-
- **-S/--timestampstep** :
- Timestamp step for inserting data in each child table in ms, default is 1.
-
- **-n/--records** :
- The number of records inserted into each sub-table. The default value is 10000.
-
- **-d/--database** :
- The name of the database used, the default value is `test`.
-
- **-b/--data-type** :
- specify the type of the data columns of the super table. It defaults to three columns of type FLOAT, INT, and FLOAT if not used.
-
- **-l/--columns** :
- specify the number of data columns in the super table. If both this parameter and `-b/--data-type` are set, the final number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT; for example, `-l 5 -b float,double` yields the columns `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number specified by `-b/--data-type`, the result is the columns and types specified by `-b/--data-type`; e.g., `-l 3 -b float,double,float,bigint` yields the columns `FLOAT,DOUBLE,FLOAT,BIGINT`.
-
- **-A/--tag-type** :
- The tag column type of the super table. nchar and binary types can both set the length, for example:
-
-```
-taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
-```
-
-If users did not set tag type, the default is two tags, whose types are INT and BINARY(16).
-Note: In some shells, such as bash, "()" needs to be escaped, so the above command should be
-
-```
-taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
-```
-
- **-w/--binwidth** :
- specify the default length for nchar and binary types. The default value is 64.
-
- **-m/--table-prefix** :
- The prefix of the sub-table name, the default value is "d".
-
-- **-E/--escape-character** :
- Switch parameter specifying whether to use escape characters in the super table and sub-table names. It is not used by default.
-
-- **-C/--chinese** :
- Switch specifying whether to use Unicode Chinese characters in nchar and binary. It is not used by default.
-
-- **-N/--normal-table** :
- This parameter indicates that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used when the insert mode is taosc, stmt, or rest.
-
-- **-M/--random** :
- This parameter indicates writing data with random values. The default is false. If users use this parameter, taosBenchmark will generate the random values. For tag/data columns of numeric type, the value is a random value within the range of values of that type. For NCHAR and BINARY type tag columns/data columns, the value is the random string within the specified length range.
-
-- **-x/--aggr-func** :
- Switch parameter to indicate query aggregation function after insertion. The default value is false.
-
-- **-y/--answer-yes** :
- Switch parameter that requires the user to confirm at the prompt to continue. The default value is false.
-
- **-O/--disorder** :
- Specify the percentage probability of disordered data, with a value range of [0,50]. The default is 0, i.e., there is no disordered data.
-
- **-R/--disorder-range** :
- Specify the timestamp fallback range for disordered data. The disordered timestamp is generated as the ordered timestamp minus a random value in this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
-
- **-F/--prepare_rand** :
- Specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
-
- **-a/--replica** :
- Specify the number of replicas when creating the database. The default value is 1.
-
-- **-V/--version** :
- Show version information only. Users should not use it with other parameters.
-
- **-?/--help** :
- Show help information and exit. Users should not use it with other parameters.
-
-## Configuration file parameters in detail
-
-### General configuration parameters
-
-The parameters listed in this section apply to all function modes.
-
-- **filetype** : The function to be tested, with optional values `insert`, `query` and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
-
- **cfgdir**: specify the TDengine cluster configuration file's directory. The default path is /etc/taos.
-
-- **host**: Specify the FQDN of the TDengine server to connect. The default value is `localhost`.
-
-- **port**: The port number of the TDengine server to connect to, the default value is `6030`.
-
-- **user**: The user name of the TDengine server to connect to, the default is `root`.
-
-- **password**: The password to connect to the TDengine server, the default value is `taosdata`.
-
-### Insert scenario configuration parameters
-
-`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#general-configuration-parameters) for details.
-
-#### Database related configuration parameters
-
-The parameters related to database creation are configured in `dbinfo` in the json configuration file, as follows. These parameters correspond to the database parameters specified when executing `create database` in TDengine.
-
-- **name**: specify the name of the database.
-
-- **drop**: indicate whether to delete the database before inserting. The default is true.
-
-- **replica**: specify the number of replicas when creating the database.
-
-- **days**: specify the time span for storing data in a single data file. The default is 10.
-
-- **cache**: specify the size of the cache blocks in MB. The default value is 16.
-
-- **blocks**: specify the number of cache blocks in each vnode. The default is 6.
-
-- **precision**: specify the database time precision. The default value is "ms".
-
-- **keep**: specify the number of days to keep the data. The default value is 3650.
-
-- **minRows**: specify the minimum number of records in the file block. The default value is 100.
-
-- **maxRows**: specify the maximum number of records in the file block. The default value is 4096.
-
-- **comp**: specify the file compression level. The default value is 2.
-
-- **walLevel** : specify WAL level, default is 1.
-
-- **cacheLast**: indicate whether to allow the last record of each table to be kept in memory. The default value is 0. The value can be 0, 1, 2, or 3.
-
-- **quorum**: specify the number of writing acknowledgments in multi-replica mode. The default value is 1.
-
-- **fsync**: specify the interval of fsync in ms when users set WAL to 2. The default value is 3000.
-
-- **update** : indicate whether to support data update, default value is 0, optional values are 0, 1, 2.
-
-#### Super table related configuration parameters
-
-The parameters for creating super tables are configured in `super_tables` in the json configuration file, as shown below.
-
-- **name**: Super table name, mandatory, no default value.
-- **child_table_exists** : whether the child table already exists, default value is "no", optional value is "yes" or "no".
-
-- **child_table_count** : The number of child tables, the default value is 10.
-
-- **child_table_prefix** : The prefix of the child table name, mandatory configuration item, no default value.
-
-- **escape_character**: specify the super table and child table names containing escape characters. The value can be "yes" or "no". The default is "no".
-
- **auto_create_table**: effective only when insert_mode is taosc, rest, or stmt and childtable_exists is "no". "yes" means taosBenchmark will automatically create non-existent tables when inserting data; "no" means that taosBenchmark will create all tables before inserting.
-
-- **batch_create_tbl_num** : the number of tables per batch when creating sub-tables, default is 10. Note: the actual number of batches may not be the same as this value. If the executed SQL statement is larger than the maximum length supported, it will be automatically truncated and re-executed to continue creating.
-
- **data_source**: specify the source of data generation, "rand" (randomly generated by taosBenchmark, the default) or "sample". When "sample" is used, taosBenchmark will use the data in the file specified by the `sample_file` parameter.
-
-- **insert_mode**: insertion mode with options taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface write, parameter binding interface write, schemaless interface write, restful schemaless interface write (provided by taosAdapter). The default value is taosc.
-
- **non_stop_mode**: Specify whether to keep writing. If "yes", insert_rows will be disabled, and writing will not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark will stop writing after the specified number of rows is written. Note: even though insert_rows is disabled in continuous write mode, it must still be configured as a nonzero positive integer.
-
-- **line_protocol**: Insert data using line protocol. Only works when insert_mode is sml or sml-rest. The value can be `line`, `telnet`, or `json`.
-
- **tcp_transfer**: The communication protocol in telnet mode. It only takes effect when insert_mode is sml-rest and line_protocol is telnet. If not configured, the default protocol is http.
-
-- **insert_rows** : The number of inserted rows per child table, default is 0.
-
- **childtable_offset**: Effective only if childtable_exists is yes. Specifies the offset at which to start fetching the list of child tables from the super table, i.e., from which child table to start.
-
-- **childtable_limit**: Effective only when childtable_exists is yes, specifies the upper limit for fetching the list of child tables from the super table.
-
- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
-
-- **insert_interval** : Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
-
- **partial_col_num**: If this value is a positive number n, only the first n columns are written to; if n is 0, all columns are written. This parameter is effective only when insert_mode is taosc or rest.
-
-- **disorder_ratio** : Specifies the percentage probability of disordered (i.e. out-of-order) data in the value range [0,50]. The default is 0, which means there is no disorder data.
-
-- **disorder_range** : Specifies the timestamp fallback range for the disordered data. The disordered timestamp is generated by subtracting a random value in this range, from the timestamp that would be used in the non-disorder case. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
-
-- **timestamp_step**: The timestamp step for inserting data in each child table, in units consistent with the `precision` of the database. For e.g. if the `precision` is milliseconds, the timestamp step will be in milliseconds. The default value is 1.
-
-- **start_timestamp** : The timestamp start value of each sub-table, the default value is now.
-
-- **sample_format**: The type of the sample data file; for now only "csv" is supported.
-
- **sample_file**: Specify a CSV format file as the data source. It only works when data_source is "sample". If the number of rows in the CSV file is less than or equal to prepare_rand, taosBenchmark will read the CSV file data cyclically until prepare_rand rows are obtained; otherwise, taosBenchmark will read only the first prepare_rand rows. The final number of rows of data generated is the smaller of the two.
-
- **use_sample_ts**: effective only when data_source is `sample`; indicates whether the CSV file specified by sample_file contains a timestamp as its first column. The default is no. If set to yes, the first column of the CSV file is used as `timestamp`. Since the timestamps of the same sub-table cannot repeat, the amount of data generated equals the number of rows in the CSV file, and insert_rows is ignored.
-
-- **tags_file** : only works when insert_mode is taosc, rest. The final tag value is related to the childtable_count. Suppose the tag data rows in the CSV file are smaller than the given number of child tables. In that case, taosBenchmark will read the CSV file data cyclically until the number of child tables specified by childtable_count is generated. Otherwise, taosBenchmark will read the childtable_count rows of tag data only. The final number of child tables generated is the smaller of the two.
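-
-Putting the parameters of this and the previous subsection together, a minimal insert configuration might look like the sketch below (a hedged example: key spellings, e.g. `child_table_count` vs `childtable_count` and `tags`, should be cross-checked against the bundled example insert.json):
-
-```bash
-# write a minimal insert configuration and run it
-cat > insert-demo.json <<'EOF'
-{
-  "filetype": "insert",
-  "host": "127.0.0.1",
-  "port": 6030,
-  "user": "root",
-  "password": "taosdata",
-  "databases": [{
-    "dbinfo": { "name": "demo", "drop": "yes" },
-    "super_tables": [{
-      "name": "meters",
-      "child_table_count": 100,
-      "child_table_prefix": "d",
-      "insert_mode": "taosc",
-      "insert_rows": 1000,
-      "columns": [{ "type": "FLOAT" }, { "type": "INT" }],
-      "tags": [{ "type": "INT" }]
-    }]
-  }]
-}
-EOF
-taosBenchmark -f insert-demo.json
-```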
-
-#### Tag and Data Column Configuration Parameters
-
-The configuration parameters for specifying super table tag columns and data columns are in `columns` and `tag` in `super_tables`, respectively.
-
-- **type**: Specify the column type. For optional values, please refer to the data types supported by TDengine.
- Note: The JSON data type is special and can only be used for tags. When using the JSON type as a tag, it must be the only tag. In that case, `count` and `len` represent the number of key-value pairs within the JSON tag and the length of each value, respectively. The values are strings by default.
-
- **len**: Specifies the length of this data type, valid for the NCHAR, BINARY, and JSON data types. If this parameter is configured for other data types, a value of 0 means the column is always written with a null value; any other value is ignored.
-
-- **count**: Specifies the number of consecutive occurrences of the column type, e.g., "count": 4096 generates 4096 columns of the specified type.
-
- **name** : The name of the column. If used together with count, e.g., "name": "current", "count": 3, the names of the 3 columns are current, current_2, current_3.
-
- **min**: The minimum value of the column/tag of the data type.
-
- **max**: The maximum value of the column/tag of the data type.
-
- **values**: The candidate value set of an nchar/binary column/tag; the written value is chosen randomly from this set.
-
-#### Insertion Behavior Configuration Parameters
-
-- **thread_count**: specify the number of threads to insert data. Default is 8.
-
-- **create_table_thread_count** : The number of threads to build the table, default is 8.
-
- **connection_pool_size** : The number of pre-established connections to the TDengine server. If not configured, it is the same as the number of threads specified.
-
- **result_file** : The path to the result output file. The default value is `./output.txt`.
-
-- **confirm_parameter_prompt**: The switch parameter requires the user to confirm after the prompt to continue. The default value is false.
-
-- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables are inserted. The default value is 0, which means that data will be inserted into the following child table only after data is inserted into one child table.
- This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
-
-- **insert_interval** :
- Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. Only works if `-B/--interlace-rows` is greater than 0. It means that after inserting interlace rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
- This parameter can also be configured in `super_tables`, and if configured, the configuration in `super_tables` takes high priority, overriding the global setting.
-
-- **num_of_records_per_req** :
- The number of rows of data to be written per request to TDengine, the default value is 30000. When it is set too large, the TDengine client driver will return the corresponding error message, so you need to lower the setting of this parameter to meet the writing requirements.
-
-- **prepare_rand**: The number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
-
-### Query scenario configuration parameters
-
-`filetype` must be set to `query` in the query scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
-
-#### Configuration parameters for executing the specified query statement
-
-The configuration parameters for querying the sub-tables or the normal tables are set in `specified_table_query`.
-
-- **query_interval** : The query interval in seconds, the default value is 0.
-
-- **threads**: The number of threads to execute the query SQL, the default value is 1.
-
- **sqls** :
- - **sql**: the SQL command to be executed.
- - **result**: the file to save the query result. If it is unspecified, taosBenchmark will not save the result.
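-
-A minimal query configuration following these parameters might look like the sketch below (hedged: the overall layout mirrors the bundled example query.json and should be verified against it):
-
-```bash
-# write a minimal query configuration and run it
-cat > query-demo.json <<'EOF'
-{
-  "filetype": "query",
-  "host": "127.0.0.1",
-  "port": 6030,
-  "user": "root",
-  "password": "taosdata",
-  "databases": "demo",
-  "specified_table_query": {
-    "query_interval": 1,
-    "threads": 4,
-    "sqls": [
-      { "sql": "select count(*) from demo.meters", "result": "./query_res.txt" }
-    ]
-  }
-}
-EOF
-taosBenchmark -f query-demo.json
-```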
-
-#### Configuration parameters of query super table
-
-The configuration parameters of the super table query are set in `super_table_query`.
-
-- **stblname**: Specify the name of the super table to be queried, required.
-
-- **query_interval** : The query interval in seconds, the default value is 0.
-
-- **threads**: The number of threads to execute the query SQL, the default value is 1.
-
- **sqls** :
- - **sql**: The SQL command to be executed. For the query SQL of a super table, keep "xxxx" in the SQL command; the program will automatically replace it with all the sub-table names of the super table.
- - **result**: The file to save the query result. If not specified, taosBenchmark will not save the result.
-
-### Subscription scenario configuration parameters
-
-`filetype` must be set to `subscribe` in the subscription scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this and other general parameters.
-
-#### Configuration parameters for executing the specified subscription statement
-
-The configuration parameters for subscribing to a sub-table or a normal table are set in `specified_table_query`.
-
-- **threads**: The number of threads to execute SQL, default is 1.
-
-- **interval**: The time interval to execute the subscription, in seconds, default is 0.
-
-- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
-
-- **keepProgress**: "yes" means keep the progress of the subscription, "no" means don't keep it, and the default value is "no".
-
-- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
-
- **sqls** :
- - **sql** : The SQL command to be executed, required.
- - **result** : The file to save the query result. If not specified, the result will not be saved.
-
-#### Configuration parameters for subscribing to supertables
-
-The configuration parameters for subscribing to a super table are set in `super_table_query`.
-
- **stblname**: The name of the super table to subscribe to.
-
-- **threads**: The number of threads to execute SQL, default is 1.
-
-- **interval**: The time interval to execute the subscription, in seconds, default is 0.
-
-- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
-
-- **keepProgress**: "yes" means keep the progress of the subscription, "no" means don't keep it, and the default value is "no".
-
-- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
-
- **sqls** :
- - **sql**: The SQL command to be executed, required; for the query SQL of the super table, keep "xxxx" in the SQL command, and the program will automatically replace it with all the sub-table names of the super table.
- - **result**: The file to save the query result. If not specified, it will not be saved.
diff --git a/docs-en/14-reference/06-taosdump.md b/docs-en/14-reference/06-taosdump.md
deleted file mode 100644
index 5403e40925f633ce62795cc6037fc8c8f7aad07a..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/06-taosdump.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: taosdump
-description: "taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster."
----
-
-## Introduction
-
-taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster.
-
-taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up the data records in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for data backup. If you do not specify a directory, taosdump will back up the data to the current directory by default.
-
-If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means that the same path can only be used for one backup.
-
-Please be careful if you see a prompt for this and please ensure that you follow best practices and relevant SOPs for data integrity, backup and data security.
-
-Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
-
-## Installation
-
-There are two ways to install taosdump:
-
- Install the official taosTools installer. Please find taosTools on the [All download links](https://www.tdengine.com/all-downloads) page, then download and install it.
-
-- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
-
-## Common usage scenarios
-
-### taosdump backup data
-
-1. backing up all databases: specify the `-A` or `--all-databases` parameter (see the command sketches after this list);
-2. backing up multiple specified databases: use the `-D db1,db2,...` parameter;
-3. backing up some super or normal tables in the specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces;
-4. backing up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine's own operation, and taosdump will not back it up by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter;
-5. loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without escape characters in "loose" mode, which can reduce backup time and backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure about the `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
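-
-The scenarios above map to command lines such as the following sketch (the flags appear in the detailed parameter list below; the output directory is illustrative):
-
-```bash
-# 1. back up all databases
-taosdump -A -o /data/taosdump
-# 2. back up two specific databases
-taosdump -D db1,db2 -o /data/taosdump
-# 3. back up selected super/normal tables from one database
-taosdump -o /data/taosdump dbname stbname1 tbname1
-```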
-
-:::tip
-- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will parse the schema only.
-- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", then try changing the `-B` parameter to a smaller value.
-
-:::
-
-### taosdump recover data
-
-Restore the data files from the specified path: use the `-i` parameter plus the path to the data files. As mentioned above, you should not use the same directory to back up different data sets, nor back up the same data set multiple times into the same path; otherwise the backups would overwrite each other or accumulate multiple copies.
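-
-For example (a sketch; the input directory must contain a previous taosdump backup):
-
-```bash
-# restore from a backup directory into the cluster configured in /etc/taos
-taosdump -i /data/taosdump
-```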
-
-:::tip
-taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
-
-:::
-
-## Detailed command-line parameter list
-
-The following is a detailed list of taosdump command-line arguments.
-
-```
-Usage: taosdump [OPTION...] dbname [tbname ...]
- or: taosdump [OPTION...] --databases db1,db2,...
- or: taosdump [OPTION...] --all-databases
- or: taosdump [OPTION...] -i inpath
- or: taosdump [OPTION...] -o outpath
-
- -h, --host=HOST Server host from which to dump data. Default is
- localhost.
- -p, --password User password to connect to server. Default is
- taosdata.
- -P, --port=PORT Port to connect
- -u, --user=USER User name used to connect to server. Default is
- root.
- -c, --config-dir=CONFIG_DIR Configure directory. Default is /etc/taos
- -i, --inpath=INPATH Input file path.
- -o, --outpath=OUTPATH Output file path.
- -r, --resultFile=RESULTFILE DumpOut/In Result file path and name.
- -a, --allow-sys Allow to dump system database
- -A, --all-databases Dump all databases.
- -D, --databases=DATABASES Dump listed databases. Use comma to separate
- database names.
- -N, --without-property Dump database without its properties.
- -s, --schemaonly Only dump table schemas.
- -y, --answer-yes Input yes for prompt. It will skip data file
- checking!
- -d, --avro-codec=snappy Choose an avro codec among null, deflate, snappy,
- and lzma.
- -S, --start-time=START_TIME Start time to dump. Either epoch or
- ISO8601/RFC3339 format is acceptable. ISO8601
- format example: 2017-10-01T00:00:00.000+0800 or
- 2017-10-0100:00:00:000+0800 or '2017-10-01
- 00:00:00.000+0800'
- -E, --end-time=END_TIME End time to dump. Either epoch or ISO8601/RFC3339
- format is acceptable. ISO8601 format example:
- 2017-10-01T00:00:00.000+0800 or
- 2017-10-0100:00:00.000+0800 or '2017-10-01
- 00:00:00.000+0800'
- -B, --data-batch=DATA_BATCH Number of data per query/insert statement when
- backup/restore. Default value is 16384. If you see
- 'error actual dump .. batch ..' when backup or if
- you see 'WAL size exceeds limit' error when
- restore, please adjust the value to a smaller one
- and try. The workable value is related to the
- length of the row and type of table schema.
- -I, --inspect inspect avro file content and print on screen
- -L, --loose-mode Use loose mode if the table name and column name
- use letter and number only. Default is NOT.
- -n, --no-escape No escape char '`'. Default is using it.
- -T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
- 5.
- -g, --debug Print debug info.
- -?, --help Give this help list
- --usage Give a short usage message
- -V, --version Print program version
-
-Mandatory or optional arguments to long options are also mandatory or optional
-for any corresponding short options.
-
-Report bugs to .
-```
diff --git a/docs-en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json b/docs-en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json
deleted file mode 100644
index f651983528ca824b4e6b14586aac5a5bfb4ecab8..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json
+++ /dev/null
@@ -1,3191 +0,0 @@
-{
- "__inputs": [
- {
- "name": "DS_TDENGINE",
- "label": "TDengine",
- "description": "",
- "type": "datasource",
- "pluginId": "tdengine-datasource",
- "pluginName": "TDengine"
- }
- ],
- "__requires": [
- {
- "type": "panel",
- "id": "gauge",
- "name": "Gauge",
- "version": ""
- },
- {
- "type": "grafana",
- "id": "grafana",
- "name": "Grafana",
- "version": "7.5.10"
- },
- {
- "type": "panel",
- "id": "graph",
- "name": "Graph",
- "version": ""
- },
- {
- "type": "panel",
- "id": "piechart",
- "name": "Pie chart v2",
- "version": ""
- },
- {
- "type": "panel",
- "id": "stat",
- "name": "Stat",
- "version": ""
- },
- {
- "type": "panel",
- "id": "table",
- "name": "Table",
- "version": ""
- },
- {
- "type": "datasource",
- "id": "tdengine-datasource",
- "name": "TDengine",
- "version": "3.1.0"
- },
- {
- "type": "panel",
- "id": "text",
- "name": "Text",
- "version": ""
- }
- ],
- "annotations": {
- "list": [
- {
- "builtIn": 1,
- "datasource": "-- Grafana --",
- "enable": true,
- "hide": true,
- "iconColor": "rgba(0, 211, 255, 1)",
- "name": "Annotations & Alerts",
- "type": "dashboard"
- }
- ]
- },
- "description": "TDengine nodes metrics.",
- "editable": true,
- "gnetId": 15146,
- "graphTooltip": 0,
- "id": null,
- "iteration": 1635263227798,
- "links": [],
- "panels": [
- {
- "collapsed": false,
- "datasource": null,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 0
- },
- "id": 57,
- "panels": [],
- "title": "Cluster Status",
- "type": "row"
- },
- {
- "datasource": null,
- "fieldConfig": {
- "defaults": {},
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 24,
- "x": 0,
- "y": 1
- },
- "id": 32,
- "options": {
- "content": "
>\n",
- "mode": "markdown"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "mnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show mnodes",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "-- OVERVIEW --",
- "transformations": [
- {
- "id": "calculateField",
- "options": {
- "binary": {
- "left": "Time",
- "operator": "+",
- "reducer": "sum",
- "right": ""
- },
- "mode": "binary",
- "reduce": {
- "reducer": "sum"
- }
- }
- }
- ],
- "type": "text"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 8,
- "x": 0,
- "y": 4
- },
- "id": 28,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show mnodes",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Master MNode",
- "transformations": [
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "regex",
- "options": {
- "value": "master"
- }
- },
- "fieldName": "role"
- }
- ],
- "match": "all",
- "type": "include"
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "dnodes"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 7,
- "x": 8,
- "y": 4
- },
- "id": 70,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "/^Time$/",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show mnodes",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Master MNode Create Time",
- "transformations": [
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "regex",
- "options": {
- "value": "master"
- }
- },
- "fieldName": "role"
- }
- ],
- "match": "all",
- "type": "include"
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "Time"
- ]
- }
- }
- },
- {
- "id": "calculateField",
- "options": {
- "mode": "reduceRow",
- "reduce": {
- "reducer": "min"
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "custom": {
- "align": null,
- "filterable": false
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 9,
- "w": 9,
- "x": 15,
- "y": 4
- },
- "id": 29,
- "options": {
- "showHeader": true
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show variables",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Variables",
- "transformations": [
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "value",
- "name"
- ]
- }
- }
- },
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "regex",
- "options": {
- "value": ".*"
- }
- },
- "fieldName": "name"
- }
- ],
- "match": "all",
- "type": "include"
- }
- }
- ],
- "type": "table"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 2,
- "x": 0,
- "y": 7
- },
- "id": 33,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "/.*/",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Table",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select server_version()",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Server Version",
- "transformations": [],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 3,
- "x": 2,
- "y": 7
- },
- "id": 27,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show mnodes",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Number of MNodes",
- "transformations": [
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "greater",
- "options": {
- "value": 0
- }
- },
- "fieldName": "id"
- }
- ],
- "match": "any",
- "type": "include"
- }
- },
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "count"
- ]
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "id"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 2,
- "x": 5,
- "y": 7
- },
- "id": 41,
- "options": {
- "colorMode": "value",
- "graphMode": "none",
- "justifyMode": "auto",
- "orientation": "horizontal",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "text": {},
- "textMode": "value"
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show dnodes",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Total Dnodes",
- "transformations": [
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "count"
- ]
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "id"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 1
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 2,
- "x": 7,
- "y": 7
- },
- "id": 31,
- "options": {
- "colorMode": "value",
- "graphMode": "none",
- "justifyMode": "auto",
- "orientation": "horizontal",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "text": {},
- "textMode": "value"
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show dnodes",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Offline Dnodes",
- "transformations": [
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "regex",
- "options": {
- "value": "ready"
- }
- },
- "fieldName": "status"
- }
- ],
- "match": "all",
- "type": "exclude"
- }
- },
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "count"
- ]
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "id"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 3,
- "x": 9,
- "y": 7
- },
- "id": 65,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show databases;",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Number of Databases",
- "transformations": [
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "count"
- ]
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "name"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 3,
- "x": 12,
- "y": 7
- },
- "id": 69,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show databases;",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Total Number of Vgroups",
- "transformations": [
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "vgroups"
- ]
- }
- }
- },
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "sum"
- ]
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "custom": {
- "align": "center",
- "displayMode": "auto",
- "filterable": true
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- }
- },
- "overrides": [
- {
- "matcher": {
- "id": "byName",
- "options": "role"
- },
- "properties": [
- {
- "id": "mappings",
- "value": [
- {
- "from": "",
- "id": 1,
- "text": "",
- "to": "",
- "type": 2,
- "value": ""
- }
- ]
- }
- ]
- }
- ]
- },
- "gridPos": {
- "h": 3,
- "w": 9,
- "x": 0,
- "y": 10
- },
- "id": 67,
- "options": {
- "showHeader": true
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Table",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show dnodes",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Number of DNodes for each Role",
- "transformations": [
- {
- "id": "groupBy",
- "options": {
- "fields": {
- "end_point": {
- "aggregations": [
- "count"
- ],
- "operation": "aggregate"
- },
- "role": {
- "aggregations": [],
- "operation": "groupby"
- }
- }
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {}
- },
- {
- "id": "organize",
- "options": {
- "excludeByName": {},
- "indexByName": {},
- "renameByName": {
- "end_point (count)": "Number of DNodes",
- "role": "Dnode Role"
- }
- }
- }
- ],
- "type": "table"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 3,
- "x": 9,
- "y": 10
- },
- "id": 55,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show connections",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Number of Connections",
- "transformations": [
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "count"
- ]
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "connId"
- ]
- }
- }
- }
- ],
- "type": "stat"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 3,
- "w": 3,
- "x": 12,
- "y": 10
- },
- "id": 68,
- "options": {
- "colorMode": "value",
- "graphMode": "area",
- "justifyMode": "auto",
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "",
- "values": true
- },
- "text": {},
- "textMode": "auto"
- },
- "pluginVersion": "7.5.10",
- "repeatDirection": "h",
- "targets": [
- {
- "alias": "dnodes",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show databases;",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Total Number of Tables",
- "transformations": [
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "ntables"
- ]
- }
- }
- },
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "sum"
- ]
- }
- }
- ],
- "type": "stat"
- },
- {
- "collapsed": false,
- "datasource": null,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 13
- },
- "id": 24,
- "panels": [],
- "title": "Dnodes Status",
- "type": "row"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "custom": {
- "align": "center",
- "displayMode": "auto",
- "filterable": true
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": [
- {
- "matcher": {
- "id": "byName",
- "options": "status"
- },
- "properties": [
- {
- "id": "custom.width",
- "value": null
- }
- ]
- },
- {
- "matcher": {
- "id": "byName",
- "options": "vnodes"
- },
- "properties": [
- {
- "id": "custom.width",
- "value": null
- }
- ]
- }
- ]
- },
- "gridPos": {
- "h": 5,
- "w": 16,
- "x": 0,
- "y": 14
- },
- "id": 36,
- "options": {
- "showHeader": true,
- "sortBy": []
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Table",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show dnodes",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "DNodes Status",
- "type": "table"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "palette-classic"
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 5,
- "w": 8,
- "x": 16,
- "y": 14
- },
- "id": 40,
- "options": {
- "displayLabels": [],
- "legend": {
- "displayMode": "table",
- "placement": "right",
- "values": [
- "value"
- ]
- },
- "pieType": "pie",
- "reduceOptions": {
- "calcs": [
- "lastNotNull"
- ],
- "fields": "/.*/",
- "values": false
- },
- "text": {
- "titleSize": 6
- }
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Table",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show dnodes",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "title": "Offline Reasons",
- "transformations": [
- {
- "id": "filterByValue",
- "options": {
- "filters": [
- {
- "config": {
- "id": "regex",
- "options": {
- "value": "ready"
- }
- },
- "fieldName": "status"
- }
- ],
- "match": "all",
- "type": "exclude"
- }
- },
- {
- "id": "filterFieldsByName",
- "options": {
- "include": {
- "names": [
- "offline reason",
- "end_point"
- ]
- }
- }
- },
- {
- "id": "groupBy",
- "options": {
- "fields": {
- "Time": {
- "aggregations": [
- "count"
- ],
- "operation": "aggregate"
- },
- "end_point": {
- "aggregations": [
- "count"
- ],
- "operation": "aggregate"
- },
- "offline reason": {
- "aggregations": [],
- "operation": "groupby"
- }
- }
- }
- }
- ],
- "type": "piechart"
- },
- {
- "collapsed": false,
- "datasource": null,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 19
- },
- "id": 22,
- "panels": [],
- "title": "Mnodes Status",
- "type": "row"
- },
- {
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "custom": {
- "align": "center",
- "filterable": false
- },
- "mappings": [],
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- },
- "overrides": []
- },
- "gridPos": {
- "h": 5,
- "w": 24,
- "x": 0,
- "y": 20
- },
- "id": 38,
- "options": {
- "showHeader": true
- },
- "pluginVersion": "7.5.10",
- "targets": [
- {
- "formatType": "Table",
- "queryType": "SQL",
- "refId": "A",
- "sql": "show mnodes;",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "title": "Mnodes Status",
- "type": "table"
- },
- {
- "collapsed": false,
- "datasource": null,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 25
- },
- "id": 20,
- "panels": [],
- "repeat": "fqdn",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "title": "节点资源占用 [ $fqdn ]",
- "type": "row"
- },
- {
- "datasource": "${ds}",
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "continuous-GrYlRd"
- },
- "mappings": [],
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- },
- "unit": "decmbytes"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 0,
- "y": 26
- },
- "id": 66,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "mean"
- ],
- "fields": "/^taosd$/",
- "values": false
- },
- "showThresholdLabels": true,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "memory",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select last(mem_taosd) as taosd, last(mem_total) as total from log.dn where fqdn = '$fqdn' and ts >= now -5m and ts < now",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Current Memory Usage of taosd",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "taosd max memery last 10 minutes",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "continuous-GrYlRd"
- },
- "mappings": [],
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 0.5
- },
- {
- "color": "red",
- "value": 0.8
- }
- ]
- },
- "unit": "percentunit"
- },
- "overrides": [
- {
- "matcher": {
- "id": "byName",
- "options": "last(cpu_taosd)"
- },
- "properties": [
- {
- "id": "thresholds",
- "value": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "red",
- "value": 80
- }
- ]
- }
- }
- ]
- }
- ]
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 5,
- "y": 26
- },
- "id": 45,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "mean"
- ],
- "fields": "/^last\\(cpu_taosd\\)$/",
- "values": false
- },
- "showThresholdLabels": true,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "mem_taosd",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select last(cpu_taosd) from log.dn where fqdn = '$fqdn'",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Current CPU Usage of taosd",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "avg band speed last one minute",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 8192,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 4916
- },
- {
- "color": "red",
- "value": 6554
- }
- ]
- },
- "unit": "Kbits"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 4,
- "x": 10,
- "y": 26
- },
- "id": 14,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": false,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "band_speed",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(band_speed) from log.dn where fqdn='$fqdn' and ts >= now-5m and ts < now interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "band speed",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "io read/write rate",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 8192,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 4916
- },
- {
- "color": "red",
- "value": 6554
- }
- ]
- },
- "unit": "Kbits"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 14,
- "y": 26
- },
- "id": 48,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": false,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select last(io_read) as io_read, last(io_write) as io_write from log.dn where fqdn='$fqdn' and ts >= now-1h and ts < now interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "IO Rate",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 1,
- "min": 0,
- "thresholds": {
- "mode": "percentage",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 75
- },
- {
- "color": "red",
- "value": 80
- },
- {
- "color": "dark-red",
- "value": 95
- }
- ]
- },
- "unit": "percentunit"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 19,
- "y": 26
- },
- "id": 51,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "/^disk_used_percent$/",
- "values": false
- },
- "showThresholdLabels": true,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "disk_used",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select last(disk_used) as used from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(1m)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "disk_total",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select last(disk_total) as total from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "disk_used_percent",
- "expression": "A/B",
- "formatType": "Time series",
- "hide": false,
- "queryType": "Arithmetic",
- "refId": "C",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Disk Used",
- "transformations": [
- {
- "id": "reduce",
- "options": {
- "includeTimeField": false,
- "mode": "reduceFields",
- "reducers": [
- "lastNotNull"
- ]
- }
- }
- ],
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "taosd max memery last 10 minutes",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "continuous-GrYlRd"
- },
- "mappings": [],
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- }
- ]
- },
- "unit": "decmbytes"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 0,
- "y": 32
- },
- "id": 12,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "mean"
- ],
- "fields": "/^taosd$/",
- "values": false
- },
- "showThresholdLabels": true,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "memory",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select max(mem_taosd) as taosd, max(mem_total) as total from log.dn where fqdn = '$fqdn' and ts >= now -5m and ts < now",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Max Memory Usage of taosd in Last 5 minute",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "taosd max memery last 10 minutes",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "continuous-GrYlRd"
- },
- "mappings": [],
- "max": 1,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 0.5
- },
- {
- "color": "red",
- "value": 0.8
- }
- ]
- },
- "unit": "percentunit"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 5,
- "y": 32
- },
- "id": 43,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "mean"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": true,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "mem_taosd",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select max(cpu_taosd) from log.dn where fqdn = '$fqdn' and ts >= now -5m and ts < now",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Max CPU Usage of taosd in Last 5 minute",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "avg band speed last one minute",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 8192,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 4916
- },
- {
- "color": "red",
- "value": 6554
- }
- ]
- },
- "unit": "Kbits"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 4,
- "x": 10,
- "y": 32
- },
- "id": 50,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": false,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "band_speed",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select max(band_speed) from log.dn where fqdn = '$fqdn' and ts >= now-1h",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Max band speed in last hour",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "io read/write rate",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 8192,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 4916
- },
- {
- "color": "red",
- "value": 6554
- }
- ]
- },
- "unit": "Kbits"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 14,
- "y": 32
- },
- "id": 49,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": false,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select max(io_read) as io_read, max(io_write) as io_write from log.dn where fqdn = '$fqdn'",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Max IO Rate in last hour",
- "type": "gauge"
- },
- {
- "datasource": "${ds}",
- "description": "io read/write rate",
- "fieldConfig": {
- "defaults": {
- "color": {
- "mode": "thresholds"
- },
- "mappings": [],
- "max": 8192,
- "min": 0,
- "thresholds": {
- "mode": "absolute",
- "steps": [
- {
- "color": "green",
- "value": null
- },
- {
- "color": "#EAB839",
- "value": 4916
- },
- {
- "color": "red",
- "value": 6554
- }
- ]
- },
- "unit": "cpm"
- },
- "overrides": []
- },
- "gridPos": {
- "h": 6,
- "w": 5,
- "x": 19,
- "y": 32
- },
- "id": 52,
- "options": {
- "orientation": "auto",
- "reduceOptions": {
- "calcs": [
- "last"
- ],
- "fields": "",
- "values": false
- },
- "showThresholdLabels": false,
- "showThresholdMarkers": true,
- "text": {}
- },
- "pluginVersion": "7.5.10",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "targets": [
- {
- "alias": "req-http",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select sum(req_http) as req_http from log.dn where fqdn = '$fqdn' and ts >= now - 1h interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "req-inserts",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select sum(req_insert) as req_insert from log.dn where fqdn = '$fqdn' and ts >= now - 1h interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "req-selects",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "C",
- "sql": "select sum(req_select) as req_select from log.dn where fqdn = '$fqdn' and ts >= now - 1h interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "timeFrom": null,
- "timeShift": null,
- "title": "Requests in last Minute",
- "type": "gauge"
- },
- {
- "aliasColors": {},
- "bars": false,
- "cacheTimeout": null,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "description": "monitor system cpu",
- "fieldConfig": {
- "defaults": {
- "links": []
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 11,
- "w": 12,
- "x": 0,
- "y": 38
- },
- "hiddenSeries": false,
- "hideTimeOverride": true,
- "id": 2,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "cpu_system",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(cpu_system) from log.dn where fqdn='$fqdn' and ts >= now-1h and ts < now interval(30s)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "cpu_taosd",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select avg(cpu_taosd) from log.dn where fqdn='$fqdn' and ts >= now-1h and ts < now interval(30s)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": "1h",
- "timeRegions": [],
- "timeShift": "30s",
- "title": "CPU 资源占用情况",
- "tooltip": {
- "shared": true,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:58",
- "decimals": null,
- "format": "percent",
- "label": "使用占比",
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "$$hashKey": "object:59",
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": false
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "cacheTimeout": null,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "description": "monitor system cpu",
- "fieldConfig": {
- "defaults": {
- "links": []
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 11,
- "w": 12,
- "x": 12,
- "y": 38
- },
- "hiddenSeries": false,
- "hideTimeOverride": true,
- "id": 42,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "system",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(mem_system) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(30s)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "taosd",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select avg(mem_taosd) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(30s)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "total",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "C",
- "sql": "select avg(mem_total) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(30s)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": "1h",
- "timeRegions": [],
- "timeShift": "30s",
- "title": "内存资源占用情况",
- "tooltip": {
- "shared": true,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:58",
- "decimals": null,
- "format": "decmbytes",
- "label": "使用占比",
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "$$hashKey": "object:59",
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": false
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "unit": "percent"
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 0,
- "y": 49
- },
- "hiddenSeries": false,
- "id": 54,
- "legend": {
- "alignAsTable": false,
- "avg": false,
- "current": true,
- "max": false,
- "min": false,
- "rightSide": false,
- "show": true,
- "total": false,
- "values": true
- },
- "lines": true,
- "linewidth": 1,
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [
- {
- "$$hashKey": "object:249",
- "alias": "disk_used",
- "hiddenSeries": true
- },
- {
- "$$hashKey": "object:256",
- "alias": "disk_total",
- "hiddenSeries": true
- }
- ],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "disk_used",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(disk_used) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(30s)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "disk_total",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select avg(disk_total) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(30s)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "percent",
- "expression": "A/B * 100",
- "formatType": "Time series",
- "hide": false,
- "queryType": "Arithmetic",
- "refId": "C",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Disk Used Percent",
- "tooltip": {
- "shared": false,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:456",
- "format": "percent",
- "label": null,
- "logBase": 1,
- "max": "100",
- "min": "0",
- "show": true
- },
- {
- "$$hashKey": "object:457",
- "format": "percentunit",
- "label": "Disk Used",
- "logBase": 1,
- "max": null,
- "min": null,
- "show": false
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {},
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 12,
- "y": 49
- },
- "hiddenSeries": false,
- "id": 64,
- "legend": {
- "alignAsTable": false,
- "avg": false,
- "current": true,
- "max": false,
- "min": false,
- "rightSide": false,
- "show": true,
- "total": false,
- "values": true
- },
- "lines": true,
- "linewidth": 1,
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [
- {
- "$$hashKey": "object:834",
- "alias": "percent",
- "yaxis": 2
- }
- ],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "disk_used",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(disk_used) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(30s)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "disk_total",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select avg(disk_total) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(30s)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "percent",
- "expression": "A/B",
- "formatType": "Time series",
- "hide": false,
- "queryType": "Arithmetic",
- "refId": "C",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Disk Used",
- "tooltip": {
- "shared": false,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:456",
- "format": "decgbytes",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "$$hashKey": "object:457",
- "format": "percentunit",
- "label": "Disk Used",
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "cacheTimeout": null,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "description": "total select request per minute last hour",
- "fieldConfig": {
- "defaults": {
- "unit": "cpm"
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 0,
- "y": 58
- },
- "hiddenSeries": false,
- "id": 8,
- "interval": null,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "maxDataPoints": 100,
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "req_select",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select sum(req_select) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(1m)",
- "target": "select metric",
- "timeshift": {
- "period": null
- },
- "type": "timeserie"
- },
- {
- "alias": "req_insert",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select sum(req_insert) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "req_http",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "C",
- "sql": "select sum(req_http) from log.dn where fqdn = '$fqdn' and ts >= $from and ts < $to interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Requets Count per Minutes $fqdn",
- "tooltip": {
- "shared": true,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:127",
- "format": "cpm",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": "0",
- "show": true
- },
- {
- "$$hashKey": "object:128",
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "aliasColors": {},
- "bars": false,
- "cacheTimeout": null,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "description": "io",
- "fieldConfig": {
- "defaults": {
- "links": [],
- "unit": "Kbits"
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 9,
- "w": 12,
- "x": 12,
- "y": 58
- },
- "hiddenSeries": false,
- "hideTimeOverride": true,
- "id": 47,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "links": [],
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "scopedVars": {
- "fqdn": {
- "selected": true,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- }
- },
- "seriesOverrides": [],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "io-read",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "A",
- "sql": "select avg(io_read) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "io-write",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "B",
- "sql": "select avg(io_write) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(1m)",
- "target": "select metric",
- "type": "timeserie"
- },
- {
- "alias": "io-read-last-hour",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "C",
- "sql": "select avg(io_read) from log.dn where fqdn = '$fqdn' and ts >= now-2h and ts < now - 1h interval(1m)",
- "target": "select metric",
- "timeshift": {
- "period": 1,
- "unit": "hours"
- },
- "type": "timeserie"
- },
- {
- "alias": "io-write-last-hour",
- "formatType": "Time series",
- "hide": false,
- "queryType": "SQL",
- "refId": "D",
- "sql": "select avg(io_write) from log.dn where fqdn = '$fqdn' and ts >= now-1h and ts < now interval(1m)",
- "target": "select metric",
- "timeshift": {
- "period": 1,
- "unit": "hours"
- },
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": "1h",
- "timeRegions": [],
- "timeShift": "30s",
- "title": "IO",
- "tooltip": {
- "shared": true,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:58",
- "decimals": null,
- "format": "Kbits",
- "label": "使用占比",
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "$$hashKey": "object:59",
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": false
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- },
- {
- "collapsed": false,
- "datasource": null,
- "gridPos": {
- "h": 1,
- "w": 24,
- "x": 0,
- "y": 67
- },
- "id": 63,
- "panels": [],
- "title": "Login History",
- "type": "row"
- },
- {
- "aliasColors": {},
- "bars": false,
- "dashLength": 10,
- "dashes": false,
- "datasource": "${ds}",
- "fieldConfig": {
- "defaults": {
- "displayName": "Logins Per Minute",
- "unit": "cpm"
- },
- "overrides": []
- },
- "fill": 1,
- "fillGradient": 0,
- "gridPos": {
- "h": 8,
- "w": 12,
- "x": 0,
- "y": 68
- },
- "hiddenSeries": false,
- "id": 61,
- "legend": {
- "avg": false,
- "current": false,
- "max": false,
- "min": false,
- "show": true,
- "total": false,
- "values": false
- },
- "lines": true,
- "linewidth": 1,
- "nullPointMode": "null",
- "options": {
- "alertThreshold": true
- },
- "percentage": false,
- "pluginVersion": "7.5.10",
- "pointradius": 2,
- "points": false,
- "renderer": "flot",
- "seriesOverrides": [
- {
- "$$hashKey": "object:756",
- "alias": "logins",
- "nullPointMode": "null as zero"
- }
- ],
- "spaceLength": 10,
- "stack": false,
- "steppedLine": false,
- "targets": [
- {
- "alias": "logins",
- "formatType": "Time series",
- "queryType": "SQL",
- "refId": "A",
- "sql": "select count(*) from log.log where ts >= $from and ts < $to interval (1m)",
- "target": "select metric",
- "type": "timeserie"
- }
- ],
- "thresholds": [],
- "timeFrom": null,
- "timeRegions": [],
- "timeShift": null,
- "title": "Login Counts per Minute",
- "tooltip": {
- "shared": true,
- "sort": 0,
- "value_type": "individual"
- },
- "type": "graph",
- "xaxis": {
- "buckets": null,
- "mode": "time",
- "name": null,
- "show": true,
- "values": []
- },
- "yaxes": [
- {
- "$$hashKey": "object:585",
- "format": "cpm",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- },
- {
- "$$hashKey": "object:586",
- "format": "short",
- "label": null,
- "logBase": 1,
- "max": null,
- "min": null,
- "show": true
- }
- ],
- "yaxis": {
- "align": false,
- "alignLevel": null
- }
- }
- ],
- "refresh": "1m",
- "schemaVersion": 27,
- "style": "dark",
- "tags": [
- "TDengine"
- ],
- "templating": {
- "list": [
- {
- "current": {
- "selected": true,
- "text": "TDengine",
- "value": "TDengine"
- },
- "description": "TDengine Data Source Selector",
- "error": null,
- "hide": 0,
- "includeAll": false,
- "label": "Datasource",
- "multi": false,
- "name": "ds",
- "options": [],
- "query": "tdengine-datasource",
- "queryValue": "",
- "refresh": 1,
- "regex": "",
- "skipUrlSync": false,
- "type": "datasource"
- },
- {
- "allValue": null,
- "current": {
- "selected": false,
- "text": "huolinhe-TM1701:6030",
- "value": "huolinhe-TM1701:6030"
- },
- "datasource": "${ds}",
- "definition": "select fqdn from log.dn",
- "description": "TDengine Nodes FQDN (Hostname)",
- "error": null,
- "hide": 0,
- "includeAll": false,
- "label": null,
- "multi": false,
- "name": "fqdn",
- "options": [],
- "query": "select fqdn from log.dn",
- "refresh": 1,
- "regex": "",
- "skipUrlSync": false,
- "sort": 0,
- "tagValuesQuery": "",
- "tags": [],
- "tagsQuery": "",
- "type": "query",
- "useTags": false
- }
- ]
- },
- "time": {
- "from": "now-1h",
- "to": "now"
- },
- "timepicker": {
- "refresh_intervals": [
- "5s",
- "10s",
- "30s",
- "1m",
- "5m",
- "15m",
- "30m",
- "1h",
- "2h",
- "1d"
- ]
- },
- "timezone": "",
- "title": "TDengine",
- "uid": "tdengine",
- "version": 8
-}
\ No newline at end of file
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
deleted file mode 100644
index cebfafa225e6e8de75ff84bb51fa664784177910..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ /dev/null
@@ -1,428 +0,0 @@
----
-title: TDinsight - Grafana-based Zero-Dependency Monitoring Solution for TDengine
-sidebar_label: TDinsight
----
-
-TDinsight is a solution for monitoring TDengine using the built-in native monitoring database and [Grafana].
-
-After TDengine starts, it automatically creates a monitoring database named `log`, into which it writes many metrics at specific intervals. The metrics may include the server's CPU, memory, hard disk space, network bandwidth, number of requests, disk read/write speed, and slow queries, as well as other information such as important system operations (user login, database creation, database deletion, etc.) and error alarms. With [Grafana] and the [TDengine Data Source Plugin](https://github.com/taosdata/grafanaplugin/releases), TDinsight can visualize cluster status, node information, insertion and query requests, resource usage, vnode/dnode/mnode status, exception alerts, and many other metrics. This is very convenient for developers who want to monitor a TDengine cluster in real time. This article guides users to install the Grafana server, automatically install the TDengine data source plugin, and deploy the TDinsight visualization panel using the `TDinsight.sh` installation script.
-
-## System Requirements
-
-To deploy TDinsight, a single-node TDengine server or a multi-node TDengine cluster and a [Grafana] server are required. This dashboard requires TDengine 2.3.3.0 and above, with the `log` database enabled (`monitor = 1`).
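-
-For reference, enabling monitoring is a single parameter in the server configuration (a sketch; the file is typically `/etc/taos/taos.cfg`, and the `option value` form shown below should be verified against your installation):
-
-```ini
-# taos.cfg — have taosd write node metrics into the `log` database
-monitor 1
-```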
-
-## Installing Grafana
-
-We recommend using [Grafana] version 7 or 8 (the latest release). You can install Grafana on any [supported operating system](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-operating-systems) by following the [official Grafana installation instructions](https://grafana.com/docs/grafana/latest/installation/).
-
-### Installing Grafana on Debian or Ubuntu
-
-For Debian or Ubuntu, we recommend installing from the official Grafana package repository with the following commands.
-
-```bash
-# Prerequisites for using an HTTPS APT repository
-sudo apt-get install -y apt-transport-https
-sudo apt-get install -y software-properties-common wget
-# Import the Grafana package signing key
-wget -q -O - https://packages.grafana.com/gpg.key |\
-  sudo apt-key add -
-# Add the Grafana OSS repository, then install
-echo "deb https://packages.grafana.com/oss/deb stable main" |\
-  sudo tee -a /etc/apt/sources.list.d/grafana.list
-sudo apt-get update
-sudo apt-get install grafana
-```
-
-### Installing Grafana on CentOS / RHEL
-
-You can install it from its official YUM repository.
-
-```bash
-sudo tee /etc/yum.repos.d/grafana.repo << EOF
-[grafana]
-name=grafana
-baseurl=https://packages.grafana.com/oss/rpm
-repo_gpgcheck=1
-enabled=1
-gpgcheck=1
-gpgkey=https://packages.grafana.com/gpg.key
-sslverify=1
-sslcacert=/etc/pki/tls/certs/ca-bundle.crt
-EOF
-sudo yum install grafana
-```
-
-Or install it directly with the RPM package.
-
-```bash
-wget https://dl.grafana.com/oss/release/grafana-7.5.11-1.x86_64.rpm
-sudo yum install grafana-7.5.11-1.x86_64.rpm
-# or
-sudo yum install \
- https://dl.grafana.com/oss/release/grafana-7.5.11-1.x86_64.rpm
-```
-
-## Automated deployment of TDinsight
-
-We provide an installation script [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh) to allow users to configure the installation automatically and quickly.
-
-You can download the script via `wget` or other tools:
-
-```bash
-wget https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh
-chmod +x TDinsight.sh
-./TDinsight.sh
-```
-
-This script automatically downloads the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and the [TDinsight dashboard](https://grafana.com/grafana/dashboards/15167), and writes them, together with the parameters given on the command line, into the [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration files, thereby automating deployment and updates. With the alert options the script provides, you can also get built-in support for Alibaba Cloud SMS alert notifications.
-
-Assuming TDengine and Grafana run with their default settings on the same host, run `./TDinsight.sh` and open Grafana in a browser to see the TDinsight dashboard.
-
-The following is a description of TDinsight.sh usage.
-
-```text
-Usage:
- ./TDinsight.sh
- ./TDinsight.sh -h|--help
- ./TDinsight.sh -n <ds-name> -a <api-url> -u <user> -p <password>
-
-Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 system.
-
--h, -help, --help Display help
-
--V, -verbose, --verbose Run script in verbose mode. Will print out each step of execution.
-
--v, --plugin-version TDengine datasource plugin version, [default: latest]
-
--P, --grafana-provisioning-dir Grafana provisioning directory, [default: /etc/grafana/provisioning/]
--G, --grafana-plugins-dir Grafana plugins directory, [default: /var/lib/grafana/plugins]
--O, --grafana-org-id Grafana organization id. [default: 1]
-
--n, --tdengine-ds-name TDengine datasource name, no space. [default: TDengine]
--a, --tdengine-api TDengine REST API endpoint. [default: http://127.0.0.1:6041]
--u, --tdengine-user TDengine user name. [default: root]
--p, --tdengine-password TDengine password. [default: taosdata]
-
--i, --tdinsight-uid Replace with a non-space ASCII code as the dashboard id. [default: tdinsight]
--t, --tdinsight-title Dashboard title. [default: TDinsight]
--e, --tdinsight-editable If the provisioning dashboard could be editable. [default: false]
-
--E, --external-notifier Apply external notifier uid to TDinsight dashboard.
-
-Alibaba Cloud SMS as Notifier:
--s, --sms-enabled To enable tdengine-datasource plugin builtin Alibaba Cloud SMS webhook.
--N, --sms-notifier-name Provisioning notifier name.[default: TDinsight Builtin SMS]
--U, --sms-notifier-uid Provisioning notifier uid, use lowercase notifier name by default.
--D, --sms-notifier-is-default Set notifier as default.
--I, --sms-access-key-id Alibaba Cloud SMS access key id
--K, --sms-access-key-secret Alibaba Cloud SMS access key secret
--S, --sms-sign-name Sign name
--C, --sms-template-code Template code
--T, --sms-template-param Template param, an escaped JSON string like '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
--B, --sms-phone-numbers Comma-separated phone numbers list, e.g. "189xxxxxxxx,132xxxxxxxx"
--L, --sms-listen-addr [default: 127.0.0.1:9100]
-```
-
-Most command-line options can equally be supplied as environment variables, as listed in the table below; an example invocation using environment variables is shown after the table.
-
-| Short Option | Long Option                | Environment Variable         | Description                                                                                                 |
-| ------------ | -------------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------- |
-| -v           | --plugin-version           | TDENGINE_PLUGIN_VERSION      | TDengine data source plugin version; the latest version is used by default.                                  |
-| -P           | --grafana-provisioning-dir | GF_PROVISIONING_DIR          | Grafana provisioning directory, defaults to `/etc/grafana/provisioning/`.                                    |
-| -G           | --grafana-plugins-dir      | GF_PLUGINS_DIR               | Grafana plugins directory, defaults to `/var/lib/grafana/plugins`.                                           |
-| -O           | --grafana-org-id           | GF_ORG_ID                    | Grafana organization ID, default is 1.                                                                       |
-| -n           | --tdengine-ds-name         | TDENGINE_DS_NAME             | TDengine data source name, defaults to TDengine.                                                             |
-| -a           | --tdengine-api             | TDENGINE_API                 | TDengine REST API endpoint, defaults to `http://127.0.0.1:6041`.                                             |
-| -u           | --tdengine-user            | TDENGINE_USER                | TDengine user name. [default: root]                                                                          |
-| -p           | --tdengine-password        | TDENGINE_PASSWORD            | TDengine password. [default: taosdata]                                                                       |
-| -i           | --tdinsight-uid            | TDINSIGHT_DASHBOARD_UID      | TDinsight dashboard `uid`. [default: tdinsight]                                                              |
-| -t           | --tdinsight-title          | TDINSIGHT_DASHBOARD_TITLE    | TDinsight dashboard title. [default: TDinsight]                                                              |
-| -e           | --tdinsight-editable       | TDINSIGHT_DASHBOARD_EDITABLE | Whether the provisioned dashboard is editable. [default: false]                                              |
-| -E           | --external-notifier        | EXTERNAL_NOTIFIER            | Apply an external notifier uid to the TDinsight dashboard.                                                   |
-| -s           | --sms-enabled              | SMS_ENABLED                  | Enable the Alibaba Cloud SMS webhook built into the tdengine-datasource plugin.                              |
-| -N           | --sms-notifier-name        | SMS_NOTIFIER_NAME            | Provisioning notifier name. [default: `TDinsight Builtin SMS`]                                               |
-| -U           | --sms-notifier-uid         | SMS_NOTIFIER_UID             | Notification channel `uid`; defaults to the lowercase notifier name with other characters replaced by "-".   |
-| -D           | --sms-notifier-is-default  | SMS_NOTIFIER_IS_DEFAULT      | Set the built-in SMS notifier as the default notification channel.                                           |
-| -I           | --sms-access-key-id        | SMS_ACCESS_KEY_ID            | Alibaba Cloud SMS access key id.                                                                             |
-| -K           | --sms-access-key-secret    | SMS_ACCESS_KEY_SECRET        | Alibaba Cloud SMS access key secret.                                                                         |
-| -S           | --sms-sign-name            | SMS_SIGN_NAME                | Alibaba Cloud SMS sign name.                                                                                 |
-| -C           | --sms-template-code        | SMS_TEMPLATE_CODE            | Alibaba Cloud SMS template code.                                                                             |
-| -T           | --sms-template-param       | SMS_TEMPLATE_PARAM           | JSON template for the SMS template parameters.                                                               |
-| -B           | --sms-phone-numbers        | SMS_PHONE_NUMBERS            | Comma-separated phone number list, e.g. `"189xxxxxxxx,132xxxxxxxx"`.                                         |
-| -L           | --sms-listen-addr          | SMS_LISTEN_ADDR              | Built-in SMS webhook listener address, default is `127.0.0.1:9100`.                                          |
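-
-For example, an invocation driven entirely by environment variables might look like this (a sketch, assuming the script honors the variables listed above; `sudo VAR=value command` passes the assignments through to the command):
-
-```bash
-# Hypothetical example: same effect as ./TDinsight.sh -a http://tdengine:6041 -u root -p taosdata
-sudo TDENGINE_API=http://tdengine:6041 \
-     TDENGINE_USER=root \
-     TDENGINE_PASSWORD=taosdata \
-     ./TDinsight.sh
-```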
-
-Suppose you start a TDengine database on host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script.
-
-```bash
-sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord
-```
-
-We provide a "-E" option to configure TDinsight to use the existing Notification Channel from the command line. Assuming your Grafana user and password is `admin:admin`, use the following command to get the `uid` of an existing notification channel.
-
-```bash
-curl --no-progress-meter -u admin:admin http://localhost:3000/api/alert-notifications | jq
-```
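-
-If you only need the channel identifiers, you can filter the response with `jq` (a sketch; the endpoint returns an array of notification channel objects):
-
-```bash
-curl --no-progress-meter -u admin:admin http://localhost:3000/api/alert-notifications |
-  jq '.[] | {uid, name}'
-```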
-
-Use the `uid` value obtained above as the `-E` input.
-
-```bash
-sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier
-```
-
-If you want to use the [Alibaba Cloud SMS](https://www.aliyun.com/product/sms) service as a notification channel, you should enable it with the `-s` flag and add the following parameters.
-
-- `-N`: Notification Channel name, default is `TDinsight Builtin SMS`.
-- `-U`: Channel uid; defaults to the lowercase of the name with any other character replaced by `-`. For the default `-N`, the uid is `tdinsight-builtin-sms`.
-- `-I`: Alibaba Cloud SMS access key id.
-- `-K`: Alibaba Cloud SMS access secret key.
-- `-S`: Alibaba Cloud SMS signature.
-- `-C`: Alibaba Cloud SMS template id.
-- `-T`: Alibaba Cloud SMS template parameters as an escaped JSON string, for example `'{"alarm_level":"%s", "time":"%s", "name":"%s", "content":"%s"}'`. There are four parameters: alarm level, time, name, and alarm content.
-- `-B`: a list of phone numbers, separated by a comma `,`.
-
-If you want to monitor multiple TDengine clusters, you need to set up a separate TDinsight dashboard for each of them. Setting up a non-default TDinsight requires some changes: the `-n`, `-i`, and `-t` options need to be changed to non-default names, and `-N` and `-L` should also be changed if you use the built-in SMS alerting feature.
-
-```bash
-sudo ./TDinsight.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1'
-# If using built-in SMS notifications
-sudo ./TDinsight.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1' \
-  -s -N 'Env1 SMS' -I xx -K xx -S xx -C SMS_XX -T '' -B 00000000000 -L 127.0.0.1:10611
-```
-
-Please note that the provisioned data source, notification channel, and dashboard cannot be changed on the Grafana front end. You should update the configuration again via this script, or manually change the configuration files in the `/etc/grafana/provisioning` directory (this is the default directory for Grafana; use the `-P` option to change it as needed).
-
-Specifically, `-O` can be used to set the organization ID when you are using Grafana Cloud or another organization. `-G` specifies the Grafana plugin installation directory. The `-e` parameter sets the dashboard to be editable.
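-
-For example, a run targeting a second Grafana organization with a custom plugin directory might look like this (a sketch; the values are hypothetical, and `-e` is treated here as a boolean switch):
-
-```bash
-sudo ./TDinsight.sh -O 2 -G /opt/grafana/plugins -e \
-  -a http://tdengine:6041 -u root -p taosdata
-```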
-
-## Set up TDinsight manually
-
-### Install the TDengine data source plugin
-
-Install the latest version of the TDengine Data Source plugin from GitHub.
-
-```bash
-# Query the GitHub API for the latest grafanaplugin release tag and strip the leading "v"
-get_latest_release() {
-  curl --silent "https://api.github.com/repos/taosdata/grafanaplugin/releases/latest" |
-    grep '"tag_name":' |
-    sed -E 's/.*"v([^"]+)".*/\1/'
-}
-TDENGINE_PLUGIN_VERSION=$(get_latest_release)
-# Install the matching plugin package via grafana-cli
-sudo grafana-cli \
-  --pluginUrl https://github.com/taosdata/grafanaplugin/releases/download/v$TDENGINE_PLUGIN_VERSION/tdengine-datasource-$TDENGINE_PLUGIN_VERSION.zip \
-  plugins install tdengine-datasource
-```
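-
-You can then confirm that Grafana sees the plugin with the standard `grafana-cli` listing command:
-
-```bash
-grafana-cli plugins ls | grep tdengine-datasource
-```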
-
-:::note
-Plugins of version 3.1.6 and earlier require the following setting in the configuration file `/etc/grafana/grafana.ini` to allow loading unsigned plugins.
-
-```ini
-[plugins]
-allow_loading_unsigned_plugins = tdengine-datasource
-```
-:::
-
-### Start the Grafana service
-
-```bash
-sudo systemctl start grafana-server
-sudo systemctl enable grafana-server
-```
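-
-Check that the service started cleanly before proceeding:
-
-```bash
-sudo systemctl status grafana-server
-```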
-
-### Logging into Grafana
-
-Open the default Grafana URL in a web browser: `http://localhost:3000`.
-The default username/password is `admin`. Grafana will require a password change after the first login.
-
-### Adding a TDengine Data Source
-
-Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-
-
-
-Search for and select **TDengine**.
-
-
-
-Configure the TDengine datasource.
-
-
-
-Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-
-
-
-### Importing dashboards
-
-Point to **+** / **Create** - **import** (or the `/dashboard/import` URL).
-
-
-
-Type the dashboard ID `15167` in the **Import via grafana.com** field and click **Load**.
-
-
-
-Once the import is complete, the full page view of TDinsight is shown below.
-
-
-
-## TDinsight dashboard details
-
-The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources [dnodes, mnodes, vnodes](https://www.taosdata.com/cn/documentation/architecture#cluster) or databases.
-
-Details of the metrics are as follows.
-
-### Cluster Status
-
-
-
-This section contains the current information and status of the cluster as well as the alert information (panels are arranged from left to right, top to bottom).
-
-- **First EP**: the `firstEp` setting in the current TDengine cluster.
-- **Version**: TDengine server version (master mnode).
-- **Master Uptime**: The time elapsed since the current Master MNode was elected as Master.
-- **Expire Time**: Enterprise Edition expiration time.
-- **Used Measuring Points**: The number of measuring points used by the Enterprise Edition.
-- **Databases**: The number of databases.
-- **Connections**: The number of current connections.
-- **DNodes/MNodes/VGroups/VNodes**: Total number of each resource and the number alive.
-- **DNodes/MNodes/VGroups/VNodes Alive Percent**: The ratio of alive to total for each resource; the alert rule is enabled and triggers when the resource liveness rate (the one-minute average percentage of healthy resources) is less than 100%.
-- **Measuring Points Used**: The number of measuring points used, with alert rule enabled (no data available in the community version; healthy by default).
-- **Grants Expire Time**: The expiration time of the Enterprise Edition license, with alert rule enabled (no data available in the community version; healthy by default).
-- **Error Rate**: Aggregate error rate of the cluster (average number of errors per second), with alert rule enabled.
-- **Variables**: `show variables` table display.
-
-### DNodes Status
-
-
-
-- **DNodes Status**: simple table view of `show dnodes`.
-- **DNodes Lifetime**: the time elapsed since the dnode was created.
-- **DNodes Number**: the change in the number of DNodes over time.
-- **Offline Reason**: if any dnode status is offline, the reason for offline is shown as a pie chart.
-
-### MNode Overview
-
-
-
-1. **MNodes Status**: a simple table view of `show mnodes`.
-2. **MNodes Number**: similar to `DNodes Number`, the change in the number of MNodes over time.
-
-### Request
-
-
-
-1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
-2. **Requests (Selects)**: number of query requests and change rate (count per second).
-3. **Requests (HTTP)**: number of HTTP requests and request rate (count per second).
-
-### Database
-
-
-
-Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
-
-1. **STables**: number of super tables.
-2. **Total Tables**: number of all tables.
-3. **Sub Tables**: the number of subtables of all super tables.
-4. **Tables**: graph of all normal table numbers over time.
-5. **Tables Number Foreach VGroups**: the number of tables contained in each VGroup.
-
-### DNode Resource Usage
-
-
-
-Data node resource usage, repeated for each value of the variable `$fqdn`, i.e. each data node. Includes:
-
-1. **Uptime**: the time elapsed since the dnode was created.
-2. **Has MNodes?**: whether the current dnode is a mnode.
-3. **CPU Cores**: the number of CPU cores.
-4. **VNodes Number**: the number of VNodes in the current dnode.
-5. **VNodes Masters**: the number of vnodes in the master role.
-6. **Current CPU Usage of taosd**: CPU usage rate of taosd processes.
-7. **Current Memory Usage of taosd**: memory usage of taosd processes.
-8. **Disk Used**: The total disk usage percentage of the taosd data directory.
-9. **CPU Usage**: Process and system CPU usage.
-10. **RAM Usage**: Time series view of RAM usage metrics.
-11. **Disk Used**: Disks used at each level of multi-level storage (default is level0).
-12. **Disk Increasing Rate per Minute**: Percentage increase or decrease in disk usage per minute.
-13. **Disk IO**: Disk IO rate.
-14. **Net IO**: network IO; the aggregate network IO rate excluding the local (loopback) network.
-
-### Login History
-
-
-
-Currently, only the number of logins per minute is reported.
-
-### Monitoring taosAdapter
-
-
-
-Monitoring of taosAdapter request statistics and status details. Includes:
-
-1. **http_request**: contains the total number of requests, the number of failed requests, and the number of requests being processed
-2. **top 3 request endpoint**: data of the top 3 requests by endpoint group
-3. **Memory Used**: taosAdapter memory usage
-4. **latency_quantile(ms)**: request latency quantiles in milliseconds, at the (1, 2, 5, 9, 99) stages
-5. **top 3 failed request endpoint**: data of the top 3 failed requests by endpoint grouping
-6. **CPU Used**: taosAdapter CPU usage
-
-## Upgrade
-
-TDinsight installed via the `TDinsight.sh` script can be upgraded to the latest Grafana plugin and TDinsight Dashboard by re-running the script.
-
-In the case of a manual installation, follow the steps above to install the new Grafana plugin and Dashboard yourself.
-
-## Uninstall
-
-TDinsight installed via the `TDinsight.sh` script can be removed by running `TDinsight.sh -R`, which cleans up the associated resources.
-
-To completely uninstall TDinsight after a manual installation, you need to clean up the following (a sketch for step 3 follows this list):
-
-1. Remove the TDinsight dashboard in Grafana.
-2. Remove the data source in Grafana.
-3. Remove the `tdengine-datasource` plugin from the plugin installation directory.
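-
-For the plugin removal in step 3, a sketch using `grafana-cli` (assuming the default plugin directory) is:
-
-```bash
-# Remove the TDengine data source plugin, then restart Grafana
-sudo grafana-cli plugins remove tdengine-datasource
-sudo systemctl restart grafana-server
-```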
-
-## Integrated Docker Example
-
-```bash
-git clone --depth 1 https://github.com/taosdata/grafanaplugin.git
-cd grafanaplugin
-```
-
-Modify the `docker-compose.yml` file as needed:
-
-```yaml
-version: '3.7'
-
-services:
- grafana:
- image: grafana/grafana:7.5.10
- volumes:
-      - ./dist:/var/lib/grafana/plugins/tdengine-datasource
-      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
-      - ./grafana/provisioning/:/etc/grafana/provisioning/
- - grafana-data:/var/lib/grafana
- environment:
- TDENGINE_API: ${TDENGINE_API}
- TDENGINE_USER: ${TDENGINE_USER}
- TDENGINE_PASS: ${TDENGINE_PASS}
- SMS_ACCESS_KEY_ID: ${SMS_ACCESS_KEY_ID}
- SMS_ACCESS_KEY_SECRET: ${SMS_ACCESS_KEY_SECRET}
- SMS_SIGN_NAME: ${SMS_SIGN_NAME}
- SMS_TEMPLATE_CODE: ${SMS_TEMPLATE_CODE}
- SMS_TEMPLATE_PARAM: '${SMS_TEMPLATE_PARAM}'
- SMS_PHONE_NUMBERS: $SMS_PHONE_NUMBERS
- SMS_LISTEN_ADDR: ${SMS_LISTEN_ADDR}
- ports:
- - 3000:3000
-volumes:
- grafana-data:
-```
-
-Replace the environment variables in `docker-compose.yml` or save the environment variables to the `.env` file, then start Grafana with `docker-compose up`. See [Docker Compose Reference](https://docs.docker.com/compose/)
-
-```bash
-docker-compose up -d
-```
-
-TDinsight is then deployed via provisioning. Go to http://localhost:3000/d/tdinsight/ to view the dashboard.
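-
-To verify that the stack is up, you can query Grafana's health endpoint as an optional check:
-
-```bash
-# Should return JSON containing "database": "ok" once Grafana is ready
-curl -s http://localhost:3000/api/health
-```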
-
-[grafana]: https://grafana.com
-[tdengine]: https://tdengine.com
diff --git a/docs-en/14-reference/12-directory.md b/docs-en/14-reference/12-directory.md
deleted file mode 100644
index 304e3bcb434ee9a6ba338577a4d1ba546b548e3f..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/12-directory.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: File directory structure
-description: "TDengine installation directory description"
----
-
-After TDengine is installed, the following directories or files will be created in the system by default.
-
-| directory/file | description |
-| ------------------------- | -------------------------------------------------------------------- |
-| /usr/local/taos/bin | The TDengine executable directory. The executable files are soft-linked to the /usr/bin directory. |
-| /usr/local/taos/driver | The TDengine dynamic link library directory. It is soft-linked to the /usr/lib directory. |
-| /usr/local/taos/examples | The TDengine various language application examples directory. |
-| /usr/local/taos/include | The header files for TDengine's external C interface. |
-| /etc/taos/taos.cfg | TDengine default [configuration file] |
-| /var/lib/taos | TDengine's default data file directory. The location can be changed via [configuration file]. |
-| /var/log/taos             | TDengine default log file directory. The location can be changed via [configuration file]. |
-
-## Executable files
-
-All executable files of TDengine are in the _/usr/local/taos/bin_ directory by default. These include:
-
-- _taosd_: TDengine server-side executable
-- _taos_: TDengine CLI executable
-- _taosdump_: data import and export tool
-- _taosBenchmark_: TDengine testing tool
-- _remove.sh_: script to uninstall TDengine; use with caution. It is linked to the **rmtaos** command in the /usr/bin directory. It removes the TDengine installation directory `/usr/local/taos` but keeps `/etc/taos`, `/var/lib/taos` and `/var/log/taos`
-- _taosadapter_: server-side executable that provides RESTful services and accepts write requests from a variety of other software
-- _tarbitrator_: provides arbitration for two-node cluster deployments
-- _run_taosd_and_taosadapter.sh_: script to start both taosd and taosAdapter
-- _TDinsight.sh_: script to download TDinsight and install it
-- _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
-- _taosd-dump-cfg.gdb_: gdb script to facilitate debugging of taosd.
-
-:::note
-Since version 2.4.0.0, taosdump requires taosTools as a standalone installation. A new version of taosBenchmark is included in taosTools as well.
-:::
-
-:::tip
-You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
-:::
diff --git a/docs-en/14-reference/13-schemaless/13-schemaless.md b/docs-en/14-reference/13-schemaless/13-schemaless.md
deleted file mode 100644
index acbbb1cd3c5a7c50e226644f2de9e0e77274c6dd..0000000000000000000000000000000000000000
--- a/docs-en/14-reference/13-schemaless/13-schemaless.md
+++ /dev/null
@@ -1,159 +0,0 @@
----
-title: Schemaless Writing
-description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data, as it is written to the interface."
----
-
-In IoT applications, data is collected for many purposes such as intelligent control, business analysis and device monitoring. Because of changes in business or functional requirements, or changes in device hardware, the application logic and even the data collected may change. To provide the flexibility needed in such cases and in a rapidly changing IoT landscape, TDengine, starting from version 2.2.0.0, provides a series of interfaces for the schemaless writing method. These interfaces eliminate the need to create super tables and subtables in advance by automatically creating the storage structure corresponding to the data as the data is written to the interface. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
-
-The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL, and you can write data to them via SQL statements as well. Note that the names of tables created by schemaless writing are generated by fixed mapping rules from tag values, so the names are neither descriptive nor easily readable.
-
-## Schemaless Writing Line Protocol
-
-TDengine's schemaless writing line protocol supports InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
-
-For the standard writing protocols of InfluxDB and OpenTSDB, please refer to the documentation of each protocol. The following describes TDengine's extensions, which are based primarily on InfluxDB's line protocol and allow users to control the (super table) schema at a finer granularity.
-
-With the following formatting conventions, schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
-
-```json
-measurement,tag_set field_set timestamp
-```
-
-where :
-
-- measurement will be used as the data table name. It is separated from tag_set by a comma.
-- tag_set will be used as tag data in the format `tag_key1=tag_value1,tag_key2=tag_value2`, i.e. multiple tags are separated by commas. It is separated from field_set by a space.
-- field_set will be used as normal column data in the format `field_key1=field_value1,field_key2=field_value2`, again with multiple normal columns separated by commas. It is separated from the timestamp by a space.
-- The timestamp is the primary key corresponding to the data in this row.
-
-All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
-
-In the schemaless writing data line protocol, each data item in the field_set needs to be described with its data type. Let's explain in detail:
-
-- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
-- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
-- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a preceding backslash (\\). (All refer to ASCII characters.)
-- Numeric types are distinguished by their suffix, as shown in the following table.
-
-| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
-| -------- | -------- | ------------ | -------------- |
-| 1 | none or f64 | double | 8 |
-| 2 | f32 | float | 4 |
-| 3 | i8 | TinyInt | 1 |
-| 4 | i16 | SmallInt | 2 |
-| 5 | i32 | Int | 4 |
-| 6 | i64 or i | Bigint | 8 |
-
-- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
-
-For example, the following data row writes one row to the super table named `st`: tag t1 is "3" (NCHAR), tag t2 is "4" (NCHAR), tag t3 is "t3" (NCHAR); column c1 is 3 (BIGINT), c2 is false (BOOL), c3 is "passit" (BINARY), c4 is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000.
-
-```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
-```
-
-Note that if the wrong case is used for the data type suffix, or if the wrong data type is specified for the data, an error may be reported and the data may fail to be written.
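-
-Besides the native C API, the same line can also be written over HTTP through taosAdapter's InfluxDB-compatible endpoint. This is a minimal sketch, assuming taosAdapter is running on localhost:6041 with the default credentials and that the target database `test` already exists:
-
-```bash
-curl -i -d 'st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000' \
-  'http://localhost:6041/influxdb/v1/write?db=test&u=root&p=taosdata'
-```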
-
-## Main processing logic for schemaless writing
-
-Schemaless writes process row data according to the following principles.
-
-1. The subtable name is generated with the following rules: first, the measurement name and the tag keys and values are combined into the following string:
-
-```json
-"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
-```
-
-Note that tag_key1, tag_key2 are not in the original order entered by the user but sorted in ascending lexicographic order by tag name. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
-After sorting, the MD5 hash value "md5_val" of the string is calculated. The result is then combined with the fixed prefix to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has.
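-
-As an illustrative sketch (not the exact internal implementation), the subtable name for the example row above could be reproduced like this:
-
-```bash
-# Tags sorted by key, concatenated with the measurement, then MD5-hashed
-printf 't_%s\n' "$(printf '%s' 'st,t1=3,t2=4,t3=t3' | md5sum | cut -d ' ' -f 1)"
-```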
-
-2. If the super table obtained by parsing the line protocol does not exist, it is created.
-3. If the subtable obtained by parsing the line protocol does not exist, it is created using the subtable name determined in step 1.
-4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
-5. If some tag columns or regular columns of the super table are not given values in a data row, the values of these columns are set to NULL.
-6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
-7. If the specified subtable already exists and a tag value in the data row differs from the saved value, the value in the latest data row overwrites the old tag value.
-8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
-
-:::tip
-All processing logic of schemaless writing still follows TDengine's underlying restrictions on data structures, such as the limit that the total length of each data row cannot exceed 48 KB. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
-:::
-
-## Time resolution recognition
-
-Three specified modes are supported in the schemaless writing process, as follows:
-
-| **Serial** | **Value** | **Description** |
-| -------- | ------------------- | ------------------------------- |
-| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol |
-| 2 | SML_TELNET_PROTOCOL | OpenTSDB Text Line Protocol |
-| 3 | SML_JSON_PROTOCOL | JSON protocol format |
-
-In the SML_LINE_PROTOCOL parsing mode, the user is required to specify the time resolution of the input timestamp. The available time resolutions are shown in the following table.
-
-| **Serial Number** | **Time Resolution Definition** | **Meaning** |
-| -------- | --------------------------------- | -------------- |
-| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
-| 2 | TSDB_SML_TIMESTAMP_HOURS | hours |
-| 3 | TSDB_SML_TIMESTAMP_MINUTES | minutes |
-| 4 | TSDB_SML_TIMESTAMP_SECONDS | seconds |
-| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | milliseconds |
-| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | microseconds |
-| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | nanoseconds |
-
-In SML_TELNET_PROTOCOL and SML_JSON_PROTOCOL modes, the time precision is determined based on the length of the timestamp (in the same way as the OpenTSDB standard operation), and the user-specified time resolution is ignored at this point.
-
-## Data schema mapping rules
-
-This section describes how line protocol data is mapped to data with a schema. The mapping is as follows:
-
-- The measurement in each line protocol row is mapped to the super table name.
-- The tag names in tag_set are the tag names in the data schema.
-- The names in field_set are the column names.
-
-The following data is used as an example to illustrate the mapping rules.
-
-```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
-```
-
-The row data mapping generates a super table `st` that contains three tags of type NCHAR: t1, t2, t3, and five data columns: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (double). The mapping becomes the following SQL statement.
-
-```json
-create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 double) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
-```
-
-## Data schema change handling
-
-This section describes the impact on the data schema for different line protocol data writing cases.
-
-When a field has been written with an explicitly identified type using the line protocol, subsequent changes to that field's type definition will result in an explicit data schema error, i.e., the write API will report an error. As shown below:
-
-```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
-```
-
-The data type mapping in the first row defines column c4 as DOUBLE, but the data in the second row is declared as BIGINT by the numeric suffix, which triggers a parsing error with schemaless writing.
-
-If an earlier line protocol row declares a data column as BINARY and a subsequent row requires a longer BINARY length, the super table schema is changed accordingly.
-
-```json
-st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
-```
-
-Parsing the first line declares column c5 as a BINARY(4) field. The second line also parses c5 as a BINARY column, but its value is 6 characters wide, so the width of the BINARY field must be increased to accommodate the new string.
-
-```json
-st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
-st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
-```
-
-Compared with the first row, the second row contains an additional column c6 with the value "passit", so a column c6 of type BINARY(6) is automatically added to the super table.
-
-## Write integrity
-
-TDengine provides idempotency guarantees for data writing, i.e., you can safely retry the API call when a write reports an error. However, it does not provide atomicity guarantees for writing multiple rows of data: when writing numerous rows in one batch, some rows may be written successfully while others fail.
-
-## Error code
-
-If an error in the data itself occurs during the schemaless writing process, the application gets the `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error, which indicates that the error occurred during writing. The other error codes are consistent with those of TDengine, and the specific cause of the error can be obtained via `taos_errstr()`.
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
deleted file mode 100644
index b51d5a8d904601802efec0db5847203b72fa2668..0000000000000000000000000000000000000000
--- a/docs-en/20-third-party/01-grafana.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-sidebar_label: Grafana
-title: Grafana
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process does not require any code development, and you can visualize the contents of TDengine data tables on a dashboard.
-
-You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
-
-## Prerequisites
-
-In order for Grafana to add the TDengine data source successfully, the following preparations are required:
-
-1. The TDengine cluster is deployed and functioning properly
-2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
-
-Record these values:
-
-- TDengine REST API url: `http://tdengine.local:6041`.
-- TDengine cluster authorization, with user + password.
-
-## Installing Grafana
-
-TDengine currently supports Grafana versions 7.5 and above. Users can go to the Grafana official website to download the installation package and execute the installation according to the current operating system. The download address is as follows: <https://grafana.com/grafana/download>.
-
-## Configuring Grafana
-
-### Install Grafana Plugin and Configure Data Source
-
-
-
-
-Set the url and authorization environment variables by `export` or a [`.env`(dotenv) file](https://hexdocs.pm/dotenvy/dotenv-file-format.html):
-
-```sh
-export TDENGINE_API=http://tdengine.local:6041
-# user + password
-export TDENGINE_USER=user
-export TDENGINE_PASSWORD=password
-
-# Other useful variables
-# - Whether to install the TDengine data source, default is true
-export TDENGINE_DS_ENABLED=false
-# - Data source name to be created, default is TDengine
-export TDENGINE_DS_NAME=TDengine
-# - Data source organization id, default is 1
-export GF_ORG_ID=1
-# - Data source is editable in admin ui or not, default is 0 (false)
-export TDENGINE_EDITABLE=1
-```
-
-Run `install.sh`:
-
-```sh
-bash -c "$(curl -fsSL https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)"
-```
-
-With this script, the TDengine data source plugin is installed and the Grafana data source is created automatically with Grafana provisioning configurations. Save the script and type `./install.sh --help` for the full usage of the script.
-
-And then, restart the Grafana service and open Grafana in a web browser, usually at `http://localhost:3000`.
-
-
-
-
-Follow the installation steps in [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) with the [`grafana-cli` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) for plugin installation.
-
-```bash
-grafana-cli plugins install tdengine-datasource
-# with sudo
-sudo -u grafana grafana-cli plugins install tdengine-datasource
-```
-
-Alternatively, you can manually download the .zip file from [GitHub](https://github.com/taosdata/grafanaplugin/tags) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and unpack it into your grafana plugins directory.
-
-```bash
-GF_VERSION=3.2.2
-# from GitHub
-wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
-# from Grafana
-wget -O tdengine-datasource-$GF_VERSION.zip https://grafana.com/api/plugins/tdengine-datasource/versions/$GF_VERSION/download
-```
-
-Taking CentOS 7.2 as an example, extract the plugin package into the /var/lib/grafana/plugins directory and restart Grafana.
-
-```bash
-sudo unzip tdengine-datasource-$GF_VERSION.zip -d /var/lib/grafana/plugins/
-```
-
-If Grafana is running in a Docker environment, the TDengine plugin can be automatically installed and set up using the following environment variable settings:
-
-```bash
-GF_INSTALL_PLUGINS=tdengine-datasource
-```
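-
-For example, a throwaway Grafana container with the plugin preinstalled can be started as follows (a minimal sketch using the official Grafana image):
-
-```bash
-docker run -d --name grafana -p 3000:3000 \
-  -e "GF_INSTALL_PLUGINS=tdengine-datasource" \
-  grafana/grafana
-```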
-
-Now users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
-
-
-
-Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-
-
-
-Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-
-
-
-- Host: the address and port of the server providing the TDengine REST service (offered by taosd before 2.4 and by taosAdapter since 2.4); the REST service port is 6041, so the default is `http://localhost:6041`.
-- User: TDengine user name.
-- Password: TDengine user password.
-
-Click `Save & Test` to test. You should see a success message if the test worked.
-
-
-
-
-
-
-### Create Dashboard
-
-Go back to the main interface to create a dashboard and click Add Query to enter the panel query page:
-
-
-
-As shown above, select the `TDengine` data source in the `Query` panel and enter the corresponding SQL in the query box below to query.
-
-- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where `from`, `to` and `interval` are built-in variables of the TDengine plugin indicating the time range and interval fetched from the Grafana panel. In addition to the built-in variables, custom template variables are also supported.
-- ALIAS BY: This allows you to set the current query alias.
-- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
-
-Following the default prompts, query the average system memory usage for the specified interval on the server where the current TDengine instance is deployed, as shown below.
-
-
-
-> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
-
-### Importing the Dashboard
-
-You can install TDinsight dashboard in data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for TDengine cluster. The dashboard is published in Grafana as [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
-
-For more dashboards using the TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a selection:
-
-- [15146](https://grafana.com/grafana/dashboards/15146): Monitor multiple TDengine clusters.
-- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
-- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
-- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
deleted file mode 100644
index 7c6b83cf99dd733f9e9a86435e079a2daee00ad9..0000000000000000000000000000000000000000
--- a/docs-en/20-third-party/09-emq-broker.md
+++ /dev/null
@@ -1,140 +0,0 @@
----
-sidebar_label: EMQX Broker
-title: EMQX Broker writing
----
-
-MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker. You can write MQTT data directly to TDengine without any code; you only need to set up "rules" in the EMQX Dashboard to create a simple configuration. EMQX supports saving data to TDengine by sending data to a web service, and the Enterprise Edition also provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
-
-## Prerequisites
-
-The following preparations are required for EMQX to add TDengine data sources correctly.
-- The TDengine cluster is deployed and working properly
-- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
-- If you use the emulated writers described later, you need to install the appropriate version of Node.js. V12 is recommended.
-
-## Install and start EMQX
-
-Depending on the current operating system, users can download the installation package from the [EMQX official website](https://www.emqx.io/downloads) and execute the installation. After installation, use `sudo emqx start` or `sudo systemctl start emqx` to start the EMQX service.
-
-
-## Create Database and Table
-
-In this step, we create the appropriate database and table schema in TDengine for receiving MQTT data. Open the TDengine CLI and execute the SQL below:
-
-```sql
-CREATE DATABASE test;
-USE test;
-CREATE TABLE sensor_data (ts TIMESTAMP, temperature FLOAT, humidity FLOAT, volume FLOAT, pm10 FLOAT, pm25 FLOAT, so2 FLOAT, no2 FLOAT, co FLOAT, sensor_id NCHAR(255), area TINYINT, coll_time TIMESTAMP);
-```
-
-Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQX + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html). Subsequent operations are also carried out with this blog scenario. Please modify the schema according to your actual application scenario.
-
-## Configuring EMQX Rules
-
-Since the configuration interface of EMQX differs from version to version, here is v4.4.3 as an example. For other versions, please refer to the corresponding official documentation.
-
-### Login EMQX Dashboard
-
-Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
-
-
-
-### Creating Rule
-
-Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
-
-
-
-### Edit SQL fields
-
-Copy the SQL below and paste it into the SQL edit area:
-
-```sql
-SELECT
- payload
-FROM
- "sensor/data"
-```
-
-
-
-### Add "action handler"
-
-
-
-### Add "Resource"
-
-
-
-Select "Data to Web Service" and click the "New Resource" button.
-
-### Edit "Resource"
-
-Select "WebHook" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-
-
-
-### Edit "action"
-
-Edit the resource configuration to add a key/value pair for Authorization. If you use the default TDengine username and password, the value of the Authorization key is:
-```
-Basic cm9vdDp0YW9zZGF0YQ==
-```
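-
-This value is simply the Base64 encoding of `<user>:<password>`, so you can generate it for any credentials:
-
-```bash
-# Encodes the default credentials; prints cm9vdDp0YW9zZGF0YQ==
-printf 'root:taosdata' | base64
-```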
-
-Please refer to the [TDengine REST API documentation](/reference/rest-api/) for details on authorization.
-
-Enter the rule engine replacement template in the message body:
-
-```sql
-INSERT INTO test.sensor_data VALUES(
- now,
- ${payload.temperature},
- ${payload.humidity},
- ${payload.volume},
- ${payload.PM10},
- ${payload.pm25},
- ${payload.SO2},
- ${payload.NO2},
- ${payload.CO},
- '${payload.id}',
- ${payload.area},
- ${payload.ts}
-)
-```
-
-
-
-Finally, click the "Create" button at the bottom left corner to save the rule.
-
-## Compose program to mock data
-
-```javascript
-{{#include docs-examples/other/mock.js}}
-```
-
-Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test to avoid overwhelming the hardware with too many concurrent clients.
-
-
-
-## Execute tests to simulate sending MQTT data
-
-```
-npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
-node mock.js
-```
-
-
-
-## Verify that EMQX is receiving data
-
-Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
-
-
-
-## Verify that data is written to TDengine
-
-Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-
-
-
-Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
-Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/10-hive-mq-broker.md b/docs-en/20-third-party/10-hive-mq-broker.md
deleted file mode 100644
index 333e00fa0e9b724ffbb067a83ad07d0b846b1a23..0000000000000000000000000000000000000000
--- a/docs-en/20-third-party/10-hive-mq-broker.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-sidebar_label: HiveMQ Broker
-title: HiveMQ Broker writing
----
-
-[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly aimed at enterprise machine-to-machine (M2M) communication and internal transport, with a focus on scalability, manageability, and security. HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via the TDengine extension for HiveMQ. Please refer to the [HiveMQ extension - TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md) for details on how to use it.
\ No newline at end of file
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
deleted file mode 100644
index 4d8bed4d2d6b3a0404e10213aeab599767325cc2..0000000000000000000000000000000000000000
--- a/docs-en/21-tdinternal/01-arch.md
+++ /dev/null
@@ -1,287 +0,0 @@
----
-sidebar_label: Architecture
-title: Architecture
----
-
-## Cluster and Primary Logic Unit
-
-The design of TDengine is based on the assumption that no hardware or software system is 100% reliable and that no single node can provide sufficient computing and storage resources to process massive data. Therefore, since day one, TDengine has been designed as a natively distributed system with a high-reliability architecture. Hardware or software failure of a single server, or even multiple servers, will not affect the availability and reliability of the system. At the same time, through node virtualization and automatic load-balancing technology, TDengine can make the most efficient use of computing and storage resources in heterogeneous clusters to significantly reduce hardware resource needs.
-
-### Primary Logic Unit
-
-Logical structure diagram of TDengine's distributed architecture is as follows:
-
-
-
-Figure 1: TDengine architecture diagram
-
-A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
-
-**Physical node (pnode)**: A pnode is a computer that runs independently and has its own computing, storage and network capabilities. It can be a physical machine, virtual machine, or Docker container installed with OS. The physical node is identified by its configured FQDN (Fully Qualified Domain Name). TDengine relies entirely on FQDN for network communication. If you don't know about FQDN, please check [wikipedia](https://en.wikipedia.org/wiki/Fully_qualified_domain_name).
-
-**Data node (dnode):** A dnode is a running instance of the TDengine server-side execution code taosd on a physical node (pnode). A working system must have at least one data node. A dnode contains zero to multiple logical virtual nodes (VNODE) and zero or at most one logical management node (mnode). The unique identification of a dnode in the system is determined by the instance's End Point (EP). EP is a combination of FQDN (Fully Qualified Domain Name) of the physical node where the dnode is located and the network port number (Port) configured by the system. By configuring different ports, a physical node (a physical machine, virtual machine or container) can run multiple instances or have multiple data nodes.
-
-**Virtual node (vnode)**: To better support data sharding, load balancing and prevent data from overheating or skewing, data nodes are virtualized into multiple virtual nodes (vnode, V2, V3, V4, etc. in the figure). Each vnode is a relatively independent work unit, which is the basic unit of time-series data storage and has independent running threads, memory space and persistent storage path. A vnode contains a certain number of tables (data collection points). When a new table is created, the system checks whether a new vnode needs to be created. The number of vnodes that can be created on a data node depends on the capacity of the hardware of the physical node where the data node is located. A vnode belongs to only one DB, but a DB can have multiple vnodes. In addition to the stored time-series data, a vnode also stores the schema and tag values of the included tables. A virtual node is uniquely identified in the system by the EP of the data node and the VGroup ID to which it belongs and is created and managed by the management node.
-
-**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 5) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The master/slave mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the master. The creation of mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, which is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
-
-**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed in a master/slave mechanism. Write operations can only be performed on the master vnode, and then replicated to slave vnodes, thus ensuring that one single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `“replica”` when creating a DB, and the default is 1. Using the multi-replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns a system unique ID, aka VGroup ID. If two virtual nodes have the same vnode group ID, it means that they belong to the same group and the data is backed up to each other. The number of virtual nodes in a virtual node group can be dynamically changed, allowing only one, that is, no data replication. VGroup ID is never changed. Even if a virtual node group is deleted, its ID will not be reused.
-
-**TAOSC**: TAOSC is the driver provided by TDengine to applications. It is responsible for the interaction between applications and the cluster, and provides the native interface for the C/C++ language. It is also embedded in the JDBC, C#, Python, Go, Node.js language connection libraries. Applications interact with the whole cluster through TAOSC instead of directly connecting to data nodes in the cluster. This module is responsible for obtaining and caching metadata; forwarding requests for insertion, query, etc. to the correct data node; and, when returning results to the application, performing the final level of aggregation, sorting, filtering and other operations. For the JDBC, C/C++/C#/Python/Go/Node.js interfaces, this module runs on the physical node where the application is located. At the same time, in order to support the fully distributed RESTful interface, TAOSC has a running instance on each dnode of the TDengine cluster.
-
-### Node Communication
-
-**Communication mode**: The communication among each data node of TDengine system, and among the client driver and each data node is carried out through TCP/UDP. Considering an IoT scenario, the data writing packets are generally not large, so TDengine uses UDP in addition to TCP for transmission, because UDP is more efficient and is not limited by the number of connections. TDengine implements its own timeout, retransmission, confirmation and other mechanisms to ensure reliable transmission of UDP. For packets with a data volume of less than 15K, UDP is adopted for transmission, and TCP is automatically adopted for transmission of packets with a data volume of more than 15K or query operations. At the same time, TDengine will automatically compress/decompress the data, digitally sign/authenticate the data according to the configuration and data packet. For data replication among data nodes, only TCP is used for data transportation.
-
-**FQDN configuration:** A data node has one or more FQDNs, which can be specified in the system configuration file taos.cfg with the parameter “fqdn”. If it is not specified, the system will automatically use the hostname of the computer as its FQDN. If the node is not configured with FQDN, you can directly set the configuration parameter “fqdn” of the node to its IP address. However, IP is not recommended because IP address may be changed, and once it changes, the cluster will not work properly. The EP (End Point) of a data node consists of FQDN + Port. With FQDN, it is necessary to ensure the DNS service is running, or hosts files on nodes are configured properly.
-
-**Port configuration**: The external port of a data node is determined by the system configuration parameter “serverPort” in TDengine, and the port for internal communication of cluster is serverPort+5. The data replication operation among data nodes in the cluster also occupies a TCP port, which is serverPort+10. In order to support multithreading and efficient processing of UDP data, each internal and external UDP connection needs to occupy 5 consecutive ports. Therefore, the total port range of a data node will be serverPort to serverPort + 10, for a total of 11 TCP/UDP ports. To run the system, make sure that the firewall keeps these ports open. Each data node can be configured with a different serverPort.
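-
-For example, with the default "serverPort" of 6030, the port range is 6030-6040. A minimal sketch for opening it with firewalld (adjust to your distribution's firewall tooling):
-
-```bash
-sudo firewall-cmd --permanent --add-port=6030-6040/tcp
-sudo firewall-cmd --permanent --add-port=6030-6040/udp
-sudo firewall-cmd --reload
-```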
-
-**Cluster external connection**: A TDengine cluster can accommodate a single, multiple or even thousands of data nodes. The application only needs to initiate a connection to any data node in the cluster. The network parameter required for the connection is the End Point (FQDN plus configured port number) of a data node. When starting the CLI application taos, the FQDN of the data node can be specified through the option `-h`, and the configured port number through `-P`. If the port is not specified, the system configuration parameter "serverPort" of TDengine will be used.
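-
-A minimal sketch of such a connection, assuming a cluster node at the hypothetical FQDN `h1.taosdata.com` listening on the default port:
-
-```bash
-taos -h h1.taosdata.com -P 6030
-```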
-
-**Inter-cluster communication**: Data nodes connect with each other through TCP/UDP. When a data node starts, it will obtain the EP information of the dnode where the mnode is located, and then establish a connection with the mnode in the system to exchange information. There are three steps to obtain EP information of the mnode:
-
-1. Check whether the mnodeEpList file exists, if it does not exist or cannot be opened normally to obtain EP information of the mnode, skip to the second step;
-2. Check the system configuration file taos.cfg to obtain the node configuration parameters "firstEp" and "secondEp" (the nodes specified by these two parameters can be normal nodes without mnode; in this case, the node will be redirected to the mnode when connected). If these two configuration parameters do not exist in taos.cfg or are invalid, skip to the third step;
-3. Set your own EP as a mnode EP and run it independently. After obtaining the mnode EP list, the data node initiates the connection. It will successfully join the working cluster after connection. If not successful, it will try the next item in the mnode EP list. If all attempts are made, but the connection still fails, sleep for a few seconds before trying again.
-
-**The choice of MNODE**: TDengine logically has a management node, but there is no separate execution code; the server side has only one set of execution code, taosd. So which data node will be the management node? This is determined automatically by the system without any manual intervention. The principle is as follows: when a data node starts, it checks its own End Point against the obtained mnode EP List. If its EP exists in the list, the data node starts the mnode module and becomes an mnode; otherwise, the mnode module is not started. During system operation, due to load balancing, downtime and other reasons, the mnode may migrate to a new dnode, completely transparently and without manual intervention. The modification of configuration parameters is a decision made by the mnode itself according to resource usage.
-
-**Add new data nodes:** After the system has a data node, it has become a working system. There are two steps to add a new node into the cluster.
-- Step 1: Connect to an existing working data node using the TDengine CLI, and then add the End Point of the new data node with the command "create dnode";
-- Step 2: In the system configuration parameter file taos.cfg of the new data node, set the "firstEp" and "secondEp" parameters to the EPs of any two data nodes in the existing cluster. Please refer to the user tutorial for detailed steps. In this way, the cluster will be established step by step; a combined sketch of both steps follows this list.
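-
-A combined sketch of both steps, assuming the hypothetical FQDNs `h1.taos.com` (an existing node) and `h2.taos.com` (the new node), both on the default port:
-
-```bash
-# Step 1: on any existing node, register the new dnode
-taos -s 'CREATE DNODE "h2.taos.com:6030"'
-# Step 2: on the new node, point firstEp (and secondEp) at existing nodes, then start taosd
-echo 'firstEp h1.taos.com:6030' | sudo tee -a /etc/taos/taos.cfg
-sudo systemctl start taosd
-```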
-
-**Redirection**: Regardless of dnode or TAOSC, the connection to the mnode is initiated first. The mnode is automatically created and maintained by the system, so the user does not know which dnode is running the mnode. TDengine only requires a connection to any working dnode in the system. Because any running dnode maintains the currently running mnode EP List, when it receives a connection request from a newly started dnode or TAOSC and it is not an mnode itself, it replies with the mnode EP List. After receiving this list, TAOSC or the newly started dnode tries to establish the connection again. When the mnode EP List changes, each data node quickly obtains the latest list and notifies TAOSC through messaging interaction among nodes.
-
-### A Typical Data Writing Process
-
-To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
-
-
-
-Figure 2: Typical process of TDengine
-
-1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
-2. TAOSC checks the cache to see if meta data exists for the table. If it does, it goes straight to Step 4. If not, TAOSC sends a get meta-data request to mnode.
-3. Mnode returns the meta-data of the table to TAOSC. Meta-data contains the schema of the table, and also the vgroup information to which the table belongs (the vnode ID and the End Point of the dnode where the table belongs. If the number of replicas is N, there will be N groups of End Points). If TAOSC does not receive a response from the mnode for a long time, and there are multiple mnodes, TAOSC will send a request to the next mnode.
-4. TAOSC initiates an insert request to master vnode.
-5. After vnode inserts the data, it gives a reply to TAOSC, indicating that the insertion is successful. If TAOSC doesn't get a response from vnode for a long time, TAOSC will treat this node as offline. In this case, if there are multiple replicas of the inserted database, TAOSC will issue an insert request to the next vnode in vgroup.
-6. TAOSC notifies APP that writing is successful.
-
-For Steps 2 and 3, when TAOSC starts, it does not know the End Point of the mnode, so it directly initiates a request to the configured serving End Point of the cluster. If the dnode that receives the request does not have an mnode configured, it replies with the mnode EP list, so that TAOSC re-issues the request for metadata to the EP of another mnode.
-
-For Steps 4 and 5, without caching, TAOSC cannot recognize the master in the virtual node group, so it assumes that the first vnode is the master and sends the request to it. If this vnode is not the master, it replies with the actual master as a new target to which TAOSC shall send the request. Once a response of successful insertion is obtained, TAOSC caches the information of the master node.
-
-The above describes the process of inserting data. The processes of querying and computing are the same. TAOSC encapsulates and hides all these complicated processes, and it is transparent to applications.
-
-Through TAOSC caching mechanism, mnode needs to be accessed only when a table is accessed for the first time, so mnode will not become a system bottleneck. However, because schema and vgroup may change (such as load balancing), TAOSC will interact with mnode regularly to automatically update the cache.
-
-## Storage Model and Data Partitioning/Sharding
-
-### Storage Model
-
-The data stored by TDengine includes collected time-series data, metadata related to database and tables, tag data, etc. All of the data is specifically divided into three parts:
-
-- Time-series data: stored in vnode and composed of data, head and last files. The amount of data is large and query amount depends on the application scenario. Out-of-order writing is allowed, but delete operation is not supported for the time being, and update operation is only allowed when database “update” parameter is set to 1. By adopting the model with **one table for each data collection point**, the data of a given time period is continuously stored, and the writing against one single table is a simple appending operation. Multiple records can be read at one time, thus ensuring the best performance for both insert and query operations of a single data collection point.
-- Tag data: meta files stored in vnode. Four standard operations of create, read, update and delete are supported. The amount of data is not large. If there are N tables, there are N records, so all can be stored in memory. To make tag filtering efficient, TDengine supports multi-core and multi-threaded concurrent queries. As long as the computing resources are sufficient, even with millions of tables, the tag filtering results will return in milliseconds.
-- Metadata: stored in mnode and includes system node, user, DB, table schema and other information. Four standard operations of create, delete, update and read are supported. The amount of this data is not large and can be stored in memory. Moreover, the number of queries is not large because of client cache. Even though TDengine uses centralized storage management, because of the architecture, there is no performance bottleneck.
-
-Compared with the typical NoSQL storage model, TDengine stores tag data and time-series data completely separately. This has two major advantages:
-
-- Reduces the redundancy of tag data storage significantly. General NoSQL database or time-series database adopts K-V (key-value) storage, in which the key includes a timestamp, a device ID and various tags. Each record carries these duplicated tags, so storage space is wasted. Moreover, if the application needs to add, modify or delete tags on historical data, it has to traverse the data and rewrite them again, which is an extremely expensive operation.
-- Aggregate data efficiently between multiple tables: when aggregating data between multiple tables, it first finds the tables which satisfy the filtering conditions, and then finds the corresponding data blocks of these tables. This greatly reduces the data sets to be scanned which in turn improves the aggregation efficiency. Moreover, tag data is managed and maintained in a full-memory structure, and tag data queries in tens of millions can return in milliseconds.
-
-### Data Sharding
-
-For large-scale data management, to achieve scale-out, it is generally necessary to adopt a Partitioning or Sharding strategy. TDengine implements data sharding via vnode, and time-series data partitioning via one data file for a time range.
-
-VNode (Virtual Data Node) is responsible for providing writing, query and computing functions for collected time-series data. To facilitate load balancing, data recovery and support heterogeneous environments, TDengine splits a data node into multiple vnodes according to its computing and storage resources. The management of these vnodes is done automatically by TDengine and is completely transparent to the application.
-
-For a single data collection point, regardless of the amount of data, a vnode (or vnode group, if the number of replicas is greater than 1) has enough computing resource and storage resource to process (if a 16-byte record is generated per second, the original data generated in one year will be less than 0.5 G). So TDengine stores all the data of a table (a data collection point) in one vnode instead of distributing the data to two or more dnodes. Moreover, a vnode can store data from multiple data collection points (tables), and the upper limit of the tables’ quantity for a vnode is one million. By design, all tables in a vnode belong to the same DB. On a data node, unless specially configured, the number of vnodes owned by a DB will not exceed the number of system cores.
-
-When creating a DB, the system does not allocate resources immediately. However, when creating a table, the system checks whether there is an allocated vnode with free table space. If so, the table is created in that vacant vnode immediately. If not, the system creates a new vnode on a dnode chosen from the cluster according to the current workload, and then creates the table in it. If a DB has multiple replicas, the system creates not a single vnode, but a vgroup (virtual data node group). The system has no limit on the number of vnodes; it is just limited by the computing and storage resources of the physical nodes.
-
-The metadata of each table (including schema, tags, etc.) is also stored in the vnode instead of centrally in the mnode. In effect, this is sharding of metadata, which allows efficient, parallel tag filtering operations.
-
-### Data Partitioning
-
-In addition to vnode sharding, TDengine partitions the time-series data by time range. Each data file contains time-series data for only one time range, whose length is determined by the database configuration parameter `days`. Partitioning by time range also makes it easy to implement data retention policies efficiently: once a data file exceeds the specified number of days (system configuration parameter `keep`), it is automatically deleted. Moreover, different time ranges can be stored in different paths and storage media, facilitating tiered storage. Cold and hot data can be stored on different storage media to significantly reduce storage costs.
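-
-As a sketch (values are hypothetical), a database with 10-day data files and a one-year retention could be declared as follows:
-
-```sql
--- each data file covers 10 days; files older than 365 days are purged
-CREATE DATABASE power DAYS 10 KEEP 365;
-```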
-
-In general, **TDengine splits big data by vnode and time range in two dimensions** to manage the data efficiently with horizontal scalability.
-
-### Load Balancing
-
-Each dnode regularly reports its status (including hard disk space, memory size, CPU, network, number of virtual nodes, etc.) to the mnode (virtual management node) so that the mnode knows the status of the entire cluster. Based on the overall status, when the mnode finds a dnode is overloaded, it will migrate one or more vnodes to other dnodes. During the process, TDengine services keep running and the data insertion, query and computing operations are not affected.
-
-If the mnode has not received a dnode's status for a period of time, the dnode is treated as offline. If it stays offline beyond the time configured by the parameter `offlineThreshold`, the dnode will be forcibly removed from the cluster by the mnode. If the replica number of the vnodes on this dnode is greater than one, the system will automatically create new replicas on other dnodes to maintain the replica count. If there is an mnode on this dnode and the number of mnode replicas is greater than one, the system will likewise automatically create a new mnode on another dnode.
-
-When new data nodes are added to the cluster, with new computing and storage resources, the system will automatically start the load balancing process.
-
-The load balancing process does not require any manual intervention, and it is transparent to the application. **Note: load balancing is controlled by the parameter `balance`, which turns automatic load balancing on or off.**
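-
-As a sketch, automatic load balancing could be switched off in taos.cfg as follows (assuming 1, i.e. enabled, is the default):
-
-```
-# 0 turns automatic load balancing off, 1 turns it on
-balance 0
-```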
-
-## Data Writing and Replication Process
-
-If a database has N replicas, a virtual node group has N virtual nodes. But only one is the master and all others are slaves. When the application writes a new record to the system, only the master vnode can accept the write request. If a slave vnode receives a write request, the system notifies taosc to redirect the request.
-
-### Master vnode Writing Process
-
-The master vnode uses the following writing process:
-
-
-
-Figure 3: TDengine Master writing process
-
-1. The master vnode receives the application's data insertion request, verifies it, and moves to the next step;
-2. If the system configuration parameter `walLevel` is greater than 0, the vnode will write the original request packet into the database log file WAL. If walLevel is set to 2 and fsync is set to 0, TDengine writes the WAL data to disk immediately, so that even if the system crashes, all data can be recovered from the log file;
-3. If there are multiple replicas, the vnode will forward the data packet to the slave vnodes in the same virtual node group, and the forwarded packet carries a version number along with the data;
-4. Write into memory and add the record to the skip list;
-5. Master vnode returns a confirmation message to the application, indicating a successful write.
-6. If any of steps 2, 3 or 4 fails, the error is returned directly to the application.
-
-### Slave vnode Writing Process
-
-For a slave vnode, the write process is as follows:
-
-
-
-Figure 4: TDengine Slave Writing Process
-
-1. Slave vnode receives a data insertion request forwarded by Master vnode;
-2. If the system configuration parameter `walLevel` is greater than 0, the vnode will write the original request packet into the database log file WAL. If walLevel is set to 2 and fsync is set to 0, TDengine writes the WAL data to disk immediately, so that even if the system crashes, all data can be recovered from the log file;
-3. Write into memory and add the record to the skip list.
-
-Compared with the master vnode, a slave vnode has no forwarding step and no reply confirmation step, i.e. two steps fewer. But writing into memory and the WAL is exactly the same.
-
-### Remote Disaster Recovery and IDC (Internet Data Center) Migration
-
-As discussed above, TDengine writes using a master/slave process and adopts asynchronous replication for data synchronization. This method can greatly improve write performance, with no obvious impact from network delay. By configuring an IDC and rack number for each physical node, it can be ensured that for a virtual node group, the virtual nodes reside on physical nodes in different IDCs and different racks, thus implementing remote disaster recovery without additional tools.
-
-On the other hand, TDengine supports dynamic modification of the replica number. Once the number of replicas increases, the newly added virtual nodes will immediately enter the data synchronization process. After synchronization is complete, added virtual nodes can provide services. In the synchronization process, master and other synchronized virtual nodes keep serving. With this feature, TDengine can provide IDC migration without service interruption. It is only necessary to add new physical nodes to the existing IDC cluster, and then remove old physical nodes after the data synchronization is completed.
-
-However, the asynchronous replication has a very low probability scenario where data may be lost. The specific scenario is as follows:
-
-1. Master vnode has finished its 5-step operations, confirmed the success of the write to the application, and then goes down;
-2. Slave vnode receives the write request, but fails before writing it to the log in step 2;
-3. Slave vnode will become the new master, thus losing one record.
-
-In theory, for asynchronous replication, there is no guarantee to prevent data loss. However, this is an extremely low probability scenario as described above.
-
-Note: Remote disaster recovery and no-downtime IDC migration are only supported by Enterprise Edition. **Hint: This function is not available yet**
-
-### Master/slave Selection
-
-Vnode maintains a version number. When memory data is persisted, the version number will also be persisted. For each data update operation, whether it is time-series data or metadata, this version number will be increased by one.
-
-When a vnode starts, its role (master or slave) is uncertain, and the data is in an unsynchronized state. It is necessary to establish TCP connections with the other virtual nodes in the virtual node group and exchange status, including the version and the node's own role. Through this exchange, the system carries out a master-selection process, with the following rules:
-
-1. If there's only one replica, it's always the master
-2. When all replicas are online, the one with the latest version becomes the master
-3. If more than half of the virtual nodes are online and one of them is in the slave role, that virtual node automatically becomes the master
-4. For rules 2 and 3, if multiple virtual nodes meet the requirement, the first vnode in the virtual node group list is selected as the master.
-
-### Synchronous Replication
-
-For scenarios with strong data consistency requirements, asynchronous data replication is not sufficient, because there is a small probability of data loss. So TDengine provides a synchronous replication mechanism. When creating a database, in addition to specifying the number of replicas, the user also needs to specify a new parameter `quorum`. If quorum is greater than one, every time the master forwards a message to the replicas, it must wait for quorum-1 confirmations before informing the application that the data has been successfully written. If quorum-1 confirmations are not received within a certain period of time, the master vnode returns an error to the application.
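-
-For example, a database requiring synchronous replication across three replicas might be created as follows (hypothetical database name):
-
-```sql
--- the master waits for quorum-1 = 1 confirmation before acknowledging a write
-CREATE DATABASE power REPLICA 3 QUORUM 2;
-```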
-
-With synchronous replication, system performance decreases and latency increases. Because metadata requires strong consistency, data synchronization between mnodes uses synchronous replication by default.
-
-## Caching and Persistence
-
-### Caching
-
-TDengine adopts a time-driven cache management strategy (First-In-First-Out, FIFO), also known as a write-driven cache management mechanism, as opposed to the read-driven caching mode (Least-Recently-Used, LRU): the most recently written data is placed directly in the system buffer, and when the buffer reaches a threshold, the earliest data is written to disk in batches. Generally speaking, for IoT data, users are most interested in the most recently generated data, i.e. the current state. TDengine takes full advantage of this by keeping the most recently arrived (current-state) data in the buffer.
-
-TDengine provides millisecond-level data retrieval through its query functions. Keeping recently arrived data directly in the buffer allows queries for the latest record or batch of records to be answered more quickly, improving overall query response. In this sense, **TDengine can be used as a data cache by setting appropriate configuration parameters, without deploying Redis or other additional cache systems**. This can effectively simplify the system architecture and reduce operational costs. Note that after TDengine is restarted, the system buffer is emptied: previously cached data is written to disk in batches and is not reloaded into the buffer. In this sense, TDengine's cache differs from dedicated key-value cache systems.
-
-Each vnode has its own independent memory, composed of multiple memory blocks of fixed size, and different vnodes are completely isolated. When writing data, similar to writing a log, data is appended sequentially to memory, but each vnode maintains its own skip list for quick lookup. When more than one third of the memory blocks are used, flushing to disk starts, and subsequent writes go to a new memory block. With this design, one third of the memory blocks in a vnode always keep the latest data, achieving both caching and quick lookup. The number of memory blocks in a vnode is determined by the configuration parameter `blocks`, and the size of each memory block by the configuration parameter `cache`.
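-
-As an illustrative sketch (hypothetical values), the per-vnode buffer could be sized at database creation time:
-
-```sql
--- 6 memory blocks of 16 MB each per vnode; one third always holds the latest data
-CREATE DATABASE power CACHE 16 BLOCKS 6;
-```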
-
-### Persistent Storage
-
-TDengine uses a data-driven method to flush data from the buffer to hard disk for persistent storage. When the cached data in a vnode reaches a certain volume, TDengine starts a disk-writing thread to write the cached data into persistent storage, so that subsequent writes are not blocked. TDengine opens a new database log file when the flush begins, and deletes the old log file after successful persistence, to avoid unlimited log growth.
-
-To make full use of the characteristics of time-series data, TDengine splits the data stored by a vnode in persistent storage into multiple files, each saving data for a fixed number of days, determined by the system configuration parameter `days`. Thus, for given start and end dates of a query, the data files to open can be located immediately without any index, which greatly speeds up read operations.
-
-For time-series data, there is generally a retention policy, determined by the system configuration parameter `keep`. Data files exceeding this number of days are automatically deleted by the system to free up storage space.
-
-Given the `days` and `keep` parameters, the total number of data files in a vnode is `keep/days`. The total number of data files should be neither too large nor too small; 10 to 100 is appropriate. Based on this principle, a reasonable `days` value can be set. In the current version, the parameter `keep` can be modified, but the parameter `days` cannot be changed once set.
-
-In each data file, the data of a table is stored in blocks. A table can have one or more data file blocks. Within a file block, data is stored in columns, occupying continuous storage space, which greatly improves reading speed. The size of a file block is determined by the system parameter `maxRows` (the maximum number of records per block), with a default value of 4096. This value should be neither too large nor too small: too large, and locating data for queries takes longer; too small, and the data block index becomes too large while compression efficiency drops, slowing reads.
-
-Each data file (with a .data postfix) has a corresponding index file (with a .head postfix). The index file has summary information about the data blocks of each table, recording the offset of each data block in the data file, the start and end time of the data, and other information that allows the system to locate the required data very quickly. Each data file also has a corresponding last file (with a .last postfix), designed to prevent data block fragmentation when flushing to disk. If the number of records written for a table does not reach the system configuration parameter `minRows` (minimum number of records per block), they are stored in the last file first. At the next flush, the newly written records are merged with the records in the last file and then written into the data file.
-
-When data is written to disk, the system decides whether to compress it based on the system configuration parameter `comp`. TDengine provides three compression options: no compression, one-stage compression and two-stage compression, corresponding to comp values of 0, 1 and 2 respectively. One-stage compression is carried out according to the type of data, using algorithms such as delta-delta coding, simple 8B, zig-zag coding and LZ4. Two-stage compression applies a general-purpose compression algorithm on top of one-stage compression, giving a higher compression ratio.
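-
-These block-level parameters can likewise be set when a database is created; a sketch using the defaults discussed above:
-
-```sql
--- at most 4096 and at least 100 records per file block, two-stage compression
-CREATE DATABASE power MAXROWS 4096 MINROWS 100 COMP 2;
-```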
-
-### Tiered Storage
-
-By default, TDengine saves all data in the /var/lib/taos directory, with the data files of each vnode in a different subdirectory. To expand the storage space, minimize file-reading bottlenecks and improve data throughput, TDengine allows the system parameter `dataDir` to be configured so that multiple mounted hard disks can be used at the same time. In addition, TDengine provides tiered data storage, i.e. storing data on different storage media according to the timestamps of the data files. For example, the latest data can be stored on SSD, data older than a week on local hard disk, and data older than four weeks on a network storage device. This reduces storage costs while ensuring efficient data access. The movement of data between storage media is done automatically by the system and is completely transparent to applications. Tiered storage is also configured through the system parameter `dataDir`.
-
-The format of `dataDir` is as follows:
-```
-dataDir data_path [tier_level]
-```
-
-Here data_path is the folder path of the mount point and tier_level is the storage tier of the media. The higher the storage tier, the older the data files it holds. Multiple hard disks can be mounted at the same storage tier, and data files on the same tier are distributed across all its disks. TDengine supports up to 3 storage tiers, so tier_level can be 0, 1 or 2. When configuring dataDir, exactly one mount path must be given without a tier_level; it is called the special mount disk (path). This mount path defaults to tier-0 storage media and contains special file links, which must not be removed, otherwise the written data will be irrecoverably damaged.
-
-Suppose a physical node has six mountable hard disks /mnt/disk1, /mnt/disk2, …, /mnt/disk6, where disk1 and disk2 are to be designated as tier-0 storage media, disk3 and disk4 as tier 1, and disk5 and disk6 as tier 2. Disk1 is the special mount disk; you can configure it in /etc/taos/taos.cfg as follows:
-
-```
-dataDir /mnt/disk1/taos
-dataDir /mnt/disk2/taos 0
-dataDir /mnt/disk3/taos 1
-dataDir /mnt/disk4/taos 1
-dataDir /mnt/disk5/taos 2
-dataDir /mnt/disk6/taos 2
-```
-
-Mounted disks can also be non-local network disks, as long as the system can access them.
-
-Note: Tiered Storage is only supported in Enterprise Edition
-
-## Data Query
-
-TDengine provides a variety of query processing functions for tables and STables. In addition to common aggregation queries, TDengine also provides window queries and statistical aggregation functions for time-series data. Query processing in TDengine needs the collaboration of client, vnode and mnode.
-
-### Single Table Query
-
-The parsing and validation of SQL statements are completed on the client side. SQL statements are parsed into an Abstract Syntax Tree (AST), which is then validated. Then the metadata (table metadata) of the table referenced in the query is requested from the management node (mnode).
-
-According to the End Point information in the metadata, the query request is serialized and sent to the data node (dnode) where the table is located. After receiving the query, the dnode identifies the targeted virtual node (vnode) and forwards the message to the vnode's query execution queue. The query execution thread of the vnode establishes the basic query execution environment, immediately acknowledges the query request, and starts executing the query at the same time.
-
-When the client fetches the query result, the worker thread in the dnode's query execution queue waits for the vnode's execution thread to finish before returning the result to the requesting client.
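-
-For instance, a simple single-table query such as the following (the table name is borrowed from the examples below) goes through exactly this client/dnode/vnode flow:
-
-```mysql
-select count(*) from d1001 where ts >= '2021-08-01 00:00:00';
-```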
-
-### Aggregation by Time Axis, Downsampling, Interpolation
-
-Time-series data is different from ordinary data in that each record has a timestamp. So aggregating data by timestamps on the time axis is an important and distinct feature of time-series databases which is different from that of common databases. It is similar to the window query of stream computing engines.
-
-The keyword `interval` is introduced in TDengine to split the time axis into fixed-length time windows, and the data falling within each window is aggregated as needed. For example:
-
-```mysql
-select count(*) from d1001 interval(1h);
-```
-
-For the data collected by device D1001, this returns the number of records stored per hour, using 1-hour time windows.
-
-In application scenarios where query results need to be obtained continuously, if data is missing in a given time interval, the result for this interval will also be missing. TDengine provides a strategy to interpolate the results of time-axis aggregation; the results can be interpolated using the keyword `fill`. For example:
-
-```mysql
-select count(*) from d1001 interval(1h) fill(prev);
-```
-
-For the data collected by device D1001, the number of records per hour is counted. If there is no data in a certain hour, the statistical result of the previous hour is returned instead. TDengine provides forward filling (prev), linear interpolation (linear), NULL filling (NULL), and specific-value filling (value).
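-
-Missing windows can instead be filled with a constant; a sketch using value filling:
-
-```mysql
-select count(*) from d1001 interval(1h) fill(value, 0);
-```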
-
-### Multi-table Aggregation Query
-
-TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. To perform such aggregations efficiently, TDengine introduces the concept of the STable (super table). An STable represents a specific type of data collection point: it is a set of tables that all share the same schema, while each table carries its own static tags. There can be multiple tags, and they can be added, deleted and modified at any time. Applications can aggregate or run statistics over all or a subset of the tables under an STable by specifying tag filters, which greatly simplifies application development. The process is shown in the following figure:
-
-
-
-Figure 5: Diagram of multi-table aggregation query
-
-1. Application sends a query condition to system;
-2. TAOSC sends the STable name to the meta node (management node);
-3. Management node sends the vnode list owned by the STable back to TAOSC;
-4. TAOSC sends the computing request together with tag filters to multiple data nodes corresponding to these vnodes;
-5. Each vnode first finds the set of tables within its own node that meet the tag filters from memory, then scans the stored time-series data, completes corresponding aggregation calculations, and returns result to TAOSC;
-6. TAOSC finally aggregates the results returned by the multiple data nodes and sends them back to the application.
-
-Since TDengine stores tag data and time-series data separately in vnode, by filtering tag data in memory, the set of tables that need to participate in aggregation operation is first found, which reduces the volume of data to be scanned and improves aggregation speed. At the same time, because the data is distributed in multiple vnodes/dnodes, the aggregation operation is carried out concurrently in multiple vnodes, which further improves the aggregation speed. Aggregation functions for ordinary tables and most operations are applicable to STables. The syntax is exactly the same. Please see TAOS SQL for details.
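-
-As a sketch, assuming a hypothetical STable `meters` with a `val` column and a `location` tag, a query such as the following is executed with this distributed flow:
-
-```mysql
-select avg(val) from meters where location = 'beijing' interval(1h);
-```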
-
-### Precomputation
-
-In order to effectively improve query performance, and based on the immutability of IoT data, statistical information about the data stored in a data block is recorded in the block header, including the max value, min value and sum. We call this a precomputing unit. If a query involves all the data of a whole data block, the precomputed results are used directly and the block contents need not be read at all. Since the amount of precomputed data is much smaller than the size of the data blocks stored on disk, for queries bottlenecked by disk I/O, using precomputed results greatly reduces the read I/O pressure and accelerates query processing. The precomputation mechanism is similar to the BRIN (Block Range Index) of PostgreSQL.
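-
-For example, for a whole-table aggregation such as the following (hypothetical column name), any block falling entirely inside the query range is answered from its precomputing unit without reading the block contents:
-
-```mysql
-select max(val), min(val), sum(val) from d1001;
-```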
-
diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
deleted file mode 100644
index 4d47aec1d76014ba63f6be91004abcc3934769f7..0000000000000000000000000000000000000000
--- a/docs-en/25-application/03-immigrate.md
+++ /dev/null
@@ -1,435 +0,0 @@
----
-sidebar_label: OpenTSDB Migration to TDengine
-title: Best Practices for Migrating OpenTSDB Applications to TDengine
----
-
-As a scalable, distributed time-series database platform based on HBase, and thanks to its first-mover advantage, OpenTSDB is widely used for monitoring in DevOps. However, as new technologies such as cloud computing, microservices and containerization have developed rapidly, enterprise-level services are becoming more and more diverse, and architectures are becoming more complex.
-
-As a result, as a DevOps backend for monitoring, OpenTSDB is plagued by performance issues and delayed feature upgrades. This has resulted in increased application deployment costs and reduced operational efficiency. These problems become increasingly severe as the system tries to scale up.
-
-To meet the fast-growing IoT big data market and technical needs, TAOSData developed an innovative big-data processing product, **TDengine**.
-
-Drawing on the strengths of many traditional relational databases, NoSQL databases, stream computing engines and message queues, TDengine has unique benefits in time-series big data processing. TDengine can effectively solve the problems currently encountered by OpenTSDB.
-
-Compared with OpenTSDB, TDengine has the following distinctive features.
-
-- Data writing and querying performance far exceeds that of OpenTSDB.
-- Efficient compression mechanism for time-series data, compressing data to less than 1/5 of its original size on disk.
-- The installation and deployment are straightforward. A single installation package can complete the installation and deployment and does not rely on other third-party software. The entire installation and deployment process takes a few seconds.
-- The built-in functions cover all of OpenTSDB's query functions and TDengine supports more time-series data query functions, scalar functions, and aggregation functions. TDengine also supports advanced query functions such as multiple time-window aggregations, join query, expression operation, multiple group aggregation, user-defined sorting, and user-defined functions. With a SQL-like query language, querying is more straightforward and has no learning cost.
-- Supports up to 128 tags, with a total tag length of 16 KB.
-- In addition to the REST interface, it also provides interfaces for Java, Python, C, Rust, Go, C# and other languages. It supports a variety of enterprise-class standard connector protocols such as JDBC.
-
-Migrating applications originally running on OpenTSDB to TDengine, effectively reduces compute and storage resource consumption and the number of deployed servers. It also significantly reduces operation and maintenance costs, makes operation and maintenance management more straightforward and more accessible, and considerably reduces the total cost of ownership. Like OpenTSDB, TDengine has also been open-sourced. Both the stand-alone version and the cluster version are open-sourced and there is no need to be concerned about the vendor-lock problem.
-
-We will explain how to migrate OpenTSDB applications to TDengine quickly, securely, and reliably without coding, using the most typical DevOps scenarios. Subsequent chapters will go into more depth to facilitate migration for non-DevOps systems.
-
-## DevOps Application Quick Migration
-
-### 1. Typical Application Scenarios
-
-The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
-
-**Figure 1. Typical architecture in a DevOps scenario**
-
-
-In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics. There are also data collectors to aggregate information collected by agents, systems for persistent data storage and management, and tools for data visualization (e.g., Grafana, etc.).
-
-The agents deployed on the application nodes are responsible for providing operational metrics from different sources to collectd/StatsD, and collectd/StatsD is responsible for pushing the aggregated data to the OpenTSDB cluster system, with the dashboard software Grafana then visualizing the data.
-
-### 2. Migration Services
-
-- **TDengine installation and deployment**
-
-First of all, please install TDengine. Download the latest stable version of TDengine from the official website and install it. For help with using various installation packages, please refer to the blog ["Installation and Uninstallation of TDengine Multiple Installation Packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
-
-Note that once the installation is complete, do not start the `taosd` service before properly configuring the parameters.
-
-- **Adjusting the data collector configuration**
-
-TDengine version 2.4 and later includes `taosAdapter`. taosAdapter is a stateless, rapidly elastic and scalable component. It supports InfluxDB's line protocol and OpenTSDB's telnet/JSON writing protocol specifications, providing rich data access capabilities that effectively reduce the cost and difficulty of user migration.
-
-Users can flexibly deploy taosAdapter instances, based on their requirements, to improve data writing throughput and provide guarantees for data writes in different application scenarios.
-
-Through taosAdapter, users can directly write the data collected by `collectd` or `StatsD` to TDengine to achieve easy, convenient and seamless migration in application scenarios. taosAdapter also supports Telegraf, Icinga, TCollector, and node_exporter data. For more details, please refer to [taosAdapter](/reference/taosadapter/).
-
-If you use collectd, modify the configuration file in its default location `/etc/collectd/collectd.conf` to point to the IP address and port of the node where taosAdapter is deployed. For example, assuming the taosAdapter IP address is 192.168.1.130 and its port is 6046, configure it as follows.
-
-```html
-LoadPlugin write_tsdb
-<Plugin "write_tsdb">
-  <Node>
-    Host "192.168.1.130"
-    Port "6046"
-    HostTags "status=production"
-    StoreRates false
-    AlwaysAppendDS false
-  </Node>
-</Plugin>
-```
-
-You can use collectd and push the data to taosAdapter through the write_tsdb plugin; taosAdapter will call the API to write the data into TDengine. If you are using StatsD, adjust its configuration file accordingly.
-
-- **Tuning the Dashboard system**
-
-After writing the data to TDengine, you can configure Grafana to visualize the data written to TDengine. To obtain and use the Grafana plugin provided by TDengine, please refer to [Links to other tools](/third-party/grafana).
-
-TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.
-
-**Figure 2. Importing Grafana Templates**
-
-
-With the above steps completed, you have finished replacing OpenTSDB with TDengine. You can see that the whole process is straightforward, there is no need to write any code, and only some configuration files need to be changed.
-
-### 3. Post-migration architecture
-
-After completing the migration, the figure below (Figure 3) shows the system's overall architecture. The entire chain of collection, data writing, and monitoring and presentation remains stable; only a few configuration adjustments are needed, involving no critical changes or alterations. Migrating from OpenTSDB to TDengine brings powerful processing capability and query performance.
-
-In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes) which provides storage and data persistence layer in addition to query capability, you can safely replace OpenTSDB with TDengine. TDengine will save compute and storage resources. With the same compute resource allocation, a single TDengine can meet the service capacity provided by 3 to 5 OpenTSDB nodes. TDengine clustering may be required depending on the scale of the application.
-
-**Figure 3. System architecture after migration**
-
-
-The following chapters provide a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine. This will be useful if your application is particularly complex and is not a DevOps application.
-
-## Migration evaluation and strategy for other scenarios
-
-### 1. Differences between TDengine and OpenTSDB
-
-This chapter describes the differences between OpenTSDB and TDengine at the system functionality level. After reading this chapter, you can fully evaluate whether you can migrate some complex OpenTSDB-based applications to TDengine, and what you should pay attention to after migration.
-
-TDengine currently only supports Grafana for visual dashboard rendering, so if your application uses front-end dashboards other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash), [Status Wolf](https://github.com/box/StatusWolf), etc.), you cannot migrate those front-ends to TDengine directly; they need to be ported to Grafana to work correctly.
-
-TDengine version 2.3.0.x only supports collectd and StatsD as data collection and aggregation software, but future versions will provide support for more such software. If you use other data aggregators on the collection side, your application needs to be ported to these two data aggregation systems to write data correctly.
-In addition to the two data aggregator software protocols mentioned above, TDengine also supports writing data directly via InfluxDB's line protocol and OpenTSDB's data writing protocol, JSON format. You can rewrite the logic on the data push side to write data using the line protocols supported by TDengine.
-
-In addition, if your application uses the following features of OpenTSDB, you need to take into account the following considerations before migrating your application to TDengine.
-
-1. `/api/stats`: If your application uses this feature to monitor the service status of OpenTSDB, and you have built the relevant logic to link the processing in your application, then this part of the status reading and fetching logic needs to be re-adapted to TDengine. TDengine provides a new mechanism for handling cluster state monitoring to meet the monitoring and maintenance needs of your application.
-2. `/api/tree`: If you rely on this feature of OpenTSDB for the hierarchical organization and maintenance of timelines, you cannot migrate it directly to TDengine, which uses a database -> super table -> sub-table hierarchy to organize and maintain timelines, with all timelines belonging to the same super table in the same system hierarchy. But it is possible to simulate a logical multi-level structure of the application through the unique construction of different tag values.
-3. `Rollup And PreAggregates`: The use of Rollup and PreAggregates requires the application to decide where to access the Rollup results and, in some scenarios, to access the actual results. The opacity of this structure makes the application processing logic extraordinarily complex and not portable at all.
-While TDengine does not currently support automatic downsampling of multiple timelines or pre-aggregation over a range of periods, its high-performance query processing can provide very fast query responses without relying on Rollup and pre-aggregation, which keeps your application's query processing logic straightforward and simple.
-4. `Rate`: TDengine provides two functions to calculate the rate of change of values, namely `Derivative` (the result is consistent with the Derivative behavior of InfluxDB) and `IRate` (the result is compatible with the IRate function in Prometheus). However, the results of these two functions are slightly different from that of Rate. But the TDengine functions are more powerful. In addition, TDengine supports all the calculation functions provided by OpenTSDB. TDengine's query functions are much more powerful than those supported by OpenTSDB, which can significantly simplify the processing logic of your application.
-
-With the above introduction, we believe you should be able to understand the changes brought about by the migration of OpenTSDB to TDengine. And this information will also help you correctly determine whether you should migrate your application to TDengine to experience the powerful and convenient time-series data processing capability provided by TDengine.
-
-### 2. Migration strategy suggestion
-
-OpenTSDB-based system migration involves data schema design, system scale estimation, data write transformation, data streaming, and application changes. If your application has functions that strongly depend on the above OpenTSDB features and you do not want to stop using them, the two systems should run in parallel for a while before the historical data is migrated to TDengine.
-You can also consider keeping the original OpenTSDB system running while using TDengine to provide the primary services.
-
-## Data model design
-
-On the one hand, TDengine requires a strict schema definition for its incoming data. On the other hand, the data model of TDengine is richer than that of OpenTSDB, and the multi-valued model is compatible with all single-valued model building requirements.
-
-Let us now assume a DevOps scenario where we use collectd to collect the underlying metrics of the device, including memory, swap, disk, etc. The schema in OpenTSDB is as follows.
-
-| No. | metric | value name | type | tag1 | tag2 | tag3 | tag4 | tag5 |
-| ---- | ------ | ---------- | ------ | ---- | ----------- | -------------------- | --------- | ------ |
-| 1 | memory | value | double | host | memory_type | memory_type_instance | source | n/a |
-| 2 | swap | value | double | host | swap_type | swap_type_instance | source | n/a |
-| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
-
-TDengine requires the data stored to have a data schema, i.e., you need to create a super table and specify the schema of the super table before writing the data. For data schema creation, you have two ways to do this:
-1) Take advantage of TDengine's native data writing support for OpenTSDB by calling the TDengine API to write (text line or JSON format) and automate the creation of single-value models. This approach does not require significant adjustments to the data writing application, nor does it require converting the written data format.
-
-At the C level, TDengine provides the `taos_schemaless_insert()` function to write data in OpenTSDB format directly (in early versions this function was named `taos_insert_lines()`). Please refer to the sample code `schemaless.c` in the installation package directory for reference.
-
-2) Based on a thorough understanding of TDengine's data model, establish a mapping between OpenTSDB's and TDengine's data models. Considering that OpenTSDB is a single-value model, we recommend using the single-value model in TDengine for simplicity. But keep in mind that TDengine supports both multi-value and single-value models.
-
-- **Single-valued model**.
-
-The steps are as follows:
-- Use the name of the metrics as the name of the TDengine super table
-- Build two basic data columns: timestamp and value. The tags of the super table correspond to the tag information of the metric, and the number of tags equals the number of tags of the metric.
-- Sub-tables are named using a fixed rule: `metric + '_' + tag1_value + '_' + tag2_value + '_' + tag3_value ...`.
-
-Create 3 super tables in TDengine.
-
-```sql
-create stable memory(ts timestamp, val double) tags(host binary(12), memory_type binary(20), memory_type_instance binary(20), source binary(20));
-create stable swap(ts timestamp, val double) tags(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
-create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
-```
-
-For sub-tables use dynamic table creation as shown below.
-
-```sql
-insert into memory_vm130_memory_buffered_collectd using memory tags('vm130', 'memory', 'buffered', 'collectd') values(1632979445000, 3.0656);
-```
-
-The final system will have about 340 sub-tables and three super-tables. Note that if the use of concatenated tagged values causes the sub-table names to exceed the system limit (191 bytes), then some encoding (e.g., MD5) needs to be used to convert them to an acceptable length.
-
-- **Multi-value model**
-
-Ideally you should take advantage of TDengine's multi-value modeling capability. This first requires that different collected quantities have the same collection frequency and can reach the **data write side simultaneously via a message queue**, so that multiple metrics can be written at once using a single SQL statement. The metric's name is used as the name of the super table, creating a multi-column data model for quantities with the same collection frequency that arrive simultaneously; the sub-tables are named using a fixed rule. Each of the metrics above contains only one measured value, so it cannot be converted into a multi-value model.
-
-## Data triage and application adaptation
-
-Subscribe to the message queue and start writing data to TDengine.
-
-After data has been written for a while, you can use SQL statements to check whether the amount of data written meets the expected writing requirements. Use the following SQL statement to count the amount of data.
-
-```sql
-select count(*) from memory
-```
-
-After completing the query, if the data written does not differ from what is expected and there are no abnormal error messages from the writing program itself, you can confirm that the written data is complete and valid.
-
-TDengine does not support querying or data fetching using the OpenTSDB query syntax, but it provides a counterpart for each OpenTSDB query. The corresponding query processing can be adapted by consulting Appendix 1. To fully understand the types of queries supported by TDengine, refer to the TDengine user manual.
-
-TDengine supports the standard JDBC 3.0 interface for manipulating databases, but you can also use other types of high-level language connectors for querying and reading data to suit your application. Please read the user manual for specific operations and usage.
-
-## Historical Data Migration
-
-### 1. Use the tool to migrate data automatically
-
-To facilitate historical data migration, we provide a plug-in for the data synchronization tool DataX, which can automatically write data into TDengine. Note that DataX's automated migration only supports the single-value model.
-
-For the specific usage of DataX and how to use DataX to write data to TDengine, please refer to [DataX-based TDengine Data Migration Tool](https://www.taosdata.com/blog/2021/10/26/3156.html).
-
-After migrating via DataX, we found that we can significantly improve the efficiency of migrating historical data by starting multiple processes and migrating numerous metrics simultaneously. The following are some records of the migration process. We provide these as a reference for application migration.
-
-| Number of DataX instances (concurrent processes) | Migration speed (records/second) |
-| ------------------------------------------------ | -------------------------------- |
-| 1 | About 139,000 |
-| 2 | About 218,000 |
-| 3 | About 249,000 |
-| 5 | About 295,000 |
-| 10 | About 330,000 |
-
- (Note: The test data comes from a single-node machine with an Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz, 16 cores and 64 GB of memory; channel and batchSize are 8 and 1000 respectively, and each record contains 10 tags.)
-
-### 2. Manual data migration
-
-Suppose you need to use the multi-value model for data writing. In that case, you need to develop a tool to export data from OpenTSDB, confirm which timelines can be merged into the same timeline, and then write the combined records into the database through SQL statements.
-
-Manual migration of data requires attention to the following two issues:
-
-1) When storing the exported data on the disk, the disk needs to have enough storage space to accommodate the exported data files fully. To avoid running out of disk space, you can adopt a partial import mode in which you preferentially export the timelines belonging to the same super table and then only those files are imported into TDengine.
-
-2) Under full system load, if there are enough spare computing and I/O resources, establish multi-threaded importing to maximize the efficiency of data migration. Considering the heavy load that data parsing places on the CPU, the maximum number of parallel tasks needs to be controlled to avoid overloading the system while importing historical data.
-
-Due to the ease of operation of TDengine itself, there is no need to perform index maintenance and data format change processing in the entire process. The whole process only needs to be executed sequentially.
-
-While importing historical data into TDengine, the two systems should run simultaneously. Once all the data is migrated, switch the query request to TDengine to achieve seamless application switching.
-
-## Appendix 1: OpenTSDB query function correspondence table
-
-### Avg
-
-Equivalent function: avg
-
-Example:
-
-```sql
-SELECT avg(val) FROM (SELECT first(val) FROM super_table WHERE ts >= startTime and ts <= endTime INTERVAL(20s) Fill(linear)) INTERVAL(20s)
-```
-
-Remarks:
-
-1. The value in Interval needs to be the same as the interval value in the outer query.
-2. Interpolation processing in TDengine uses subqueries to assist in completion. As shown above, it is enough to specify the interpolation type in the inner query. Since OpenTSDB uses linear interpolation, use `fill(linear)` to declare the interpolation type in TDengine. Some of the functions mentioned below have exactly the same interpolation calculation requirements.
-3. The parameter 20s in Interval indicates that the inner query generates results in 20-second time windows. In an actual query, it needs to be adjusted to the time interval between records to ensure that the interpolation results are equivalent to the original data.
-4. Due to the particular interpolation strategy and mechanism of OpenTSDB i.e. interpolation followed by aggregate calculation, it is impossible for the results to be completely consistent with those of TDengine. But in the case of downsampling (Downsample), TDengine and OpenTSDB can obtain consistent results (since OpenTSDB performs aggregation and downsampling queries).
-
-### Count
-
-Equivalent function: count
-
-Example:
-
-```sql
-select count(*) from super_table_name;
-```
-
-### Dev
-
-Equivalent function: stddev
-
-Example:
-
-```sql
-Select stddev(val) from table_name
-```
-
-### Estimated percentiles
-
-Equivalent function: apercentile
-
-Example:
-
-```sql
-Select apercentile(col1, 50, "t-digest") from table_name
-```
-
-Remark:
-
-1. When calculating estimate percentiles, OpenTSDB uses the t-digest algorithm by default. In order to obtain the same calculation results in TDengine, the algorithm used needs to be specified in the `apercentile()` function. TDengine can support two different percentile calculation algorithms named "default" and "t-digest" respectively.
-
-### First
-
-Equivalent function: first
-
-Example:
-
-```sql
-Select first(col1) from table_name
-```
-
-### Last
-
-Equivalent function: last
-
-Example:
-
-```sql
-Select last(col1) from table_name
-```
-
-### Max
-
-Equivalent function: max
-
-Example:
-
-```sql
-Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
-```
-
-Note: The Max function requires interpolation for the reasons described above.
-
-### Min
-
-Equivalent function: min
-
-Example:
-
-```sql
-Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
-```
-
-### MinMax
-
-Equivalent function: max
-
-```sql
-Select max(val) from table_name
-```
-
-Note: This function has no interpolation requirements, so it can be directly calculated.
-
-### MimMin
-
-Equivalent function: min
-
-```sql
-Select min(val) from table_name
-```
-
-Note: This function has no interpolation requirements, so it can be directly calculated.
-
-### Percentile
-
-Equivalent function: percentile
-
-
-### Sum
-
-Equivalent function: sum
-
-```sql
-Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
-```
-
-Note: Like Max and Min, the Sum aggregator requires interpolation, hence the fill subquery above.
-
-### Zimsum
-
-Equivalent function: sum
-
-```sql
-Select sum(val) from table_name
-```
-
-Note: This function has no interpolation requirements, so it can be directly calculated.
-
-Complete example:
-
-````json
-// OpenTSDB query JSON
-query = {
-    "start": 1510560000,
-    "end": 1515000009,
-    "queries": [{
-        "aggregator": "count",
-        "metric": "cpu.usage_user"
-    }]
-}
-
-// Equivalent query SQL:
-SELECT count(*)
-FROM `cpu.usage_user`
-WHERE ts>=1510560000 AND ts<=1515000009
-````
-
-## Appendix 2: Resource Estimation Methodology
-
-### Data generation environment
-
-We still use the hypothetical environment from Chapter 4, with three measurements: the writing rate of temperature and humidity is one record every 5 seconds, with 100,000 timelines; the writing rate of air pollution is one record every 10 seconds, with 10,000 timelines; and the query request frequency is 500 QPS.
-
-### Storage resource estimation
-
-Assuming that the number of sensor devices that generate data and need to be stored is `n`, the frequency of data generation is `t` records per second, and the length of each record is `L` bytes, the scale of data generated per day is `86400 * n * t * L` bytes. Assuming the compression ratio is `C`, the daily data size is `(86400 * n * t * L)/C` bytes. The storage resources are estimated to accommodate the data for 1.5 years. In a production environment, the compression ratio C of TDengine is generally between 5 and 7.
-With an additional 20% redundancy, the required storage resources can be calculated as:
-
-```matlab
-(86400 * n * t * L) * (365 * 1.5) * (1+20%)/C
-```
-Substituting into the above formula, the raw data generated per year is 11.8 TB, not counting tag information. Note that in TDengine tag information is associated with each timeline, not with every record, so the amount of data to be stored is somewhat smaller than the data generated, and tag data can be ignored as a whole. Assuming a compression ratio of 5, the retained data ends up at about 2.56 TB.
-
-### Storage Device Selection Considerations
-
-A disk with better random read performance, such as an SSD, improves the system's query performance and the overall query response. To obtain good query performance, the single-threaded random read IOPS of the hard disk should not be lower than 1,000, and preferably 5,000 IOPS or more. We recommend using the `fio` utility to evaluate the random read I/O performance of the target device (please refer to Appendix 1 for specific usage) to confirm whether it meets the requirements for random reads of large files.
-
-Hard disk writing performance has little effect on TDengine. The TDengine writing process adopts the append write mode, so as long as it has good sequential write performance, both SAS hard disks and SSDs in the general sense can well meet TDengine's requirements for disk write performance.
-
-### Computational resource estimates
-
-Due to the characteristics of IoT data, when the frequency of data generation is consistent, the writing process of TDengine maintains a relatively fixed amount of resource consumption (computing and storage). According to the [TDengine Operation and Maintenance Guide](/operation/) description, the system consumes less than 1 CPU core at 22,000 writes per second.
-
-To estimate the CPU resources consumed by queries, assume the application requires the database to provide 10,000 QPS and that each query consumes about 1 ms of CPU time. Each core then provides 1,000 QPS, so satisfying 10,000 QPS requires at least 10 cores. For the system as a whole to keep CPU load below 50%, the cluster needs twice as many cores, i.e. 20 cores.
-
-### Memory resource estimation
-
-The database allocates 16 MB * 3 buffer memory for each vnode by default. If the cluster includes 22 CPU cores, TDengine creates 22 vnodes (virtual nodes) by default. Each vnode holds 1,000 tables, which is more than enough for all the tables in our hypothetical scenario. It then takes about 1.5 hours to fill a block, which triggers persistence to disk, with no tuning required. In total, 22 vnodes require about 1 GB of cache memory. Considering the memory needed for queries, and assuming each query consumes about 50 MB, 500 concurrent queries require about 25 GB of memory.
-
-In summary, using a single 16-core 32GB machine or a cluster of 2 8-core 16GB machines is enough.
-
-## Appendix 3: Cluster Deployment and Startup
-
-TDengine provides a wealth of help documents to explain many aspects of cluster installation and deployment. Here is the list of documents for your reference.
-
-### Cluster Deployment
-
-The first is TDengine installation. Download the latest stable version of TDengine from the official website, and install it. Please refer to the blog ["Installation and Uninstallation of Various Installation Packages of TDengine"](https://www.taosdata.com/blog/2019/08/09/566.html) for the various installation package formats.
-
-Note that once the installation is complete, do not immediately start the `taosd` service, but start it after correctly configuring the parameters.
-
-### Set running parameters and start the service
-
-To ensure that the system can obtain the necessary information for regular operation, please set the following vital parameters correctly on the server:
-
-FQDN, firstEp, secondEp, dataDir, logDir, tmpDir, serverPort. For the specific meaning and setting requirements of each parameter, please refer to the document "[TDengine Cluster Installation and Management](/cluster/)".
-
-Follow the same steps to set parameters on the other nodes, start the taosd service, and then add Dnodes to the cluster.
-
-Finally, start `taos` and execute the `show dnodes` command. If you can see all the nodes that have joined the cluster, the cluster building process was successfully completed. For specific operation procedures and precautions, please refer to the document "[TDengine Cluster Installation and Management](/cluster/)".
-
-## Appendix 4: Super Table Names
-
-Since OpenTSDB metric names contain a dot ("."), for example a metric named "cpu.usage_user", and the dot has a special meaning in TDengine (it separates database and table names), TDengine provides escape characters to allow keywords or special separators such as dots in (super) table names. To use special characters, enclose the table name in backquotes, e.g. `` `cpu.usage_user` ``; this is then a valid (super) table name.
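-
-A sketch (with a hypothetical schema) of creating and querying such a super table using backquote escaping:
-
-```sql
-create stable `cpu.usage_user` (ts timestamp, val double) tags (host binary(12));
-select count(*) from `cpu.usage_user`;
-```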
-
-## Appendix 5: Reference Articles
-
-1. [Using TDengine + collectd/StatsD + Grafana to quickly build an IT operation and maintenance monitoring system](/application/collectd/)
-2. [Write collected data directly to TDengine through collectd](/third-party/collectd/)
diff --git a/docs-en/27-train-faq/03-docker.md b/docs-en/27-train-faq/03-docker.md
deleted file mode 100644
index afee13c1377b0b4331d6f7ec20251d1aa2db81a1..0000000000000000000000000000000000000000
--- a/docs-en/27-train-faq/03-docker.md
+++ /dev/null
@@ -1,285 +0,0 @@
----
-sidebar_label: TDengine in Docker
-title: Deploy TDengine in Docker
----
-
-We do not recommend deploying TDengine using Docker in a production system. However, Docker is still very useful in a development environment, especially when your host is not Linux. From version 2.0.14.0, the official image of TDengine supports X86-64, X86, arm64 and arm32.
-
-In this chapter we provide a simple step-by-step guide to using TDengine in Docker.
-
-## Install Docker
-
-To install Docker please refer to [Get Docker](https://docs.docker.com/get-docker/).
-
-After Docker is installed, you can check whether it is installed properly by displaying the Docker version.
-
-```bash
-$ docker -v
-Docker version 20.10.3, build 48d30b5
-```
-
-## Launch TDengine in Docker
-
-### Launch TDengine Server
-
-```bash
-$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
-```
-
-In the above command, a docker container is started to run the TDengine server; the container's port range 6030-6049 is mapped to the host port range 6030-6049. If port range 6030-6049 is already occupied on the host, please change the mapping to an available host port range. For port requirements on the host, please refer to [Port Configuration](/reference/config/#serverport).
-
-- **docker run**: Launch a docker container
-- **-d**: the container will run in background mode
-- **-p**: port mapping
-- **tdengine/tdengine**: The image from which to launch the container
-- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the container ID if successfully launched.
-
-Furthermore, `--name` can be used with `docker run` to specify a name for the container, `--hostname` can be used to specify a hostname for the container, and `-v` can be used to mount local volumes into the container so that the data generated inside the container can be persisted to disk on the host.
-
-```bash
-docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-- **--name tdengine**: specify the name of the container, the name can be used to specify the container later
-- **--hostname=tdengine-server**: specify the hostname inside the container; the hostname can be used inside the container without worrying that the container IP may change
-- **-v**: volume mapping between host and container
-
-### Check the container
-
-```bash
-docker ps
-```
-
-The output is like below:
-
-```
-CONTAINER ID IMAGE COMMAND CREATED STATUS ···
-c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
-```
-
-- **docker ps**: List all the containers
-- **CONTAINER ID**: Container ID
-- **IMAGE**: The image used for the container
-- **COMMAND**: The command used when launching the container
-- **CREATED**: When the container was created
-- **STATUS**: Status of the container
-
-### Access TDengine inside container
-
-```bash
-$ docker exec -it tdengine /bin/bash
-root@tdengine-server:~/TDengine-server-2.4.0.4#
-```
-
-- **docker exec**: Attach to the container
-- **-i**: Interactive mode
-- **-t**: Use terminal
-- **tdengine**: Container name, as shown in the output of `docker ps`
-- **/bin/bash**: The command to execute once the container is attached
-
-Inside the container, start TDengine CLI `taos`
-
-```bash
-root@tdengine-server:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The above example shows a successful connection. If `taos` fails to connect to the server side, error information will be shown.
-
-In TDengine CLI, SQL commands can be executed to create/drop databases, tables, STables, and insert or query data. For details please refer to [TAOS SQL](/taos-sql/).
-
-### Access TDengine from host
-
-If the `-p` option was used to map ports properly between the host and the container, it is also possible to access TDengine in the container from the host, as long as `firstEp` is configured correctly for the client on the host.
-
-```
-$ taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The REST interface provided by TDengine in the container can also be accessed from the host.
-
-```
-curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
-```
-
-Output is like below:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
-```
-
-For details of REST API please refer to [REST API](/reference/rest-api/).
-
-### Run TDengine server and taosAdapter inside container
-
-From version 2.4.0.0, in the TDengine Docker image, `taosAdapter` is enabled by default, but it can be disabled using the environment variable `TAOS_DISABLE_ADAPTER=true`. `taosAdapter` can also be run alone without `taosd` when launching a container.
-
-For the port mapping of `taosAdapter`, please refer to [taosAdapter](/reference/taosadapter/).
-
-- Run both `taosd` and `taosAdapter` (by default) in docker container:
-
-```bash
-docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
-```
-
-- Run `taosAdapter` only in the docker container; the `TAOS_FIRST_EP` environment variable needs to be used to specify the container name in which `taosd` is running:
-
-```bash
-docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
-```
-
-- Run `taosd` only in docker container:
-
-```bash
-docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
-```
-
-- Verify the REST interface:
-
-```bash
-curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
-```
-
-Below is an example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
-```
-
-### Use taosBenchmark on host to access TDengine server in container
-
-1. Run `taosBenchmark` (previously named `taosdemo`) on the host:
-
- ```bash
- $ taosBenchmark
-
- taosBenchmark is simulating data generated by power equipments monitoring...
-
- host: 127.0.0.1:6030
- user: root
- password: taosdata
- configDir:
- resultFile: ./output.txt
- thread num of insert data: 10
- thread num of create table: 10
- top insert interval: 0
- number of records per req: 30000
- max sql length: 1048576
- database count: 1
- database[0]:
- database[0] name: test
- drop: yes
- replica: 1
- precision: ms
- super table count: 1
- super table[0]:
- stbName: meters
- autoCreateTable: no
- childTblExists: no
- childTblCount: 10000
- childTblPrefix: d
- dataSource: rand
- iface: taosc
- insertRows: 10000
- interlaceRows: 0
- disorderRange: 1000
- disorderRatio: 0
- maxSqlLen: 1048576
- timeStampStep: 1
- startTimestamp: 2017-07-14 10:40:00.000
- sampleFormat:
- sampleFile:
- tagsFile:
- columnCount: 3
- column[0]:FLOAT column[1]:INT column[2]:FLOAT
- tagCount: 2
- tag[0]:INT tag[1]:BINARY(16)
-
- Press enter key to continue or Ctrl-C to stop
- ```
-
-   Once the execution is finished, a database `test` is created, a STable `meters` is created in database `test`, and 10,000 sub tables named "d0" to "d9999" are created using `meters` as the template. Then 10,000 rows are inserted into each table, so 100,000,000 rows in total are inserted.
-
-2. Check the data
-
- - **Check database**
-
- ```bash
-   taos> show databases;
- name | created_time | ntables | vgroups | ···
- test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
- log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
-
- ```
-
- - **Check STable**
-
- ```bash
-   taos> use test;
-   Database changed.
-
-   taos> show stables;
- name | created_time | columns | tags | tables |
- ============================================================================================
- meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
- Query OK, 1 row(s) in set (0.003259s)
-
- ```
-
- - **Check Tables**
-
- ```bash
-   taos> select * from test.t0 limit 10;
-
- DB error: Table does not exist (0.002857s)
- taos> select * from test.d0 limit 10;
- ts | current | voltage | phase |
- ======================================================================================
- 2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
- 2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
- 2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
- 2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
- 2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
- 2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
- 2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
- 2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
- 2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
- 2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
- Query OK, 10 row(s) in set (0.016791s)
-
- ```
-
- - **Check tag values of table d0**
-
- ```bash
-   taos> select groupid, location from test.d0;
- groupid | location |
- =================================
- 0 | California.SanDiego |
- Query OK, 1 row(s) in set (0.003490s)
- ```
-
-### Access TDengine from 3rd party tools
-
-A lot of 3rd party tools can be used to write data into TDengine through `taosAdapter`, for details please refer to [3rd party tools](/third-party/).
-
-There is nothing different on the 3rd party side when accessing a TDengine server inside a container, as long as the endpoint is specified correctly: the endpoint should be the FQDN of the host and the mapped port.
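-
-For instance, a host-side script could reach the containerized server through the mapped REST port. A hedged sketch (the FQDN `tdengine-host.example.com` is a placeholder, and port 6041 assumes the default taosAdapter mapping shown above):
-
-```bash
-# Query through taosAdapter's mapped REST port using the host's FQDN (placeholder values).
-curl -u root:taosdata -d 'show databases;' http://tdengine-host.example.com:6041/rest/sql
-```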
-
-## Stop TDengine inside container
-
-```bash
-docker stop tdengine
-```
-
-- **docker stop**: stop a container
-- **tdengine**: container name
diff --git a/docs-en/30-release/01-2.6.md b/docs-en/30-release/01-2.6.md
deleted file mode 100644
index 85b76d9999e211336b5859beab3fdfc7988f4fda..0000000000000000000000000000000000000000
--- a/docs-en/30-release/01-2.6.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: 2.6
----
-
-[2.6.0.4](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.4)
-
-[2.6.0.1](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.1)
-
-[2.6.0.0](https://github.com/taosdata/TDengine/releases/tag/ver-2.6.0.0)
diff --git a/docs-en/30-release/02-2.4.md b/docs-en/30-release/02-2.4.md
deleted file mode 100644
index 62580b327a3bd5098e1b7f1162a1c398ac2a5eff..0000000000000000000000000000000000000000
--- a/docs-en/30-release/02-2.4.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: 2.4
----
-
-[2.4.0.26](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.26)
-
-[2.4.0.25](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.25)
-
-[2.4.0.24](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.24)
-
-[2.4.0.20](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.20)
-
-[2.4.0.18](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.18)
-
-[2.4.0.16](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.16)
-
-[2.4.0.14](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.14)
-
-[2.4.0.12](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.12)
-
-[2.4.0.10](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.10)
-
-[2.4.0.7](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.7)
-
-[2.4.0.5](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.5)
-
-[2.4.0.4](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.4)
-
-[2.4.0.0](https://github.com/taosdata/TDengine/releases/tag/ver-2.4.0.0)
diff --git a/docs-examples/.gitignore b/docs-examples/.gitignore
deleted file mode 100644
index 7ed6d403bf5f64c0cb230265b4dffee609dea93b..0000000000000000000000000000000000000000
--- a/docs-examples/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-.vscode
-*.lock
-.idea
\ No newline at end of file
diff --git a/docs-examples/.gitignre b/docs-examples/.gitignre
deleted file mode 100644
index 0853156c65c2c6c1b693290e74c3ee630bcaac19..0000000000000000000000000000000000000000
--- a/docs-examples/.gitignre
+++ /dev/null
@@ -1,2 +0,0 @@
-.vscode
-*.lock
\ No newline at end of file
diff --git a/docs-examples/go/go.mod b/docs-examples/go/go.mod
deleted file mode 100644
index 5945e395e93b373d47fe71f3584c37fed9526638..0000000000000000000000000000000000000000
--- a/docs-examples/go/go.mod
+++ /dev/null
@@ -1,6 +0,0 @@
-module goexample
-
-go 1.17
-
-require github.com/taosdata/driver-go/v2 develop
-
diff --git a/docs-examples/java/pom.xml b/docs-examples/java/pom.xml
deleted file mode 100644
index a48ba398da92f401235819d067aa2ba6f8b173ea..0000000000000000000000000000000000000000
--- a/docs-examples/java/pom.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <groupId>com.taos</groupId>
-    <artifactId>javaexample</artifactId>
-    <version>1.0</version>
-
-    <name>JavaExample</name>
-
-    <properties>
-        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-        <maven.compiler.source>1.8</maven.compiler.source>
-        <maven.compiler.target>1.8</maven.compiler.target>
-    </properties>
-
-    <dependencies>
-        <dependency>
-            <groupId>com.taosdata.jdbc</groupId>
-            <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.38</version>
-        </dependency>
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-            <version>4.13.1</version>
-            <scope>test</scope>
-        </dependency>
-    </dependencies>
-</project>
diff --git a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
deleted file mode 100644
index 990922b7a516bd32a7e299f5743bd1b5e321868a..0000000000000000000000000000000000000000
--- a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
+++ /dev/null
@@ -1,42 +0,0 @@
-package com.taos.example;
-
-import com.taosdata.jdbc.SchemalessWriter;
-import com.taosdata.jdbc.enums.SchemalessProtocolType;
-import com.taosdata.jdbc.enums.SchemalessTimestampType;
-
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.SQLException;
-import java.sql.Statement;
-
-public class LineProtocolExample {
- // format: measurement,tag_set field_set timestamp
- private static String[] lines = {
- "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
- // seconds
- "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
- "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
- "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
- };
-
- private static Connection getConnection() throws SQLException {
- String jdbcUrl = "jdbc:TAOS://localhost:6030?user=root&password=taosdata";
- return DriverManager.getConnection(jdbcUrl);
- }
-
- private static void createDatabase(Connection conn) throws SQLException {
- try (Statement stmt = conn.createStatement()) {
-            // the default precision is ms (millisecond), but we use us (microsecond) here.
- stmt.execute("CREATE DATABASE IF NOT EXISTS test PRECISION 'us'");
- stmt.execute("USE test");
- }
- }
-
- public static void main(String[] args) throws SQLException {
- try (Connection conn = getConnection()) {
- createDatabase(conn);
- SchemalessWriter writer = new SchemalessWriter(conn);
- writer.write(lines, SchemalessProtocolType.LINE, SchemalessTimestampType.MICRO_SECONDS);
- }
- }
-}
diff --git a/docs-examples/python/conn_native_pandas.py b/docs-examples/python/conn_native_pandas.py
deleted file mode 100644
index 56942ef57085766cd128b03cabb7a357587eab16..0000000000000000000000000000000000000000
--- a/docs-examples/python/conn_native_pandas.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import pandas
-from sqlalchemy import create_engine
-
-engine = create_engine("taos://root:taosdata@localhost:6030/power")
-df = pandas.read_sql("SELECT * FROM meters", engine)
-
-# print index
-print(df.index)
-# print data type of element in ts column
-print(type(df.ts[0]))
-print(df.head(3))
-
-# output:
-# RangeIndex(start=0, stop=8, step=1)
-#
-# ts current ... location groupid
-# 0 2018-10-03 14:38:05.500 11.8 ... california.losangeles 2
-# 1 2018-10-03 14:38:16.600 13.4 ... california.losangeles 2
-# 2 2018-10-03 14:38:05.000 10.8 ... california.losangeles 3
diff --git a/docs/en/01-index.md b/docs/en/01-index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f2f88d47d8d20e55c6a495f571bd0d11a600d74
--- /dev/null
+++ b/docs/en/01-index.md
@@ -0,0 +1,27 @@
+---
+title: TDengine Documentation
+sidebar_label: Documentation Home
+slug: /
+---
+
+TDengine is a [high-performance](https://tdengine.com/fast), [scalable](https://tdengine.com/scalable) time series database with [SQL support](https://tdengine.com/sql-support). This document is the TDengine user manual. It introduces the basic as well as novel concepts in TDengine, and also covers installation, features, SQL, APIs, operation, maintenance, kernel design and other topics in detail. It's written mainly for architects, developers and system administrators.
+
+To get a global view of TDengine, like its feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section. If you want to get some basics about time-series databases, please check [here](https://tdengine.com/tsdb).
+
+TDengine greatly improves the efficiency of data ingestion, querying and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [“Concepts”](./concept) thoroughly.
+
+If you are a developer, please read the [“Developer Guide”](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
+
+We live in the era of big data, and scale-up is unable to meet the growing business needs. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but it also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster, please refer to ["cluster"](./cluster).
+
+TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll up, interpolation and time weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
+
+If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please thoroughly read the ["Administration"](./operation) section.
+
+If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the ["Reference"](./reference) chapter.
+
+If you are very interested in the internal design of TDengine, please read the chapter ["Inside TDengine"](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study the TDengine code or even contribute code, please read this chapter carefully.
+
+TDengine is an open source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation, or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
+
+Together, we make a difference.
diff --git a/docs-en/02-intro/_category_.yml b/docs/en/02-intro/_category_.yml
similarity index 100%
rename from docs-en/02-intro/_category_.yml
rename to docs/en/02-intro/_category_.yml
diff --git a/docs-cn/eco_system.webp b/docs/en/02-intro/eco_system.webp
similarity index 100%
rename from docs-cn/eco_system.webp
rename to docs/en/02-intro/eco_system.webp
diff --git a/docs/en/02-intro/index.md b/docs/en/02-intro/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1dc27ae0a06dc94a6dbadec914041353811a9c5f
--- /dev/null
+++ b/docs/en/02-intro/index.md
@@ -0,0 +1,116 @@
+---
+title: Introduction
+toc_max_heading_level: 2
+---
+
+TDengine is a high-performance, scalable [time-series database](https://tdengine.com/tsdb) with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/continuous-query), [data subscription](../develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
+
+This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
+
+## Major Features
+
+The major features are listed below:
+
+1. While TDengine supports [using SQL to insert](../develop/insert-data/sql-writing), it also supports [Schemaless writing](../reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](../develop/insert-data/opentsdb-json) among others.
+2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](../third-party/telegraf), [Prometheus](../third-party/prometheus), [StatsD](../third-party/statsd), [collectd](../third-party/collectd), [icinga2](../third-party/icinga2), [TCollector](../third-party/tcollector), [EMQX](../third-party/emq-broker), [HiveMQ](../third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
+3. Support for [all kinds of queries](../develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
+4. Support for [user defined functions](../develop/udf).
+5. Support for [caching](../develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
+6. Support for [continuous query](../develop/continuous-query).
+7. Support for [data subscription](../develop/subscribe) with the capability to specify filter conditions.
+8. Support for [cluster](../cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
+9. Provides an interactive [command-line interface](../reference/taos-shell) for management, maintenance and ad-hoc queries.
+10. Provides many ways to [import](/operation/import) and [export](../operation/export) data.
+11. Provides [monitoring](../operation/monitor) on running instances of TDengine.
+12. Provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages.
+13. Provides a [REST API](../reference/rest-api/).
+14. Supports seamless integration with [Grafana](../third-party/grafana) for visualization.
+15. Supports seamless integration with Google Data Studio.
+
+For more details on features, please read through the entire documentation.
+
+## Competitive Advantages
+
+Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
+
+- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
+
+- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
+
+- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible, schemaless data ingestion.
+
+- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
+
+- **Seamless Integration**: Without a single line of code, TDengine provides seamless, configurable integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More third-party tools are being integrated.
+
+- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
+
+- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
+
+- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
+
+With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced:
+
+1. With its superior performance, the computing and storage resources required are significantly reduced.
+2. With SQL support, it can be seamlessly integrated with many third party tools, and learning and migration costs are significantly reduced.
+3. With its simple architecture and zero management, operation and maintenance costs are reduced.
+
+## Technical Ecosystem
+This is how TDengine would be situated, in a typical time-series data processing platform:
+
+
+
+
Figure 1. TDengine Technical Ecosystem
+
+On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
+
+## Typical Use Cases
+
+As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
+
+### Characteristics and Requirements of Data Sources
+
+| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
+| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry.|
+| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
+| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
+
+### System Architecture Requirements
+
+| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
+| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
+| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
+
+### System Function Requirements
+
+| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.|
+| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
+
+### System Performance Requirements
+
+| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
+| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products.|
+| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
+
+### System Maintenance Requirements
+
+| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
+| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
+| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
+| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
+| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
+
+## Comparison with other databases
+
+- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
+- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
+- [TDengine vs InfluxDB, OpenTSDB, Cassandra, MySQL, ClickHouse](https://www.tdengine.com/downloads/TDengine_Testing_Report_en.pdf)
+- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
+- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
+- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
+
+
+If you want to learn some basics about time-series databases, please check [here](https://tdengine.com/tsdb).
diff --git a/docs-en/04-concept/_category_.yml b/docs/en/04-concept/_category_.yml
similarity index 100%
rename from docs-en/04-concept/_category_.yml
rename to docs/en/04-concept/_category_.yml
diff --git a/docs-en/04-concept/index.md b/docs/en/04-concept/index.md
similarity index 100%
rename from docs-en/04-concept/index.md
rename to docs/en/04-concept/index.md
diff --git a/docs-en/05-get-started/_apt_get_install.mdx b/docs/en/05-get-started/_apt_get_install.mdx
similarity index 100%
rename from docs-en/05-get-started/_apt_get_install.mdx
rename to docs/en/05-get-started/_apt_get_install.mdx
diff --git a/docs-en/05-get-started/_category_.yml b/docs/en/05-get-started/_category_.yml
similarity index 100%
rename from docs-en/05-get-started/_category_.yml
rename to docs/en/05-get-started/_category_.yml
diff --git a/docs/en/05-get-started/_pkg_install.mdx b/docs/en/05-get-started/_pkg_install.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..2d514d6cd22b94cbe3da8e833d9f5f9f24da733f
--- /dev/null
+++ b/docs/en/05-get-started/_pkg_install.mdx
@@ -0,0 +1,15 @@
+import PkgList from "/components/PkgList";
+
+It's very easy to install TDengine; it takes only a few minutes from downloading to finishing the installation.
+
+For the convenience of users, from version 2.4.0.10, the standard server side installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark` and sample code. If only the `taosd` server and C/C++ connector are required, you can also choose to download the lite package.
+
+Three kinds of packages are provided: tar.gz, rpm and deb. The tar.gz package is provided for the convenience of enterprise customers on different kinds of operating systems; it includes `taosdump` and the TDinsight installation script, which are normally only provided in the taos-tools rpm and deb packages.
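+
+As a hedged sketch of using the tar.gz package (the file name below is a placeholder for whichever version you download), installation boils down to unpacking and running the bundled install script:
+
+```bash
+# Unpack the package and run the install script (file name is illustrative).
+tar -xzf TDengine-server-2.4.0.4-Linux-x64.tar.gz
+cd TDengine-server-2.4.0.4
+sudo ./install.sh
+```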
+
+Between two major release versions, some beta versions may be delivered for users to try some new features.
+
+
+
+For the details please refer to [Install and Uninstall](../13-operation/01-pkg-install.md).
+
+To see the details of versions, please refer to [Download List](https://tdengine.com/all-downloads) and [Release Notes](https://github.com/taosdata/TDengine/releases).
diff --git a/docs/en/05-get-started/index.md b/docs/en/05-get-started/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..0450d132ddb56e16f2f521887637b3e2096da7dd
--- /dev/null
+++ b/docs/en/05-get-started/index.md
@@ -0,0 +1,171 @@
+---
+title: Get Started
+description: 'Install TDengine from Docker image, apt-get or package, and run TDengine CLI and taosBenchmark to experience the features'
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import PkgInstall from "./\_pkg_install.mdx";
+import AptGetInstall from "./\_apt_get_install.mdx";
+
+## Quick Install
+
+The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](../14-reference/02-rest-api/02-rest-api.mdx) through [taosAdapter](../14-reference/04-taosadapter.md). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
+
+TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
+
+
+
+If Docker is already installed on your computer, execute the following command:
+
+```shell
+docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+```
+
+Make sure the container is running:
+
+```shell
+docker ps
+```
+
+Enter the container and execute bash:
+
+```shell
+docker exec -it <container name> bash
+```
+
+Then you can execute the Linux commands and access TDengine.
+
+For detailed steps, please visit [Experience TDengine via Docker](../27-train-faq/03-docker.md).
+
+:::info
+Starting from 2.4.0.10, besides taosd, the TDengine docker image includes: taos, taosAdapter, taosdump, taosBenchmark, TDinsight, scripts and sample code. Once the TDengine container is started, it will start both taosAdapter and taosd automatically to support the RESTful interface.
+
+:::
+
+
+
+
+
+
+
+
+
+
+If you would like to review the source code, build the package yourself, or contribute to the project, please check the [TDengine GitHub Repository](https://github.com/taosdata/TDengine)
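+
+A minimal first step, assuming only that you have `git` installed (the build steps themselves are documented in the repository):
+
+```bash
+# Fetch the TDengine source code.
+git clone https://github.com/taosdata/TDengine.git
+```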
+
+
+
+
+## Quick Launch
+
+After installation, you can launch the TDengine service with the `systemctl` command to start `taosd`.
+
+```bash
+systemctl start taosd
+```
+
+Check if taosd is running:
+
+```bash
+systemctl status taosd
+```
+
+If everything is fine, you can run the TDengine command-line interface `taos` to access TDengine and test it out yourself.
+
+:::info
+
+- systemctl requires _root_ privileges. If you are not _root_, please add sudo before the command.
+- To get feedback and keep improving the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in the configuration file taos.cfg.
+- TDengine uses FQDN (usually the hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs to ensure that the FQDN can be resolved (see the example after this list).
+- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory is flushed to disk. This may take time depending on the cache size.
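+
+As a hedged example for the FQDN note above (the IP address and hostname are placeholders), the FQDN can be made resolvable on the client machine through the hosts file:
+
+```bash
+# Map the server's FQDN to its IP on the client machine (placeholder values).
+echo "192.168.1.10 tdengine-host" | sudo tee -a /etc/hosts
+```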
+
+TDengine supports installation on systems that run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management. Use `which systemctl` to check whether the system has `systemd` installed:
+
+```bash
+which systemctl
+```
+
+If the system does not have `systemd`, you can start TDengine manually by executing `/usr/local/taos/bin/taosd`.
+
+:::
+
+## Command Line Interface
+
+To manage the running TDengine instance, or to execute ad-hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as TDengine CLI), taos. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
+
+```bash
+taos
+```
+
+If it connects to the TDengine server successfully, it will print out the version and welcome message. If it fails, it will print out the error message; please check [FAQ](../27-train-faq/01-faq.md) for troubleshooting connection issues. The TDengine CLI's prompt is:
+
+```cmd
+taos>
+```
+
+Inside the TDengine CLI, you can execute SQL commands to create/drop databases and tables, and run queries. Each SQL command must end with a semicolon. For example:
+
+```sql
+create database demo;
+use demo;
+create table t (ts timestamp, speed int);
+insert into t values ('2019-07-15 00:00:00', 10);
+insert into t values ('2019-07-15 01:00:00', 20);
+select * from t;
+ ts | speed |
+========================================
+ 2019-07-15 00:00:00.000 | 10 |
+ 2019-07-15 01:00:00.000 | 20 |
+Query OK, 2 row(s) in set (0.003128s)
+```
+
+Besides executing SQL commands, system administrators can check running status, add/drop user accounts and manage the running instances. The TDengine CLI with the client driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../14-reference/08-taos-shell.md).
+
+## Experience the blazing fast speed
+
+After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
+
+```bash
+taosBenchmark
+```
+
+This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set from 1 to 10 randomly, and location is set to "California.SanFrancisco" or "California.SanDiego".
+
+This command will insert 100 million rows into the database quickly. The time to insert depends on the hardware configuration; it only takes a dozen seconds on a regular PC server.
+
+taosBenchmark provides command-line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check the [reference for taosBenchmark](../14-reference/05-taosbenchmark.md).
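+
+For example, a smaller scenario can be requested from the command line. A hedged sketch (verify the exact option names with `taosBenchmark --help`; here `-t` is assumed to set the number of child tables and `-n` the rows per table):
+
+```bash
+# Insert into 1,000 child tables with 100,000 rows each instead of the defaults.
+taosBenchmark -t 1000 -n 100000
+```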
+
+## Experience query speed
+
+After using taosBenchmark to insert a number of rows of data, you can execute queries from the TDengine CLI to experience the lightning fast query speed.
+
+Query the total number of rows in the super table "meters":
+
+```sql
+taos> select count(*) from test.meters;
+```
+
+Query the average, maximum and minimum of the 100 million rows:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters;
+```
+
+Query the total number of rows with location="California.SanFrancisco":
+
+```sql
+taos> select count(*) from test.meters where location="California.SanFrancisco";
+```
+
+Query the average, maximum and minimum of all rows with groupId=10:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+```
+
+Query the average, maximum and minimum for table d10 in 10-second time intervals:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+```
diff --git a/docs-en/07-develop/01-connect/_category_.yml b/docs/en/07-develop/01-connect/_category_.yml
similarity index 100%
rename from docs-en/07-develop/01-connect/_category_.yml
rename to docs/en/07-develop/01-connect/_category_.yml
diff --git a/docs/en/07-develop/01-connect/_connect_c.mdx b/docs/en/07-develop/01-connect/_connect_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4d13d80e085956a7ceccdc404b7106620b22c25e
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_c.mdx
@@ -0,0 +1,3 @@
+```c title="Native Connection"
+{{#include docs/examples/c/connect_example.c}}
+```
diff --git a/docs/en/07-develop/01-connect/_connect_cs.mdx b/docs/en/07-develop/01-connect/_connect_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..f8d8e519fde7fc6d0954bbfe865155221c0b0595
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_cs.mdx
@@ -0,0 +1,8 @@
+```csharp title="Native Connection"
+{{#include docs/examples/csharp/ConnectExample.cs}}
+```
+
+:::info
+C# connector supports only native connection for now.
+
+:::
diff --git a/docs/en/07-develop/01-connect/_connect_go.mdx b/docs/en/07-develop/01-connect/_connect_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..6f742ea0bcf027de6c97132167d4de65e2cbee8a
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_go.mdx
@@ -0,0 +1,17 @@
+#### Unified Database Access Interface
+
+```go title="Native Connection"
+{{#include docs/examples/go/connect/cgoexample/main.go}}
+```
+
+```go title="REST Connection"
+{{#include docs/examples/go/connect/restexample/main.go}}
+```
+
+#### Advanced Features
+
+The af package of driver-go can also be used to establish a connection; with it, some advanced features of TDengine, like parameter binding and subscription, can be used.
+
+```go title="Establish native connection using af package"
+{{#include docs/examples/go/connect/afconn/main.go}}
+```
diff --git a/docs/en/07-develop/01-connect/_connect_java.mdx b/docs/en/07-develop/01-connect/_connect_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..880d2aa3e489566203fa0f4b8379feb653a98f73
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_java.mdx
@@ -0,0 +1,15 @@
+```java title="Native Connection"
+{{#include docs/examples/java/src/main/java/com/taos/example/JNIConnectExample.java}}
+```
+
+```java title="REST Connection"
+{{#include docs/examples/java/src/main/java/com/taos/example/RESTConnectExample.java:main}}
+```
+
+When using a REST connection, the bulk pulling feature can be enabled if the size of the resulting data set is huge.
+
+```java title="Enable Bulk Pulling" {4}
+{{#include docs/examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
+```
+
+For more connection configuration options, please refer to [Java Connector](/reference/connector/java).
diff --git a/docs/en/07-develop/01-connect/_connect_node.mdx b/docs/en/07-develop/01-connect/_connect_node.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..943677b36be22f73c970d5b1f4228ff757b0a62e
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_node.mdx
@@ -0,0 +1,7 @@
+```js title="Native Connection"
+{{#include docs/examples/node/nativeexample/connect.js}}
+```
+
+```js title="REST Connection"
+{{#include docs/examples/node/restexample/connect.js}}
+```
diff --git a/docs/en/07-develop/01-connect/_connect_python.mdx b/docs/en/07-develop/01-connect/_connect_python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..60b454d52f3977d1feac9e745da984db83a38668
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_python.mdx
@@ -0,0 +1,3 @@
+```python title="Native Connection"
+{{#include docs/examples/python/connect_example.py}}
+```
diff --git a/docs/en/07-develop/01-connect/_connect_r.mdx b/docs/en/07-develop/01-connect/_connect_r.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e2d7f631d2c467937589bd00271a7decd036506d
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_r.mdx
@@ -0,0 +1,3 @@
+```r title="Native Connection"
+{{#include docs/examples/R/connect_native.r:demo}}
+```
diff --git a/docs/en/07-develop/01-connect/_connect_rust.mdx b/docs/en/07-develop/01-connect/_connect_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..80ac1f4ff4a8174acc4c2f6af11b31f027ece602
--- /dev/null
+++ b/docs/en/07-develop/01-connect/_connect_rust.mdx
@@ -0,0 +1,8 @@
+```rust title="Native Connection/REST Connection"
+{{#include docs/examples/rust/nativeexample/examples/connect.rs}}
+```
+
+:::note
+For the Rust connector, the connection depends on the feature being used. If the "rest" feature is enabled, then only the implementation for "rest" is compiled and packaged.
+
+:::
diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/07-develop/01-connect/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..df793f6d3f35cb8d3a9e25f909464c724a2a05c0
--- /dev/null
+++ b/docs/en/07-develop/01-connect/index.md
@@ -0,0 +1,276 @@
+---
+sidebar_label: Connect
+title: Connect
+description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import ConnJava from "./\_connect_java.mdx";
+import ConnGo from "./\_connect_go.mdx";
+import ConnRust from "./\_connect_rust.mdx";
+import ConnNode from "./\_connect_node.mdx";
+import ConnPythonNative from "./\_connect_python.mdx";
+import ConnCSNative from "./\_connect_cs.mdx";
+import ConnC from "./\_connect_c.mdx";
+import ConnR from "./\_connect_r.mdx";
+import InstallOnLinux from "../../14-reference/03-connector/\_linux_install.mdx";
+import InstallOnWindows from "../../14-reference/03-connector/\_windows_install.mdx";
+import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
+import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
+
+Any application program running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. The TDengine community also provides connectors for Lua and PHP. For details about the connectors, please refer to [Connectors](/reference/connector/).
+
+## Establish Connection
+
+There are two ways for a connector to establish connections to TDengine:
+
+1. Connection through the REST API provided by the taosAdapter component; this way is called "REST connection" hereinafter.
+2. Connection through the TDengine client driver (taosc); this way is called "Native connection" hereinafter.
+
+Key differences:
+
+1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
+2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
+3. The REST connection is more accessible with cross-platform support; however, it results in a 30% performance downgrade.
+
+## Install Client Driver taosc
+
+If you choose to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If you choose to use the REST connection, or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
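+
+One quick way to compare client and server versions is from the TDengine CLI, as sketched below (this assumes the server is already reachable):
+
+```bash
+# Print the client and server versions via built-in SQL functions.
+taos -s 'SELECT CLIENT_VERSION(), SERVER_VERSION();'
+```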
+
+### Install
+
+
+
+
+
+
+
+
+
+
+### Verify
+
+Once the above installation and configuration are done, and you have made sure that the TDengine service is started and in service, the TDengine command-line interface `taos` can be launched to access TDengine.
+
+
+
+
+
+
+
+
+
+
+## Install Connectors
+
+
+
+
+If Maven is used to manage the project, you only need to add the dependency below to `pom.xml`.
+
+```xml
+<dependency>
+  <groupId>com.taosdata.jdbc</groupId>
+  <artifactId>taos-jdbcdriver</artifactId>
+  <version>2.0.38</version>
+</dependency>
+```
+
+
+
+
+Install from PyPI using `pip`:
+
+```
+pip install taospy
+```
+
+Install from Git URL:
+
+```
+pip install git+https://github.com/taosdata/taos-connector-python.git
+```
+
+
+
+
+You only need to add the `driver-go` dependency to `go.mod`.
+
+```go-mod title=go.mod
+{{#include docs/examples/go/go.mod}}
+```
+
+:::note
+`driver-go` uses `cgo` to wrap the APIs provided by taosc, and `cgo` needs `gcc` to compile C source code, so please make sure you have a proper `gcc` on your system.
+
+:::
+
+
+
+
+You only need to add the `libtaos` dependency to `Cargo.toml`.
+
+```toml title=Cargo.toml
+[dependencies]
+libtaos = { version = "0.4.2"}
+```
+
+:::info
+Rust connector uses different features to distinguish the way to establish connection. To establish REST connection, please enable `rest` feature.
+
+```toml
+libtaos = { version = "*", features = ["rest"] }
+```
+
+:::
+
+
+
+
+The Node.js connector provides different packages for the different ways of establishing connections.
+
+1. Install Node.js Native Connector
+
+```
+npm i td2.0-connector
+```
+
+:::note
+It's recommended to use a Node.js version between `node-v12.8.0` and `node-v13.0.0`.
+:::
+
+2. Install Node.js REST Connector
+
+```
+npm i td2.0-rest-connector
+```
+
+
+
+
+You only need to add a reference to [TDengine.Connector](https://www.nuget.org/packages/TDengine.Connector/) in the project configuration file.
+
+```xml title=csharp.csproj {12}
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net6.0</TargetFramework>
+    <ImplicitUsings>enable</ImplicitUsings>
+    <Nullable>enable</Nullable>
+    <StartupObject>TDengineExample.AsyncQueryExample</StartupObject>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="TDengine.Connector" Version="1.0.6" />
+  </ItemGroup>
+
+</Project>
+```
+
+Or add by `dotnet` command.
+
+```
+dotnet add package TDengine.Connector
+```
+
+:::note
+The sample code below is based on dotnet6.0; it may need to be adjusted if your dotnet version is not exactly the same.
+
+:::
+
+
+
+
+1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/2.0.38/).
+2. Install the dependency package `RJDBC`:
+
+```R
+install.packages("RJDBC")
+```
+
+
+
+
+If the client driver (taosc) is already installed, then the C connector is already available.
+
+
+
+
+
+**Download Source Code Package and Unzip:**
+
+```shell
+curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
+&& mkdir php-tdengine \
+&& tar -xzf php-tdengine.tar.gz -C php-tdengine --strip-components=1
+```
+
+> Version number `v1.0.2` is only an example; it can be replaced with any newer version. Please check the available versions at [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
+
+**Non-Swoole Environment:**
+
+```shell
+phpize && ./configure && make -j && make install
+```
+
+**Specify TDengine Location:**
+
+```shell
+phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
+```
+
+> `--with-tdengine-dir=` is followed by the TDengine installation location.
+> This is useful when the TDengine location can't be found automatically, or on macOS.
+
+**Swoole Environment:**
+
+```shell
+phpize && ./configure --enable-swoole && make -j && make install
+```
+
+**Enable The Extension:**
+
+Option One: Add `extension=tdengine` in `php.ini`
+
+Option Two: Specify the extension on CLI `php -d extension=tdengine test.php`
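+
+After enabling it, you can verify that the extension is loaded (this uses the extension name `tdengine` shown above):
+
+```bash
+# List loaded PHP extensions and filter for tdengine.
+php -m | grep tdengine
+```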
+
+
+
+
+## Establish Connection
+
+Prior to establishing a connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with FQDN configured to "localhost" and serverPort configured to "6030".
+
+
+:::tip
+If the connection fails, in most cases it's caused by an improper configuration of the FQDN or the firewall. Please refer to the section "Unable to establish connection" in the [FAQ](https://docs.taosdata.com/train-faq/faq).
+
+:::
diff --git a/docs-en/07-develop/02-model/_category_.yml b/docs/en/07-develop/02-model/_category_.yml
similarity index 100%
rename from docs-en/07-develop/02-model/_category_.yml
rename to docs/en/07-develop/02-model/_category_.yml
diff --git a/docs/en/07-develop/02-model/index.mdx b/docs/en/07-develop/02-model/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e0378cc77ca28a1a82ef6a52fa1f74d6cd580a01
--- /dev/null
+++ b/docs/en/07-develop/02-model/index.mdx
@@ -0,0 +1,93 @@
+---
+title: Data Model
+---
+
+The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
+
+## Create Database
+
+The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For example, days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file and so on. Below is an example of the SQL statement to create a database.
+
+```sql
+CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
+```
+
+In the above SQL statement:
+- a database named "power" will be created
+- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
+- a new data file will be created every 10 days
+- the number of memory blocks is 6
+- data is allowed to be updated
+
+For more details please refer to [Database](/taos-sql/database).
+
+After creating a database, the current database in use can be switched using the SQL command `USE`. For example, the SQL statement below switches the current database to `power`. Without a current database specified, a table name must be preceded by the corresponding database name.
+
+```sql
+USE power;
+```
+
+:::note
+
+- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
+- JOIN operations can't be performed on tables from two different databases.
+- Timestamp needs to be specified when inserting rows or querying historical rows.
+
+:::
+
+## Create STable
+
+In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/concept/#model_table1), the SQL statement below can be used to create the super table.
+
+```sql
+CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+```
+
+:::note
+If you are using versions prior to 2.0.15, the `STable` keyword needs to be replaced with `TABLE`.
+
+:::
+
+Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) hold the collected metrics; these columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string, etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point; for example, properties like the location, device type, device group ID and manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
+
+For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For the electrical power system, we need to create one STable each for meters, transformers, busbars and switches. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage, and another for environmental data like temperature, humidity and wind direction. Multiple STables are required for such devices.
+
+At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If more than 4096 metrics need to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, and one or more STables can exist in a database.
+
+## Create Table
+
+A specific table needs to be created for each data collection point. Similar to RDBMS, a table name and schema are required to create a table. Additionally, one or more tags can be specified for each table. To create a table, a STable is used as its template, and values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
+
+```sql
+CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
+```
+
+In the above SQL statement, "d1001" is the table name and "meters" is the STable name, followed by the value of tag "location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created, as the sketch below shows. Please refer to [Tables](/taos-sql/table) for details.
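+
+As a minimal sketch, a tag value could later be changed with `ALTER TABLE ... SET TAG`:
+
+```sql
+-- move subtable d1001 from group 2 to group 3
+ALTER TABLE d1001 SET TAG groupId=3;
+```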
+
+In the TDengine system, it's recommended to create a table for a data collection point via its STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands applicable to regular tables can be applied to subtables.
+
+:::warning
+It's not recommended to create a table in a database while using a STable from another database as template.
+
+:::
+
+:::tip
+It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID, as the sketch below shows. It's not recommended to use a globally unique ID as a tag value.
+
+:::
+
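+For example, if a meter only has a meter number that is unique within a city, the city code and the meter number could be combined into a globally unique table name (the values here are hypothetical):
+
+```sql
+-- hypothetical: city code "sf" combined with meter number "10001"
+CREATE TABLE sf_10001 USING meters TAGS ("California.SanFrancisco", 2);
+```
+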
+## Create Table Automatically
+
+In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
+
+```sql
+INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
+```
+
+In the above SQL statement, a row with values `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically, using STable "meters" as the template with tag values `"California.SanFrancisco", 2`.
+
+For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting).
+
+## Single Column vs Multiple Column
+
+TDengine supports a multiple-column data model. As long as multiple metrics are collected by the same data collection point at the same time, i.e. their timestamps are identical, these metrics can be put in a single STable as columns.
+
+However, there is another kind of design: the single-column data model, in which a table is created for each metric. This means that one STable is required for each kind of metric. For example, in a single-column model, 3 STables would be required for current, voltage and phase.
+
+It's recommended to use the multiple-column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently, and so the corresponding STable schema would need to be changed frequently too. In such cases, it's more convenient to use the single-column data model. The two designs are contrasted in the sketch below.
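+
+As a minimal sketch of the two designs for the metrics above (the single-column STable names are assumptions for illustration):
+
+```sql
+-- multiple-column model: one STable holds all metrics collected at the same time
+CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+
+-- single-column model: one STable per metric
+CREATE STABLE current_st (ts timestamp, current float) TAGS (location binary(64), groupId int);
+CREATE STABLE voltage_st (ts timestamp, voltage int) TAGS (location binary(64), groupId int);
+CREATE STABLE phase_st (ts timestamp, phase float) TAGS (location binary(64), groupId int);
+```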
diff --git a/docs/en/07-develop/03-insert-data/01-sql-writing.mdx b/docs/en/07-develop/03-insert-data/01-sql-writing.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d8c4453f409dfaf1db1ec154e9ba35f8db74862e
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/01-sql-writing.mdx
@@ -0,0 +1,130 @@
+---
+sidebar_label: Insert Using SQL
+title: Insert Using SQL
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaSQL from "./_java_sql.mdx";
+import JavaStmt from "./_java_stmt.mdx";
+import PySQL from "./_py_sql.mdx";
+import PyStmt from "./_py_stmt.mdx";
+import GoSQL from "./_go_sql.mdx";
+import GoStmt from "./_go_stmt.mdx";
+import RustSQL from "./_rust_sql.mdx";
+import RustStmt from "./_rust_stmt.mdx";
+import NodeSQL from "./_js_sql.mdx";
+import NodeStmt from "./_js_stmt.mdx";
+import CsSQL from "./_cs_sql.mdx";
+import CsStmt from "./_cs_stmt.mdx";
+import CSQL from "./_c_sql.mdx";
+import CStmt from "./_c_stmt.mdx";
+
+## Introduction
+
+Application programs can execute `INSERT` statements through connectors to insert rows. The TDengine CLI can also be used to manually insert data.
+
+### Insert Single Row
+
+The SQL statement below inserts one row into table "d1001".
+
+```sql
+INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
+```
+
+### Insert Multiple Rows
+
+Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
+
+```sql
+INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
+```
+
+### Insert into Multiple Tables
+
+Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
+
+```sql
+INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
+```
+
+For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
+
+:::info
+
+- Inserting in batches can improve performance. Normally, the larger the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
+- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a certain point, performance may drop instead of improving. The proper number of threads needs to be determined by testing in the target environment.
+
+:::
+
+:::warning
+
+- If the timestamp of the row to be inserted already exists in the table, the behavior depends on the value of the parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
+- The timestamp of a row to be inserted can't be older than the current time minus the parameter `KEEP`. If `KEEP` is set to 3650 days, data older than 3650 days can't be inserted. The timestamp also can't be newer than the current time plus the parameter `DAYS`. If `DAYS` is set to 2, data more than 2 days in the future can't be inserted. See the sketch after this note.
+
+:::
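+
+As a minimal sketch of the rules above, assuming the database was created with `KEEP 3650` and `DAYS 2`:
+
+```sql
+-- only rows whose timestamp falls within [now - 3650 days, now + 2 days] are accepted
+INSERT INTO d1001 VALUES (now, 10.2, 219, 0.32);         -- accepted
+INSERT INTO d1001 VALUES (now - 3651d, 10.2, 219, 0.32); -- rejected: older than now - KEEP
+INSERT INTO d1001 VALUES (now + 3d, 10.2, 219, 0.32);    -- rejected: newer than now + DAYS
+```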
+
+## Examples
+
+### Insert Using SQL
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+:::note
+
+1. The above samples work with both the native connection and the REST connection.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name, as the sketch after this note shows.
+
+:::
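+
+For example, with a REST connection the database name is written into each statement. A minimal sketch using the `power` database from earlier:
+
+```sql
+INSERT INTO power.d1001 VALUES (1538548685000, 10.3, 219, 0.31);
+```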
+
+### Insert with Parameter Binding
+
+TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. Since versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly; it improves insert performance by avoiding the cost of parsing SQL statements.
+
+Parameter binding is available only with the native connection.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx b/docs/en/07-develop/03-insert-data/02-influxdb-line.mdx
similarity index 100%
rename from docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
rename to docs/en/07-develop/03-insert-data/02-influxdb-line.mdx
diff --git a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs/en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
similarity index 100%
rename from docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
rename to docs/en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
diff --git a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx b/docs/en/07-develop/03-insert-data/04-opentsdb-json.mdx
similarity index 100%
rename from docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
rename to docs/en/07-develop/03-insert-data/04-opentsdb-json.mdx
diff --git a/docs/en/07-develop/03-insert-data/05-high-volume.md b/docs/en/07-develop/03-insert-data/05-high-volume.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a4813f74e680905206b5bdd8fe37cd4eca2b0be
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/05-high-volume.md
@@ -0,0 +1,444 @@
+---
+sidebar_label: High Performance Writing
+title: High Performance Writing
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+This chapter introduces how to write data into TDengine with high throughput.
+
+## How to achieve high performance data writing
+
+To achieve high-performance writing, there are a few aspects to consider. The following sections describe these important factors.
+
+### Application Program
+
+From the perspective of the application program, you need to consider:
+
+1. The data size of each single write, also known as the batch size. Generally speaking, a larger batch size gives better writing performance. However, once the batch size grows beyond a certain value, you will not get any additional benefit. When using SQL to write into TDengine, it's better to put as much data as possible in a single SQL statement. The maximum SQL length supported by TDengine is 1,048,576 bytes, i.e. 1 MB. It can be configured with the parameter `maxSQLLength` on the client side, and the default value is 65,480.
+
+2. The number of concurrent connections. Normally more connections give better results. However, once the number of connections exceeds the processing capacity of the server side, performance may degrade.
+
+3. The distribution of data to be written across tables or sub-tables. Writing to a single table in one batch is more efficient than writing to multiple tables in one batch.
+
+4. Data Writing Protocol.
+   - Parameter binding mode is more efficient than SQL because it doesn't have the cost of parsing SQL.
+   - Writing to known existing tables is more efficient than writing to uncertain tables in automatic table creation mode, because the latter needs to check whether the table exists before actually writing data into it.
+   - Writing in SQL is more efficient than writing in schemaless mode, because schemaless writing creates tables automatically and may alter table schemas.
+
+Application programs need to take the above factors into account and try to take advantage of them. The application program should write to a single table in each write batch (see the sketch below). The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections also needs to be tuned to achieve the best writing throughput.
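+
+As a minimal sketch, one batch for a single table packed into one SQL statement could look like this (the values are hypothetical):
+
+```sql
+INSERT INTO d1001 VALUES
+  (1538548684000, 10.2, 220, 0.23)
+  (1538548685000, 10.3, 219, 0.31)
+  (1538548686000, 10.8, 221, 0.29);
+```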
+
+### Data Source
+
+Application programs need to read data from a data source and then write it into TDengine. If you meet one or more of the situations below, you need to set up message queues between the threads reading from the data source and the threads writing into TDengine.
+
+1. There are multiple data sources, and the data generation speed of each data source is much slower than the speed of a single writing thread. In this case, the purpose of the message queues is to consolidate data from multiple sources to increase the batch size of a single write.
+2. The data generation speed of a single data source is much higher than the speed of a single writing thread. In this case, the purpose of the message queue is to provide a buffer so that data is not lost and multiple writing threads can take data from the buffer.
+3. The data for a single table comes from multiple data sources. In this case, the purpose of the message queues is to gather the data for a single table together to improve write efficiency.
+
+If the data source is Kafka, then the application program is a consumer of Kafka, and you can benefit from some Kafka features to achieve high-performance writing:
+
+1. Put the data for a table in a single partition of a single topic so that it's easier to group the data for each table together and write in batches.
+2. Subscribe to multiple topics to accumulate data together.
+3. Add more consumers to gain more concurrency and throughput.
+4. Increase the size of a single fetch to increase the size of a write batch.
+
+### Tune TDengine
+
+TDengine is a distributed, high-performance time-series database. There are also some ways to tune TDengine for better writing performance.
+
+1. Set a proper number of `vgroups` according to the number of available CPU cores. Normally, we recommend 2 \* number_of_cores as a starting point. If the verification result shows this is not enough to fully utilize CPU resources, you can use a higher value.
+2. Set proper values for `minTablesPerVnode`, `tableIncStepPerVnode`, and `maxVgroupsPerDb` according to the number of tables so that tables are distributed evenly across vgroups. The purpose is to balance the workload among all vnodes so that system resources are utilized better and performance is higher.
+
+For more performance tuning tips, please refer to [Performance Optimization](../../../operation/optimize) and [Configuration Parameters](../../../reference/config).
+
+## Sample Programs
+
+This section introduces sample programs that demonstrate how to write into TDengine with high performance.
+
+### Scenario
+
+Below is the scenario for the high-performance writing sample programs.
+
+- The application program reads data from a data source; the sample program simulates a data source by generating data.
+- The speed of a single writing thread is much slower than the speed of generating data, so the program starts multiple writing threads. Each thread establishes a connection to TDengine and has a message queue of fixed size.
+- The application program maps the received data to different writing threads based on the table name, so that all the data for a table is always processed by one specific writing thread.
+- Each writing thread writes the received data into TDengine once the message queue becomes empty or the amount of data read reaches a threshold.
+
+
+
+### Sample Programs
+
+The sample programs listed in this section are based on the scenario described previously. If your scenario is different, please adjust the code based on the principles described in this chapter.
+
+The sample programs assume the source data is for all the different sub-tables of the same super table (meters). The super table has been created before the sample program starts writing data. Sub-tables are created automatically according to the received data. If there are multiple super tables in your case, please adjust the part of the code that creates tables automatically.
+
+
+
+
+**Program Inventory**
+
+| Class | Description |
+| ---------------- | ----------------------------------------------------------------------------------------------------- |
+| FastWriteExample | Main Program |
+| ReadTask | Read data from simulated data source and put into a queue according to the hash value of table name |
+| WriteTask        | Read data from queue, compose a write batch and write into TDengine                                    |
+| MockDataSource | Generate data for some sub tables of super table meters |
+| SQLWriter | WriteTask uses this class to compose SQL, create table automatically, check SQL length and write data |
+| StmtWriter | Write in Parameter binding mode (Not finished yet) |
+| DataBaseMonitor | Calculate the writing speed and output on console every 10 seconds |
+
+Below are the complete code and more detailed descriptions of the classes in the above table.
+
+
+FastWriteExample
+The main program is responsible for:
+
+1. Create message queues
+2. Start writing threads
+3. Start reading threads
+4. Output writing speed every 10 seconds
+
+The main program provides 4 parameters for tuning:
+
+1. The number of reading threads, default value is 1
+2. The number of writing threads, default value is 2
+3. The total number of tables in the generated data, default value is 1000. These tables are distributed evenly across all writing threads. If the number of tables is very large, creating them at the beginning will take a long time.
+4. The batch size of a single write, default value is 3,000
+
+The capacity of the message queue also impacts performance and can be tuned by modifying the program. Normally it's better to have a larger message queue: a larger queue means a lower chance of being blocked when enqueueing, and higher throughput, but it also consumes more memory. The default value used in the sample programs is already big enough.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
+```
+
+
+
+
+ReadTask
+
+ReadTask reads data from the data source. Each ReadTask is associated with a simulated data source; each data source generates data for a specific group of tables, and the data of any table is generated by only one specific data source.
+
+ReadTask puts data into the message queue in blocking mode; that is, the put operation blocks if the message queue is full.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java}}
+```
+
+
+
+
+WriteTask
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java}}
+```
+
+
+
+
+
+MockDataSource
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java}}
+```
+
+
+
+
+
+SQLWriter
+
+The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that none of the tables are created beforehand; instead they are created automatically when the "table doesn't exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged for you to debug.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java}}
+```
+
+
+
+
+
+DataBaseMonitor
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java}}
+```
+
+
+
+**Steps to Launch**
+
+
+Launch Java Sample Program
+
+You need to set the environment variable `TDENGINE_JDBC_URL` before launching the program. If TDengine Server is set up on localhost, the default values for user name, password and port can be used, as below:
+
+```
+TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+```
+
+**Launch in IDE**
+
+1. Clone the TDengine repository
+ ```
+ git clone git@github.com:taosdata/TDengine.git --depth 1
+ ```
+2. Use IDE to open `docs/examples/java` directory
+3. Configure the environment variable `TDENGINE_JDBC_URL` (if you configured it before launching the IDE, you can skip this step)
+4. Run class `com.taos.example.highvolume.FastWriteExample`
+
+**Launch on server**
+
+If you want to launch the sample program on a remote server, please follow the steps below:
+
+1. Package the sample programs. Execute the command below in the directory `TDengine/docs/examples/java`:
+ ```
+ mvn package
+ ```
+2. Create `examples/java` directory on the server
+ ```
+ mkdir -p examples/java
+ ```
+3. Copy dependencies (the commands below assume you are working on a local Windows host and launching on a remote Linux host)
+ - Copy dependent packages
+ ```
+     scp -r .\target\lib <user>@<host>:~/examples/java
+ ```
+ - Copy the jar of sample programs
+ ```
+     scp -r .\target\javaexample-1.0.jar <user>@<host>:~/examples/java
+ ```
+4. Configure the environment variable
+   Edit `~/.bash_profile` or `~/.bashrc` and add the line below:
+
+ ```
+ export TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+ ```
+
+   If your TDengine server is not deployed on localhost or doesn't use the default port, you need to change the above URL to the correct value for your environment.
+
+5. Launch the sample program
+
+ ```
+ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample
+ ```
+
+6. The sample program doesn't exit unless you press CTRL + C to terminate it.
+   Below is the output of a run on a server with 16 cores, 64 GB memory and an SSD.
+
+ ```
+ root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
+ 18:56:35.896 [main] INFO c.t.e.highvolume.FastWriteExample - readTaskCount=2, writeTaskCount=12 tableCount=1000 maxBatchSize=3000
+ 18:56:36.011 [WriteThread-0] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.015 [WriteThread-0] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.021 [WriteThread-1] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.022 [WriteThread-1] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.031 [WriteThread-2] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.032 [WriteThread-2] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.041 [WriteThread-3] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.042 [WriteThread-3] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.093 [WriteThread-4] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.094 [WriteThread-4] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.099 [WriteThread-5] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.100 [WriteThread-5] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.100 [WriteThread-6] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.101 [WriteThread-6] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.103 [WriteThread-7] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.104 [WriteThread-7] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.105 [WriteThread-8] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.107 [WriteThread-8] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.108 [WriteThread-9] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.109 [WriteThread-9] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.156 [WriteThread-10] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.157 [WriteThread-11] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.158 [WriteThread-10] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.158 [ReadThread-0] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [ReadThread-1] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [WriteThread-11] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:46.369 [main] INFO c.t.e.highvolume.FastWriteExample - count=18554448 speed=1855444
+ 18:56:56.946 [main] INFO c.t.e.highvolume.FastWriteExample - count=39059660 speed=2050521
+ 18:57:07.322 [main] INFO c.t.e.highvolume.FastWriteExample - count=59403604 speed=2034394
+ 18:57:18.032 [main] INFO c.t.e.highvolume.FastWriteExample - count=80262938 speed=2085933
+ 18:57:28.432 [main] INFO c.t.e.highvolume.FastWriteExample - count=101139906 speed=2087696
+ 18:57:38.921 [main] INFO c.t.e.highvolume.FastWriteExample - count=121807202 speed=2066729
+ 18:57:49.375 [main] INFO c.t.e.highvolume.FastWriteExample - count=142952417 speed=2114521
+ 18:58:00.689 [main] INFO c.t.e.highvolume.FastWriteExample - count=163650306 speed=2069788
+ 18:58:11.646 [main] INFO c.t.e.highvolume.FastWriteExample - count=185019808 speed=2136950
+ ```
+
+
+
+
+
+
+**Program Inventory**
+
+The Python sample programs use multiple processes and cross-process message queues.
+
+| Function/Class               | Description                                                                   |
+| ---------------------------- | --------------------------------------------------------------------------- |
+| main Function | Program entry point, create child processes and message queues |
+| run_monitor_process Function | Create database, super table, calculate writing speed and output to console |
+| run_read_task Function | Read data and distribute to message queues |
+| MockDataSource Class | Simulate data source, return next 1,000 rows of each table |
+| run_write_task Function      | Read as much data as possible from the message queue and write in batches    |
+| SQLWriter Class              | Write in SQL and create tables automatically                                  |
+| StmtWriter Class | Write in parameter binding mode (not finished yet) |
+
+
+main function
+
+The `main` function is responsible for creating message queues and forking child processes. There are 3 kinds of child processes:
+
+1. The monitoring process, which initializes the database and calculates the writing speed
+2. The reading processes (n), which read data from the data source
+3. The writing processes (m), which write data into TDengine
+
+The `main` function provides 5 parameters:
+
+1. The number of reading tasks, default value is 1
+2. The number of writing tasks, default value is 1
+3. The number of tables, default value is 1,000
+4. The capacity of message queue, default value is 1,000,000 bytes
+5. The batch size in single write, default value is 3000
+
+```python
+{{#include docs/examples/python/fast_write_example.py:main}}
+```
+
+
+
+
+run_monitor_process
+
+The monitoring process initializes the database and monitors the writing speed.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:monitor}}
+```
+
+
+
+
+
+run_read_task function
+
+The reading process reads data from another data system and distributes it to the message queues allocated to it.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:read}}
+```
+
+
+
+
+
+MockDataSource
+
+Below is the simulated data source; we assume each generated record carries its table name.
+
+```python
+{{#include docs/examples/python/mockdatasource.py}}
+```
+
+
+
+
+run_write_task function
+
+The writing process reads as much data as possible from the message queue and writes it in batches.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:write}}
+```
+
+
+
+
+
+SQLWriter
+
+The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that none of the tables are created beforehand; instead they are created automatically when the "table doesn't exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged for you to debug. This class also checks the SQL length: if the composed SQL is close to `maxSQLLength`, it is executed immediately. To improve writing efficiency, it's better to increase `maxSQLLength` appropriately.
+
+```python
+{{#include docs/examples/python/sql_writer.py}}
+```
+
+
+
+**Steps to Launch**
+
+
+
+Launch Sample Program in Python
+
+1. Prerequisites
+
+   - TDengine client driver has been installed
+   - Python3 (version >= 3.8) has been installed
+ - TDengine Python connector `taospy` has been installed
+
+2. Install faster-fifo to replace the Python built-in multiprocessing.Queue
+
+ ```
+ pip3 install faster-fifo
+ ```
+
+3. Click the "Copy" in the above sample programs to copy `fast_write_example.py` 、 `sql_writer.py` and `mockdatasource.py`.
+
+4. Execute the program
+
+ ```
+ python3 fast_write_example.py
+ ```
+
+   Below is the output of a run on a server with 16 cores, 64 GB memory and an SSD.
+
+ ```
+ root@vm85$ python3 fast_write_example.py 8 8
+ 2022-07-14 19:13:45,869 [root] - READ_TASK_COUNT=8, WRITE_TASK_COUNT=8, TABLE_COUNT=1000, QUEUE_SIZE=1000000, MAX_BATCH_SIZE=3000
+ 2022-07-14 19:13:48,882 [root] - WriteTask-0 started with pid 718347
+ 2022-07-14 19:13:48,883 [root] - WriteTask-1 started with pid 718348
+ 2022-07-14 19:13:48,884 [root] - WriteTask-2 started with pid 718349
+ 2022-07-14 19:13:48,884 [root] - WriteTask-3 started with pid 718350
+ 2022-07-14 19:13:48,885 [root] - WriteTask-4 started with pid 718351
+ 2022-07-14 19:13:48,885 [root] - WriteTask-5 started with pid 718352
+ 2022-07-14 19:13:48,886 [root] - WriteTask-6 started with pid 718353
+ 2022-07-14 19:13:48,886 [root] - WriteTask-7 started with pid 718354
+ 2022-07-14 19:13:48,887 [root] - ReadTask-0 started with pid 718355
+ 2022-07-14 19:13:48,888 [root] - ReadTask-1 started with pid 718356
+ 2022-07-14 19:13:48,889 [root] - ReadTask-2 started with pid 718357
+ 2022-07-14 19:13:48,889 [root] - ReadTask-3 started with pid 718358
+ 2022-07-14 19:13:48,890 [root] - ReadTask-4 started with pid 718359
+ 2022-07-14 19:13:48,891 [root] - ReadTask-5 started with pid 718361
+ 2022-07-14 19:13:48,892 [root] - ReadTask-6 started with pid 718364
+ 2022-07-14 19:13:48,893 [root] - ReadTask-7 started with pid 718365
+ 2022-07-14 19:13:56,042 [DataBaseMonitor] - count=6676310 speed=667631.0
+ 2022-07-14 19:14:06,196 [DataBaseMonitor] - count=20004310 speed=1332800.0
+ 2022-07-14 19:14:16,366 [DataBaseMonitor] - count=32290310 speed=1228600.0
+ 2022-07-14 19:14:26,527 [DataBaseMonitor] - count=44438310 speed=1214800.0
+ 2022-07-14 19:14:36,673 [DataBaseMonitor] - count=56608310 speed=1217000.0
+ 2022-07-14 19:14:46,834 [DataBaseMonitor] - count=68757310 speed=1214900.0
+ 2022-07-14 19:14:57,280 [DataBaseMonitor] - count=80992310 speed=1223500.0
+ 2022-07-14 19:15:07,689 [DataBaseMonitor] - count=93805310 speed=1281300.0
+ 2022-07-14 19:15:18,020 [DataBaseMonitor] - count=106111310 speed=1230600.0
+ 2022-07-14 19:15:28,356 [DataBaseMonitor] - count=118394310 speed=1228300.0
+ 2022-07-14 19:15:38,690 [DataBaseMonitor] - count=130742310 speed=1234800.0
+ 2022-07-14 19:15:49,000 [DataBaseMonitor] - count=143051310 speed=1230900.0
+ 2022-07-14 19:15:59,323 [DataBaseMonitor] - count=155276310 speed=1222500.0
+ 2022-07-14 19:16:09,649 [DataBaseMonitor] - count=167603310 speed=1232700.0
+ 2022-07-14 19:16:19,995 [DataBaseMonitor] - count=179976310 speed=1237300.0
+ ```
+
+
+
+:::note
+If you use the Python connector in multi-process mode, don't establish a connection to TDengine in the parent process; otherwise all connections in the child processes will be blocked forever. This is a known issue.
+
+:::
+
+
+
diff --git a/docs/en/07-develop/03-insert-data/_c_line.mdx b/docs/en/07-develop/03-insert-data/_c_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..7f2f0d5dd8198d52dda1da34256e54a1bbb4c967
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_c_line.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/line_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_c_opts_json.mdx b/docs/en/07-develop/03-insert-data/_c_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..34b1d8ab3c1e299c2ab2a1ad6d47f81dfaa364cc
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_c_opts_json.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/json_protocol_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_c_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_c_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..6bda068d12fd0b379a5af96438029c9ae476a753
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_c_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/telnet_line_example.c:main}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_c_sql.mdx b/docs/en/07-develop/03-insert-data/_c_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4e55c3387ee1c6fe860f312afdbdad65142bf7fb
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_c_sql.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/insert_example.c}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_c_stmt.mdx b/docs/en/07-develop/03-insert-data/_c_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4b609efe5e942c7ecb8296e8fdbd0607f1421229
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_c_stmt.mdx
@@ -0,0 +1,6 @@
+```c title=Single Row Binding
+{{#include docs/examples/c/stmt_example.c}}
+```
+```c title=Multiple Row Binding 72:117
+{{#include docs/examples/c/multi_bind_example.c}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_category_.yml b/docs/en/07-develop/03-insert-data/_category_.yml
similarity index 100%
rename from docs-en/07-develop/03-insert-data/_category_.yml
rename to docs/en/07-develop/03-insert-data/_category_.yml
diff --git a/docs/en/07-develop/03-insert-data/_cs_line.mdx b/docs/en/07-develop/03-insert-data/_cs_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..71f46c62be3dfe7d771a35b2298e476bed353aba
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_cs_line.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/InfluxDBLineExample.cs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_cs_opts_json.mdx b/docs/en/07-develop/03-insert-data/_cs_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..8d80d042c984c513df5ca91813c0cd0a17b58eb5
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_cs_opts_json.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/OptsJsonExample.cs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_cs_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_cs_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cff32abf1feaf703971111542749fbe40152bc33
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_cs_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/OptsTelnetExample.cs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_cs_sql.mdx b/docs/en/07-develop/03-insert-data/_cs_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1dc7bb3d1366aa3000212786756506eb5eb280e6
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_cs_sql.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/SQLInsertExample.cs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_cs_stmt.mdx b/docs/en/07-develop/03-insert-data/_cs_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..229c874ab9f515e7eae66890a3dfe2e59c129e86
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_cs_stmt.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/StmtInsertExample.cs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_go_line.mdx b/docs/en/07-develop/03-insert-data/_go_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..df2afc0e8720ca14e42e0e4bd7e50276cecace43
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_go_line.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/insert/line/main.go}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_go_opts_json.mdx b/docs/en/07-develop/03-insert-data/_go_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..362ce430515c70a3ac502e646630025d7f950612
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_go_opts_json.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/insert/json/main.go}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_go_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_go_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..518ea4c8164ab148afff9e21b03d892cbc1bfaf8
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_go_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/insert/telnet/main.go}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_go_sql.mdx b/docs/en/07-develop/03-insert-data/_go_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..02f4d4e2ba21bc14dd67cb0443a1631b06750923
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_go_sql.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/insert/sql/main.go}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_go_stmt.mdx b/docs/en/07-develop/03-insert-data/_go_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ab519c9a806345c2f14337f62c74728da955d2e0
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_go_stmt.mdx
@@ -0,0 +1,8 @@
+```go
+{{#include docs/examples/go/insert/stmt/main.go}}
+```
+
+:::tip
+`github.com/taosdata/driver-go/v2/wrapper` module in driver-go is the wrapper for C API, it can be used to insert data with parameter binding.
+
+:::
diff --git a/docs/en/07-develop/03-insert-data/_java_line.mdx b/docs/en/07-develop/03-insert-data/_java_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..17f759d30fdb76744dc032be60ee91b6dd9f1540
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_java_line.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/LineProtocolExample.java}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_java_opts_json.mdx b/docs/en/07-develop/03-insert-data/_java_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1fc0adc202f26c73e64da09456e7e42bdc6367f6
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_java_opts_json.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/JSONProtocolExample.java}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_java_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_java_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b68f54b4e872a57f34ae6d5c3651a70812b71154
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_java_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_java_sql.mdx b/docs/en/07-develop/03-insert-data/_java_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..636c7e00eb8846704678ef3cdd8394a99a4528f8
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_java_sql.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/RestInsertExample.java:insert}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_java_stmt.mdx b/docs/en/07-develop/03-insert-data/_java_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..2f6a33769044ef5052e633e28a9b60fdab130e88
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_java_stmt.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/StmtInsertExample.java}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_js_line.mdx b/docs/en/07-develop/03-insert-data/_js_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cc138a76bde76e779eaa1fe554ecc82c1f564e24
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_js_line.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/influxdb_line_example.js}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_js_opts_json.mdx b/docs/en/07-develop/03-insert-data/_js_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cb3c275ce8140ed58d668bf03972a1f960bb6564
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_js_opts_json.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/opentsdb_json_example.js}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_js_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_js_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..db96742f31440342516134636db998af987af9fb
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_js_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/opentsdb_telnet_example.js}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_js_sql.mdx b/docs/en/07-develop/03-insert-data/_js_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a9a12f5d2cfb31bcaefba25a82846b455dbc8671
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_js_sql.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/insert_example.js}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_js_stmt.mdx b/docs/en/07-develop/03-insert-data/_js_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..8df1065c4a42537c2e4c61087ad77cdde9e24a77
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_js_stmt.mdx
@@ -0,0 +1,12 @@
+```js title=Single Row Binding
+{{#include docs/examples/node/nativeexample/param_bind_example.js}}
+```
+
+```js title=Multiple Row Binding
+{{#include docs/examples/node/nativeexample/multi_bind_example.js:insertData}}
+```
+
+:::info
+Multiple row binding is better in performance than single row binding, but it can only be used with `INSERT` statement while single row binding can be used for other SQL statements besides `INSERT`.
+
+:::
diff --git a/docs/en/07-develop/03-insert-data/_py_line.mdx b/docs/en/07-develop/03-insert-data/_py_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..85f7e32e6681c6d428a2332220194c169c421f2f
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_py_line.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/line_protocol_example.py}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_py_opts_json.mdx b/docs/en/07-develop/03-insert-data/_py_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..195c7090c02e03131c4261c57f1414a5ab1ba6b6
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_py_opts_json.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/json_protocol_example.py}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_py_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_py_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3bae1ea57bcffe50be5b4e96a7ae8f83faed2087
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_py_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/telnet_line_protocol_example.py}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_py_sql.mdx b/docs/en/07-develop/03-insert-data/_py_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1557e3994b04e64c596918ee67c63e7765ebaa07
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_py_sql.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/native_insert_example.py}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_py_stmt.mdx b/docs/en/07-develop/03-insert-data/_py_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4f7636bfb8ea920e1e879b8e59083543cf798d01
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_py_stmt.mdx
@@ -0,0 +1,12 @@
+```py title=Single Row Binding
+{{#include docs/examples/python/bind_param_example.py}}
+```
+
+```py title=Multiple Row Binding
+{{#include docs/examples/python/multi_bind_example.py:bind_batch}}
+```
+
+:::info
+Multiple row binding is better in performance than single row binding, but it can only be used with `INSERT` statement while single row binding can be used for other SQL statements besides `INSERT`.
+
+:::
\ No newline at end of file
diff --git a/docs/en/07-develop/03-insert-data/_rust_line.mdx b/docs/en/07-develop/03-insert-data/_rust_line.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..dbb35d76bc3517463902b642ce4a3861ae42b2f8
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_rust_line.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/schemalessexample/examples/influxdb_line_example.rs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_rust_opts_json.mdx b/docs/en/07-develop/03-insert-data/_rust_opts_json.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cc2055510bce006491ed277a8e884b9958a5a993
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_rust_opts_json.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/schemalessexample/examples/opentsdb_json_example.rs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_rust_opts_telnet.mdx b/docs/en/07-develop/03-insert-data/_rust_opts_telnet.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..109c0c5d019e250b87e12c535e4f55c69924b4af
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_rust_opts_telnet.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_rust_sql.mdx b/docs/en/07-develop/03-insert-data/_rust_sql.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..fb59a4826510e666457ac592328cc5ba17412c79
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_rust_sql.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/restexample/examples/insert_example.rs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/_rust_stmt.mdx b/docs/en/07-develop/03-insert-data/_rust_stmt.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a889b56745601158489037a590b6cf5bd80da543
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/_rust_stmt.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/nativeexample/examples/stmt_example.rs}}
+```
diff --git a/docs/en/07-develop/03-insert-data/highvolume.webp b/docs/en/07-develop/03-insert-data/highvolume.webp
new file mode 100644
index 0000000000000000000000000000000000000000..46dfc74ae3b0043c591ff930c62251da49cae7ad
Binary files /dev/null and b/docs/en/07-develop/03-insert-data/highvolume.webp differ
diff --git a/docs-en/07-develop/03-insert-data/index.md b/docs/en/07-develop/03-insert-data/index.md
similarity index 100%
rename from docs-en/07-develop/03-insert-data/index.md
rename to docs/en/07-develop/03-insert-data/index.md
diff --git a/docs/en/07-develop/04-query-data/_c.mdx b/docs/en/07-develop/04-query-data/_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c51557ef2918dd9152e329c6e1937109d286b11c
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_c.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/query_example.c}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/04-query-data/_c_async.mdx b/docs/en/07-develop/04-query-data/_c_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..641a53e82ddb252e1b3255799bd922158a08f229
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_c_async.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/async_query_example.c:demo}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_category_.yml b/docs/en/07-develop/04-query-data/_category_.yml
similarity index 100%
rename from docs-en/07-develop/04-query-data/_category_.yml
rename to docs/en/07-develop/04-query-data/_category_.yml
diff --git a/docs/en/07-develop/04-query-data/_cs.mdx b/docs/en/07-develop/04-query-data/_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4bb582ecbfaeceac679af975e7752d1caeacb018
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_cs.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/QueryExample.cs}}
+```
diff --git a/docs/en/07-develop/04-query-data/_cs_async.mdx b/docs/en/07-develop/04-query-data/_cs_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3ecf635fd39db402d1db68de6d7336b7b2d9d8e8
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_cs_async.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/AsyncQueryExample.cs}}
+```
diff --git a/docs/en/07-develop/04-query-data/_go.mdx b/docs/en/07-develop/04-query-data/_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b43894a1ebe8aa0a261cce5f2469f2b3f8449fc4
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_go.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/query/sync/main.go}}
+```
diff --git a/docs/en/07-develop/04-query-data/_go_async.mdx b/docs/en/07-develop/04-query-data/_go_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3fbc6f5b6dac9d3987678e64d7268eed200ce513
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_go_async.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/query/async/main.go}}
+```
diff --git a/docs/en/07-develop/04-query-data/_java.mdx b/docs/en/07-develop/04-query-data/_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..74de32658c658fb81c29349a1997e32ed512db1b
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_java.mdx
@@ -0,0 +1,3 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/RestQueryExample.java}}
+```
diff --git a/docs/en/07-develop/04-query-data/_js.mdx b/docs/en/07-develop/04-query-data/_js.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..5883d378e7c7acab033bffb2018f00f1ab5a48d5
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_js.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/query_example.js}}
+```
diff --git a/docs/en/07-develop/04-query-data/_js_async.mdx b/docs/en/07-develop/04-query-data/_js_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..4b0f54a0342e62da1e5050d49546ca605ae1d729
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_js_async.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/async_query_example.js}}
+```
diff --git a/docs/en/07-develop/04-query-data/_py.mdx b/docs/en/07-develop/04-query-data/_py.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..8ebeca450bd611913874b606b73e65f1e484d239
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_py.mdx
@@ -0,0 +1,11 @@
+Result set is iterated row by row.
+
+```py
+{{#include docs/examples/python/query_example.py:iter}}
+```
+
+Result set is retrieved as a whole, each row is converted to a dict and returned.
+
+```py
+{{#include docs/examples/python/query_example.py:fetch_all}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/04-query-data/_py_async.mdx b/docs/en/07-develop/04-query-data/_py_async.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..393a5b173351bafcbdb469ac7d00db0a6b22dbc1
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_py_async.mdx
@@ -0,0 +1,8 @@
+```py
+{{#include docs/examples/python/async_query_example.py}}
+```
+
+:::note
+This sample code can't be run on Windows system for now.
+
+:::
diff --git a/docs/en/07-develop/04-query-data/_rust.mdx b/docs/en/07-develop/04-query-data/_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..cab1b403fbba0cb432ecb9cb280a0fa7582c5be1
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/_rust.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/restexample/examples/query_example.rs}}
+```
diff --git a/docs/en/07-develop/04-query-data/index.mdx b/docs/en/07-develop/04-query-data/index.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e8e4b5c0ad555c0807af5f50a75afdffc1aaa50c
--- /dev/null
+++ b/docs/en/07-develop/04-query-data/index.mdx
@@ -0,0 +1,186 @@
+---
+sidebar_label: Query data
+title: Query data
+description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import JavaQuery from "./_java.mdx";
+import PyQuery from "./_py.mdx";
+import GoQuery from "./_go.mdx";
+import RustQuery from "./_rust.mdx";
+import NodeQuery from "./_js.mdx";
+import CsQuery from "./_cs.mdx";
+import CQuery from "./_c.mdx";
+import PyAsync from "./_py_async.mdx";
+import NodeAsync from "./_js_async.mdx";
+import CsAsync from "./_cs_async.mdx";
+import CAsync from "./_c_async.mdx";
+
+## Introduction
+
+SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
+
+- Query on single column or multiple columns
+- Filter on tags or data columns: >, <, =, <\>, like
+- Grouping of results: `Group By`
+- Sorting of results: `Order By`
+- Limit the number of results: `Limit/Offset`
+- Arithmetic on columns of numeric types or aggregate results
+- Join query with timestamp alignment
+- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
+
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
+
+```sql
+select * from d1001 where voltage > 215 order by ts desc limit 2;
+```
+
+```title=Output
+taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
+ ts | current | voltage | phase |
+======================================================================================
+ 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
+ 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
+Query OK, 2 row(s) in set (0.001100s)
+```
+
+To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
+
+For detailed query syntax please refer to [Select](../../12-taos-sql/06-select.md).
+
+## Aggregation among Tables
+
+In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
+
+In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated together.
+
+### Example 1
+
+In TDengine CLI `taos`, use the SQL below to get the average voltage of all meters, grouped by location.
+
+```
+taos> SELECT AVG(voltage) FROM meters GROUP BY location;
+ avg(voltage) | location |
+=============================================================
+ 222.000000000 | California.LosAngeles |
+ 219.200000000 | California.SanFrancisco |
+Query OK, 2 row(s) in set (0.002136s)
+```
+
+### Example 2
+
+In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
+
+```
+taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
+ count(*) | max(current) |
+==================================
+ 5 | 13.4 |
+Query OK, 1 row(s) in set (0.002136s)
+```
+
+Join queries are only allowed between subtables of the same STable. In [Select](../../12-taos-sql/06-select.md), each query operation is marked to indicate whether it supports STables.
+
+## Down Sampling and Interpolation
+
+In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify queries by time window. For example, the SQL statement below gets the sum of current in each 10-second window from table d1001.
+
+```
+taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:00.000 | 10.300000191 |
+ 2018-10-03 14:38:10.000 | 24.900000572 |
+Query OK, 2 row(s) in set (0.000883s)
+```
+
+Down sampling can also be used on a STable. For example, the SQL statement below gets the sum of current every second from all meters in California.
+
+```
+taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:04.000 | 10.199999809 |
+ 2018-10-03 14:38:05.000 | 32.900000572 |
+ 2018-10-03 14:38:06.000 | 11.500000000 |
+ 2018-10-03 14:38:15.000 | 12.600000381 |
+ 2018-10-03 14:38:16.000 | 36.000000000 |
+Query OK, 5 row(s) in set (0.001538s)
+```
+
+Down sampling also supports time offset. For example, the SQL statement below gets the sum of current from all meters, with each time window shifted by an offset of 500 milliseconds.
+
+```
+taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
+ ts | sum(current) |
+======================================================
+ 2018-10-03 14:38:04.500 | 11.189999809 |
+ 2018-10-03 14:38:05.500 | 31.900000572 |
+ 2018-10-03 14:38:06.500 | 11.600000000 |
+ 2018-10-03 14:38:15.500 | 12.300000381 |
+ 2018-10-03 14:38:16.500 | 35.000000000 |
+Query OK, 5 row(s) in set (0.001521s)
+```
+
+In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, many algorithms, like FFT, require data to be aligned at the same time interval, and application programs would otherwise have to handle the alignment themselves. In TDengine, it's easy to achieve this alignment using down sampling.
+
+Interpolation can be performed in TDengine if there is no data in a time range.
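+
+As a sketch, the query below combines a time window with `FILL` so that a 10-second window containing no data is filled with the previous window's value (the time range is illustrative):
+
+```sql
+SELECT AVG(current) FROM d1001 WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:39:00.000' INTERVAL(10s) FILL(PREV);
+```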
+
+For more details please refer to [Aggregate by Window](../../12-taos-sql/12-interval.md).
+
+## Examples
+
+### Query
+
+In the section describing [Insert](../03-insert-data/01-sql-writing.mdx), a database named `power` is created and some data is inserted into the STable `meters`. The sample code below demonstrates how to query the data in this STable.
+
+<Tabs defaultValue="java" groupId="lang">
+  <TabItem label="Java" value="java">
+    <JavaQuery />
+  </TabItem>
+  <TabItem label="Python" value="python">
+    <PyQuery />
+  </TabItem>
+  <TabItem label="Go" value="go">
+    <GoQuery />
+  </TabItem>
+  <TabItem label="Rust" value="rust">
+    <RustQuery />
+  </TabItem>
+  <TabItem label="Node.js" value="nodejs">
+    <NodeQuery />
+  </TabItem>
+  <TabItem label="C#" value="csharp">
+    <CsQuery />
+  </TabItem>
+  <TabItem label="C" value="c">
+    <CQuery />
+  </TabItem>
+</Tabs>
+
+:::note
+
+1. The above sample code works with both REST and native connections.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless.
+
+:::
+
+### Asynchronous Query
+
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than the sync API. The async API works in non-blocking mode: a call returns before the operation finishes, so the calling thread can switch to other work, improving the performance of the whole application. Async APIs perform especially well under poor network conditions.
+
+Please note that async query can only be used with a native connection.
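+
+For the native C connector, a minimal sketch of an asynchronous query looks like the code below. The callback name and the follow-up fetch are illustrative only; complete programs are in the language tabs that follow.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Invoked by the client library once the query has completed.
+void query_callback(void *param, TAOS_RES *res, int code) {
+  if (code != 0) {
+    printf("query failed, reason: %s\n", taos_errstr(res));
+  } else {
+    // rows would be fetched here, e.g. with taos_fetch_rows_a()
+    printf("query succeeded\n");
+  }
+  taos_free_result(res);
+}
+
+// After taos_connect() has succeeded:
+//   taos_query_a(taos, "SELECT count(*) FROM meters", query_callback, NULL);
+```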
+
+<Tabs defaultValue="python" groupId="lang">
+  <TabItem label="Python" value="python">
+    <PyAsync />
+  </TabItem>
+  <TabItem label="C#" value="csharp">
+    <CsAsync />
+  </TabItem>
+  <TabItem label="C" value="c">
+    <CAsync />
+  </TabItem>
+</Tabs>
diff --git a/docs-en/07-develop/06-continuous-query.mdx b/docs/en/07-develop/06-continuous-query.mdx
similarity index 100%
rename from docs-en/07-develop/06-continuous-query.mdx
rename to docs/en/07-develop/06-continuous-query.mdx
diff --git a/docs/en/07-develop/07-subscribe.mdx b/docs/en/07-develop/07-subscribe.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..e309a33fc8f2c30c7fe2ab2a21a517029b089ab1
--- /dev/null
+++ b/docs/en/07-develop/07-subscribe.mdx
@@ -0,0 +1,256 @@
+---
+sidebar_label: Data Subscription
+description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
+title: Data Subscription
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import Java from "./_sub_java.mdx";
+import Python from "./_sub_python.mdx";
+import Go from "./_sub_go.mdx";
+import Rust from "./_sub_rust.mdx";
+import Node from "./_sub_node.mdx";
+import CSharp from "./_sub_cs.mdx";
+import CDemo from "./_sub_c.mdx";
+
+## Introduction
+
+Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
+
+A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
+
+There are 3 major APIs related to subscription provided in the TDengine client driver.
+
+```c
+taos_subscribe
+taos_consume
+taos_unsubscribe
+```
+
+For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage is introduced below with the meters use case, reusing the STable and subtable schema from the previous section [Continuous Query](../continuous-query). Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
+
+Suppose we want to be notified and take action whenever the current of some meter exceeds a threshold, like 10A. There are two ways:
+
+The first way is to query each subtable and record the last timestamp matching the criteria. Then, after some time, query the data later than the recorded timestamp and repeat this process. The SQL statements for this way are as below.
+
+```sql
+select * from D1001 where ts > {last_timestamp1} and current > 10;
+select * from D1002 where ts > {last_timestamp2} and current > 10;
+...
+```
+
+The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both client side and server side becomes unacceptable once the number of meters grows large.
+
+A better way is to query on the STable: only one `select` is needed regardless of the number of meters, like below:
+
+```sql
+select * from meters where ts > {last_timestamp} and current > 10;
+```
+
+However, this presents a new problem: how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, and sometimes the difference between them may be very big. Second, the time at which the data from different meters arrives at the database may differ too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used, some data from other meters may be missed.
+
+All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
+
+The first step is to create subscription using `taos_subscribe`.
+
+```c
+TAOS_SUB* tsub = NULL;
+if (async) {
+  // create an asynchronous subscription; the callback function will be called every 1s
+ tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
+} else {
+  // create a synchronous subscription; 'taos_consume' needs to be called manually
+ tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
+}
+```
+
+The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input and is then used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data; async subscription means another thread, created by `taos_subscribe` internally, invokes `taos_consume` to retrieve data and passes it to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time-consuming operations in the callback function.
+
+The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.
+
+The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
+
+```sql
+select * from meters where current > 10;
+```
+
+Please note that all the data will be processed because no start time is specified. If we only want to process the data for the past day, a time-related condition can be added:
+
+```sql
+select * from meters where ts > now - 1d and current > 10;
+```
+
+The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
+
+If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, then when the client program is restarted with this subscription, the parameter `restart` determines whether to retrieve data from the beginning or from the point where the subscription was last broken.
+
+If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
+
+The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two continuous invocations of `taos_consume` is smaller than the interval specified in `taos_subscribe`, `taos_consume` will block until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
+
+The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter; it simply passes it to the callback function. This parameter is ignored in sync mode.
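+
+Putting the parameters together, a call in sync mode might look like the following sketch (the topic name and SQL are illustrative):
+
+```c
+TAOS_SUB* tsub = taos_subscribe(
+    taos,                                        // an established connection
+    1,                                           // restart: consume from the beginning
+    "current-monitor",                           // topic: unique within this client
+    "select * from meters where current > 10;",  // subscription query
+    NULL,                                        // no callback: sync mode
+    NULL,                                        // callback argument, ignored in sync mode
+    1000);                                       // polling interval: 1000 ms
+```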
+
+After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
+
+```c
+if (async) {
+ getchar();
+} else while(1) {
+ TAOS_RES* res = taos_consume(tsub);
+ if (res == NULL) {
+ printf("failed to consume data.");
+ break;
+ } else {
+ print_result(res, blockFetch);
+ getchar();
+ }
+}
+```
+
+In the above sample code, in the else condition, there is an infinite loop. Each time the user presses Enter, `taos_consume` is invoked; its return value is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
+
+```c
+void print_result(TAOS_RES* res, int blockFetch) {
+ TAOS_ROW row = NULL;
+ int num_fields = taos_num_fields(res);
+ TAOS_FIELD* fields = taos_fetch_fields(res);
+ int nRows = 0;
+ if (blockFetch) {
+ nRows = taos_fetch_block(res, &row);
+ for (int i = 0; i < nRows; i++) {
+ char temp[256];
+ taos_print_row(temp, row + i, fields, num_fields);
+ puts(temp);
+ }
+ } else {
+ while ((row = taos_fetch_row(res))) {
+ char temp[256];
+ taos_print_row(temp, row, fields, num_fields);
+ puts(temp);
+ nRows++;
+ }
+ }
+ printf("%d rows consumed.\n", nRows);
+}
+```
+
+In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
+
+In async mode, consuming data is simpler as shown below.
+
+```c
+void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
+ print_result(res, *(int*)param);
+}
+```
+
+`taos_unsubscribe` can be invoked to terminate a subscription.
+
+```c
+taos_unsubscribe(tsub, keep);
+```
+
+The second parameter `keep` specifies whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value the next time `taos_subscribe` is invoked. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription. (Note: the default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. This directory does not exist on Windows servers, so on Windows you need to change `DataDir` to an existing directory.) If the progress file of a subscription is removed, the subscription will be restarted from the beginning.
+
+Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
+
+- The sample code has been downloaded to the local system
+- TDengine has been installed and launched properly on the same system
+- The database, STable, and subtables required in the sample code are ready
+
+Launch the command below in the directory where the sample code resides to compile and start the program.
+
+```bash
+make
+./subscribe -sql='select * from meters where current > 10;'
+```
+
+After the program is started, open another terminal and launch TDengine CLI `taos`, then use the SQL commands below to insert a row whose current is 12A into table **D1001**.
+
+```sql
+use test;
+insert into D1001 values(now, 12, 220, 1);
+```
+
+Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.
+
+## Examples
+
+The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
+
+### Prepare Data
+
+```bash
+# create database "power"
+taos> create database power;
+# use "power" as the database in following operations
+taos> use power;
+# create super table "meters"
+taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
+# create subtables using the schema defined by super table "meters"
+taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
+taos> create table d1002 using meters tags ("California.LosAngeles", 2);
+# insert some rows
+taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
+taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
+# filter out the rows in which current is bigger than 10A
+taos> select * from meters where current > 10;
+ ts | current | voltage | phase | location | groupid |
+===========================================================================================================
+ 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 |
+ 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 |
+ 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
+ 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
+ 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
+Query OK, 5 row(s) in set (0.004896s)
+```
+
+### Example Programs
+
+<Tabs defaultValue="java" groupId="lang">
+  <TabItem label="Java" value="java">
+    <Java />
+  </TabItem>
+  <TabItem label="Python" value="python">
+    <Python />
+  </TabItem>
+  {/* <TabItem label="Go" value="go">
+    <Go />
+  </TabItem> */}
+  {/* <TabItem label="Node.js" value="nodejs">
+    <Node />
+  </TabItem>
+  <TabItem label="C#" value="csharp">
+    <CSharp />
+  </TabItem> */}
+  <TabItem label="C" value="c">
+    <CDemo />
+  </TabItem>
+</Tabs>
+
+### Run the Examples
+
+The example programs first consume all historical data matching the criteria.
+
+```bash
+ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
+ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
+ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
+ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
+```
+
+Next, use TDengine CLI to insert a new row.
+
+```
+# taos
+taos> use power;
+taos> insert into d1001 values(now, 12.4, 220, 1);
+```
+
+Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
+
+```
+ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
+```
diff --git a/docs-en/07-develop/08-cache.md b/docs/en/07-develop/08-cache.md
similarity index 100%
rename from docs-en/07-develop/08-cache.md
rename to docs/en/07-develop/08-cache.md
diff --git a/docs-en/07-develop/09-udf.md b/docs/en/07-develop/09-udf.md
similarity index 100%
rename from docs-en/07-develop/09-udf.md
rename to docs/en/07-develop/09-udf.md
diff --git a/docs-en/07-develop/_category_.yml b/docs/en/07-develop/_category_.yml
similarity index 100%
rename from docs-en/07-develop/_category_.yml
rename to docs/en/07-develop/_category_.yml
diff --git a/docs/en/07-develop/_sub_c.mdx b/docs/en/07-develop/_sub_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..da492a0269f064d8cdf9dfb80969894131d94015
--- /dev/null
+++ b/docs/en/07-develop/_sub_c.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/subscribe_demo.c}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_cs.mdx b/docs/en/07-develop/_sub_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a435ea0273c94cbe75eaf7431e1a9c39d49d92e3
--- /dev/null
+++ b/docs/en/07-develop/_sub_cs.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/SubscribeDemo.cs}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_go.mdx b/docs/en/07-develop/_sub_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..34b2aefd92c5eef75b59fbbba96b83da091722a7
--- /dev/null
+++ b/docs/en/07-develop/_sub_go.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/sub/main.go}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_java.mdx b/docs/en/07-develop/_sub_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..ab77f61348c115d3fe3336df47d467c5525f41b8
--- /dev/null
+++ b/docs/en/07-develop/_sub_java.mdx
@@ -0,0 +1,7 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
+```
+:::note
+For now, the Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve a similar purpose.
+
+:::
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_node.mdx b/docs/en/07-develop/_sub_node.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3eeff0922a31a478dd34a77c6cb6471f51a57a8c
--- /dev/null
+++ b/docs/en/07-develop/_sub_node.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/subscribe_demo.js}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_python.mdx b/docs/en/07-develop/_sub_python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..490b76fca6deb61e61dc59c2096b30742a7d25f7
--- /dev/null
+++ b/docs/en/07-develop/_sub_python.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/subscribe_demo.py}}
+```
\ No newline at end of file
diff --git a/docs/en/07-develop/_sub_rust.mdx b/docs/en/07-develop/_sub_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..afb8d79daa3bbd72d72795cb4425f12277d710fc
--- /dev/null
+++ b/docs/en/07-develop/_sub_rust.mdx
@@ -0,0 +1,3 @@
+```rs
+{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
+```
\ No newline at end of file
diff --git a/docs-en/07-develop/index.md b/docs/en/07-develop/index.md
similarity index 100%
rename from docs-en/07-develop/index.md
rename to docs/en/07-develop/index.md
diff --git a/docs-en/10-cluster/01-deploy.md b/docs/en/10-cluster/01-deploy.md
similarity index 100%
rename from docs-en/10-cluster/01-deploy.md
rename to docs/en/10-cluster/01-deploy.md
diff --git a/docs/en/10-cluster/02-cluster-mgmt.md b/docs/en/10-cluster/02-cluster-mgmt.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd3386c41161fc55b4bedcecd6ad3ab5c35be8b6
--- /dev/null
+++ b/docs/en/10-cluster/02-cluster-mgmt.md
@@ -0,0 +1,213 @@
+---
+sidebar_label: Operation
+title: Manage DNODEs
+---
+
+The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.
+
+:::note
+All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
+
+:::
+
+## Show DNODEs
+
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
+
+```sql
+SHOW DNODES;
+```
+
+Below is the example output of this command.
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.008298s)
+```
+
+## Show VGROUPs
+
+To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
+
+Launch TDengine CLI `taos` and execute below command:
+
+```sql
+USE SOME_DATABASE;
+SHOW VGROUPS;
+```
+
+The example output is below:
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.008298s)
+
+taos> use db;
+Database changed.
+
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
+==========================================================================================
+ 14 | 38000 | ready | 1 | 1 | leader | 0 |
+ 15 | 38000 | ready | 1 | 1 | leader | 0 |
+ 16 | 38000 | ready | 1 | 1 | leader | 0 |
+ 17 | 38000 | ready | 1 | 1 | leader | 0 |
+ 18 | 37001 | ready | 1 | 1 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 |
+Query OK, 8 row(s) in set (0.001154s)
+```
+
+## Add DNODE
+
+Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
+
+```sql
+CREATE DNODE "fqdn:port";
+```
+
+The example output is as below:
+
+```
+taos> create dnode "localhost:7030";
+Query OK, 0 of 0 row(s) in database (0.008203s)
+
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
+Query OK, 2 row(s) in set (0.001017s)
+```
+
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
+Query OK, 2 row(s) in set (0.001316s)
+```
+
+## Drop DNODE
+
+Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be obtained from `show dnodes`.
+
+```sql
+DROP DNODE "fqdn:port";
+```
+
+or
+
+```sql
+DROP DNODE dnodeId;
+```
+
+The example output is below:
+
+```
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+ 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
+Query OK, 2 row(s) in set (0.001017s)
+
+taos> drop dnode 2;
+Query OK, 0 of 0 row(s) in database (0.000518s)
+
+taos> show dnodes;
+ id | end_point | vnodes | cores | status | role | create_time | offline reason |
+======================================================================================================================================
+ 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
+Query OK, 1 row(s) in set (0.001137s)
+```
+
+In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
+
+:::note
+
+- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
+- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
+- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode.
+- dnodeIDs are allocated automatically and can't be manually modified. They are generated in ascending order without duplication.
+
+:::
+
+## Move VNODE
+
+A vnode can be manually moved from one dnode to another.
+
+Launch TDengine CLI `taos` and execute below command:
+
+```sql
+ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
+```
+
+In the above command, `source-dnodeId` is the dnodeId of the dnode where the vnode currently resides, `dest-dnodeId` is the dnodeId of the target dnode, and `vgId` (vgroup ID) can be obtained with `SHOW VGROUPS;`.
+
+First `show vgroups` is executed to show the vgroup distribution.
+
+```
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
+==========================================================================================
+ 14 | 38000 | ready | 1 | 3 | leader | 0 |
+ 15 | 38000 | ready | 1 | 3 | leader | 0 |
+ 16 | 38000 | ready | 1 | 3 | leader | 0 |
+ 17 | 38000 | ready | 1 | 3 | leader | 0 |
+ 18 | 37001 | ready | 1 | 3 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 |
+Query OK, 8 row(s) in set (0.001314s)
+```
+
+It can be seen that there are 5 vgroups on dnode 3 and 3 vgroups on dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the command below in `taos`:
+
+```
+taos> alter dnode 3 balance "vnode:18-dnode:1";
+
+DB error: Balance already enabled (0.00755
+```
+
+However, the operation fails with the error message shown above, which means automatic load balancing is enabled in the current cluster, so manual load balancing can't be performed.
+
+Shut down the cluster, set the `balance` parameter to 0 in `taos.cfg` on all dnodes, restart the cluster, and then execute `alter dnode` and `show vgroups` as below.
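+
+A sketch of the relevant line in `taos.cfg` on every dnode:
+
+```
+# disable automatic load balancing to allow manual vnode migration
+balance 0
+```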
+
+```
+taos> alter dnode 3 balance "vnode:18-dnode:1";
+Query OK, 0 row(s) in set (0.000575s)
+
+taos> show vgroups;
+ vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
+=================================================================================================================
+ 14 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 15 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 16 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 17 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 18 | 37001 | ready | 2 | 1 | follower | 3 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
+Query OK, 8 row(s) in set (0.001242s)
+```
+
+It can be seen from above output that vgId 18 has been moved from dnode 3 to dnode 1.
+
+:::note
+
+- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
+- Only a vnode in normal state, i.e. leader or follower, can be moved; a vnode can't be moved when it is in offline, unsynced or syncing state.
+- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
+
+:::
diff --git a/docs/en/10-cluster/03-ha-and-lb.md b/docs/en/10-cluster/03-ha-and-lb.md
new file mode 100644
index 0000000000000000000000000000000000000000..9780e8f6c68904e444d07c6a8c87b095c6b70ead
--- /dev/null
+++ b/docs/en/10-cluster/03-ha-and-lb.md
@@ -0,0 +1,81 @@
+---
+sidebar_label: HA & LB
+title: High Availability and Load Balancing
+---
+
+## High Availability of Vnode
+
+High availability of vnode and mnode can be achieved through replicas in TDengine.
+
+A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
+
+```sql
+CREATE DATABASE demo replica 3;
+```
+
+The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
+
+There may be data for multiple DBs in a dnode, so when a dnode is down, multiple DBs may be affected. While in theory the cluster will provide data access as long as over half the vnodes in each vgroup are online, the mapping between vnodes and dnodes can be complex, so it is difficult to guarantee that the cluster keeps working properly when several dnodes go offline.
+
+## High Availability of Mnode
+
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of the mnode, multiple mnodes can be configured using the system parameter `numOfMnodes`. The valid range for `numOfMnodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
+
+There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which dnodes are designated as mnodes is determined automatically by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine CLI `taos` to show the mnodes in the cluster.
+
+```sql
+SHOW MNODES;
+```
+
+The end point and role/status (leader, follower, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMnodes` is configured to 2, another mnode will be started when the second dnode is launched.
+
+For the high availability of the mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMnodes` is set to 2 or higher.
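+
+A sketch of the corresponding line in `taos.cfg` (set on the dnodes before the cluster is brought up):
+
+```
+# run two mnodes for high availability of meta data management
+numOfMnodes 2
+```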
+
+:::note
+If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
+
+:::
+
+## Load Balancing
+
+Load balancing will be triggered in 3 cases without manual intervention.
+
+- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
+- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
+- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
+
+:::tip
+Automatic load balancing is controlled by the parameter `balance`, 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
+
+:::
+
+## Dnode Offline
+
+When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
+
+- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
+
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
+
+:::note
+If all the vnodes in a vgroup (or mnodes in the mnode group) are in offline or unsynced status, a leader can only be elected after all the vnodes or mnodes in the group come online and can exchange status information. Following this, the vgroup (or mnode group) is able to provide service.
+
+:::
+
+## Arbitrator
+
+The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2,4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a leader node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2,4 etc.
+
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but is only responsible for network communication; it doesn't handle any actual data access. As long as more than half of the vnodes or mnodes in a group, including the arbitrator, are available, the vgroup or mnode group can provide data insertion or query services normally.
+
+Normally, it's prudent to configure the replica number for each DB or the system parameter `numOfMnodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower cost of storage space and high availability.
+
+The arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides its service.
+
+In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be set to the end point of the `tarbitrator` process. The arbitrator will be used automatically if the replica number is configured to an even number, and will be ignored if it is configured to an odd number.
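+
+A sketch of the relevant line in `taos.cfg` on every dnode, assuming `tarbitrator` runs on a host named `arb-host` and listens on its default port 6042:
+
+```
+# end point of the tarbitrator process
+arbitrator arb-host:6042
+```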
+
+The arbitrator can be shown by executing the command below in TDengine CLI `taos`; its role is displayed as "arb".
+
+```sql
+SHOW DNODES;
+```
diff --git a/docs-en/10-cluster/_category_.yml b/docs/en/10-cluster/_category_.yml
similarity index 100%
rename from docs-en/10-cluster/_category_.yml
rename to docs/en/10-cluster/_category_.yml
diff --git a/docs-en/10-cluster/index.md b/docs/en/10-cluster/index.md
similarity index 100%
rename from docs-en/10-cluster/index.md
rename to docs/en/10-cluster/index.md
diff --git a/docs-en/12-taos-sql/01-data-type.md b/docs/en/12-taos-sql/01-data-type.md
similarity index 100%
rename from docs-en/12-taos-sql/01-data-type.md
rename to docs/en/12-taos-sql/01-data-type.md
diff --git a/docs/en/12-taos-sql/02-database.md b/docs/en/12-taos-sql/02-database.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2961d62415cd7d23b031777082801426b221190
--- /dev/null
+++ b/docs/en/12-taos-sql/02-database.md
@@ -0,0 +1,126 @@
+---
+sidebar_label: Database
+title: Database
+description: "create and drop database, show or change database parameters"
+---
+
+## Create Database
+
+```
+CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
+```
+
+:::info
+
+1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
+2. UPDATE specifies whether the data can be updated and how the data can be updated.
+ 1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
+ 2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
+ 3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
+3. The maximum length of database name is 33 bytes.
+4. The maximum length of a SQL statement is 65,480 bytes.
+5. Below are the parameters that can be used when creating a database
+ - cache: [Description](/reference/config/#cache)
+ - blocks: [Description](/reference/config/#blocks)
+ - days: [Description](/reference/config/#days)
+ - keep: [Description](/reference/config/#keep)
+ - minRows: [Description](/reference/config/#minrows)
+ - maxRows: [Description](/reference/config/#maxrows)
+ - wal: [Description](/reference/config/#wallevel)
+ - fsync: [Description](/reference/config/#fsync)
+ - update: [Description](/reference/config/#update)
+ - cacheLast: [Description](/reference/config/#cachelast)
+ - replica: [Description](/reference/config/#replica)
+ - quorum: [Description](/reference/config/#quorum)
+ - comp: [Description](/reference/config/#comp)
+ - precision: [Description](/reference/config/#precision)
+6. Please note that all of the parameters mentioned in this section are configured in configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default. To override default parameters, they must be specified in the `create database` statement.
+
+:::
+
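+For example, the statement below (a sketch; tune the values to your retention and workload) creates a database that keeps data for 365 days, stores data in 10-day files, uses 6 memory blocks per vnode, and allows whole-row updates:
+
+```
+CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
+```
+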
+## Show Current Configuration
+
+```
+SHOW VARIABLES;
+```
+
+## Specify The Database In Use
+
+```
+USE db_name;
+```
+
+:::note
+This way is not applicable when using a REST connection. In a REST connection the database name must be specified before a table or STable name. For example, to query the STable "meters" in database "test", the query would be "SELECT count(*) FROM test.meters".
+
+:::
+
+## Drop Database
+
+```
+DROP DATABASE [IF EXISTS] db_name;
+```
+
+:::note
+All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.
+
+:::
+
+## Change Database Configuration
+
+Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
+
+```
+ALTER DATABASE db_name COMP 2;
+```
+
+COMP parameter specifies whether the data is compressed and how the data is compressed.
+
+```
+ALTER DATABASE db_name REPLICA 2;
+```
+
+REPLICA parameter specifies the number of replicas of the database.
+
+```
+ALTER DATABASE db_name KEEP 365;
+```
+
+KEEP parameter specifies the number of days for which the data will be kept.
+
+```
+ALTER DATABASE db_name QUORUM 2;
+```
+
+QUORUM parameter specifies the necessary number of confirmations to determine whether the data is written successfully.
+
+```
+ALTER DATABASE db_name BLOCKS 100;
+```
+
+BLOCKS parameter specifies the number of memory blocks used by each VNODE.
+
+```
+ALTER DATABASE db_name CACHELAST 0;
+```
+
+CACHELAST parameter specifies whether and how the latest data of a subtable is cached.
+
+:::tip
+The above parameters can be changed using `ALTER DATABASE` command without restarting. For more details of all configuration parameters please refer to [Configuration Parameters](/reference/config/).
+
+:::
+
+## Show All Databases
+
+```
+SHOW DATABASES;
+```
+
+## Show The Create Statement of A Database
+
+```
+SHOW CREATE DATABASE db_name;
+```
+
+This command is useful when migrating the data from one TDengine cluster to another. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.
diff --git a/docs-en/12-taos-sql/03-table.md b/docs/en/12-taos-sql/03-table.md
similarity index 100%
rename from docs-en/12-taos-sql/03-table.md
rename to docs/en/12-taos-sql/03-table.md
diff --git a/docs-en/12-taos-sql/04-stable.md b/docs/en/12-taos-sql/04-stable.md
similarity index 100%
rename from docs-en/12-taos-sql/04-stable.md
rename to docs/en/12-taos-sql/04-stable.md
diff --git a/docs-en/12-taos-sql/05-insert.md b/docs/en/12-taos-sql/05-insert.md
similarity index 100%
rename from docs-en/12-taos-sql/05-insert.md
rename to docs/en/12-taos-sql/05-insert.md
diff --git a/docs-en/12-taos-sql/06-select.md b/docs/en/12-taos-sql/06-select.md
similarity index 100%
rename from docs-en/12-taos-sql/06-select.md
rename to docs/en/12-taos-sql/06-select.md
diff --git a/docs-en/07-develop/05-delete-data.mdx b/docs/en/12-taos-sql/08-delete-data.mdx
similarity index 100%
rename from docs-en/07-develop/05-delete-data.mdx
rename to docs/en/12-taos-sql/08-delete-data.mdx
diff --git a/docs-en/12-taos-sql/07-function.md b/docs/en/12-taos-sql/10-function.md
similarity index 100%
rename from docs-en/12-taos-sql/07-function.md
rename to docs/en/12-taos-sql/10-function.md
diff --git a/docs/en/12-taos-sql/12-interval.md b/docs/en/12-taos-sql/12-interval.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d5502781081e12b008314a84101fbf4c37effd7
--- /dev/null
+++ b/docs/en/12-taos-sql/12-interval.md
@@ -0,0 +1,113 @@
+---
+sidebar_label: Interval
+title: Aggregate by Time Window
+---
+
+Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window.
+Window related clauses are used to divide the data set to be queried into subsets, and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window.
+
+## Time Window
+
+The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step by which the time window moves forward. The query is performed on one time window at a time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step by which the window moves forward is marked as `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
+
+
+
+`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
+
+```
+SELECT * FROM temp_tb_1 INTERVAL(1m);
+```
+
+The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`.
+
+```
+SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
+```
+
+When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. Prior to version 2.1.5.0, the minimum time range specified by `INTERVAL` is 10 milliseconds (10a). Since version 2.1.5.0, the minimum time range can be 1 microsecond (1u); however, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to the same value in the `taos.cfg` configuration file on both the client side and the server side.
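+
+As a legal counterpart to the illegal statements above (a sketch using the same hypothetical table), the query below computes, every 5 minutes, the average temperature over the preceding 10-minute window:
+
+```
+SELECT AVG(temperature) FROM temp_tb_1 INTERVAL(10m) SLIDING(5m);
+```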
+
+## Status Window
+
+When an integer, bool, or string is used to represent the status of a device at a given moment, continuous rows with the same status value belong to one status window; once the status changes, the window closes. As shown in the following figure, there are two status windows: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. Status windows are not applicable to STables for now.
+
+
+
+`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
+
+```
+SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
+```
+
+## Session Window
+
+```sql
+SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
+```
+
+The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the time-interval limit for the session window is specified as 12 seconds, the 6 rows in the figure constitute 2 session windows: [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the 12-second limit.
+
+
+
+If the time interval between two continuous rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session windows are not supported on STables for now.
+
+## More On Window Aggregate
+
+### Syntax
+
+The full syntax of aggregate by window is as follows:
+
+```sql
+SELECT function_list FROM tb_name
+ [WHERE where_condition]
+ [SESSION(ts_col, tol_val)]
+ [STATE_WINDOW(col)]
+ [INTERVAL(interval [, offset]) [SLIDING sliding]]
+ [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
+
+SELECT function_list FROM stb_name
+ [WHERE where_condition]
+ [INTERVAL(interval [, offset]) [SLIDING sliding]]
+ [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
+ [GROUP BY tags]
+```
+
+### Restrictions
+
+- Aggregate functions and select functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF, or arithmetic operations can't be used.
+- `LAST_ROW` can't be used together with window aggregate.
+- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
+- `WHERE` clause can be used to specify the starting and ending time and other filter conditions.
+- `FILL` clause is used to specify how to fill when there is data missing in any window, including:
+  1. NONE: No fill (the default fill mode)
+  2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`
+  3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
+  4. NULL: Fill with NULL, `FILL(NULL)`
+  5. LINEAR: Fill with linear interpolation based on the closest non-NULL values before and after, `FILL(LINEAR)`
+  6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
+
+:::info
+
+1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
+2. The result set is in ascending order of timestamp when you aggregate by time window.
+3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
+
+:::
+
+Aggregate by time window is also used in continuous query, please refer to [Continuous Query](../../develop/continuous-query).
+
+## Examples
+
+A table of intelligent meters can be created by the SQL statement below:
+
+```sql
+CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
+```
+
+The average current, maximum current and approximate median (50th percentile) of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL value.
+
+```
+SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
+ WHERE ts>=NOW-1d and ts<=now
+ INTERVAL(10m)
+ FILL(PREV);
+```
diff --git a/docs-en/12-taos-sql/09-limit.md b/docs/en/12-taos-sql/14-limit.md
similarity index 100%
rename from docs-en/12-taos-sql/09-limit.md
rename to docs/en/12-taos-sql/14-limit.md
diff --git a/docs/en/12-taos-sql/16-json.md b/docs/en/12-taos-sql/16-json.md
new file mode 100644
index 0000000000000000000000000000000000000000..61c473d120fefba1ec92902f3e68aa7037875a72
--- /dev/null
+++ b/docs/en/12-taos-sql/16-json.md
@@ -0,0 +1,93 @@
+---
+title: JSON Type
+---
+
+## Syntax
+
+1. Tag of type JSON
+
+ ```sql
+   create stable s1 (ts timestamp, v1 int) tags (info json);
+
+ create table s1_1 using s1 tags ('{"k1": "v1"}');
+ ```
+
+2. "->" Operator of JSON
+
+ ```sql
+ select * from s1 where info->'k1' = 'v1';
+
+ select info->'k1' from s1;
+ ```
+
+3. "contains" Operator of JSON
+
+ ```sql
+ select * from s1 where info contains 'k2';
+
+ select * from s1 where info contains 'k1';
+ ```
+
+## Applicable Operations
+
+1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is not null` can be used, but `in` can't be used.
+
+ ```sql
+ select * from s1 where info->'k1' match 'v*';
+
+ select * from s1 where info->'k1' like 'v%' and info contains 'k2';
+
+ select * from s1 where info is null;
+
+ select * from s1 where info->'k1' is not null;
+ ```
+
+2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and subqueries; for example `group by json->'key'`, as shown in the sketch below.
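+
+   A sketch of grouping by a JSON key, using the hypothetical STable `s1` defined above:
+
+   ```sql
+   select count(*) from s1 group by info->'k1';
+   ```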
+
+3. `Distinct` can be used with a tag of type JSON
+
+ ```sql
+ select distinct info->'k1' from s1;
+ ```
+
+4. Tag Operations
+
+   The value of a JSON tag can be altered. Please note that the full JSON value will be overridden when doing this:
+
+ ```sql
+ alter table s1_1 set tag info = '{"k1": "v2"}';
+ ```
+
+ The name of a JSON tag can be altered:
+
+ ```sql
+ alter stable s1 change tag info info2 ;
+ ```
+
+ A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
+
+
+## Other Restrictions
+
+- JSON type can only be used for a tag. There can be only one tag of JSON type, and it can't coexist with tags of any other type.
+
+- The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes.
+
+- JSON format:
+
+  - The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be a non-NULL string, bool or array.
+  - An object can be {}; if so, the entire JSON is empty. A key can be ""; if so, the key is ignored.
+ - value can be int, double, string, bool or NULL, and it can't be an array. Nesting is not allowed which means that the value of a key can't be JSON.
+ - If one key occurs twice in JSON, only the first one is valid.
+ - Escape characters are not allowed in JSON.
+
+- NULL is returned when querying a key that doesn't exist in JSON.
+
+- If a JSON tag is produced by an inner query, the outer query can't parse it or filter on it.
+
+For example, the SQL statements below are not supported.
+
+```sql
+select jtag->'key' from (select jtag from STable);
+select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
+```
diff --git a/docs-en/12-taos-sql/11-escape.md b/docs/en/12-taos-sql/18-escape.md
similarity index 100%
rename from docs-en/12-taos-sql/11-escape.md
rename to docs/en/12-taos-sql/18-escape.md
diff --git a/docs/en/12-taos-sql/20-keywords.md b/docs/en/12-taos-sql/20-keywords.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e79a07362476501f9cfc3f3d03ff59abd25abc9
--- /dev/null
+++ b/docs/en/12-taos-sql/20-keywords.md
@@ -0,0 +1,315 @@
+---
+title: Keywords
+---
+
+There are about 200 keywords reserved by TDengine. They can't be used as the name of a database, STable or table, regardless of case (upper, lower or mixed).
+
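+For instance, the statements below fail because the identifiers are reserved words (a sketch; letter case makes no difference):
+
+```mysql
+CREATE DATABASE select;                    -- error: SELECT is reserved
+CREATE TABLE table (ts TIMESTAMP, v INT);  -- error: TABLE is reserved
+```
+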
+## Keyword List
+
+### A
+
+- ABORT
+- ACCOUNT
+- ACCOUNTS
+- ADD
+- AFTER
+- ALL
+- ALTER
+- AND
+- AS
+- ASC
+- ATTACH
+
+### B
+
+- BEFORE
+- BEGIN
+- BETWEEN
+- BIGINT
+- BINARY
+- BITAND
+- BITNOT
+- BITOR
+- BLOCKS
+- BOOL
+- BY
+
+### C
+
+- CACHE
+- CACHELAST
+- CASCADE
+- CHANGE
+- CLUSTER
+- COLON
+- COLUMN
+- COMMA
+- COMP
+- COMPACT
+- CONCAT
+- CONFLICT
+- CONNECTION
+- CONNECTIONS
+- CONNS
+- COPY
+- CREATE
+- CTIME
+
+### D
+
+- DATABASE
+- DATABASES
+- DAYS
+- DBS
+- DEFERRED
+- DELIMITERS
+- DELETE
+- DESC
+- DESCRIBE
+- DETACH
+- DISTINCT
+- DIVIDE
+- DNODE
+- DNODES
+- DOT
+- DOUBLE
+- DROP
+
+### E
+
+- END
+- EQ
+- EXISTS
+- EXPLAIN
+
+### F
+
+- FAIL
+- FILE
+- FILL
+- FLOAT
+- FOR
+- FROM
+- FSYNC
+
+### G
+
+- GE
+- GLOB
+- GRANTS
+- GROUP
+- GT
+
+### H
+
+- HAVING
+
+### I
+
+- ID
+- IF
+- IGNORE
+- IMMEDIATE
+- IMPORT
+- IN
+- INITIAL
+- INSERT
+- INSTEAD
+- INT
+- INTEGER
+- INTERVAL
+- INTO
+- IS
+- ISNULL
+
+### J
+
+- JOIN
+
+### K
+
+- KEEP
+- KEY
+- KILL
+
+### L
+
+- LE
+- LIKE
+- LIMIT
+- LINEAR
+- LOCAL
+- LP
+- LSHIFT
+- LT
+
+### M
+
+- MATCH
+- MAXROWS
+- MINROWS
+- MINUS
+- MNODES
+- MODIFY
+- MODULES
+
+### N
+
+- NE
+- NONE
+- NOT
+- NOTNULL
+- NOW
+- NULL
+
+### O
+
+- OF
+- OFFSET
+- OR
+- ORDER
+
+### P
+
+- PARTITION
+- PASS
+- PLUS
+- PPS
+- PRECISION
+- PREV
+- PRIVILEGE
+
+### Q
+
+- QTIME
+- QUERIES
+- QUERY
+- QUORUM
+
+### R
+
+- RAISE
+- REM
+- REPLACE
+- REPLICA
+- RESET
+- RESTRICT
+- ROW
+- RP
+- RSHIFT
+
+### S
+
+- SCORES
+- SELECT
+- SEMI
+- SESSION
+- SET
+- SHOW
+- SLASH
+- SLIDING
+- SLIMIT
+- SMALLINT
+- SOFFSET
+- STABLE
+- STABLES
+- STAR
+- STATE
+- STATEMENT
+- STATE_WINDOW
+- STORAGE
+- STREAM
+- STREAMS
+- STRING
+- SYNCDB
+
+### T
+
+- TABLE
+- TABLES
+- TAG
+- TAGS
+- TBNAME
+- TIMES
+- TIMESTAMP
+- TINYINT
+- TOPIC
+- TOPICS
+- TRIGGER
+- TSERIES
+
+### U
+
+- UMINUS
+- UNION
+- UNSIGNED
+- UPDATE
+- UPLUS
+- USE
+- USER
+- USERS
+- USING
+
+### V
+
+- VALUES
+- VARIABLE
+- VARIABLES
+- VGROUPS
+- VIEW
+- VNODES
+
+### W
+
+- WAL
+- WHERE
+
+### _
+
+- _C0
+- _QSTART
+- _QSTOP
+- _QDURATION
+- _WSTART
+- _WSTOP
+- _WDURATION
+
+## Explanations
+### TBNAME
+`TBNAME` can be considered a special tag of a STable; it represents the name of the subtable.
+
+Get the table name and tag values of all subtables in a STable.
+```mysql
+SELECT TBNAME, location FROM meters;
+```
+
+Count the number of subtables in a STable.
+```mysql
+SELECT COUNT(TBNAME) FROM meters;
+```
+
+In the two query statements above, only filters on tags can be used in the WHERE clause.
+```mysql
+taos> SELECT TBNAME, location FROM meters;
+ tbname | location |
+==================================================================
+ d1004 | California.SanFrancisco |
+ d1003 | California.SanFrancisco |
+ d1002 | California.LosAngeles |
+ d1001 | California.LosAngeles |
+Query OK, 4 row(s) in set (0.000881s)
+
+taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
+ count(tbname) |
+========================
+ 2 |
+Query OK, 1 row(s) in set (0.001091s)
+```
+### _QSTART/_QSTOP/_QDURATION
+The start, stop and duration of the time range of a query (since version 2.6.0.0).
+
+### _WSTART/_WSTOP/_WDURATION
+The start, stop and duration of each time window in an aggregate query, such as interval, session window and state window (since version 2.6.0.0).
+
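+A sketch of the window pseudo columns, assuming the `meters` STable used elsewhere in this chapter:
+
+```mysql
+SELECT _WSTART, _WSTOP, _WDURATION, AVG(current) FROM meters INTERVAL(10m);
+```
+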
+### _c0
+The first column of a table or STable.
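+
+A sketch using the `meters` STable, whose first column is the timestamp `ts`:
+
+```mysql
+SELECT COUNT(_c0) FROM meters;
+```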
diff --git a/docs-en/12-taos-sql/_category_.yml b/docs/en/12-taos-sql/_category_.yml
similarity index 100%
rename from docs-en/12-taos-sql/_category_.yml
rename to docs/en/12-taos-sql/_category_.yml
diff --git a/docs/en/12-taos-sql/index.md b/docs/en/12-taos-sql/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d1cb04ad4005372bb9d3a41c1c98533071ac4b2
--- /dev/null
+++ b/docs/en/12-taos-sql/index.md
@@ -0,0 +1,31 @@
+---
+title: TDengine SQL
+description: "The syntax supported by TDengine SQL"
+---
+
+This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, query data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL, this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
+
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL. However, TDengine Enterprise Edition provides the DELETE function since version 2.6.
+
+Syntax Specifications used in this chapter:
+
+- The content inside <\> needs to be input by the user, excluding <\> itself.
+- \[ \] means optional input, excluding [] itself.
+- | means one of a few options, excluding | itself.
+- … means the item prior to it can be repeated multiple times.
+
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it is assumed that there is a data set collected from electric meters. Each meter collects 3 measurements: current, voltage and phase. The data model is shown below:
+
+```sql
+taos> DESCRIBE meters;
+ Field | Type | Length | Note |
+=================================================================================
+ ts | TIMESTAMP | 8 | |
+ current | FLOAT | 4 | |
+ voltage | INT | 4 | |
+ phase | FLOAT | 4 | |
+ location | BINARY | 64 | TAG |
+ groupid | INT | 4 | TAG |
+```
+
+The data set includes data collected by 4 meters; based on the data model of TDengine, the corresponding table names are d1001, d1002, d1003 and d1004.
diff --git a/docs-cn/12-taos-sql/timewindow-1.webp b/docs/en/12-taos-sql/timewindow-1.webp
similarity index 100%
rename from docs-cn/12-taos-sql/timewindow-1.webp
rename to docs/en/12-taos-sql/timewindow-1.webp
diff --git a/docs-cn/12-taos-sql/timewindow-2.webp b/docs/en/12-taos-sql/timewindow-2.webp
similarity index 100%
rename from docs-cn/12-taos-sql/timewindow-2.webp
rename to docs/en/12-taos-sql/timewindow-2.webp
diff --git a/docs-cn/12-taos-sql/timewindow-3.webp b/docs/en/12-taos-sql/timewindow-3.webp
similarity index 100%
rename from docs-cn/12-taos-sql/timewindow-3.webp
rename to docs/en/12-taos-sql/timewindow-3.webp
diff --git a/docs-en/13-operation/01-pkg-install.md b/docs/en/13-operation/01-pkg-install.md
similarity index 100%
rename from docs-en/13-operation/01-pkg-install.md
rename to docs/en/13-operation/01-pkg-install.md
diff --git a/docs-en/13-operation/02-planning.mdx b/docs/en/13-operation/02-planning.mdx
similarity index 100%
rename from docs-en/13-operation/02-planning.mdx
rename to docs/en/13-operation/02-planning.mdx
diff --git a/docs-en/13-operation/03-tolerance.md b/docs/en/13-operation/03-tolerance.md
similarity index 100%
rename from docs-en/13-operation/03-tolerance.md
rename to docs/en/13-operation/03-tolerance.md
diff --git a/docs-en/13-operation/06-admin.md b/docs/en/13-operation/06-admin.md
similarity index 100%
rename from docs-en/13-operation/06-admin.md
rename to docs/en/13-operation/06-admin.md
diff --git a/docs-en/13-operation/07-import.md b/docs/en/13-operation/07-import.md
similarity index 100%
rename from docs-en/13-operation/07-import.md
rename to docs/en/13-operation/07-import.md
diff --git a/docs-en/13-operation/08-export.md b/docs/en/13-operation/08-export.md
similarity index 100%
rename from docs-en/13-operation/08-export.md
rename to docs/en/13-operation/08-export.md
diff --git a/docs-en/13-operation/09-status.md b/docs/en/13-operation/09-status.md
similarity index 100%
rename from docs-en/13-operation/09-status.md
rename to docs/en/13-operation/09-status.md
diff --git a/docs-en/13-operation/10-monitor.md b/docs/en/13-operation/10-monitor.md
similarity index 100%
rename from docs-en/13-operation/10-monitor.md
rename to docs/en/13-operation/10-monitor.md
diff --git a/docs-en/13-operation/11-optimize.md b/docs/en/13-operation/11-optimize.md
similarity index 100%
rename from docs-en/13-operation/11-optimize.md
rename to docs/en/13-operation/11-optimize.md
diff --git a/docs-en/13-operation/17-diagnose.md b/docs/en/13-operation/17-diagnose.md
similarity index 100%
rename from docs-en/13-operation/17-diagnose.md
rename to docs/en/13-operation/17-diagnose.md
diff --git a/docs-en/13-operation/_category_.yml b/docs/en/13-operation/_category_.yml
similarity index 100%
rename from docs-en/13-operation/_category_.yml
rename to docs/en/13-operation/_category_.yml
diff --git a/docs-en/13-operation/index.md b/docs/en/13-operation/index.md
similarity index 100%
rename from docs-en/13-operation/index.md
rename to docs/en/13-operation/index.md
diff --git a/docs/en/14-reference/02-rest-api/02-rest-api.mdx b/docs/en/14-reference/02-rest-api/02-rest-api.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..fe18349a6dae3ad44772b4a30a2c3d4ad75b0f47
--- /dev/null
+++ b/docs/en/14-reference/02-rest-api/02-rest-api.mdx
@@ -0,0 +1,307 @@
+---
+title: REST API
+---
+
+To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles: the REST API. To minimize the learning cost, and unlike the REST APIs of other database engines, TDengine lets applications operate the database by placing SQL commands in the BODY of an HTTP POST request.
+
+:::note
+One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. (Since version 2.2.0.0, TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, REST service is provided by taosAdapter by default and it requires that the `db_name` must be specified in the URL.)
+:::
+
+## Installation
+
+The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
+
+## Verification
+
+If the TDengine server is already installed, it can be verified as follows:
+
+The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment.
+
+The following example lists all databases on the host h1.taosdata.com. To use it in your environment, replace `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
+
+```bash
+curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" h1.taosdata.com:6041/rest/sql
+```
+
+A response like the following indicates that the verification passed.
+
+```json
+{
+ "status": "succ",
+ "head": [
+ "name",
+ "created_time",
+ "ntables",
+ "vgroups",
+ "replica",
+ "quorum",
+ "days",
+ "keep1,keep2,keep(D)",
+ "cache(MB)",
+ "blocks",
+ "minrows",
+ "maxrows",
+ "wallevel",
+ "fsync",
+ "comp",
+ "precision",
+ "status"
+ ],
+ "data": [
+ [
+ "log",
+ "2020-09-02 17:23:00.039",
+ 4,
+ 1,
+ 1,
+ 1,
+ 10,
+ "30,30,30",
+ 1,
+ 3,
+ 100,
+ 4096,
+ 1,
+ 3000,
+ 2,
+ "us",
+ "ready"
+ ]
+ ],
+ "rows": 1
+}
+```
+
+## HTTP request URL format
+
+```
+http://<fqdn>:<port>/rest/sql/[db_name]
+```
+
+Parameter Description:
+
+- fqdn: FQDN or IP address of any host in the cluster
+- port: httpPort configuration item in the configuration file, default is 6041
+- db_name: Optional parameter that specifies the default database name for the executed SQL command. (supported since version 2.2.0.0)
+
+For example, `http://h1.taos.com:6041/rest/sql/test` points to `h1.taos.com:6041` and sets the default database name to `test`.
+
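+For instance, a sketch that relies on the URL's default database, so the table name needs no prefix:
+
+```bash
+curl -L -u root:taosdata -d "select * from t1" h1.taos.com:6041/rest/sql/test
+```
+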
+TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
+
+- The custom authentication information is as follows. More details about "token" later.
+
+ ```
+  Authorization: Taosd <TOKEN>
+ ```
+
+- Basic authentication information is shown below
+
+ ```
+  Authorization: Basic <TOKEN>
+ ```
+
+The HTTP request's BODY is a complete SQL command, and the data table in the SQL statement should be provided with a database prefix, e.g., `db_name.tb_name`. If the table name does not have a database prefix and the database name is not specified in the URL, the system will respond with an error, because the HTTP module is a simple forwarder and has no awareness of the current DB.
+
+Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
+
+```bash
+curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <fqdn>:<port>/rest/sql/[db_name]
+```
+
+Or
+
+```bash
+curl -L -u username:password -d "<SQL>" <fqdn>:<port>/rest/sql/[db_name]
+```
+
+where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
+
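+A quick way to produce the token with standard tools (a sketch; any Base64 encoder works):
+
+```bash
+echo -n "root:taosdata" | base64
+# cm9vdDp0YW9zZGF0YQ==
+```
+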
+## HTTP Return Format
+
+The return result is in JSON format, as follows:
+
+```json
+{
+ "status": "succ",
+ "head": ["ts", "current", ...],
+ "column_meta": [["ts",9,8],["current",6,4], ...],
+ "data": [
+ ["2018-10-03 14:38:05.000", 10.3, ...],
+ ["2018-10-03 14:38:15.000", 12.6, ...]
+ ],
+ "rows": 2
+}
+```
+
+Description:
+
+- status: tells you whether the operation result is success or failure.
+- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
+- column_meta: this item is added to the return value to indicate the data type of each column in the data with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6, which is the float type, and the type length is 4, which is the float type with 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
+- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
+- rows: Indicates how many rows of data there are.
+
+The column types in column_meta are described as follows:
+
+- 1:BOOL
+- 2:TINYINT
+- 3:SMALLINT
+- 4:INT
+- 5:BIGINT
+- 6:FLOAT
+- 7:DOUBLE
+- 8:BINARY
+- 9:TIMESTAMP
+- 10:NCHAR
+
+## Custom Authorization Code
+
+HTTP requests require an authorization code `<TOKEN>` for identification purposes. The administrator usually provides the authorization code, and it can also be obtained simply by sending an `HTTP GET` request as follows:
+
+```bash
+curl http://<fqdn>:<port>/rest/login/<username>/<password>
+```
+
+Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
+
+- status: flag bit of the request result
+
+- code: return value code
+
+- desc: authorization code
+
+Example of getting an authorization code:
+
+```bash
+curl http://192.168.0.1:6041/rest/login/root/taosdata
+```
+
+Response body:
+
+```json
+{
+ "status": "succ",
+ "code": 0,
+ "desc": "/KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04"
+}
+```
+
+## Examples
+
+- Query all records from table d1001 of database demo:
+
+ ```bash
+ curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.d1001" 192.168.0.1:6041/rest/sql
+ ```
+
+ Response body:
+
+ ```json
+ {
+ "status": "succ",
+ "head": ["ts", "current", "voltage", "phase"],
+ "column_meta": [
+ ["ts", 9, 8],
+ ["current", 6, 4],
+ ["voltage", 4, 4],
+ ["phase", 6, 4]
+ ],
+ "data": [
+ ["2018-10-03 14:38:05.000", 10.3, 219, 0.31],
+ ["2018-10-03 14:38:15.000", 12.6, 218, 0.33]
+ ],
+ "rows": 2
+ }
+ ```
+
+- Create database demo:
+
+ ```bash
+ curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "create database demo" 192.168.0.1:6041/rest/sql
+ ```
+
+ Response body:
+
+ ```json
+ {
+ "status": "succ",
+ "head": ["affected_rows"],
+ "column_meta": [["affected_rows", 4, 4]],
+ "data": [[1]],
+ "rows": 1
+ }
+ ```
+
+## Other Uses
+
+### Unix timestamps for result sets
+
+When the HTTP request URL uses `/rest/sqlt`, the returned result set's timestamp value will be in Unix timestamp format, for example:
+
+```bash
+curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.d1001" 192.168.0.1:6041/rest/sqlt
+```
+
+Response body:
+
+```json
+{
+ "status": "succ",
+ "head": ["ts", "current", "voltage", "phase"],
+ "column_meta": [
+ ["ts", 9, 8],
+ ["current", 6, 4],
+ ["voltage", 4, 4],
+ ["phase", 6, 4]
+ ],
+ "data": [
+ [1538548685000, 10.3, 219, 0.31],
+ [1538548695000, 12.6, 218, 0.33]
+ ],
+ "rows": 2
+}
+```
+
+### UTC format for the result set
+
+When the HTTP request URL uses `/rest/sqlutc`, the timestamps in the returned result set will be expressed in UTC format, for example:
+
+```bash
+curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.t1" 192.168.0.1:6041/rest/sqlutc
+```
+
+Response body:
+
+```json
+{
+ "status": "succ",
+ "head": ["ts", "current", "voltage", "phase"],
+ "column_meta": [
+ ["ts", 9, 8],
+ ["current", 6, 4],
+ ["voltage", 4, 4],
+ ["phase", 6, 4]
+ ],
+ "data": [
+ ["2018-10-03T14:38:05.000+0800", 10.3, 219, 0.31],
+ ["2018-10-03T14:38:15.000+0800", 12.6, 218, 0.33]
+ ],
+ "rows": 2
+}
+```
+
+## Important configuration items
+
+Only some configuration parameters related to the RESTful interface are listed below. Please see the description in the configuration file for other system parameters.
+
+- The port number of the external RESTful service is bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the setting of the serverPort parameter).
+- httpMaxThreads: the number of threads to start, default is 2 (the default value is rounded down to half of the CPU cores with version 2.0.17.0 and later versions).
+- restfulRowLimit: the maximum number of result sets (in JSON format) to return. The default value is 10240.
+- httpEnableCompress: whether compression is supported; by default it is not supported. Currently, TDengine only supports the gzip compression format.
+- httpDebugFlag: logging switch, default is 131. 131: error and alarm messages only, 135: debug messages, 143: very detailed debug messages.
+- httpDbNameMandatory: users must specify the default database name in the RESTful URL. The default is 0, which turns off this check. If set to 1, users must put a default database name in every RESTful URL. Otherwise, it will return an execution error and reject this SQL statement, regardless of whether the SQL statement executed at this time requires a specified database.
+
+:::note
+If you are using the REST API provided by taosd, you should write the above configuration in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, you need to refer to taosAdapter [corresponding configuration method](/reference/taosadapter/).
+:::
diff --git a/docs-cn/14-reference/02-rest-api/_category_.yml b/docs/en/14-reference/02-rest-api/_category_.yml
similarity index 100%
rename from docs-cn/14-reference/02-rest-api/_category_.yml
rename to docs/en/14-reference/02-rest-api/_category_.yml
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs/en/14-reference/03-connector/03-connector.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/03-connector.mdx
rename to docs/en/14-reference/03-connector/03-connector.mdx
diff --git a/docs-en/14-reference/03-connector/_category_.yml b/docs/en/14-reference/03-connector/_category_.yml
similarity index 100%
rename from docs-en/14-reference/03-connector/_category_.yml
rename to docs/en/14-reference/03-connector/_category_.yml
diff --git a/docs-en/14-reference/03-connector/_linux_install.mdx b/docs/en/14-reference/03-connector/_linux_install.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/_linux_install.mdx
rename to docs/en/14-reference/03-connector/_linux_install.mdx
diff --git a/docs-en/14-reference/03-connector/_preparation.mdx b/docs/en/14-reference/03-connector/_preparation.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/_preparation.mdx
rename to docs/en/14-reference/03-connector/_preparation.mdx
diff --git a/docs-en/14-reference/03-connector/_verify_linux.mdx b/docs/en/14-reference/03-connector/_verify_linux.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/_verify_linux.mdx
rename to docs/en/14-reference/03-connector/_verify_linux.mdx
diff --git a/docs/en/14-reference/03-connector/_verify_windows.mdx b/docs/en/14-reference/03-connector/_verify_windows.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..daeb151bb1252436c0ef16eab1d50a64d664e437
--- /dev/null
+++ b/docs/en/14-reference/03-connector/_verify_windows.mdx
@@ -0,0 +1,14 @@
+Go to the `C:\TDengine` directory from `cmd` and execute the TDengine CLI program `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example:
+
+```text
+ C:\TDengine>taos
+ Welcome to the TDengine shell from Linux, Client Version:2.0.5.0
+ Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
+ taos> show databases;
+ name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | precision | status |
+ ===================================================================================================================================================================================================================================================================
+ test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready |
+ log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready |
+ Query OK, 2 row(s) in set (0.045000s)
+ taos>
+```
diff --git a/docs-en/14-reference/03-connector/_windows_install.mdx b/docs/en/14-reference/03-connector/_windows_install.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/_windows_install.mdx
rename to docs/en/14-reference/03-connector/_windows_install.mdx
diff --git a/docs-cn/14-reference/03-connector/connector.webp b/docs/en/14-reference/03-connector/connector.webp
similarity index 100%
rename from docs-cn/14-reference/03-connector/connector.webp
rename to docs/en/14-reference/03-connector/connector.webp
diff --git a/docs-en/14-reference/03-connector/cpp.mdx b/docs/en/14-reference/03-connector/cpp.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/cpp.mdx
rename to docs/en/14-reference/03-connector/cpp.mdx
diff --git a/docs-en/14-reference/03-connector/csharp.mdx b/docs/en/14-reference/03-connector/csharp.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/csharp.mdx
rename to docs/en/14-reference/03-connector/csharp.mdx
diff --git a/docs/en/14-reference/03-connector/go.mdx b/docs/en/14-reference/03-connector/go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..69e1b56f38c3fe6feb1766abd1eea532e130ed49
--- /dev/null
+++ b/docs/en/14-reference/03-connector/go.mdx
@@ -0,0 +1,412 @@
+---
+toc_max_heading_level: 4
+sidebar_position: 4
+sidebar_label: Go
+title: TDengine Go Connector
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+import Preparation from "./_preparation.mdx";
+import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx";
+import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx";
+import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx";
+import GoOpenTSDBJson from "../../07-develop/03-insert-data/_go_opts_json.mdx";
+import GoQuery from "../../07-develop/04-query-data/_go.mdx";
+
+`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.
+
+`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from those implemented by the native connection.
+
+This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
+
+The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
+
+## Supported Platforms
+
+Native connections are supported on the same platforms as the TDengine client driver.
+REST connections are supported on all platforms that can run Go.
+
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support)
+
+## Supported features
+
+### Native connections
+
+A "native connection" is established by the connector directly to the TDengine instance via the TDengine client driver (taosc). The supported functional features are:
+
+- Normal queries
+- Continuous queries
+- Subscriptions
+- schemaless interface
+- parameter binding interface
+
+### REST connection
+
+A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
+
+- General queries
+- Continuous queries
+
+## Installation steps
+
+### Pre-installation
+
+- Install Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
+- If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps
+
+Configure the environment variables and check the command.
+
+- `go env`
+- `gcc -v`
+
+### Use go get to install
+
+```
+go get -u github.com/taosdata/driver-go/v2@latest
+```
+
+### Manage with go mod
+
+1. Initialize the project with the `go mod` command.
+
+```text
+go mod init taos-demo
+```
+
+2. Introduce taosSql
+
+```go
+import (
+ "database/sql"
+ _ "github.com/taosdata/driver-go/v2/taosSql"
+)
+```
+
+3. Update the dependency packages with `go mod tidy`.
+
+```text
+go mod tidy
+```
+
+4. Run the program with `go run taos-demo` or compile the binary with the `go build` command.
+
+```text
+go run taos-demo
+go build
+```
+
+## Create a connection
+
+### Data source name (DSN)
+
+Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but no type prefix (square brackets indicate optional parts):
+
+```text
+[username[:password]@][protocol[(address)]]/[dbname][?param1=value1&...&paramN=valueN]
+```
+
+The full form of a DSN:
+
+```text
+username:password@protocol(address)/dbname?param=value
+```
+
+### Connecting via connector
+
+
+
+
+_taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
+
+Use `taosSql` as `driverName` and a correct [DSN](#DSN) as `dataSourceName`. The DSN supports the following parameter:
+
+- configPath specifies the `taos.cfg` directory
+
+Example:
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+
+ _ "github.com/taosdata/driver-go/v2/taosSql"
+)
+
+func main() {
+ var taosUri = "root:taosdata@tcp(localhost:6030)/"
+ taos, err := sql.Open("taosSql", taosUri)
+	if err != nil {
+		fmt.Println("failed to connect TDengine, err:", err)
+		return
+	}
+	defer taos.Close() // release the connection when done
+}
+```
+
+
+
+
+_taosRestful_ implements Go's `database/sql/driver` interface via `http client`. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
+
+Use `taosRestful` as `driverName` and use a correct [DSN](#DSN) as `dataSourceName` with the following parameters supported by the DSN.
+
+- `disableCompression` whether to reject compressed data; the default is true (compressed data is not accepted). Set it to false when transferring data using gzip compression.
+- `readBufferSize` The default size of the buffer for reading data is 4K (4096), which can be adjusted upwards when the query result has a lot of data.
+
+Example:
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+
+ _ "github.com/taosdata/driver-go/v2/taosRestful"
+)
+
+func main() {
+ var taosUri = "root:taosdata@http(localhost:6041)/"
+ taos, err := sql.Open("taosRestful", taosUri)
+	if err != nil {
+		fmt.Println("failed to connect TDengine, err:", err)
+		return
+	}
+	defer taos.Close() // release the connection when done
+}
+```
+
+
+
+
+## Usage examples
+
+### Write data
+
+#### SQL Write
+
+
+
+#### InfluxDB line protocol write
+
+
+
+#### OpenTSDB Telnet line protocol write
+
+
+
+#### OpenTSDB JSON line protocol write
+
+
+
+### Query data
+
+
+
+### More sample programs
+
+- [sample program](https://github.com/taosdata/TDengine/tree/develop/examples/go)
+- [Video tutorial](https://www.taosdata.com/blog/2020/11/11/1951.html).
+
+## Usage limitations
+
+Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. change `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)`, otherwise it will report the error `[0x217] Database not specified or available`.
+
+You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method has been supported by taosAdapter since TDengine 2.4.0.5. If the specified db does not exist, executing a `create database` statement will not report an error, while executing other queries or writes against that db will report an error.
+
+The complete example is as follows.
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ "time"
+
+ _ "github.com/taosdata/driver-go/v2/taosRestful"
+)
+
+func main() {
+ var taosDSN = "root:taosdata@http(localhost:6041)/test"
+ taos, err := sql.Open("taosRestful", taosDSN)
+ if err != nil {
+ fmt.Println("failed to connect TDengine, err:", err)
+ return
+ }
+ defer taos.Close()
+ taos.Exec("create database if not exists test")
+ taos.Exec("create table if not exists tb1 (ts timestamp, a int)")
+ _, err = taos.Exec("insert into tb1 values(now, 0)(now+1s,1)(now+2s,2)(now+3s,3)")
+ if err != nil {
+ fmt.Println("failed to insert, err:", err)
+ return
+ }
+ rows, err := taos.Query("select * from tb1")
+ if err != nil {
+ fmt.Println("failed to select from table, err:", err)
+ return
+ }
+
+ defer rows.Close()
+ for rows.Next() {
+ var r struct {
+ ts time.Time
+ a int
+ }
+ err := rows.Scan(&r.ts, &r.a)
+ if err != nil {
+ fmt.Println("scan error:\n", err)
+ return
+ }
+ fmt.Println(r.ts, r.a)
+ }
+}
+```
+
+## Frequently Asked Questions
+
+1. bind interface in database/sql crashes
+
+   The REST connection does not support the parameter binding interface. It is recommended to use `db.Exec` and `db.Query`.
+
+2. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
+
+   The execution of SQL commands via the REST interface is not contextual, so the `use db` statement will not work; see the usage restrictions section above.
+
+3. use `taosSql` without error but use `taosRestful` with error `[0x217] Database not specified or available`
+
+ Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
+
+4. Upgrade `github.com/taosdata/driver-go/v2/taosRestful`
+
+ Change the `github.com/taosdata/driver-go/v2` line in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
+
+5. `readBufferSize` parameter has no significant effect after being increased
+
+ Increasing `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is smaller, modifying this parameter will not improve performance significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value based on the actual situation to achieve the best query performance.
+
+6. Query efficiency is reduced when the `disableCompression` parameter is set to `false`
+
+   When the `disableCompression` parameter is set to `false`, the query result is compressed with `gzip` before transmission, so you have to decompress the data with `gzip` after receiving it.
+
+7. `go get` command can't get the package, or timeout to get the package
+
+ Set Go proxy `go env -w GOPROXY=https://goproxy.cn,direct`.
+
+## Common APIs
+
+### database/sql API
+
+- `sql.Open(driverName string, dataSourceName string) (*DB, error)`
+
+  Use this API to open a DB; it returns an object of type `*sql.DB`.
+
+  :::info
+  This call succeeds without checking permissions; the user/password/host/port are only validated when you actually execute a Query or Exec.
+
+ :::
+
+- `func (db *DB) Exec(query string, args ...interface{}) (Result, error)`
+
+  Built-in method of the DB object returned by `sql.Open`; executes non-query SQL.
+
+- `func (db *DB) Query(query string, args ...interface{}) (*Rows, error)`
+
+  Built-in method of the DB object returned by `sql.Open`; executes query statements.
+
+### Advanced functions (af) API
+
+The `af` package encapsulates TDengine advanced functions such as connection management, subscriptions, schemaless, parameter binding, etc.
+
+#### Connection management
+
+- `af.Open(host, user, pass, db string, port int) (*Connector, error)`
+
+ This API creates a connection to taosd via cgo.
+
+- `func (conn *Connector) Close() error`
+
+ Closes the connection.
+
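+A minimal sketch of connection management, using only the two calls above (host, credentials and db name are placeholders):
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"github.com/taosdata/driver-go/v2/af"
+)
+
+func main() {
+	// open a native (taosc) connection to a local TDengine instance
+	conn, err := af.Open("localhost", "root", "taosdata", "test", 6030)
+	if err != nil {
+		fmt.Println("failed to connect:", err)
+		return
+	}
+	defer conn.Close() // release the connection when done
+	fmt.Println("connected")
+}
+```
+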
+#### Subscribe
+
+- `func (conn *Connector) Subscribe(restart bool, topic string, sql string, interval time.Duration) (Subscriber, error)`
+
+ Subscribe to data.
+
+- `func (s *taosSubscriber) Consume() (driver.Rows, error)`
+
+ Consume the subscription data, returning the `Rows` structure of the `database/sql/driver` package.
+
+- `func (s *taosSubscriber) Unsubscribe(keepProgress bool)`
+
+ Unsubscribe from data.
+
+#### schemaless
+
+- `func (conn *Connector) InfluxDBInsertLines(lines []string, precision string) error`
+
+  Writes data using the InfluxDB line protocol.
+
+- `func (conn *Connector) OpenTSDBInsertTelnetLines(lines []string) error`
+
+  Writes data using the OpenTSDB telnet protocol.
+
+- `func (conn *Connector) OpenTSDBInsertJsonPayload(payload string) error`
+
+ Writes OpenTSDB JSON protocol data.
+
+#### parameter binding
+
+- `func (conn *Connector) StmtExecute(sql string, params *param.Param) (res driver.Result, err error)`
+
+  Inserts a single row using parameter binding.
+
+- `func (conn *Connector) StmtQuery(sql string, params *param.Param) (rows driver.Rows, err error)`
+
+ Parameter bound query that returns the `Rows` structure of the `database/sql/driver` package.
+
+- `func (conn *Connector) InsertStmt() *insertstmt.InsertStmt`
+
+  Initializes a parameter-binding insert statement.
+
+- `func (stmt *InsertStmt) Prepare(sql string) error`
+
+  Prepares the SQL statement for parameter binding.
+
+- `func (stmt *InsertStmt) SetTableName(name string) error`
+
+  Sets the table name for the bound statement.
+
+- `func (stmt *InsertStmt) SetSubTableName(name string) error`
+
+  Sets the subtable name for the bound statement.
+
+- `func (stmt *InsertStmt) BindParam(params []*param.Param, bindType *param.ColumnType) error`
+
+  Binds multiple rows of data.
+
+- `func (stmt *InsertStmt) AddBatch() error`
+
+ Add to a parameter-bound batch.
+
+- `func (stmt *InsertStmt) Execute() error`
+
+ Execute a parameter binding.
+
+- `func (stmt *InsertStmt) GetAffectedRows() int`
+
+ Gets the number of affected rows inserted by the parameter binding.
+
+- `func (stmt *InsertStmt) Close() error`
+
+ Closes the parameter binding.
+
+## API Reference
+
+For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
diff --git a/docs/en/14-reference/03-connector/java.mdx b/docs/en/14-reference/03-connector/java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..22f99bb9ae8fa669155ba8ac7cec1ad2c609cb32
--- /dev/null
+++ b/docs/en/14-reference/03-connector/java.mdx
@@ -0,0 +1,854 @@
+---
+toc_max_heading_level: 4
+sidebar_position: 2
+sidebar_label: Java
+title: TDengine Java Connector
+description: TDengine Java based on JDBC API and provide both native and REST connections
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interfaces of the JDBC driver standard and provides two forms of connection. One connects to a TDengine instance natively through the TDengine client driver (taosc), supporting data writing, querying, subscription, schemaless writing, and the bind interface. The other connects to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The REST connection and the native connection differ slightly in the features they implement.
+
+
+
+The preceding diagram shows two ways for a Java app to access TDengine via connector:
+
+- JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to call the client driver (`libtaos.so` or `taos.dll`) APIs directly, sending writing and query requests to taosd instances located on physical node 2 (pnode2).
+- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server (taosAdapter) on physical node 2. taosAdapter forwards the request to TDengine server and returns the result.
+
+The REST connection, which does not rely on TDengine client drivers, is more convenient and flexible, in addition to being cross-platform. However, the performance is about 30% lower than that of the native connection.
+
+:::info
+TDengine's JDBC driver implementation is as consistent as possible with relational database drivers. Still, there are differences between TDengine and relational databases in use scenarios and technical characteristics, so 'taos-jdbcdriver' also differs somewhat from traditional JDBC drivers. It is important to keep the following points in mind:
+
+- TDengine does not currently support delete operations for individual data records.
+- Transactional operations are not currently supported.
+
+:::
+
+## Supported platforms
+
+Native connections are supported on the same platforms as the TDengine client driver.
+REST connections are supported on all platforms that can run Java.
+
+## Version support
+
+Please refer to [Version Support List](/reference/connector#version-support).
+
+## TDengine DataType vs. Java DataType
+
+TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows:
+
+| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version > = 2.0.24) |
+| ----------------- | ---------------------------------- | ------------------------------------ |
+| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
+| INT | java.lang.Integer | java.lang.Integer |
+| BIGINT | java.lang.Long | java.lang.Long |
+| FLOAT | java.lang.Float | java.lang.Float |
+| DOUBLE | java.lang.Double | java.lang.Double |
+| SMALLINT | java.lang.Short | java.lang.Short |
+| TINYINT | java.lang.Byte | java.lang.Byte |
+| BOOL | java.lang.Boolean | java.lang.Boolean |
+| BINARY | java.lang.String | byte array |
+| NCHAR | java.lang.String | java.lang.String |
+| JSON | - | java.lang.String |
+
+**Note**: Only tags support the JSON type
+
+## Installation steps
+
+### Pre-installation preparation
+
+Before using Java Connector to connect to the database, the following conditions are required.
+
+- Java 1.8 or above runtime environment and Maven 3.6 or above installed
+- TDengine client driver installed (required for native connections, not required for REST connections), please refer to [Installing Client Driver](/reference/connector#Install-Client-Driver)
+
+### Install the connectors
+
+
+
+
+- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
+- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
+- [maven.aliyun](https://maven.aliyun.com/mvn/search)
+
+Add following dependency in the `pom.xml` file of your Maven project:
+
+```xml
+<dependency>
+  <groupId>com.taosdata.jdbc</groupId>
+  <artifactId>taos-jdbcdriver</artifactId>
+  <version>2.0.**</version>
+</dependency>
+```
+
+
+
+
+You can build Java connector from source code after cloning the TDengine project:
+
+```
+git clone https://github.com/taosdata/taos-connector-jdbc.git --branch 2.0
+cd taos-connector-jdbc
+mvn clean install -Dmaven.test.skip=true
+```
+
+After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
+
+
+
+
+## Establish a connection
+
+TDengine's JDBC URL specification format is:
+`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
+
+For establishing connections, native connections differ slightly from REST connections.
+
+
+
+
+```java
+Class.forName("com.taosdata.jdbc.TSDBDriver");
+String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
+Connection conn = DriverManager.getConnection(jdbcUrl);
+```
+
+In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname `taosdemo.com`, port `6030` (the default port for TDengine), and a database named `test`. In this URL, the user name `user` is specified as `root`, and the `password` is `taosdata`.
+
+Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (`libtaos.so` on Linux; `taos.dll` on Windows).
+
+The configuration parameters in the URL are as follows:
+
+- user: User name for logging in to TDengine. The default value is 'root'.
+- password: User login password, the default value is 'taosdata'.
+- cfgdir: client configuration file directory path, default '/etc/taos' on Linux OS, 'C:/TDengine/cfg' on Windows OS.
+- charset: The character set used by the client, the default value is the system character set.
+- locale: Client locale, by default, use the system's current locale.
+- timezone: The time zone used by the client, the default value is the system's current time zone.
+- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is false. Enabling batch pulling can improve query performance when the query data volume is large.
+- batchErrorIgnore: true: when executing executeBatch of Statement, if one SQL fails in the middle, the following SQL statements continue to be executed; false: no more statements after the failed SQL are executed. The default value is false.
+
+For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
+
+**Connect using the TDengine client driver configuration file**
+
+When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
+
+1. Do not specify hostname and port in Java applications.
+
+ ```java
+ public Connection getConn() throws Exception{
+ Class.forName("com.taosdata.jdbc.TSDBDriver");
+ String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
+ Properties connProps = new Properties();
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
+ Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
+ return conn;
+ }
+ ```
+
+2. Specify the firstEp and the secondEp in the configuration file taos.cfg:
+
+ ```shell
+ # first fully qualified domain name (FQDN) for TDengine system
+ firstEp cluster_node1:6030
+
+ # second fully qualified domain name (FQDN) for TDengine system, for cluster only
+ secondEp cluster_node2:6030
+
+ # default system charset
+ # charset UTF-8
+
+ # system locale
+ # locale en_US.UTF-8
+ ```
+
+In the above example, JDBC uses the client's configuration file to establish a connection to a hostname `cluster_node1`, port 6030, and a database named `test`. When the firstEp node in the cluster fails, JDBC attempts to connect to the cluster using secondEp.
+
+In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
+
+:::note
+The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default path is `/etc/taos/taos.cfg` on Linux, and the default path is `C://TDengine/cfg/taos.cfg` on Windows.
+
+:::
+
+
+
+
+```java
+Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
+String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
+Connection conn = DriverManager.getConnection(jdbcUrl);
+```
+
+In the above example, a RestfulDriver with a JDBC REST connection is used to establish a connection to a database named `test` with hostname `taosdemo.com` on port `6041`. The URL specifies the user name as `root` and the password as `taosdata`.
+
+There is no dependency on the client driver when using a JDBC REST connection. Compared to a JDBC native connection, only the following are required:
+
+1. driverClass specified as "com.taosdata.jdbc.rs.RestfulDriver".
+2. jdbcUrl starting with "jdbc:TAOS-RS://".
+3. use 6041 as the connection port.
+
+The configuration parameters in the URL are as follows.
+
+- user: Login TDengine user name, default value 'root'.
+- password: user login password, default value 'taosdata'.
+- batchfetch: true: pull the result set in batch when executing the query; false: pull the result set row by row. The default value is false. batchfetch uses HTTP for data transfer. The JDBC REST connection supports bulk data pulling function in taos-jdbcdriver-2.0.38 and TDengine 2.4.0.12 and later versions. taos-jdbcdriver and TDengine transfer data via WebSocket connection. Compared with HTTP, WebSocket enables JDBC REST connection to support large data volume querying and improve query performance.
+- charset: specify the charset to parse the string, this parameter is valid only when set batchfetch to true.
+- batchErrorIgnore: true: when executing executeBatch of Statement, if one SQL execution fails in the middle, continue to execute the following SQL. false: no longer execute any statement after the failed SQL. The default value is: false.
+- httpConnectTimeout: REST connection timeout in milliseconds, the default value is 5000 ms.
+- httpSocketTimeout: socket timeout in milliseconds, the default value is 5000 ms. It only takes effect when batchfetch is false.
+- messageWaitTimeout: message transmission timeout in milliseconds, the default value is 3000 ms. It only takes effect when batchfetch is true.
+- useSSL: connect securely using SSL. true: use an SSL connection; false: do not use an SSL connection.
+
+**Note**: Some configuration items (e.g., locale, timezone) do not work in the REST connection.
+
+:::note
+
+- Unlike the native connection method, the REST interface is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example.
+
+```sql
+INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
+```
+
+- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the following SQL can be executed: `insert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);`
+
+:::
+
+
+
+
+### Specify the URL and Properties to get the connection
+
+In addition to getting the connection from the specified URL, you can use Properties to specify parameters when the connection is established.
+
+**Note**:
+
+- The client parameter set in the application is process-level. If you want to update the parameters of the client, you need to restart the application. This is because the client parameter is a global parameter that takes effect only the first time the application is set.
+- The following sample code is based on taos-jdbcdriver-2.0.36.
+
+```java
+public Connection getConn() throws Exception{
+ Class.forName("com.taosdata.jdbc.TSDBDriver");
+ String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
+ Properties connProps = new Properties();
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
+ connProps.setProperty("debugFlag", "135");
+ connProps.setProperty("maxSQLLength", "1048576");
+ Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
+ return conn;
+}
+
+public Connection getRestConn() throws Exception{
+ Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
+ String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
+ Properties connProps = new Properties();
+ connProps.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
+ Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
+ return conn;
+}
+```
+
+In the above example, a connection is established to `taosdemo.com`, port is 6030/6041, and database named `test`. The connection specifies the user name as `root` and the password as `taosdata` in the URL and specifies the character set, language environment, time zone, and whether to enable bulk fetching in the connProps.
+
+The configuration parameters in properties are as follows.
+
+- TSDBDriver.PROPERTY_KEY_USER: login TDengine user name, default value 'root'.
+- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
+- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
+- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if one SQL fails in the middle, the following SQL statements continue to be executed; false: no more statements after the failed SQL are executed. The default value is false.
+- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
+- TSDBDriver.PROPERTY_KEY_CHARSET: In the character set used by the client, the default value is the system character set.
+- TSDBDriver.PROPERTY_KEY_LOCALE: this only takes effect when using JDBC native connection. Client language environment, the default value is system current locale.
+- TSDBDriver.PROPERTY_KEY_TIME_ZONE: only takes effect when using JDBC native connection. In the time zone used by the client, the default value is the system's current time zone.
+- TSDBDriver.HTTP_CONNECT_TIMEOUT: REST connection timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection.
+- TSDBDriver.HTTP_SOCKET_TIMEOUT: socket timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection and batchfetch is false.
+- TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: message transmission timeout in milliseconds, the default value is 3000 ms. It only takes effect when using JDBC REST connection and batchfetch is true.
+- TSDBDriver.PROPERTY_KEY_USE_SSL: connect securely using SSL. true: use an SSL connection; false: do not use an SSL connection. It only takes effect when using a JDBC REST connection.
+
+For JDBC native connections, you can specify other parameters, such as log level and SQL length, by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client-Only).
+
+### Priority of configuration parameters
+
+If the configuration parameters are duplicated in the URL, Properties, or client configuration file, the `priority` of the parameters, from highest to lowest, are as follows:
+
+1. JDBC URL parameters, as described above, can be specified in the parameters of the JDBC URL.
+2. Properties connProps
+3. the configuration file taos.cfg of the TDengine client driver when using a native connection
+
+For example, if you specify the password as `taosdata` in the URL and specify the password as `taosdemo` in the Properties simultaneously, JDBC will use the password in the URL to establish the connection.
+
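+A sketch of that precedence rule (the values are illustrative):
+
+```java
+// the URL carries password=taosdata ...
+String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
+Properties connProps = new Properties();
+// ... so this Properties value is ignored: the URL parameter has higher priority
+connProps.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdemo");
+Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
+```
+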
+## Usage examples
+
+### Create database and tables
+
+```java
+Statement stmt = conn.createStatement();
+
+// create database
+stmt.executeUpdate("create database if not exists db");
+
+// use database
+stmt.executeUpdate("use db");
+
+// create table
+stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int, humidity float)");
+```
+
+> **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
+
+### Insert data
+
+```java
+// insert data
+int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now + 1s, 20, 9.3)");
+
+System.out.println("insert " + affectedRows + " rows.");
+```
+
+> now is an internal function. The default is the current time of the client's computer.
+> `now + 1s` represents the current time of the client plus 1 second, followed by the number representing the unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
+
+### Querying data
+
+```java
+// query data
+ResultSet resultSet = stmt.executeQuery("select * from tb");
+
+Timestamp ts = null;
+int temperature = 0;
+float humidity = 0;
+while(resultSet.next()){
+
+ ts = resultSet.getTimestamp(1);
+ temperature = resultSet.getInt(2);
+ humidity = resultSet.getFloat("humidity");
+
+ System.out.printf("%s, %d, %s\n", ts, temperature, humidity);
+}
+```
+
+> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.
+
+### Handling exceptions
+
+After an error is reported, the error message and error code can be obtained through SQLException.
+
+```java
+try (Statement statement = connection.createStatement()) {
+ // executeQuery
+ ResultSet resultSet = statement.executeQuery(sql);
+ // print result
+ printResult(resultSet);
+} catch (SQLException e) {
+ System.out.println("ERROR Message: " + e.getMessage());
+ System.out.println("ERROR Code: " + e.getErrorCode());
+ e.printStackTrace();
+}
+```
+
+There are three types of error codes that the JDBC connector can report:
+
+- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350)
+- Error code of the native connection method (error code between 0x2351 and 0x2400)
+- Error code of other TDengine function modules
+
+For specific error codes, please refer to:
+
+- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
+- [TDengine_ERROR_CODE](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h)
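+
+For example, a minimal sketch of branching on the ranges listed above after catching the exception:
+
+```java
+try (Statement statement = connection.createStatement()) {
+    statement.executeQuery(sql);
+} catch (SQLException e) {
+    int code = e.getErrorCode();
+    if (code >= 0x2301 && code <= 0x2350) {
+        System.out.println("JDBC driver error: " + e.getMessage());
+    } else if (code >= 0x2351 && code <= 0x2400) {
+        System.out.println("native connection error: " + e.getMessage());
+    } else {
+        System.out.println("TDengine error: " + e.getMessage());
+    }
+}
+```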
+
+### Writing data via parameter binding
+
+TDengine's native JDBC connection implementation has significantly improved support for data writing (INSERT) scenarios via the parameter binding interface in version 2.1.2.0 and later. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
+
+**Note:**
+
+- JDBC REST connections do not currently support the bind interface
+- The following sample code is based on taos-jdbcdriver-2.0.36
+- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
+- Both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
+
+```java
+public class ParameterBindingDemo {
+
+ private static final String host = "127.0.0.1";
+ private static final Random random = new Random(System.currentTimeMillis());
+ private static final int BINARY_COLUMN_SIZE = 20;
+ private static final String[] schemaList = {
+ "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
+ "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
+ "create table stable3(ts timestamp, f1 bool) tags(t1 bool)",
+ "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))",
+ "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))"
+ };
+ private static final int numOfSubTable = 10, numOfRow = 10;
+
+ public static void main(String[] args) throws SQLException {
+
+ String jdbcUrl = "jdbc:TAOS://" + host + ":6030/";
+ Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata");
+
+ init(conn);
+
+ bindInteger(conn);
+
+ bindFloat(conn);
+
+ bindBoolean(conn);
+
+ bindBytes(conn);
+
+ bindString(conn);
+
+ conn.close();
+ }
+
+ private static void init(Connection conn) throws SQLException {
+ try (Statement stmt = conn.createStatement()) {
+ stmt.execute("drop database if exists test_parabind");
+ stmt.execute("create database if not exists test_parabind");
+ stmt.execute("use test_parabind");
+ for (int i = 0; i < schemaList.length; i++) {
+ stmt.execute(schemaList[i]);
+ }
+ }
+ }
+
+ private static void bindInteger(Connection conn) throws SQLException {
+ String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)";
+
+ try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
+
+ for (int i = 1; i <= numOfSubTable; i++) {
+ // set table name
+ pstmt.setTableName("t1_" + i);
+ // set tags
+ pstmt.setTagByte(0, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
+ pstmt.setTagShort(1, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
+ pstmt.setTagInt(2, random.nextInt(Integer.MAX_VALUE));
+ pstmt.setTagLong(3, random.nextLong());
+ // set columns
+                ArrayList<Long> tsList = new ArrayList<>();
+ long current = System.currentTimeMillis();
+ for (int j = 0; j < numOfRow; j++)
+ tsList.add(current + j);
+ pstmt.setTimestamp(0, tsList);
+
+                ArrayList<Byte> f1List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f1List.add(Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
+ pstmt.setByte(1, f1List);
+
+                ArrayList<Short> f2List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f2List.add(Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
+ pstmt.setShort(2, f2List);
+
+                ArrayList<Integer> f3List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f3List.add(random.nextInt(Integer.MAX_VALUE));
+ pstmt.setInt(3, f3List);
+
+                ArrayList<Long> f4List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f4List.add(random.nextLong());
+ pstmt.setLong(4, f4List);
+
+ // add column
+ pstmt.columnDataAddBatch();
+ }
+ // execute column
+ pstmt.columnDataExecuteBatch();
+ }
+ }
+
+ private static void bindFloat(Connection conn) throws SQLException {
+ String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)";
+
+ TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class);
+
+ for (int i = 1; i <= numOfSubTable; i++) {
+ // set table name
+ pstmt.setTableName("t2_" + i);
+ // set tags
+ pstmt.setTagFloat(0, random.nextFloat());
+ pstmt.setTagDouble(1, random.nextDouble());
+ // set columns
+            ArrayList<Long> tsList = new ArrayList<>();
+ long current = System.currentTimeMillis();
+ for (int j = 0; j < numOfRow; j++)
+ tsList.add(current + j);
+ pstmt.setTimestamp(0, tsList);
+
+            ArrayList<Float> f1List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f1List.add(random.nextFloat());
+ pstmt.setFloat(1, f1List);
+
+            ArrayList<Double> f2List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f2List.add(random.nextDouble());
+ pstmt.setDouble(2, f2List);
+
+ // add column
+ pstmt.columnDataAddBatch();
+ }
+ // execute
+ pstmt.columnDataExecuteBatch();
+        // close the statement manually when no try-with-resources statement is used
+ pstmt.close();
+ }
+
+ private static void bindBoolean(Connection conn) throws SQLException {
+ String sql = "insert into ? using stable3 tags(?) values(?,?)";
+
+ try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
+ for (int i = 1; i <= numOfSubTable; i++) {
+ // set table name
+ pstmt.setTableName("t3_" + i);
+ // set tags
+ pstmt.setTagBoolean(0, random.nextBoolean());
+ // set columns
+                ArrayList<Long> tsList = new ArrayList<>();
+ long current = System.currentTimeMillis();
+ for (int j = 0; j < numOfRow; j++)
+ tsList.add(current + j);
+ pstmt.setTimestamp(0, tsList);
+
+                ArrayList<Boolean> f1List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++)
+ f1List.add(random.nextBoolean());
+ pstmt.setBoolean(1, f1List);
+
+ // add column
+ pstmt.columnDataAddBatch();
+ }
+ // execute
+ pstmt.columnDataExecuteBatch();
+ }
+ }
+
+ private static void bindBytes(Connection conn) throws SQLException {
+ String sql = "insert into ? using stable4 tags(?) values(?,?)";
+
+ try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
+
+ for (int i = 1; i <= numOfSubTable; i++) {
+ // set table name
+ pstmt.setTableName("t4_" + i);
+ // set tags
+                pstmt.setTagString(0, "abc");
+
+ // set columns
+                ArrayList<Long> tsList = new ArrayList<>();
+ long current = System.currentTimeMillis();
+ for (int j = 0; j < numOfRow; j++)
+ tsList.add(current + j);
+ pstmt.setTimestamp(0, tsList);
+
+                ArrayList<String> f1List = new ArrayList<>();
+                for (int j = 0; j < numOfRow; j++) {
+                    f1List.add("abc");
+                }
+ pstmt.setString(1, f1List, BINARY_COLUMN_SIZE);
+
+ // add column
+ pstmt.columnDataAddBatch();
+ }
+ // execute
+ pstmt.columnDataExecuteBatch();
+ }
+ }
+
+ private static void bindString(Connection conn) throws SQLException {
+ String sql = "insert into ? using stable5 tags(?) values(?,?)";
+
+ try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
+
+ for (int i = 1; i <= numOfSubTable; i++) {
+ // set table name
+ pstmt.setTableName("t5_" + i);
+ // set tags
+ pstmt.setTagNString(0, "California-abc");
+
+ // set columns
+                ArrayList<Long> tsList = new ArrayList<>();
+ long current = System.currentTimeMillis();
+ for (int j = 0; j < numOfRow; j++)
+ tsList.add(current + j);
+ pstmt.setTimestamp(0, tsList);
+
+                ArrayList<String> f1List = new ArrayList<>();
+ for (int j = 0; j < numOfRow; j++) {
+ f1List.add("California-abc");
+ }
+ pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
+
+ // add column
+ pstmt.columnDataAddBatch();
+ }
+ // execute
+ pstmt.columnDataExecuteBatch();
+ }
+ }
+}
+```
+
+The methods to set TAGS values:
+
+```java
+public void setTagNull(int index, int type)
+public void setTagBoolean(int index, boolean value)
+public void setTagInt(int index, int value)
+public void setTagByte(int index, byte value)
+public void setTagShort(int index, short value)
+public void setTagLong(int index, long value)
+public void setTagTimestamp(int index, long value)
+public void setTagFloat(int index, float value)
+public void setTagDouble(int index, double value)
+public void setTagString(int index, String value)
+public void setTagNString(int index, String value)
+```
+
+The methods to set VALUES columns:
+
+```java
+public void setInt(int columnIndex, ArrayList<Integer> list) throws SQLException
+public void setFloat(int columnIndex, ArrayList<Float> list) throws SQLException
+public void setTimestamp(int columnIndex, ArrayList<Long> list) throws SQLException
+public void setLong(int columnIndex, ArrayList<Long> list) throws SQLException
+public void setDouble(int columnIndex, ArrayList<Double> list) throws SQLException
+public void setBoolean(int columnIndex, ArrayList<Boolean> list) throws SQLException
+public void setByte(int columnIndex, ArrayList<Byte> list) throws SQLException
+public void setShort(int columnIndex, ArrayList<Short> list) throws SQLException
+public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException
+public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
+```
+
+### Schemaless Writing
+
+Starting with version 2.2.0.0, TDengine has added the ability to perform schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
+
+**Note:**
+
+- JDBC REST connections do not currently support schemaless writes
+- The following sample code is based on taos-jdbcdriver-2.0.36
+
+```java
+public class SchemalessInsertTest {
+ private static final String host = "127.0.0.1";
+ private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
+ private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
+ private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}";
+
+ public static void main(String[] args) throws SQLException {
+ final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
+ try (Connection connection = DriverManager.getConnection(url)) {
+ init(connection);
+
+ SchemalessWriter writer = new SchemalessWriter(connection);
+ writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
+ writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS);
+ writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.NOT_CONFIGURED);
+ }
+ }
+
+ private static void init(Connection connection) throws SQLException {
+ try (Statement stmt = connection.createStatement()) {
+ stmt.executeUpdate("drop database if exists test_schemaless");
+ stmt.executeUpdate("create database if not exists test_schemaless");
+ stmt.executeUpdate("use test_schemaless");
+ }
+ }
+}
+```
+
+### Subscriptions
+
+The TDengine Java Connector supports subscription functionality with the following application API.
+
+#### Create subscriptions
+
+```java
+TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topicname", "select * from meters", false);
+```
+
+The three parameters of the `subscribe()` method have the following meanings.
+
+- topicname: the name of the subscribed topic. This parameter is the unique identifier of the subscription.
+- sql: the query statement of the subscription. This statement can only be a `select` statement. Only raw data can be queried, and data can be queried only in temporal order.
+- restart: if the subscription already exists, whether to restart or continue the previous subscription
+
+The above example will use the SQL command `select * from meters` to create a subscription named `topicname`. Because `restart` is `false`, if the subscription already exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
+
+#### Subscribe to consume data
+
+```java
+int total = 0;
+while(true) {
+ TSDBResultSet rs = sub.consume();
+ int count = 0;
+ while(rs.next()) {
+ count++;
+ }
+ total += count;
+ System.out.printf("%d rows consumed, total %d\n", count, total);
+    Thread.sleep(1000); // Thread.sleep throws the checked InterruptedException; handle or declare it in real code
+}
+```
+
+The `consume()` method returns a result set containing all the new data since the last `consume()`. Be sure to call `consume()` at a reasonable frequency (e.g. `Thread.sleep(1000)` in the example); otherwise, it will put unnecessary stress on the server side.
+
+#### Close subscriptions
+
+```java
+sub.close(true);
+```
+
+The `close()` method closes a subscription. If its argument is `true`, the subscription progress information is retained, and a subscription with the same name can be created later to continue consuming data; if it is `false`, the subscription progress is not retained.
+
+### Closing resources
+
+```java
+resultSet.close();
+stmt.close();
+conn.close();
+```
+
+> **Be sure to close the connection**; otherwise there will be a connection leak.
+
+### Use with connection pool
+
+#### HikariCP
+
+Example usage is as follows.
+
+```java
+public static void main(String[] args) throws SQLException {
+    HikariConfig config = new HikariConfig();
+    // jdbc properties
+    config.setJdbcUrl("jdbc:TAOS://127.0.0.1:6030/log");
+    config.setUsername("root");
+    config.setPassword("taosdata");
+    // connection pool configurations
+    config.setMinimumIdle(10);          // minimum number of idle connections
+    config.setMaximumPoolSize(10);      // maximum number of connections in the pool
+    config.setConnectionTimeout(30000); // maximum wait milliseconds to get a connection from the pool
+    config.setMaxLifetime(0);           // maximum lifetime of each connection
+    config.setIdleTimeout(0);           // maximum idle time before an idle connection is recycled
+    config.setConnectionTestQuery("select server_status()"); // validation query
+
+    HikariDataSource ds = new HikariDataSource(config); // create data source
+
+    Connection connection = ds.getConnection(); // get connection
+    Statement statement = connection.createStatement(); // get statement
+
+    // query or insert
+    // ...
+
+    connection.close(); // return the connection to the pool
+}
+```
+
+> After getting a connection with getConnection(), you need to call the close() method when you are finished with it. close() does not close the physical connection; it just returns it to the connection pool.
+> For more questions about using HikariCP, please see the [official instructions](https://github.com/brettwooldridge/HikariCP).
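+
+As a usage note, a sketch with try-with-resources returns the connection to the pool automatically when the block exits; it assumes the `ds` data source from the example above and a hypothetical table `tb` in the `log` database:
+
+```java
+try (Connection conn = ds.getConnection();
+     Statement stmt = conn.createStatement()) {
+    stmt.executeUpdate("insert into tb values(now, 1, 1.0)");
+} // conn.close() is called implicitly here, returning the connection to the pool
+```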
+
+#### Druid
+
+Example usage is as follows.
+
+```java
+public static void main(String[] args) throws Exception {
+
+ DruidDataSource dataSource = new DruidDataSource();
+ // jdbc properties
+ dataSource.setDriverClassName("com.taosdata.jdbc.TSDBDriver");
+ dataSource.setUrl(url);
+ dataSource.setUsername("root");
+ dataSource.setPassword("taosdata");
+ // pool configurations
+ dataSource.setInitialSize(10);
+ dataSource.setMinIdle(10);
+ dataSource.setMaxActive(10);
+ dataSource.setMaxWait(30000);
+ dataSource.setValidationQuery("select server_status()");
+
+ Connection connection = dataSource.getConnection(); // get connection
+ Statement statement = connection.createStatement(); // get statement
+ //query or insert
+ // ...
+
+ connection.close(); // put back to connection pool
+}
+```
+
+> For more questions about using Druid, please see the [official instructions](https://github.com/alibaba/druid).
+
+**Caution:**
+
+- TDengine `v1.6.4.1` and later provide a special function `select server_status()` for heartbeat detection, so it is recommended to use `select server_status()` as the validation query when using connection pooling.
+
+As you can see below, `select server_status()` returns `1` on successful execution.
+
+```sql
+taos> select server_status();
+server_status()|
+================
+1 |
+Query OK, 1 row(s) in set (0.000141s)
+```
+
+### More sample programs
+
+The source code of the sample applications is under `TDengine/examples/JDBC`:
+
+- JDBCDemo: JDBC sample source code.
+- JDBCConnectorChecker: JDBC installation checker source and jar package.
+- connectionPools: using taos-jdbcdriver in connection pools such as HikariCP, Druid, dbcp, c3p0, etc.
+- SpringJdbcTemplate: using taos-jdbcdriver in Spring JdbcTemplate.
+- mybatisplus-demo: using taos-jdbcdriver in Springboot + Mybatis.
+
+Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
+
+## Recent update logs
+
+| taos-jdbcdriver version | major changes |
+| :---------------------: | :--------------------------------------------: |
+| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
+| 2.0.38 | JDBC REST connections add bulk pull function |
+| 2.0.37 | Support json tags |
+| 2.0.36 | Support schemaless writing |
+
+## Frequently Asked Questions
+
+1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform batch writes/updates?
+
+   **Cause**: In TDengine's JDBC implementation, SQL statements submitted by the `addBatch()` method are executed sequentially in the order they are added, which does not reduce the number of interactions with the server and brings no performance improvement.
+
+   **Solution**: 1. splice multiple values into a single insert statement (as sketched below); 2. use multi-threaded concurrent insertion; 3. use parameter-bound writing
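+
+   A minimal sketch of option 1, splicing multiple rows into one INSERT statement, assuming the `tb` table and `stmt` from the earlier examples:
+
+   ```java
+   StringBuilder sb = new StringBuilder("insert into tb values");
+   for (int i = 0; i < 100; i++) {
+       // one value group per row; "a" is the millisecond time unit
+       sb.append("(now + ").append(i).append("a, 20, 9.3)");
+   }
+   int affectedRows = stmt.executeUpdate(sb.toString());
+   System.out.println("inserted " + affectedRows + " rows in one round trip");
+   ```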
+
+2. java.lang.UnsatisfiedLinkError: no taos in java.library.path
+
+ **Cause**: The program did not find the dependent native library `taos`.
+
+   **Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory; on Linux, creating the following soft link will work: `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
+
+3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on an IA 32-bit platform
+
+ **Cause**: Currently, TDengine only supports 64-bit JDK.
+
+   **Solution**: Reinstall the 64-bit JDK.
+
+For other questions, please refer to [FAQ](/train-faq/faq)
+
+## API Reference
+
+[taos-jdbcdriver doc](https://docs.taosdata.com/api/taos-jdbcdriver)
diff --git a/docs-en/14-reference/03-connector/node.mdx b/docs/en/14-reference/03-connector/node.mdx
similarity index 100%
rename from docs-en/14-reference/03-connector/node.mdx
rename to docs/en/14-reference/03-connector/node.mdx
diff --git a/docs/en/14-reference/03-connector/php.mdx b/docs/en/14-reference/03-connector/php.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..69dcce91e80fa05face1ffb35effe1ce1efa2631
--- /dev/null
+++ b/docs/en/14-reference/03-connector/php.mdx
@@ -0,0 +1,150 @@
+---
+sidebar_position: 1
+sidebar_label: PHP
+title: PHP Connector
+---
+
+`php-tdengine` is the TDengine PHP connector provided by the TDengine community. In particular, it supports Swoole coroutines.
+
+The PHP connector relies on the TDengine client driver.
+
+Project Repository: <https://github.com/Yurunsoft/php-tdengine>
+
+After the TDengine client or server is installed, `taos.h` is located at:
+
+- Linux:`/usr/local/taos/include`
+- Windows:`C:\TDengine\include`
+
+The TDengine client driver is located at:
+
+- Linux: `/usr/local/taos/driver/libtaos.so`
+- Windows: `C:\TDengine\taos.dll`
+
+## Supported Platforms
+
+- Windows, Linux, and macOS
+
+- PHP >= 7.4
+
+- TDengine >= 2.0
+
+- Swoole >= 4.8 (Optional)
+
+## Supported Versions
+
+Because the version of the TDengine client driver is tightly coupled to that of the TDengine server, we strongly suggest using a client driver of the same version as the TDengine server, even though the client driver can work with the server as long as the first three sections of their version numbers match.
+
+## Installation
+
+### Install TDengine Client Driver
+
+Regarding how to install the TDengine client driver, please refer to [Install Client Driver](/reference/connector#installation-steps)
+
+### Install php-tdengine
+
+**Download Source Code Package and Unzip:**
+
+```shell
+curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
+&& mkdir php-tdengine \
+&& tar -xzf php-tdengine.tar.gz -C php-tdengine --strip-components=1
+```
+
+> The version number `v1.0.2` is only an example; it can be replaced with any newer version. Please find available versions in [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
+
+**Non-Swoole Environment:**
+
+```shell
+phpize && ./configure && make -j && make install
+```
+
+**Specify TDengine location:**
+
+```shell
+phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
+```
+
+> `--with-tdengine-dir=` is followed by the TDengine location.
+> It's useful when the TDengine installation location can't be found automatically, or on macOS.
+
+**Swoole Environment:**
+
+```shell
+phpize && ./configure --enable-swoole && make -j && make install
+```
+
+**Enable Extension:**
+
+Option One: Add `extension=tdengine` in `php.ini`.
+
+Option Two: Use CLI `php -dextension=tdengine test.php`.
+
+## Sample Programs
+
+This section demonstrates a few sample programs that use the TDengine PHP connector to access a TDengine cluster.
+
+> Any error throws the exception `TDengine\Exception\TDengineException`.
+
+### Establish Connection
+
+```php
+{{#include docs/examples/php/connect.php}}
+```
+
+
+
+### Insert Data
+
+
+```php
+{{#include docs/examples/php/insert.php}}
+```
+
+
+
+### Synchronous Query
+
+
+```php
+{{#include docs/examples/php/query.php}}
+```
+
+
+
+### Parameter Binding
+
+
+```php
+{{#include docs/examples/php/insert_stmt.php}}
+```
+
+
+
+## Constants
+
+| Constant | Description |
+| ----------------------------------- | ----------- |
+| `TDengine\TSDB_DATA_TYPE_NULL` | null |
+| `TDengine\TSDB_DATA_TYPE_BOOL` | bool |
+| `TDengine\TSDB_DATA_TYPE_TINYINT` | tinyint |
+| `TDengine\TSDB_DATA_TYPE_SMALLINT` | smallint |
+| `TDengine\TSDB_DATA_TYPE_INT` | int |
+| `TDengine\TSDB_DATA_TYPE_BIGINT` | bigint |
+| `TDengine\TSDB_DATA_TYPE_FLOAT` | float |
+| `TDengine\TSDB_DATA_TYPE_DOUBLE` | double |
+| `TDengine\TSDB_DATA_TYPE_BINARY` | binary |
+| `TDengine\TSDB_DATA_TYPE_TIMESTAMP` | timestamp |
+| `TDengine\TSDB_DATA_TYPE_NCHAR` | nchar |
+| `TDengine\TSDB_DATA_TYPE_UTINYINT` | utinyint |
+| `TDengine\TSDB_DATA_TYPE_USMALLINT` | usmallint |
+| `TDengine\TSDB_DATA_TYPE_UINT` | uint |
+| `TDengine\TSDB_DATA_TYPE_UBIGINT` | ubigint |
diff --git a/docs/en/14-reference/03-connector/python.mdx b/docs/en/14-reference/03-connector/python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..c992d4fcf6803f914aa778b22d8c8c18d22d4bfb
--- /dev/null
+++ b/docs/en/14-reference/03-connector/python.mdx
@@ -0,0 +1,360 @@
+---
+sidebar_position: 3
+sidebar_label: Python
+title: TDengine Python Connector
+description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. taospy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of taospy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+`taospy` is the official Python connector for TDengine. `taospy` provides a rich set of APIs that makes it easy for Python applications to access TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
+In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
+
+The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
+
+The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
+
+## Supported Platforms
+
+- The [supported platforms](/reference/connector/#supported-platforms) for the native connection are the same as the ones supported by the TDengine client.
+- REST connections are supported on all platforms that can run Python.
+
+## Version selection
+
+We recommend using the latest version of `taospy`, regardless of the version of TDengine.
+
+## Supported features
+
+- Native connections support all the core features of TDengine, including connection management, SQL execution, bind interface, subscriptions, and schemaless writing.
+- REST connections support features such as connection management and SQL execution. (Through SQL execution you can manage databases, tables, and supertables, write data, query data, create continuous queries, and so on.)
+
+## Installation
+
+### Preparation
+
+1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
+2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
+
+If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
+
+### Install via pip
+
+#### Uninstalling an older version
+
+If you have installed an older version of the Python Connector, please uninstall it beforehand.
+
+```
+pip3 uninstall taos taospy
+```
+
+:::note
+Earlier versions of the TDengine client software include the Python connector. If the Python connector was installed from the client package's installation directory, the corresponding Python package name is `taos`. So the uninstall command above includes `taos`; it is harmless if that package does not exist.
+
+:::
+
+#### To install `taospy`
+
+
+
+
+Install the latest version:
+
+```
+pip3 install taospy
+```
+
+You can also specify a specific version to install:
+
+```
+pip3 install taospy==2.3.0
+```
+
+
+
+
+```
+pip3 install git+https://github.com/taosdata/taos-connector-python.git
+```
+
+
+
+
+### Installation verification
+
+
+
+
+For a native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python interactive shell, you can type:
+
+```python
+import taos
+```
+
+
+
+
+For REST connections, verify that the `taosrest` module can be imported successfully. In the Python interactive shell, type:
+
+```python
+import taosrest
+```
+
+
+
+
+:::tip
+If you have multiple versions of Python on your system, you may have various `pip` commands. Be sure to use the correct path for the `pip` command. Above, we used the `pip3` command, which rules out using the `pip` corresponding to Python 2.x. However, if you have more than one version of Python 3.x on your system, you still need to check that the installation path is correct. The easiest way to verify this is to type `pip3 install taospy` again on the command line; it will print out the exact location of `taospy`, for example, on Windows:
+
+```
+C:\> pip3 install taospy
+Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
+Requirement already satisfied: taospy in c:\users\username\appdata\local\programs\python\python310\lib\site-packages (2.3.0)
+```
+
+:::
+
+## Establish connection
+
+### Connectivity testing
+
+Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
+
+
+
+
+Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the `ping` command.
+
+```
+ping <FQDN>
+```
+
+Then test if the cluster can be appropriately connected with TDengine CLI:
+
+```
+taos -h <FQDN> -P <serverPort>
+```
+
+The FQDN above can be the FQDN of any dnode in the cluster, and the PORT is the serverPort corresponding to this dnode.
+
+
+
+
+For REST connections, make sure that the cluster and the taosAdapter component are running. This can be tested using the following `curl` command.
+
+```
+curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
+```
+
+The FQDN above is the FQDN of the machine running taosAdapter, and PORT is the port taosAdapter is listening on (default `6041`).
+If the test is successful, it will output the server version information, e.g.
+
+```json
+{
+ "status": "succ",
+ "head": ["server_version()"],
+ "column_meta": [["server_version()", 8, 8]],
+ "data": [["2.4.0.16"]],
+ "rows": 1
+}
+```
+
+
+
+
+### Using connectors to establish connections
+
+The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
+
+
+
+
+```python
+{{#include docs/examples/python/connect_native_reference.py}}
+```
+
+All arguments of the `connect()` function are optional keyword arguments. The connection parameters are as follows.
+
+- `host` : The FQDN of the node to connect to. There is no default value. If this parameter is not provided, the firstEP in the client configuration file will be connected.
+- `user` : The TDengine user name. The default value is `root`.
+- `password` : TDengine user password. The default value is `taosdata`.
+- `port` : The starting port of the data node to connect to, i.e., the serverPort configuration. The default value is 6030, which will only take effect if the host parameter is provided.
+- `config` : The path to the client configuration file. On Windows systems, the default is `C:\TDengine\cfg`. The default is `/etc/taos/` on Linux systems.
+- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
+
+:::warning
+`config` and `timezone` are both process-level configurations. We recommend that all connections made by a process use the same parameter values. Otherwise, unpredictable errors may occur.
+:::
+
+:::tip
+The `connect()` function returns a `taos.TaosConnection` instance. In client-side multi-threaded scenarios, we recommend that each thread request a separate connection instance rather than sharing a connection between multiple threads.
+
+:::
+
+
+
+
+```python
+{{#include docs/examples/python/connect_rest_examples.py:connect}}
+```
+
+All arguments to the `connect()` function are optional keyword arguments. The connection parameters are as follows.
+
+- `url`: The URL of the taosAdapter REST service. The default is `http://localhost:6041`.
+- `user`: TDengine user name. The default is `root`.
+- `password`: TDengine user password. The default is `taosdata`.
+- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
+
+
+
+
+## Sample program
+
+### Basic Usage
+
+
+
+
+##### TaosConnection class
+
+The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
+
+```python title="execute method"
+{{#include docs/examples/python/connection_usage_native_reference.py:insert}}
+```
+
+```python title="query method"
+{{#include docs/examples/python/connection_usage_native_reference.py:query}}
+```
+
+:::tip
+The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
+:::
+
+##### Use of TaosResult class
+
+In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
+
+```python title="blocks_iter method"
+{{#include docs/examples/python/result_set_examples.py}}
+```
+##### Use of the TaosCursor class
+
+The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
+
+```python title="Use of TaosCursor"
+{{#include docs/examples/python/cursor_usage_native_reference.py}}
+```
+
+:::note
+The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, a cursor instance must remain exclusive to one thread and cannot be shared across threads; otherwise, the returned results will be incorrect.
+
+:::
+
+
+
+
+##### Use of TaosRestCursor class
+
+The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
+
+```python title="Use of TaosRestCursor"
+{{#include docs/examples/python/connect_rest_examples.py:basic}}
+```
+- `cursor.execute` : Used to execute arbitrary SQL statements.
+- `cursor.rowcount` : For write operations, returns the number of successful rows written. For query operations, returns the number of rows in the result set.
+- `cursor.description` : Returns the description of the field. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
+
+##### Use of the RestClient class
+
+The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
+
+```python title="Use of RestClient"
+{{#include docs/examples/python/rest_client_example.py}}
+```
+
+For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
+
+
+
+
+### Used with pandas
+
+
+
+
+```python
+{{#include docs/examples/python/conn_native_pandas.py}}
+```
+
+
+
+
+```python
+{{#include docs/examples/python/conn_rest_pandas.py}}
+```
+
+
+
+
+```python
+{{#include docs/examples/python/conn_native_sqlalchemy.py}}
+```
+
+
+
+
+```python
+{{#include docs/examples/python/conn_rest_sqlalchemy.py}}
+```
+
+
+
+
+### Other sample programs
+
+| Example program links | Example program content |
+| -------------------------------------------------------------------------------------------------------------- | --------------------------------------------- |
+| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
+| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at a time |
+| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
+| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | use JSON type tags |
+| [subscribe-async.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-async.py) | asynchronous subscription |
+| [subscribe-sync.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-sync.py) | synchronous subscription |
+
+## Other notes
+
+### Exception handling
+
+All errors from database operations are thrown directly as exceptions and the error message from the database is passed up the exception stack. The application is responsible for exception handling. For example:
+
+```python
+{{#include docs/examples/python/handle_exception.py}}
+```
+
+### About nanoseconds
+
+Due to the current imperfection of Python's nanosecond support (see the links below), the current implementation returns integers for nanosecond precision instead of the `datetime` type produced for `ms` and `us`, which application developers will need to handle on their own. Using pandas' `to_datetime()` is recommended. The Python connector may change this interface in the future once Python officially supports nanoseconds in full.
+
+1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
+2. https://www.python.org/dev/peps/pep-0564/
+
+
+## Frequently Asked Questions
+
+Welcome to [ask questions or report questions](https://github.com/taosdata/taos-connector-python/issues).
+
+## Important Update
+
+| Connector version | Important Update | Release date |
+| ---------- | --------------------------------------------------------------------------------- | ---------- |
+| 2.3.1 | 1. support TDengine REST API 2. remove support for Python version below 3.6 | 2022-04-28 |
+| 2.2.5 | support timezone option when connecting | 2022-04-13 |
+| 2.2.2 | support sqlalchemy dialect plugin | 2022-03-28 |
+
+[Release Notes](https://github.com/taosdata/taos-connector-python/releases)
+
+## API Reference
+
+- [taos](https://docs.taosdata.com/api/taospy/taos/)
+- [taosrest](https://docs.taosdata.com/api/taospy/taosrest)
diff --git a/docs/en/14-reference/03-connector/rust.mdx b/docs/en/14-reference/03-connector/rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..56ca586c7e8ada6e4422596906e01887d4726fd0
--- /dev/null
+++ b/docs/en/14-reference/03-connector/rust.mdx
@@ -0,0 +1,384 @@
+---
+toc_max_heading_level: 4
+sidebar_position: 5
+sidebar_label: Rust
+title: TDengine Rust Connector
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+import Preparation from "./_preparation.mdx"
+import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
+import RustInfluxLine from "../../07-develop/03-insert-data/_rust_line.mdx"
+import RustOpenTSDBTelnet from "../../07-develop/03-insert-data/_rust_opts_telnet.mdx"
+import RustOpenTSDBJson from "../../07-develop/03-insert-data/_rust_opts_json.mdx"
+import RustQuery from "../../07-develop/04-query-data/_rust.mdx"
+
+`libtaos` is the official Rust language connector for TDengine. Rust developers can use it to develop applications that access TDengine instance data.
+
+`libtaos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is **REST connection**, which connects to TDengine instances via taosAdapter's REST interface.
+
+The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/libtaos-rs).
+
+## Supported platforms
+
+The platforms supported by native connections are the same as those supported by the TDengine client driver.
+REST connections are supported on all platforms that can run Rust.
+
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support).
+
+The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 2.4 or higher to avoid known issues.
+
+## Installation
+
+### Pre-installation
+* Install the Rust development toolchain
+* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
+
+### Adding libtaos dependencies
+
+Add the [libtaos][libtaos] dependency to the [Rust](https://rust-lang.org) project as follows, depending on the connection method selected.
+
+
+
+
+Add [libtaos][libtaos] to the `Cargo.toml` file.
+
+```toml
+[dependencies]
+# use default feature
+libtaos = "*"
+```
+
+
+
+
+Add [libtaos][libtaos] to the `Cargo.toml` file and enable the `rest` feature.
+
+```toml
+[dependencies]
+# use rest feature
+libtaos = { version = "*", features = ["rest"]}
+```
+
+
+
+
+
+### Using connection pools
+
+Please enable the `r2d2` feature in `Cargo.toml`.
+
+```toml
+[dependencies]
+# with taosc
+libtaos = { version = "*", features = ["r2d2"] }
+# or rest
+libtaos = { version = "*", features = ["rest", "r2d2"] }
+```
+
+## Create a connection
+
+[TaosCfgBuilder] provides the user with a builder-style API for subsequently creating connections or connection pools.
+
+```rust
+let cfg: TaosCfg = TaosCfgBuilder::default()
+    .ip("127.0.0.1")
+    .user("root")
+    .pass("taosdata")
+    .db("log") // do not set this if you do not need a default database
+    .port(6030u16)
+    .build()
+    .expect("TaosCfg builder error");
+```
+
+You can now use this object to create the connection.
+
+```rust
+let conn = cfg.connect()?;
+```
+
+More than one connection object can be created from the same configuration.
+
+```rust
+let conn = cfg.connect()?;
+let conn2 = cfg.connect()?;
+```
+
+You can use connection pools in applications.
+
+```rust
+let pool = r2d2::Pool::builder()
+    .max_size(10000) // max connections
+    .build(cfg)?;
+
+// ...
+// Use pool to get connection
+let conn = pool.get()?;
+```
+
+After that, you can perform the following operations on the database.
+
+```rust
+async fn demo() -> Result<(), Error> {
+    // get connection ...
+
+    // create database
+    conn.exec("create database if not exists demo").await?;
+    // change database context
+    conn.exec("use demo").await?;
+    // create table
+    conn.exec("create table if not exists tb1 (ts timestamp, v int)").await?;
+    // insert
+    conn.exec("insert into tb1 values(now, 1)").await?;
+    // query
+    let rows = conn.query("select * from tb1").await?;
+    for row in rows.rows {
+        println!("{}", row.into_iter().join(","));
+    }
+    Ok(())
+}
+```
+
+## Usage examples
+
+### Write data
+
+#### SQL Write
+
+
+
+#### InfluxDB line protocol write
+
+
+
+#### OpenTSDB Telnet line protocol write
+
+
+
+#### OpenTSDB JSON line protocol write
+
+
+
+### Query data
+
+
+
+### More sample programs
+
+| Program Path | Program Description |
+| -------------- | ----------------------------------------------------------------------------- |
+| [demo.rs] | Basic API Usage Examples |
+| [bailongma-rs] | A Prometheus remote storage API adapter that uses TDengine as the storage backend, using the r2d2 connection pool |
+
+## API Reference
+
+### Connection constructor API
+
+The [Builder Pattern](https://doc.rust-lang.org/1.0.0/style/ownership/builders.html) constructor pattern is Rust's solution for handling complex data types or optional configuration types. The [libtaos] implementation uses the connection constructor [TaosCfgBuilder] as the entry point for the TDengine Rust connector. The [TaosCfgBuilder] provides optional configuration of servers, ports, databases, usernames, passwords, etc.
+
+Using the `default()` method, you can construct a [TaosCfg] with default parameters for subsequent connections to the database or establishing connection pools.
+
+```rust
+let cfg = TaosCfgBuilder::default().build()?;
+```
+
+Using the constructor pattern, the user can set options on demand.
+
+```rust
+let cfg = TaosCfgBuilder::default()
+    .ip("127.0.0.1")
+    .user("root")
+    .pass("taosdata")
+    .db("log")
+    .port(6030u16)
+    .build()?;
+```
+
+Create a TDengine connection using the [TaosCfg] object.
+
+```rust
+let conn: Taos = cfg.connect()?;
+```
+
+### Connection pooling
+
+In complex applications, we recommend enabling connection pools. The connection pool for [libtaos] is implemented using [r2d2].
+
+A connection pool with default parameters can be generated as follows.
+
+```rust
+let pool = r2d2::Pool::new(cfg)?;
+```
+
+You can also set the connection pool parameters using the pool's builder.
+
+```rust
+use std::time::Duration;
+let pool = r2d2::Pool::builder()
+    .max_size(5000) // max connections
+    .max_lifetime(Some(Duration::from_secs(100 * 60))) // lifetime of each connection (100 minutes)
+    .min_idle(Some(1000)) // minimum idle connections
+    .connection_timeout(Duration::from_secs(2 * 60)) // 2 minutes
+    .build(cfg)?;
+```
+
+In the application code, use `pool.get()?` to get a connection object [Taos].
+
+```rust
+let taos = pool.get()?;
+```
+
+The [Taos] structure is the connection manager in [libtaos] and provides two main APIs.
+
+1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
+
+ ```rust
+   taos.exec("create database if not exists demo").await?;
+ ```
+
+2. `query`: Execute the query statement and return the [TaosQueryData] object.
+
+ ```rust
+   let q = taos.query("select * from log.logs").await?;
+ ```
+
+ The [TaosQueryData] object stores the query result data and basic information about the returned columns (column name, type, length).
+
+ Column information is stored using [ColumnMeta].
+
+ ```rust
+ let cols = &q.column_meta;
+ for col in cols {
+ println!("name: {}, type: {:?} , bytes: {}", col.name, col.type_, col.bytes);
+ }
+ ```
+
+ It fetches data line by line.
+
+ ```rust
+ for (i, row) in q.rows.iter().enumerate() {
+ for (j, cell) in row.iter().enumerate() {
+ println!("cell({}, {}) data: {}", i, j, cell);
+ }
+ }
+ ```
+
+Note that Rust asynchronous functions and an asynchronous runtime are required.
+
+[Taos] provides a few Rust methods that encapsulate SQL to reduce the frequency of `format!` code blocks.
+
+- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
+- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
+- `.use_database(database: &str)`: Executes the `USE` statement.
+
+In addition, this structure is also the entry point for [Parameter Binding](#bind-interface) and the [Line Protocol Interface](#line-protocol-interface). Please refer to the specific API descriptions for usage.
+
+### Bind Interface
+
+Similar to the C interface, Rust provides a wrapper for the bind interface. First, create a bind object [Stmt] for a SQL statement from the [Taos] object.
+
+```rust
+let mut stmt: Stmt = taos.stmt("insert into ? values(?, ?)")?;
+```
+
+The bind object provides a set of interfaces for implementing parameter binding.
+
+##### `.set_tbname(tbname: impl ToCString)`
+
+Binds the table name.
+
+##### `.set_tbname_tags(tbname: impl ToCString, tags: impl IntoParams)`
+
+Binds the subtable name and tag values when the SQL statement uses a super table.
+
+```rust
+let mut stmt = taos.stmt("insert into ? using stb0 tags(?) values(?, ?)")?;
+// tags can be created with any supported type; here is an example using JSON
+let tag = Field::Json(serde_json::from_str("{\"tag1\":\"one, two, three, four, five, six, seven, eight, nine, ten\"}").unwrap());
+stmt.set_tbname_tags("tb0", [&tag])?;
+```
+
+##### `.bind(params: impl IntoParams)`
+
+Bind value types. Use the [Field] structure to construct the desired type and bind.
+
+```rust
+let ts = Field::Timestamp(Timestamp::now());
+let value = Field::Float(0.0);
+stmt.bind(vec![ts, value].iter())?;
+```
+
+##### `.execute()`
+
+Execute the SQL. [Stmt] objects can be reused, re-bound, and executed again after execution.
+
+```rust
+stmt.execute()?;
+
+// next bind cycle.
+// stmt.set_tbname()?;
+// stmt.bind()?;
+// stmt.execute()?;
+```
+
+### Line protocol interface
+
+The line protocol interface supports multiple protocols and timestamp precisions; the constants used to select them are defined in the schemaless module.
+
+```rust
+use libtaos::*;
+use libtaos::schemaless::*;
+```
+
+- InfluxDB line protocol
+
+ ```rust
+  let lines = [
+      "st,t1=abc,t2=def,t3=anything c1=3i64,c3=L\"pass\",c2=false 1626006833639000000",
+      "st,t1=abc,t2=def,t3=anything c1=3i64,c3=L\"abc\",c4=4f64 1626006833639000000",
+  ];
+  taos.schemaless_insert(&lines, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_NANOSECONDS)?;
+ ```
+
+- OpenTSDB Telnet Protocol
+
+ ```rust
+ let lines = ["sys.if.bytes.out 1479496100 1.3E3 host=web01 interface=eth0"];
+  taos.schemaless_insert(&lines, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_SECONDS)?;
+ ```
+
+- OpenTSDB JSON protocol
+
+ ```rust
+ let lines = [r#"
+ {
+ "metric": "st",
+ "timestamp": 1626006833,
+ "value": 10,
+ "tags": {
+ "t1": true,
+ "t2": false,
+ "t3": 10,
+ "t4": "123_abc_.! @#$%^&*:;,. /? |+-=()[]{}<>"
+ }
+ }"#];
+  taos.schemaless_insert(&lines, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_SECONDS)?;
+ ```
+
+For usage instructions on other related structure APIs, please see the Rust documentation hosting page: <https://docs.rs/libtaos>.
+
+[libtaos]: https://github.com/taosdata/libtaos-rs
+[tdengine]: https://github.com/taosdata/TDengine
+[bailongma-rs]: https://github.com/taosdata/bailongma-rs
+[r2d2]: https://crates.io/crates/r2d2
+[demo.rs]: https://github.com/taosdata/libtaos-rs/blob/main/examples/demo.rs
+[TaosCfgBuilder]: https://docs.rs/libtaos/latest/libtaos/struct.TaosCfgBuilder.html
+[TaosCfg]: https://docs.rs/libtaos/latest/libtaos/struct.TaosCfg.html
+[Taos]: https://docs.rs/libtaos/latest/libtaos/struct.Taos.html
+[TaosQueryData]: https://docs.rs/libtaos/latest/libtaos/field/struct.TaosQueryData.html
+[Field]: https://docs.rs/libtaos/latest/libtaos/field/enum.Field.html
+[Stmt]: https://docs.rs/libtaos/latest/libtaos/stmt/struct.Stmt.html
diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs/en/14-reference/03-connector/tdengine-jdbc-connector.webp
similarity index 100%
rename from docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp
rename to docs/en/14-reference/03-connector/tdengine-jdbc-connector.webp
diff --git a/docs/en/14-reference/04-taosadapter.md b/docs/en/14-reference/04-taosadapter.md
new file mode 100644
index 0000000000000000000000000000000000000000..cad229c32d602e8fc595ec06f72a1a486e2af77b
--- /dev/null
+++ b/docs/en/14-reference/04-taosadapter.md
@@ -0,0 +1,337 @@
+---
+title: "taosAdapter"
+description: "taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine."
+sidebar_label: "taosAdapter"
+---
+
+import Prometheus from "./_prometheus.mdx"
+import CollectD from "./_collectd.mdx"
+import StatsD from "./_statsd.mdx"
+import Icinga2 from "./_icinga2.mdx"
+import TCollector from "./_tcollector.mdx"
+
+taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface that allows InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
+
+taosAdapter provides the following features.
+
+- RESTful interface
+- InfluxDB v1 compliant write interface
+- OpenTSDB JSON and telnet format writes compatible
+- Seamless connection to Telegraf
+- Seamless connection to collectd
+- Seamless connection to StatsD
+- Supports Prometheus remote_read and remote_write
+
+## taosAdapter architecture diagram
+
+
+
+## taosAdapter Deployment Method
+
+### Install taosAdapter
+
+taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need any additional steps to install taosAdapter. You can download the TDengine server installation package from the [TDengine official website](https://tdengine.com/all-downloads/) (taosAdapter is included in v2.4.0.0 and later versions). If you need to deploy taosAdapter separately, on a server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
+
+### Start/Stop taosAdapter
+
+On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
+
+### Remove taosAdapter
+
+Use the command `rmtaos` to remove the TDengine server software, including taosAdapter, if you installed it from the tar.gz package. If you installed it using a .deb or .rpm package, use the corresponding package manager command, such as apt or rpm, to remove the TDengine server, including taosAdapter.
+
+### Upgrade taosAdapter
+
+taosAdapter and the TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
+If taosAdapter is deployed separately from the TDengine server, upgrade it by upgrading the TDengine server package on the server where it is deployed.
+
+## taosAdapter parameter list
+
+taosAdapter is configurable via command-line arguments, environment variables, and configuration files. The default configuration file is /etc/taos/taosadapter.toml on Linux.
+
+Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command-line format is `arg=val`, e.g., `taosadapter -p=30000 --debug=true`. The detailed list is as follows:
+
+```shell
+Usage of taosAdapter:
+ --collectd.db string collectd db name. Env "TAOS_ADAPTER_COLLECTD_DB" (default "collectd")
+ --collectd.enable enable collectd. Env "TAOS_ADAPTER_COLLECTD_ENABLE" (default true)
+ --collectd.password string collectd password. Env "TAOS_ADAPTER_COLLECTD_PASSWORD" (default "taosdata")
+ --collectd.port int collectd server port. Env "TAOS_ADAPTER_COLLECTD_PORT" (default 6045)
+ --collectd.user string collectd user. Env "TAOS_ADAPTER_COLLECTD_USER" (default "root")
+ --collectd.worker int collectd write worker. Env "TAOS_ADAPTER_COLLECTD_WORKER" (default 10)
+ -c, --config string config path default /etc/taos/taosadapter.toml
+ --cors.allowAllOrigins cors allow all origins. Env "TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS" (default true)
+ --cors.allowCredentials cors allow credentials. Env "TAOS_ADAPTER_CORS_ALLOW_Credentials"
+ --cors.allowHeaders stringArray cors allow HEADERS. Env "TAOS_ADAPTER_ALLOW_HEADERS"
+ --cors.allowOrigins stringArray cors allow origins. Env "TAOS_ADAPTER_ALLOW_ORIGINS"
+ --cors.allowWebSockets cors allow WebSockets. Env "TAOS_ADAPTER_CORS_ALLOW_WebSockets"
+ --cors.exposeHeaders stringArray cors expose headers. Env "TAOS_ADAPTER_Expose_Headers"
+ --debug enable debug mode. Env "TAOS_ADAPTER_DEBUG"
+ --help Print this help message and exit
+ --influxdb.enable enable influxdb. Env "TAOS_ADAPTER_INFLUXDB_ENABLE" (default true)
+ --log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
+ --log.rotationCount uint log rotation count. Env "TAOS_ADAPTER_LOG_ROTATION_COUNT" (default 30)
+ --log.rotationSize string log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_ROTATION_SIZE" (default "1GB")
+ --log.rotationTime duration log rotation time. Env "TAOS_ADAPTER_LOG_ROTATION_TIME" (default 24h0m0s)
+ --logLevel string log level (panic fatal error warn warning info debug trace). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
+ --monitor.collectDuration duration Set monitor duration. Env "TAOS_MONITOR_COLLECT_DURATION" (default 3s)
+ --monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_MONITOR_IDENTITY"
+ --monitor.incgroup Whether running in cgroup. Env "TAOS_MONITOR_INCGROUP"
+ --monitor.password string TDengine password. Env "TAOS_MONITOR_PASSWORD" (default "taosdata")
+ --monitor.pauseAllMemoryThreshold float Memory percentage threshold for pause all. Env "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
+ --monitor.pauseQueryMemoryThreshold float Memory percentage threshold for pause query. Env "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
+ --monitor.user string TDengine user. Env "TAOS_MONITOR_USER" (default "root")
+ --monitor.writeInterval duration Set write to TDengine interval. Env "TAOS_MONITOR_WRITE_INTERVAL" (default 30s)
+ --monitor.writeToTD Whether write metrics to TDengine. Env "TAOS_MONITOR_WRITE_TO_TD" (default true)
+ --node_exporter.caCertFile string node_exporter ca cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CA_CERT_FILE"
+ --node_exporter.certFile string node_exporter cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CERT_FILE"
+ --node_exporter.db string node_exporter db name. Env "TAOS_ADAPTER_NODE_EXPORTER_DB" (default "node_exporter")
+ --node_exporter.enable enable node_exporter. Env "TAOS_ADAPTER_NODE_EXPORTER_ENABLE"
+ --node_exporter.gatherDuration duration node_exporter gather duration. Env "TAOS_ADAPTER_NODE_EXPORTER_GATHER_DURATION" (default 5s)
+ --node_exporter.httpBearerTokenString string node_exporter http bearer token. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_BEARER_TOKEN_STRING"
+ --node_exporter.httpPassword string node_exporter http password. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_PASSWORD"
+ --node_exporter.httpUsername string node_exporter http username. Env "TAOS_ADAPTER_NODE_EXPORTER_HTTP_USERNAME"
+ --node_exporter.insecureSkipVerify node_exporter skip ssl check. Env "TAOS_ADAPTER_NODE_EXPORTER_INSECURE_SKIP_VERIFY" (default true)
+ --node_exporter.keyFile string node_exporter cert key file path. Env "TAOS_ADAPTER_NODE_EXPORTER_KEY_FILE"
+ --node_exporter.password string node_exporter password. Env "TAOS_ADAPTER_NODE_EXPORTER_PASSWORD" (default "taosdata")
+ --node_exporter.responseTimeout duration node_exporter response timeout. Env "TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT" (default 5s)
+ --node_exporter.urls strings node_exporter urls. Env "TAOS_ADAPTER_NODE_EXPORTER_URLS" (default [http://localhost:9100])
+ --node_exporter.user string node_exporter user. Env "TAOS_ADAPTER_NODE_EXPORTER_USER" (default "root")
+ --opentsdb.enable enable opentsdb. Env "TAOS_ADAPTER_OPENTSDB_ENABLE" (default true)
+ --opentsdb_telnet.dbs strings opentsdb_telnet db names. Env "TAOS_ADAPTER_OPENTSDB_TELNET_DBS" (default [opentsdb_telnet,collectd_tsdb,icinga2_tsdb,tcollector_tsdb])
+ --opentsdb_telnet.enable enable opentsdb telnet,warning: without auth info(default false). Env "TAOS_ADAPTER_OPENTSDB_TELNET_ENABLE"
+ --opentsdb_telnet.maxTCPConnections int max tcp connections. Env "TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS" (default 250)
+ --opentsdb_telnet.password string opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
+ --opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
+ --opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
+ --opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
+ --pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT" (default 1h0m0s)
+ --pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT" (default 4000)
+ --pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE" (default 4000)
+ -P, --port int http port. Env "TAOS_ADAPTER_PORT" (default 6041)
+ --prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
+ --restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
+ --ssl.certFile string ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
+ --ssl.enable enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
+ --ssl.keyFile string ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
+ --statsd.allowPendingMessages int statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
+ --statsd.db string statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
+ --statsd.deleteCounters statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
+ --statsd.deleteGauges statsd delete gauge cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_GAUGES" (default true)
+ --statsd.deleteSets statsd delete set cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_SETS" (default true)
+ --statsd.deleteTimings statsd delete timing cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_TIMINGS" (default true)
+ --statsd.enable enable statsd. Env "TAOS_ADAPTER_STATSD_ENABLE" (default true)
+ --statsd.gatherInterval duration statsd gather interval. Env "TAOS_ADAPTER_STATSD_GATHER_INTERVAL" (default 5s)
+ --statsd.maxTCPConnections int statsd max tcp connections. Env "TAOS_ADAPTER_STATSD_MAX_TCP_CONNECTIONS" (default 250)
+ --statsd.password string statsd password. Env "TAOS_ADAPTER_STATSD_PASSWORD" (default "taosdata")
+ --statsd.port int statsd server port. Env "TAOS_ADAPTER_STATSD_PORT" (default 6044)
+ --statsd.protocol string statsd protocol [tcp or udp]. Env "TAOS_ADAPTER_STATSD_PROTOCOL" (default "udp")
+ --statsd.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE"
+ --statsd.user string statsd user. Env "TAOS_ADAPTER_STATSD_USER" (default "root")
+ --statsd.worker int statsd write worker. Env "TAOS_ADAPTER_STATSD_WORKER" (default 10)
+ --taosConfigDir string load taos client config path. Env "TAOS_ADAPTER_TAOS_CONFIG_FILE"
+ --version Print the version and exit
+```
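+
+As an illustration of the precedence rules above, each of the following sets the HTTP port; the TOML key name is an assumption, so check the sample configuration file linked below:
+
+```shell
+# 1. Command-line argument (highest precedence)
+taosadapter -P 6041
+
+# 2. Environment variable
+TAOS_ADAPTER_PORT=6041 taosadapter
+
+# 3. Configuration file (lowest precedence; key name assumed)
+#    /etc/taos/taosadapter.toml contains:  port = 6041
+taosadapter -c /etc/taos/taosadapter.toml
+```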
+
+Note:
+Please set the following Cross-Origin Resource Sharing (CORS) parameters as appropriate for your deployment when calling the interfaces from a browser:
+
+```text
+AllowAllOrigins
+AllowOrigins
+AllowHeaders
+ExposeHeaders
+AllowCredentials
+AllowWebSockets
+```
+
+You can ignore these settings if you do not call the interfaces through a browser.
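+
+For instance, a minimal sketch that restricts browser access to a single origin (the origin URL is hypothetical):
+
+```shell
+taosadapter --cors.allowAllOrigins=false \
+            --cors.allowOrigins="http://example.com" \
+            --cors.allowCredentials
+```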
+
+For details on the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS).
+
+See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/develop/example/config/taosadapter.toml) for a sample configuration file.
+
+## Feature List
+
+- Compatible with RESTful interfaces [REST API](/reference/rest-api/)
+- Compatible with InfluxDB v1 write interface
+ [https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
+- Compatible with OpenTSDB JSON and telnet format writes
+- Seamless connection to collectd
+  collectd is a system statistics collection daemon. Please visit [https://collectd.org/](https://collectd.org/) for more information.
+- Seamless connection with StatsD
+ StatsD is a simple yet powerful daemon for aggregating statistical information. Please visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
+- Seamless connection with icinga2
+  icinga2 is software that collects check result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
+- Seamless connection to TCollector
+  TCollector is a client-side process that gathers data from local collectors and pushes the data to OpenTSDB. Please visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
+- Seamless connection to node_exporter
+  node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
+- Support for Prometheus remote_read and remote_write
+  remote_read and remote_write are Prometheus interfaces for reading data from and writing data to other data storage solutions. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
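+
+As one example of wiring up a collector, the following sketch enables node_exporter scraping with the flags from the parameter list above (the URL shown is the flag's default and assumes a node_exporter running locally):
+
+```shell
+taosadapter --node_exporter.enable \
+            --node_exporter.urls="http://localhost:9100" \
+            --node_exporter.db=node_exporter
+```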
+
+## Interfaces
+
+### TDengine RESTful interface
+
+You can use any client that supports the HTTP protocol to write data to or query data from TDengine by accessing the REST interface address `http://<fqdn>:6041/`. See the [official documentation](/reference/connector#restful) for details. The following endpoints are supported:
+
+```text
+/rest/sql
+/rest/sqlt
+/rest/sqlutc
+```
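+
+A minimal sketch of a query against `/rest/sql`, assuming taosAdapter runs locally with the default credentials `root`/`taosdata`:
+
+```shell
+curl -L -u root:taosdata \
+  -d "show databases;" \
+  http://localhost:6041/rest/sql
+```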
+
+### InfluxDB
+
+You can use any client that supports the HTTP protocol to access the RESTful interface address `http://<fqdn>:6041/` to write data in InfluxDB compatible format to TDengine. The endpoint is as follows:
+
+```text
+/influxdb/v1/write
+```
+
+The following InfluxDB query parameters are supported:
+
+- `db` Specifies the database name used by TDengine
+- `precision` The time precision used by TDengine
+- `u` TDengine user name
+- `p` TDengine password
+
+Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
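+
+A minimal sketch of a line-protocol write, assuming taosAdapter runs locally with default credentials and that the database `test` (a hypothetical name) already exists:
+
+```shell
+curl -i "http://localhost:6041/influxdb/v1/write?db=test&u=root&p=taosdata" \
+  --data-binary "cpu_load,host=server01 value=0.64 1626006833639000000"
+```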
+
+### OpenTSDB
+
+You can use any client that supports the HTTP protocol to access the RESTful interface address `http://