-
-On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
-
-## Typical Use Cases
-
-As a high-performance, scalable time-series database with SQL support, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time-series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section provides a more detailed analysis of the applicable scenarios.
-
-### Characteristics and Requirements of Data Sources
-
-| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
-| A massive amount of total data | | | √ | TDengine provides excellent scale-out capability and a storage structure with a high compression ratio, achieving best-in-class storage efficiency.|
-| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
-| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
-
-### System Architecture Requirements
-
-| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
-| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
-| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
-
-### System Function Requirements
-
-| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.|
-| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
-
-### System Performance Requirements
-
-| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
-| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products.|
-| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
-
-### System Maintenance Requirements
-
-| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
-| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
-| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
-| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the taos shell for ad hoc queries make maintenance simpler, allow reuse and reduce learning costs.|
-| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
-
-## Comparison with other databases
-
-- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/23/4975.html)
-- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
-- [TDengine vs InfluxDB, OpenTSDB, Cassandra, MySQL, ClickHouse](https://www.tdengine.com/downloads/TDengine_Testing_Report_en.pdf)
-- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
-- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
-- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
diff --git a/docs-en/04-concept/_category_.yml b/docs-en/04-concept/_category_.yml
deleted file mode 100644
index 12c659a9265e86d0e74d88a751c19d5d715e9fe0..0000000000000000000000000000000000000000
--- a/docs-en/04-concept/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Concepts
\ No newline at end of file
diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md
deleted file mode 100644
index 850f705146c4829db579f14be1a686ef9052f678..0000000000000000000000000000000000000000
--- a/docs-en/04-concept/index.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-title: Concepts
----
-
-In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time-series use case. We assume the following: 1. each smart meter collects three metrics, i.e. current, voltage, and phase; 2. there are multiple smart meters; and 3. each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
-
-
-
-Each row contains the device ID, timestamp, collected metrics (current, voltage, phase as above), and static tags (location and groupId in Table 1) associated with the devices. Each smart meter generates a row (measurement) at a pre-defined time interval or when triggered by an external event. The device produces a sequence of measurements with associated timestamps.
-
-## Metric
-
-Metric refers to a physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc. The value changes with time, and the data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases.
-
-## Label/Tag
-
-Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time.
-
-## Data Collection Point
-
-Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
-
-## Table
-
-Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process them with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
-
-To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices D1001, D1002, D1003, and D1004 to store the data collected. This design has several benefits:
-
-1. Since the metric data from different DCPs is fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
-2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
-3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, this greatly reduces random read operations and improves read and query performance by orders of magnitude.
-4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. The values of a given metric usually change only gradually over a time range, which allows for a higher compression ratio.
-
-If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
-
-TDengine suggests using the DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage and phase above). Each metric has a corresponding column in the table. The data type of a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index and does not build an index on any of the stored metrics. Column-wise storage is used.
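-
-As an illustration only (later chapters recommend creating such tables from a STable template instead), a minimal sketch of the table for smart meter d1001 might look like the following, with the mandatory timestamp first and one column per metric:
-
-```sql
--- Illustrative only: a plain table for a single data collection point.
-CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
-```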
-
-## Super Table (STable)
-
-The design of one table for one data collection point will require a huge number of tables, which is difficult to manage. Furthermore, applications often need to perform aggregation operations across DCPs, and such aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the STable (Super Table) concept is introduced by TDengine.
-
-STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
-
-In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
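-
-A minimal sketch for the smart meter example, with the metric schema first and the tag schema declared separately (an equivalent statement appears later in the data model chapter):
-
-```sql
--- Illustrative only: one STable per type of data collection point, with its tag schema.
-CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-```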
-
-## Subtable
-
-When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in TDengine (a creation sketch follows the list below). The differences between a regular table and a subtable are:
-1. A subtable is a table; all SQL commands that apply to a regular table also apply to a subtable.
-2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
-3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
-4. A regular table cannot be converted into a subtable, and vice versa.
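-
-A minimal sketch of creating a subtable from the `meters` STable sketched above, supplying this DCP's tag values (an equivalent statement appears later in the data model chapter):
-
-```sql
--- Illustrative only: the subtable inherits the metric schema from the STable and stores its own tag values.
-CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
-```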
-
-The relationship between a STable and the subtables created based on this STable is as follows:
-
-1. A STable contains multiple subtables with the same metric schema but with different tag values.
-2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
-3. A STable defines only a template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
-
-Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform the aggregation operation. This reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs.
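-
-A minimal sketch of such a query, reusing the smart meter example and the tag values used elsewhere in this documentation:
-
-```sql
--- Illustrative only: aggregate over every subtable of `meters` whose location tag matches.
-SELECT AVG(voltage) FROM meters WHERE location = "California.SanFrancisco";
-```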
-
-In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
-
-## Database
-
-A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. Different types of DCPs often have different data characteristics, including the frequency of data collection, data retention time, the number of replicas, the size of data blocks, whether data is allowed to be updated, and so on. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
-
-In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
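-
-A minimal sketch, reusing the `power` database example that appears later in the data model chapter (parameters not shown keep their defaults):
-
-```sql
--- Illustrative only: keep data for 365 days, start a new data file every 10 days, use 6 memory blocks, allow updates.
-CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
-```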
-
-## FQDN & End Point
-
-FQDN (Fully Qualified Domain Name) is the full domain name of a specific computer or host on the Internet. An FQDN consists of two parts: hostname and domain name. For example, the FQDN of a mail server might be mail.tdengine.com. The hostname is mail, and the host is located in the domain name tdengine.com. DNS (Domain Name System) is responsible for translating an FQDN into an IP address. For systems without DNS, name resolution can be handled by configuring the hosts file.
-
-Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
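-
-As an aside, TDengine's cluster management SQL also identifies nodes by their End Point; the statement below is a sketch based on TDengine's cluster commands rather than something defined on this page:
-
-```sql
--- Illustrative only: add a node to the cluster by its End Point (FQDN:port), not by its IP address.
-CREATE DNODE "h1.tdengine.com:6030";
-```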
-
-TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
diff --git a/docs-en/04-get-started.md b/docs-en/04-get-started.md
new file mode 100644
index 0000000000000000000000000000000000000000..66a84d50d4f03085e48e37b17006b96959a5a5c3
--- /dev/null
+++ b/docs-en/04-get-started.md
@@ -0,0 +1 @@
+# Get Started
\ No newline at end of file
diff --git a/docs-en/05-develop/01-connect.md b/docs-en/05-develop/01-connect.md
new file mode 100644
index 0000000000000000000000000000000000000000..a18966a362a6393efd89df2aeacabcfff7f672c8
--- /dev/null
+++ b/docs-en/05-develop/01-connect.md
@@ -0,0 +1 @@
+# Connect
\ No newline at end of file
diff --git a/docs-en/05-develop/02-model.md b/docs-en/05-develop/02-model.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2b8b119d76ba8300d89d76772616d18b92bc879
--- /dev/null
+++ b/docs-en/05-develop/02-model.md
@@ -0,0 +1 @@
+# Data Model
\ No newline at end of file
diff --git a/docs-en/05-develop/03-insert-data.md b/docs-en/05-develop/03-insert-data.md
new file mode 100644
index 0000000000000000000000000000000000000000..31017cef5b50473ce148eb482544d3481ec7a7ba
--- /dev/null
+++ b/docs-en/05-develop/03-insert-data.md
@@ -0,0 +1 @@
+# Insert Data
diff --git a/docs-en/05-develop/04-query-data.md b/docs-en/05-develop/04-query-data.md
new file mode 100644
index 0000000000000000000000000000000000000000..36200f59fe25f15bfd6dc08def904eb82e5b35fb
--- /dev/null
+++ b/docs-en/05-develop/04-query-data.md
@@ -0,0 +1 @@
+# Query Data
\ No newline at end of file
diff --git a/docs-en/07-develop/index.md b/docs-en/05-develop/index.md
similarity index 100%
rename from docs-en/07-develop/index.md
rename to docs-en/05-develop/index.md
diff --git a/docs-en/05-get-started/_apt_get_install.mdx b/docs-en/05-get-started/_apt_get_install.mdx
deleted file mode 100644
index 40f6cad1f672a97fd28e6d4b5795d32b2ff0d26c..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/_apt_get_install.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
-`apt-get` can be used to install TDengine from the official package repository.
-
-**Package Repository**
-
-```
-wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
-echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
-```
-
-The repository required for installing beta versions can be configured as below:
-
-```
-echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
-```
-
-**Install With apt-get**
-
-```
-sudo apt-get update
-apt-cache policy tdengine
-sudo apt-get install tdengine
-```
-
-:::tip
-`apt-get` can only be used on Debian or Ubuntu Linux.
-:::
diff --git a/docs-en/05-get-started/_category_.yml b/docs-en/05-get-started/_category_.yml
deleted file mode 100644
index 043ae21554ffd8f274c6afe41c5ae5e7da742b26..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Get Started
diff --git a/docs-en/05-get-started/_pkg_install.mdx b/docs-en/05-get-started/_pkg_install.mdx
deleted file mode 100644
index cf10497c96ba1d777e45340b0312d97c127b6fcb..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/_pkg_install.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-import PkgList from "/components/PkgList";
-
-It's very easy to install TDengine: it takes only a few minutes from downloading the package to finishing the installation.
-
-For the convenience of users, from version 2.4.0.10, the standard server side installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark` and sample code. If only the `taosd` server and C/C++ connector are required, you can also choose to download the lite package.
-
-Three kinds of packages are provided: tar.gz, rpm and deb. In particular, the tar.gz package is provided for the convenience of enterprise customers on different kinds of operating systems; it includes `taosdump` and the TDinsight installation script, which are normally only provided in the taos-tools rpm and deb packages.
-
-Between two major release versions, some beta versions may be delivered for users to try some new features.
-
-
-
-For the details please refer to [Install and Uninstall](/operation/pkg-install).
-
-To see the details of versions, please refer to [Download List](https://tdengine.com/all-downloads) and [Release Notes](https://github.com/taosdata/TDengine/releases).
-
-
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
deleted file mode 100644
index 56958ef3ec1c206ee0cff45c67fd3c3a6fa6753a..0000000000000000000000000000000000000000
--- a/docs-en/05-get-started/index.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Get Started
-description: 'Install TDengine from Docker image, apt-get or package, and run TAOS CLI and taosBenchmark to experience the features'
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import PkgInstall from "./\_pkg_install.mdx";
-import AptGetInstall from "./\_apt_get_install.mdx";
-
-## Quick Install
-
-The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
-
-TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
-
-
-
-If docker is already installed on your computer, execute the following command:
-
-```shell
-docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-Make sure the container is running
-
-```shell
-docker ps
-```
-
-Enter the container and execute bash:
-
-```shell
-docker exec -it <container-id> bash
-```
-
-Then you can execute the Linux commands and access TDengine.
-
-For detailed steps, please visit [Experience TDengine via Docker](/train-faq/docker).
-
-:::info
-Starting from 2.4.0.10, besides taosd, the TDengine docker image includes taos, taosAdapter, taosdump, taosBenchmark, TDinsight, scripts and sample code. Once the TDengine container is started, it will start both taosAdapter and taosd automatically to support the RESTful interface.
-
-:::
-
-
-
-
-
-
-
-
-
-
-If you would like to check the source code, build the package yourself or contribute to the project, please check the [TDengine GitHub Repository](https://github.com/taosdata/TDengine).
-
-
-
-
-## Quick Launch
-
-After installation, you can use the `systemctl` command to start the TDengine service `taosd`.
-
-```bash
-systemctl start taosd
-```
-
-Check if taosd is running:
-
-```bash
-systemctl status taosd
-```
-
-If everything is fine, you can run TDengine command-line interface `taos` to access TDengine and test it out yourself.
-
-:::info
-
-- systemctl requires _root_ privileges. If you are not _root_, please add sudo before the command.
-- To get feedback and keep improving the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in the configuration file taos.cfg.
-- TDengine uses the FQDN (usually the hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs, to ensure that the FQDN can be resolved.
-- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory is flushed to disk. It may take some time depending on the cache size.
-
-TDengine supports installation on systems that run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management. Use `which systemctl` to check whether the system has `systemd` installed:
-
-```bash
-which systemctl
-```
-
-If the system does not have `systemd`, you can start TDengine manually by executing `/usr/local/taos/bin/taosd`.
-
-:::
-
-## Command Line Interface
-
-To manage the running TDengine instance, or to execute ad-hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as TDengine CLI), `taos`. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
-
-```bash
-taos
-```
-
-If it connects to the TDengine server successfully, it will print out the version and a welcome message. If it fails, it will print out an error message; please check the [FAQ](/train-faq/faq) for troubleshooting connection issues. The TDengine CLI prompt is:
-
-```cmd
-taos>
-```
-
-Inside the TDengine CLI, you can execute SQL commands to create/drop databases and tables, and run queries. Each SQL command must end with a semicolon. For example:
-
-```sql
-create database demo;
-use demo;
-create table t (ts timestamp, speed int);
-insert into t values ('2019-07-15 00:00:00', 10);
-insert into t values ('2019-07-15 01:00:00', 20);
-select * from t;
- ts | speed |
-========================================
- 2019-07-15 00:00:00.000 | 10 |
- 2019-07-15 01:00:00.000 | 20 |
-Query OK, 2 row(s) in set (0.003128s)
-```
-
-Besides executing SQL commands, system administrators can check running status, add/drop user accounts and manage the running instances. The TDengine CLI with the client driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../reference/taos-shell/).
-
-## Experience the blazing fast speed
-
-After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
-
-```bash
-taosBenchmark
-```
-
-This command will create a super table "meters" under the database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has the tags "location" and "groupId": groupId is set randomly from 1 to 10, and location is set to "California.SanFrancisco" or "California.SanDiego".
-
-This command will insert 100 million rows into the database quickly. The time to insert depends on the hardware configuration; it takes only about a dozen seconds on a regular PC server.
-
-taosBenchmark provides command-line options and a configuration file to customize the scenarios, like the number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check the [reference for taosBenchmark](/reference/taosbenchmark).
-
-## Experience query speed
-
-After using taosBenchmark to insert a large amount of data, you can execute queries from the TDengine CLI to experience the lightning-fast query speed.
-
-Query the total number of rows under the super table "meters":
-
-```sql
-taos> select count(*) from test.meters;
-```
-
-Query the average, maximum and minimum of 100 million rows:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters;
-```
-
-Query the total number of rows with location="California.SanFrancisco":
-
-```sql
-taos> select count(*) from test.meters where location="California.SanFrancisco";
-```
-
-Query the average, maximum and minimum of all rows with groupId=10:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
-```
-
-Query the average, maximum and minimum for table d10 in a 10 seconds time interval:
-
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
-```
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/06-taos-sql/index.md
similarity index 100%
rename from docs-en/12-taos-sql/index.md
rename to docs-en/06-taos-sql/index.md
diff --git a/docs-en/07-develop/01-connect/_category_.yml b/docs-en/07-develop/01-connect/_category_.yml
deleted file mode 100644
index 83f9754f582f541ca62c7ff8701698dd949c3f99..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Connect
diff --git a/docs-en/07-develop/01-connect/_connect_c.mdx b/docs-en/07-develop/01-connect/_connect_c.mdx
deleted file mode 100644
index 174bf45c4e2f26bab8f57c098f9f8f00d2f5064d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c title="Native Connection"
-{{#include docs-examples/c/connect_example.c}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_cs.mdx b/docs-en/07-develop/01-connect/_connect_cs.mdx
deleted file mode 100644
index 52ea2d437123a26bd87e6f3fdc05a17141f9f835..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_cs.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```csharp title="Native Connection"
-{{#include docs-examples/csharp/ConnectExample.cs}}
-```
-
-:::info
-C# connector supports only native connection for now.
-
-:::
diff --git a/docs-en/07-develop/01-connect/_connect_go.mdx b/docs-en/07-develop/01-connect/_connect_go.mdx
deleted file mode 100644
index 1dd5d67e3533bba21960269e49e3d843b026efc8..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_go.mdx
+++ /dev/null
@@ -1,17 +0,0 @@
-#### Unified Database Access Interface
-
-```go title="Native Connection"
-{{#include docs-examples/go/connect/cgoexample/main.go}}
-```
-
-```go title="REST Connection"
-{{#include docs-examples/go/connect/restexample/main.go}}
-```
-
-#### Advanced Features
-
-The af package of driver-go can also be used to establish a connection; in this way, some advanced features of TDengine, like parameter binding and subscription, can be used.
-
-```go title="Establish native connection using af package"
-{{#include docs-examples/go/connect/afconn/main.go}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_java.mdx b/docs-en/07-develop/01-connect/_connect_java.mdx
deleted file mode 100644
index 1c3e9326bf2ae597ffba683250dd43986e670469..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_java.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-```java title="Native Connection"
-{{#include docs-examples/java/src/main/java/com/taos/example/JNIConnectExample.java}}
-```
-
-```java title="REST Connection"
-{{#include docs-examples/java/src/main/java/com/taos/example/RESTConnectExample.java:main}}
-```
-
-When using a REST connection, the bulk pulling feature can be enabled if the resulting data set is huge.
-
-```java title="Enable Bulk Pulling" {4}
-{{#include docs-examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
-```
-
-For more configuration about the connection, please refer to the [Java Connector](/reference/connector/java).
diff --git a/docs-en/07-develop/01-connect/_connect_node.mdx b/docs-en/07-develop/01-connect/_connect_node.mdx
deleted file mode 100644
index 489b0386e991ee1e8ddd173205637b75ae5a0c95..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_node.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-```js title="Native Connection"
-{{#include docs-examples/node/nativeexample/connect.js}}
-```
-
-```js title="REST Connection"
-{{#include docs-examples/node/restexample/connect.js}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_python.mdx b/docs-en/07-develop/01-connect/_connect_python.mdx
deleted file mode 100644
index 44b7586fadbf618231fce7753d3b4b68853a7f57..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_python.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```python title="Native Connection"
-{{#include docs-examples/python/connect_example.py}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_r.mdx b/docs-en/07-develop/01-connect/_connect_r.mdx
deleted file mode 100644
index 09c3d71ac35b1134d3089247daea9a13db4129e2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_r.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```r title="Native Connection"
-{{#include docs-examples/R/connect_native.r:demo}}
-```
diff --git a/docs-en/07-develop/01-connect/_connect_rust.mdx b/docs-en/07-develop/01-connect/_connect_rust.mdx
deleted file mode 100644
index aa19f58de6c9bab69df0663e5369402ab1a8f899..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/_connect_rust.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```rust title="Native Connection/REST Connection"
-{{#include docs-examples/rust/nativeexample/examples/connect.rs}}
-```
-
-:::note
-For the Rust connector, the connection method depends on the enabled feature. If the "rest" feature is enabled, only the REST implementation is compiled and packaged.
-
-:::
diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
deleted file mode 100644
index b9217b828d0d08c4ff1eacd27406d4e3bfba8eac..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/01-connect/index.md
+++ /dev/null
@@ -1,240 +0,0 @@
----
-sidebar_label: Connect
-title: Connect
-description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import ConnJava from "./\_connect_java.mdx";
-import ConnGo from "./\_connect_go.mdx";
-import ConnRust from "./\_connect_rust.mdx";
-import ConnNode from "./\_connect_node.mdx";
-import ConnPythonNative from "./\_connect_python.mdx";
-import ConnCSNative from "./\_connect_cs.mdx";
-import ConnC from "./\_connect_c.mdx";
-import ConnR from "./\_connect_r.mdx";
-import InstallOnWindows from "../../14-reference/03-connector/\_windows_install.mdx";
-import InstallOnLinux from "../../14-reference/03-connector/\_linux_install.mdx";
-import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
-import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
-
-Application programs running on any platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors for multiple programming languages, including C/C++, Java, Python, Go, Node.js, C#, and Rust, to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/).
-
-## Establish Connection
-
-There are two ways for a connector to establish connections to TDengine:
-
-1. Connection through the REST API provided by the taosAdapter component; this way is called "REST connection" hereinafter.
-2. Connection through the TDengine client driver (taosc); this way is called "Native connection" hereinafter.
-
-Key differences:
-
-1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
-2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
-3. The REST connection is more accessible with cross-platform support; however, it results in roughly 30% lower performance.
-
-## Install Client Driver taosc
-
-If you choose to use the native connection and the application is not running on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If you choose to use the REST connection or the application is running on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
-
-### Install
-
-
-
-
-
-
-
-
-
-
-### Verify
-
-After the above installation and configuration are done, and after making sure the TDengine service is started and running, the TDengine command-line interface `taos` can be launched to access TDengine.
-
-
-
-
-
-
-
-
-
-
-## Install Connectors
-
-
-
-
-If Maven is used to manage the project, just add the following dependency to `pom.xml`.
-
-```xml
-<dependency>
-  <groupId>com.taosdata.jdbc</groupId>
-  <artifactId>taos-jdbcdriver</artifactId>
-  <version>2.0.38</version>
-</dependency>
-```
-
-
-
-
-Install from PyPI using `pip`:
-
-```
-pip install taospy
-```
-
-Install from Git URL:
-
-```
-pip install git+https://github.com/taosdata/taos-connector-python.git
-```
-
-
-
-
-Just add the `driver-go` dependency to `go.mod`.
-
-```go-mod title=go.mod
-module goexample
-
-go 1.17
-
-require github.com/taosdata/driver-go/v2 develop
-```
-
-:::note
-`driver-go` uses `cgo` to wrap the APIs provided by taosc, while `cgo` needs `gcc` to compile source code in C language, so please make sure you have proper `gcc` on your system.
-
-:::
-
-
-
-
-Just add the `libtaos` dependency to `Cargo.toml`.
-
-```toml title=Cargo.toml
-[dependencies]
-libtaos = { version = "0.4.2"}
-```
-
-:::info
-Rust connector uses different features to distinguish the way to establish connection. To establish REST connection, please enable `rest` feature.
-
-```toml
-libtaos = { version = "*", features = ["rest"] }
-```
-
-:::
-
-
-
-
-The Node.js connector provides different packages for different ways of establishing connections.
-
-1. Install Node.js Native Connector
-
-```
-npm i td2.0-connector
-```
-
-:::note
-It's recommended to use a Node.js version between `node-v12.8.0` and `node-v13.0.0`.
-:::
-
-2. Install Node.js REST Connector
-
-```
-npm i td2.0-rest-connector
-```
-
-
-
-
-Just add a reference to [TDengine.Connector](https://www.nuget.org/packages/TDengine.Connector/) in the project configuration file.
-
-```xml title=csharp.csproj {12}
-<Project Sdk="Microsoft.NET.Sdk">
-
-  <PropertyGroup>
-    <OutputType>Exe</OutputType>
-    <TargetFramework>net6.0</TargetFramework>
-    <ImplicitUsings>enable</ImplicitUsings>
-    <Nullable>enable</Nullable>
-    <StartupObject>TDengineExample.AsyncQueryExample</StartupObject>
-  </PropertyGroup>
-
-  <ItemGroup>
-    <PackageReference Include="TDengine.Connector" Version="*" />
-  </ItemGroup>
-
-</Project>
-```
-
-Or add by `dotnet` command.
-
-```
-dotnet add package TDengine.Connector
-```
-
-:::note
-The sample code below is based on .NET 6.0; it may need to be adjusted if your .NET version is not exactly the same.
-
-:::
-
-
-
-
-1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/2.0.38/).
-2. Install the dependency package `RJDBC`:
-
-```R
-install.packages("RJDBC")
-```
-
-
-
-
-If the client driver (taosc) is already installed, then the C connector is already available.
-
-
-
-
-
-## Establish Connection
-
-Prior to establishing a connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with FQDN configured to "localhost" and serverPort configured to "6030".
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-:::tip
-If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.taosdata.com/train-faq/faq).
-
-:::
diff --git a/docs-en/07-develop/02-model/_category_.yml b/docs-en/07-develop/02-model/_category_.yml
deleted file mode 100644
index a2b49eb879c593b29cba1b1bfab3f5b2b615c1e6..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/02-model/_category_.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-label: Data Model
-
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
deleted file mode 100644
index 86853aaaa3f7285fe042a892e2ec903d57894111..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/02-model/index.mdx
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: Data Model
----
-
-The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
-
-## Create Database
-
-The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others, which determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, many parameters can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, the time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file and so on. Below is an example of the SQL statement to create a database.
-
-```sql
-CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
-```
-
-In the above SQL statement:
-- a database named "power" will be created
-- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
-- a new data file will be created every 10 days
-- the number of memory blocks is 6
-- data is allowed to be updated
-
-For more details please refer to [Database](/taos-sql/database).
-
-After creating a database, the current database in use can be switched using SQL command `USE`. For example the SQL statement below switches the current database to `power`. Without the current database specified, table name must be preceded with the corresponding database name.
-
-```sql
-USE power;
-```
-
-:::note
-
-- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
-- JOIN operations can't be performed on tables from two different databases.
-- Timestamp needs to be specified when inserting rows or querying historical rows.
-
-:::
-
-## Create STable
-
-In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the SQL statement below can be used to create the super table.
-
-```sql
-CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
-```
-
-:::note
-If you are using versions prior to 2.0.15, the `STable` keyword needs to be replaced with `TABLE`.
-
-:::
-
-Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
-
-For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For an electrical power system, we need to create STables for meters, transformers, busbars and switches respectively. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
-
-At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, while one or more STables can exist in a database.
-
-## Create Table
-
-A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table. Additionally, one or more tags can be created for each table. To create a table, a STable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement.
-
-```sql
-CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
-```
-
-In the above SQL statement, "d1001" is the table name and "meters" is the STable name, followed by the value of tag "location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in this example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
-
-In the TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
-
-:::warning
-It's not recommended to create a table in a database while using a STable from another database as template.
-
-:::
-
-:::tip
-It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as a tag value.
-
-:::
-
-## Create Table Automatically
-
-In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
-
-```sql
-INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
-```
-
-In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as template with tag value `"California.SanFrancisco", 2`.
-
-For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting).
-
-## Single Column vs Multiple Column
-
-A multi-column data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.
-
-However, there is another kind of design, i.e. the single-column data model, in which a table is created for each metric. This means that a STable is required for each kind of metric. For example, in a single-column model, 3 STables would be required for current, voltage and phase.
-
-It's recommended to use the multi-column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use the single-column data model.
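-
-A minimal sketch contrasting the two designs for the smart meter example (the `*_readings` STable names are illustrative):
-
-```sql
--- Multi-column model: one STable, all metrics collected together as columns.
-CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-
--- Single-column model: one STable per metric.
-CREATE STABLE current_readings (ts TIMESTAMP, value FLOAT) TAGS (location BINARY(64), groupId INT);
-CREATE STABLE voltage_readings (ts TIMESTAMP, value INT) TAGS (location BINARY(64), groupId INT);
-CREATE STABLE phase_readings (ts TIMESTAMP, value FLOAT) TAGS (location BINARY(64), groupId INT);
-```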
diff --git a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
deleted file mode 100644
index 397b1a14fd76c1372c79eb88575f2bf21cb62050..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
+++ /dev/null
@@ -1,130 +0,0 @@
----
-sidebar_label: Insert Using SQL
-title: Insert Using SQL
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaSQL from "./_java_sql.mdx";
-import JavaStmt from "./_java_stmt.mdx";
-import PySQL from "./_py_sql.mdx";
-import PyStmt from "./_py_stmt.mdx";
-import GoSQL from "./_go_sql.mdx";
-import GoStmt from "./_go_stmt.mdx";
-import RustSQL from "./_rust_sql.mdx";
-import RustStmt from "./_rust_stmt.mdx";
-import NodeSQL from "./_js_sql.mdx";
-import NodeStmt from "./_js_stmt.mdx";
-import CsSQL from "./_cs_sql.mdx";
-import CsStmt from "./_cs_stmt.mdx";
-import CSQL from "./_c_sql.mdx";
-import CStmt from "./_c_stmt.mdx";
-
-## Introduction
-
-Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can also be used to manually insert data.
-
-### Insert Single Row
-
-The below SQL statement is used to insert one row into table "d1001".
-
-```sql
-INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
-```
-
-### Insert Multiple Rows
-
-Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
-
-```sql
-INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
-```
-
-### Insert into Multiple Tables
-
-Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
-
-```sql
-INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
-```
-
-For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
-
-:::info
-
-- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
-- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
-
-:::
-
-:::warning
-
-- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
-- The timestamp to be inserted must be newer than the current time minus the value of the parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days cannot be inserted. The timestamp to be inserted also cannot be later than the current time plus the value of the parameter `DAYS`. If `DAYS` is set to 2, data more than 2 days in the future cannot be inserted.
-
-:::
-
-## Examples
-
-### Insert Using SQL
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-:::note
-
-1. With either native connection or REST connection, the above samples can work well.
-2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
-
-:::
-
-### Insert with Parameter Binding
-
-TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly, increasing insert performance by avoiding the cost of parsing SQL statements.
-
-Parameter binding is available only with a native connection.
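-
-A minimal sketch of the kind of statement such an API prepares (the host-language bind calls differ per connector and are shown in the examples below):
-
-```sql
--- Illustrative only: each `?` is later bound to a concrete value through the connector's bind API.
-INSERT INTO d1001 VALUES (?, ?, ?, ?);
-```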
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx b/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
deleted file mode 100644
index be46ebf0c97a29b57c1b57eb8ea5c9394f85b93a..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/02-influxdb-line.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
----
-sidebar_label: InfluxDB Line Protocol
-title: InfluxDB Line Protocol
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaLine from "./_java_line.mdx";
-import PyLine from "./_py_line.mdx";
-import GoLine from "./_go_line.mdx";
-import RustLine from "./_rust_line.mdx";
-import NodeLine from "./_js_line.mdx";
-import CsLine from "./_cs_line.mdx";
-import CLine from "./_c_line.mdx";
-
-## Introduction
-
-In the InfluxDB Line protocol format, a single line of text is used to represent one row of data. Each line contains 4 parts as shown below.
-
-```
-measurement,tag_set field_set timestamp
-```
-
-- `measurement` will be used as the name of the STable
-- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`
-- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`
-- `timestamp` is the primary key timestamp corresponding to this row of data
-
-For example:
-
-```
-meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
-```
-
-:::note
-
-- All the data in `tag_set` will be converted to nchar type automatically.
-- Each value in `field_set` must be self-descriptive of its data type. For example, 1.2f32 means the value 1.2 of float type. Without the "f" type suffix, it will be treated as type double.
-- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
-
-:::
-
-For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
-
-
-## Examples
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
deleted file mode 100644
index 18a695cda8efbef075451ff53e542d9e69c58e0b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
+++ /dev/null
@@ -1,84 +0,0 @@
----
-sidebar_label: OpenTSDB Line Protocol
-title: OpenTSDB Line Protocol
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaTelnet from "./_java_opts_telnet.mdx";
-import PyTelnet from "./_py_opts_telnet.mdx";
-import GoTelnet from "./_go_opts_telnet.mdx";
-import RustTelnet from "./_rust_opts_telnet.mdx";
-import NodeTelnet from "./_js_opts_telnet.mdx";
-import CsTelnet from "./_cs_opts_telnet.mdx";
-import CTelnet from "./_c_opts_telnet.mdx";
-
-## Introduction
-
-A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
-
-```
-<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
-```
-
-- `metric` will be used as the STable name.
-- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
-- `value` is the collected metric value, which must be numeric; the corresponding column name is "value".
-- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
-
-For example:
-
-```txt
-meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
-```
-
-Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
-
-## Examples
-
-{/* Connector examples (Java, Python, Go, Rust, Node.js, C#, C) were rendered here via tabs. */}
-
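-As a rough illustration, the minimal Python sketch below writes such telnet-format lines through the schemaless API. It assumes the `taospy` connector's `schemaless_insert` method; the connection parameters are placeholders, the voltage values are illustrative, and the current values are chosen to be consistent with the query output shown below.
-
-```python
-import taos
-from taos import SmlProtocol, SmlPrecision
-
-conn = taos.connect()                    # illustrative connection parameters
-conn.execute("CREATE DATABASE IF NOT EXISTS test")
-conn.select_db("test")
-
-lines = [
-    "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
-    "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
-    "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
-    "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
-    "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
-    "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
-    "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
-    "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
-]
-# the timestamp precision is inferred from the length of each timestamp (milliseconds here)
-conn.schemaless_insert(lines, SmlProtocol.TELNET_PROTOCOL, SmlPrecision.NOT_CONFIGURED)
-conn.close()
-```
-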
-In the above sample code, 2 STables are created automatically, and each STable contains 4 rows of data.
-
-```cmd
-taos> use test;
-Database changed.
-
-taos> show STables;
- name | created_time | columns | tags | tables |
-============================================================================================
- meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
- meters.voltage | 2022-03-30 17:04:10.882 | 2 | 2 | 2 |
-Query OK, 2 row(s) in set (0.002544s)
-
-taos> select tbname, * from `meters.current`;
- tbname | ts | value | groupid | location |
-==================================================================================================================================
- t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
- t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LosAngeles |
- t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
- t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
-Query OK, 4 row(s) in set (0.005399s)
-```
diff --git a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
deleted file mode 100644
index 3a239440311c736159d6060db5e730c5e5665bcb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
+++ /dev/null
@@ -1,99 +0,0 @@
----
-sidebar_label: OpenTSDB JSON Protocol
-title: OpenTSDB JSON Protocol
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaJson from "./_java_opts_json.mdx";
-import PyJson from "./_py_opts_json.mdx";
-import GoJson from "./_go_opts_json.mdx";
-import RustJson from "./_rust_opts_json.mdx";
-import NodeJson from "./_js_opts_json.mdx";
-import CsJson from "./_cs_opts_json.mdx";
-import CJson from "./_c_opts_json.mdx";
-
-## Introduction
-
-A JSON string is used in the OpenTSDB JSON protocol to represent one or more rows of data, for example:
-
-```json
-[
- {
- "metric": "sys.cpu.nice",
- "timestamp": 1346846400,
- "value": 18,
- "tags": {
- "host": "web01",
- "dc": "lga"
- }
- },
- {
- "metric": "sys.cpu.nice",
- "timestamp": 1346846400,
- "value": 9,
- "tags": {
- "host": "web02",
- "dc": "lga"
- }
- }
-]
-```
-
-Similar to the OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp of the row, `value` represents the collected metric, and `tags` is the tag set.
-
-
-Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.
-
-:::note
-- In the JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
-- Only data in array format is accepted, so an array must be used even if there is only one row.
-
-:::
-
-## Examples
-
-{/* Connector examples (Java, Python, Go, Rust, Node.js, C#, C) were rendered here via tabs. */}
-
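-As a rough illustration, the minimal Python sketch below writes such a JSON payload through the schemaless API. It assumes the `taospy` connector's `schemaless_insert` method; the connection parameters and the voltage values are placeholders, and the current values are chosen to be consistent with the query output shown below.
-
-```python
-import json
-
-import taos
-from taos import SmlProtocol, SmlPrecision
-
-conn = taos.connect()                    # illustrative connection parameters
-conn.execute("CREATE DATABASE IF NOT EXISTS test")
-conn.select_db("test")
-
-tags = {"location": "California.SanFrancisco", "groupid": 2}
-data = [
-    {"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": tags},
-    {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": tags},
-    {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": tags},
-    {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 218, "tags": tags},
-]
-# the whole array is passed as one JSON string
-conn.schemaless_insert([json.dumps(data)], SmlProtocol.JSON_PROTOCOL, SmlPrecision.NOT_CONFIGURED)
-conn.close()
-```
-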
-The above sample code will create 2 STables automatically, and each STable will contain 2 rows of data.
-
-```cmd
-taos> use test;
-Database changed.
-
-taos> show STables;
- name | created_time | columns | tags | tables |
-============================================================================================
- meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
- meters.voltage | 2022-03-29 16:05:25.200 | 2 | 2 | 1 |
-Query OK, 2 row(s) in set (0.001954s)
-
-taos> select * from `meters.current`;
- ts | value | groupid | location |
-===================================================================================================================
- 2022-03-28 09:56:51.249 | 10.300000000 | 2.000000000 | California.SanFrancisco |
- 2022-03-28 09:56:51.250 | 12.600000000 | 2.000000000 | California.SanFrancisco |
-Query OK, 2 row(s) in set (0.004076s)
-```
diff --git a/docs-en/07-develop/03-insert-data/_c_line.mdx b/docs-en/07-develop/03-insert-data/_c_line.mdx
deleted file mode 100644
index 5ef2e9af774c54e9f090357286f83d2280c2ab11..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/line_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_json.mdx b/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
deleted file mode 100644
index 22ad2e0122797248a372734aac0f3a16a1356530..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/json_protocol_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
deleted file mode 100644
index 508d7bc98a149f49766bcd0a474ffe226cbe30bb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/telnet_line_example.c:main}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_sql.mdx b/docs-en/07-develop/03-insert-data/_c_sql.mdx
deleted file mode 100644
index f4153fd2c427677a338d0c377663d0335f2672f0..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/insert_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_c_stmt.mdx b/docs-en/07-develop/03-insert-data/_c_stmt.mdx
deleted file mode 100644
index 7f5ef23a849689c36e732b6fd374a131695c9090..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_c_stmt.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-```c title=Single Row Binding
-{{#include docs-examples/c/stmt_example.c}}
-```
-```c title=Multiple Row Binding 72:117
-{{#include docs-examples/c/multi_bind_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_category_.yml b/docs-en/07-develop/03-insert-data/_category_.yml
deleted file mode 100644
index e515d60e09ec44894e2c42f38fee74fe4286e17f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Insert Data
diff --git a/docs-en/07-develop/03-insert-data/_cs_line.mdx b/docs-en/07-develop/03-insert-data/_cs_line.mdx
deleted file mode 100644
index 9c275ee3d7c7a1e52fbb34dbae922004543ee3ce..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/InfluxDBLineExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
deleted file mode 100644
index 3d538b8506b298241faecd8098f89571359135c9..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/OptsJsonExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
deleted file mode 100644
index c53bf3d7233115351e5af03b7d9e6318aa4a0da6..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/OptsTelnetExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_sql.mdx b/docs-en/07-develop/03-insert-data/_cs_sql.mdx
deleted file mode 100644
index c7688bfbe77a1135424d829fe9b29fbb1bc93ae2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/SQLInsertExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_cs_stmt.mdx b/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
deleted file mode 100644
index 97c3b910ffeb9e0c88fc143a02014115e819c147..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_cs_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/StmtInsertExample.cs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_line.mdx b/docs-en/07-develop/03-insert-data/_go_line.mdx
deleted file mode 100644
index cd225945b70e28bef2ca7fdaf0d9be0ad7ffc18c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/line/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_json.mdx b/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
deleted file mode 100644
index 0c0d3e5b6330e046988cdd02234285ec67e92f01..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/json/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
deleted file mode 100644
index d5ca40cc146e62412476289853e8e2739e0e9e4b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/telnet/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_sql.mdx b/docs-en/07-develop/03-insert-data/_go_sql.mdx
deleted file mode 100644
index 613a65add1741eb763a4b24e65d180d05f7d670f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/insert/sql/main.go}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_go_stmt.mdx b/docs-en/07-develop/03-insert-data/_go_stmt.mdx
deleted file mode 100644
index c32bc21fb9bcaf45059e4f47df73fb57f047ed1c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_go_stmt.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```go
-{{#include docs-examples/go/insert/stmt/main.go}}
-```
-
-:::tip
-The `github.com/taosdata/driver-go/v2/wrapper` module in driver-go is a wrapper for the C API; it can be used to insert data with parameter binding.
-
-:::
diff --git a/docs-en/07-develop/03-insert-data/_java_line.mdx b/docs-en/07-develop/03-insert-data/_java_line.mdx
deleted file mode 100644
index 2e59a5d4701b2a2ab04ec5711845dc5c80067a1e..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_json.mdx b/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
deleted file mode 100644
index 826a1a07d9405cb193849f9d21e5444f68517914..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
deleted file mode 100644
index 954dcc1a482a150dea0b190e1e0593adbfbde796..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_java_sql.mdx b/docs-en/07-develop/03-insert-data/_java_sql.mdx
deleted file mode 100644
index a863378defe43b1f22c1f98087a34f053a7d6619..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java:insert}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_java_stmt.mdx b/docs-en/07-develop/03-insert-data/_java_stmt.mdx
deleted file mode 100644
index 54443e535fa84bdf8dc9161ed4ad00f50b26266c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_java_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_line.mdx b/docs-en/07-develop/03-insert-data/_js_line.mdx
deleted file mode 100644
index 172c9bc17b8cff8b2620720b235a9c8e69bd4197..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/influxdb_line_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_json.mdx b/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
deleted file mode 100644
index 20ac9ec91e8dc6675828b16d7da0acb09afd3b5f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/opentsdb_json_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
deleted file mode 100644
index c3c8c40bd642f4f443de88e3db006ad50724d514..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/opentsdb_telnet_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_sql.mdx b/docs-en/07-develop/03-insert-data/_js_sql.mdx
deleted file mode 100644
index f5e17c76892a57a94192a95451b508b1c176c984..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/insert_example.js}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_js_stmt.mdx b/docs-en/07-develop/03-insert-data/_js_stmt.mdx
deleted file mode 100644
index 964d7ddc11b90031b70936efb85fbaabe873ddbb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_js_stmt.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-```js title=Single Row Binding
-{{#include docs-examples/node/nativeexample/param_bind_example.js}}
-```
-
-```js title=Multiple Row Binding
-{{#include docs-examples/node/nativeexample/multi_bind_example.js:insertData}}
-```
-
-:::info
-Multiple row binding performs better than single row binding, but it can only be used with `INSERT` statements, while single row binding can also be used for SQL statements other than `INSERT`.
-
-:::
diff --git a/docs-en/07-develop/03-insert-data/_py_line.mdx b/docs-en/07-develop/03-insert-data/_py_line.mdx
deleted file mode 100644
index d3bb1ebb3403b53fa43bfc9d5d1a0de9764d7583..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/line_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_json.mdx b/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
deleted file mode 100644
index cfbfe13ccfdb4f3f34b77300812863fdf70d0f59..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/json_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
deleted file mode 100644
index 14bc65a7a3da815abadf7f25c8deffeac666c8d7..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/telnet_line_protocol_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_sql.mdx b/docs-en/07-develop/03-insert-data/_py_sql.mdx
deleted file mode 100644
index c0e15b8ec115b9244d50a47c9eafec04bcfdd70c..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/native_insert_example.py}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_py_stmt.mdx b/docs-en/07-develop/03-insert-data/_py_stmt.mdx
deleted file mode 100644
index 16d98f54329ad0d3dfb463392f5c1d41c9aab25b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_py_stmt.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-```py title=Single Row Binding
-{{#include docs-examples/python/bind_param_example.py}}
-```
-
-```py title=Multiple Row Binding
-{{#include docs-examples/python/multi_bind_example.py:bind_batch}}
-```
-
-:::info
-Multiple row binding performs better than single row binding, but it can only be used with `INSERT` statements, while single row binding can also be used for SQL statements other than `INSERT`.
-
-:::
\ No newline at end of file
diff --git a/docs-en/07-develop/03-insert-data/_rust_line.mdx b/docs-en/07-develop/03-insert-data/_rust_line.mdx
deleted file mode 100644
index 696ddb7b854751b8dee01047066f97f74212933f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_line.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
deleted file mode 100644
index 97d9052dacd1894cc7548a59951ecfaad9caee87..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_opts_json.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx b/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
deleted file mode 100644
index 14021f43d8aff30c35dc30c5d278d4e51f375024..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_opts_telnet.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_sql.mdx b/docs-en/07-develop/03-insert-data/_rust_sql.mdx
deleted file mode 100644
index 8e8013e4ad734efcc262ea2f750b82210a538e49..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_sql.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/restexample/examples/insert_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/_rust_stmt.mdx b/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
deleted file mode 100644
index 590a7a0e717426ed0235331c49dfc578bc55b2f7..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/_rust_stmt.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/nativeexample/examples/stmt_example.rs}}
-```
diff --git a/docs-en/07-develop/03-insert-data/index.md b/docs-en/07-develop/03-insert-data/index.md
deleted file mode 100644
index 1a71e719a56448e4b535632e570ce8a04d2282bb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/03-insert-data/index.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Insert Data
----
-
-TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row or in batches, from one or more collection points simultaneously, and with multiple threads; out-of-order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 schemaless insert protocols supported by TDengine. With schemaless protocols it is not necessary to create STables and tables in advance, and the schemas are adjusted automatically based on the data being inserted.
-
-```mdx-code-block
-import DocCardList from '@theme/DocCardList';
-import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
-
-
-```
diff --git a/docs-en/07-develop/04-query-data/_c.mdx b/docs-en/07-develop/04-query-data/_c.mdx
deleted file mode 100644
index 76c9067e2f6af19465cf7c52c3e9b48bb868547d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/query_example.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_c_async.mdx b/docs-en/07-develop/04-query-data/_c_async.mdx
deleted file mode 100644
index 09f3d3b3ff6d6644f837642ef41db459ba7c5753..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_c_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/async_query_example.c:demo}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_category_.yml b/docs-en/07-develop/04-query-data/_category_.yml
deleted file mode 100644
index 809db34621a63505ceace7ba182e07c698bdbddb..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Query Data
diff --git a/docs-en/07-develop/04-query-data/_cs.mdx b/docs-en/07-develop/04-query-data/_cs.mdx
deleted file mode 100644
index 2ab52feb564eff0fe251bc9900ea2539171e5dba..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_cs.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/QueryExample.cs}}
-```
diff --git a/docs-en/07-develop/04-query-data/_cs_async.mdx b/docs-en/07-develop/04-query-data/_cs_async.mdx
deleted file mode 100644
index f868994b303e62016b5e2f9304275135855c6ae5..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_cs_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/AsyncQueryExample.cs}}
-```
diff --git a/docs-en/07-develop/04-query-data/_go.mdx b/docs-en/07-develop/04-query-data/_go.mdx
deleted file mode 100644
index 417c12315c06517e2f3de850ac9a379b7714b519..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_go.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/query/sync/main.go}}
-```
diff --git a/docs-en/07-develop/04-query-data/_go_async.mdx b/docs-en/07-develop/04-query-data/_go_async.mdx
deleted file mode 100644
index 72fff411b980a0dcbdcaf4274722c63e0351db6f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_go_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/query/async/main.go}}
-```
diff --git a/docs-en/07-develop/04-query-data/_java.mdx b/docs-en/07-develop/04-query-data/_java.mdx
deleted file mode 100644
index 519b9266144486231caf3ee593e973d438941ee4..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_java.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java}}
-```
diff --git a/docs-en/07-develop/04-query-data/_js.mdx b/docs-en/07-develop/04-query-data/_js.mdx
deleted file mode 100644
index c5e4c4f3fc20d3940a2bc6e13e6a5dea8a15ff13..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_js.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/query_example.js}}
-```
diff --git a/docs-en/07-develop/04-query-data/_js_async.mdx b/docs-en/07-develop/04-query-data/_js_async.mdx
deleted file mode 100644
index c65d54ed12f6c4bbeb333e0de0ba9ca4638bff84..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_js_async.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/async_query_example.js}}
-```
diff --git a/docs-en/07-develop/04-query-data/_py.mdx b/docs-en/07-develop/04-query-data/_py.mdx
deleted file mode 100644
index aeae42a15e5c39b7e9d227afc424e77658109705..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_py.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
-The result set is iterated row by row.
-
-```py
-{{#include docs-examples/python/query_example.py:iter}}
-```
-
-The result set is retrieved as a whole; each row is converted to a dict and returned.
-
-```py
-{{#include docs-examples/python/query_example.py:fetch_all}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/04-query-data/_py_async.mdx b/docs-en/07-develop/04-query-data/_py_async.mdx
deleted file mode 100644
index ed6880ae64e59a860e7dc75a5d3c1ad5d2614d01..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_py_async.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
-```py
-{{#include docs-examples/python/async_query_example.py}}
-```
-
-:::note
-This sample code can't be run on Windows system for now.
-
-:::
diff --git a/docs-en/07-develop/04-query-data/_rust.mdx b/docs-en/07-develop/04-query-data/_rust.mdx
deleted file mode 100644
index 742d70fd025ff44b573eedf78441c9d73defad45..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/_rust.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs-examples/rust/restexample/examples/query_example.rs}}
-```
diff --git a/docs-en/07-develop/04-query-data/index.mdx b/docs-en/07-develop/04-query-data/index.mdx
deleted file mode 100644
index a212fa9529215fc24c55c95a166cfc1a407359b2..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/04-query-data/index.mdx
+++ /dev/null
@@ -1,186 +0,0 @@
----
-sidebar_label: Query data
-title: Query data
-description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import JavaQuery from "./_java.mdx";
-import PyQuery from "./_py.mdx";
-import GoQuery from "./_go.mdx";
-import RustQuery from "./_rust.mdx";
-import NodeQuery from "./_js.mdx";
-import CsQuery from "./_cs.mdx";
-import CQuery from "./_c.mdx";
-import PyAsync from "./_py_async.mdx";
-import NodeAsync from "./_js_async.mdx";
-import CsAsync from "./_cs_async.mdx";
-import CAsync from "./_c_async.mdx";
-
-## Introduction
-
-SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
-
-- Query on single column or multiple columns
-- Filter on tags or data columns: >, <, =, <\>, like
-- Grouping of results: `Group By`
-- Sorting of results: `Order By`
-- Limit the number of results: `Limit/Offset`
-- Arithmetic on columns of numeric types or aggregate results
-- Join query with timestamp alignment
-- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
-
-For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
-
-```sql
-select * from d1001 where voltage > 215 order by ts desc limit 2;
-```
-
-```title=Output
-taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
- ts | current | voltage | phase |
-======================================================================================
- 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
- 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
-Query OK, 2 row(s) in set (0.001100s)
-```
-
-To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
-
-For detailed query syntax please refer to [Select](/taos-sql/select).
-
-## Aggregation among Tables
-
-In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
-
-In summary, records across subtables can be aggregated by a simple query on their STable, much like a join operation. However, tables belonging to different STables cannot be aggregated.
-
-### Example 1
-
-In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
-
-```
-taos> SELECT AVG(voltage) FROM meters GROUP BY location;
- avg(voltage) | location |
-=============================================================
- 222.000000000 | California.LosAngeles |
- 219.200000000 | California.SanFrancisco |
-Query OK, 2 row(s) in set (0.002136s)
-```
-
-### Example 2
-
-In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
-
-```
-taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
- count(*) | max(current) |
-==================================
- 5 | 13.4 |
-Query OK, 1 row(s) in set (0.002136s)
-```
-
-Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
-
-## Down Sampling and Interpolation
-
-In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify such queries by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from table d1001.
-
-```
-taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:00.000 | 10.300000191 |
- 2018-10-03 14:38:10.000 | 24.900000572 |
-Query OK, 2 row(s) in set (0.000883s)
-```
-
-Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
-
-```
-taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:04.000 | 10.199999809 |
- 2018-10-03 14:38:05.000 | 32.900000572 |
- 2018-10-03 14:38:06.000 | 11.500000000 |
- 2018-10-03 14:38:15.000 | 12.600000381 |
- 2018-10-03 14:38:16.000 | 36.000000000 |
-Query OK, 5 row(s) in set (0.001538s)
-```
-
-Down sampling also supports time offset. For example, the below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
-
-```
-taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
- ts | sum(current) |
-======================================================
- 2018-10-03 14:38:04.500 | 11.189999809 |
- 2018-10-03 14:38:05.500 | 31.900000572 |
- 2018-10-03 14:38:06.500 | 11.600000000 |
- 2018-10-03 14:38:15.500 | 12.300000381 |
- 2018-10-03 14:38:16.500 | 35.000000000 |
-Query OK, 5 row(s) in set (0.001521s)
-```
-
-In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, many algorithms like FFT require the data to be aligned on the same time interval, and application programs would otherwise have to handle this by themselves. In TDengine, it's easy to achieve this alignment using down sampling.
-
-Interpolation can be performed in TDengine if there is no data in a time range.
-
-For more details please refer to [Aggregate by Window](/taos-sql/interval).
-
-## Examples
-
-### Query
-
-In the section describing [Insert](/develop/insert-data/sql-writing), a database named `power` is created and some data are inserted into STable `meters`. The sample code below demonstrates how to query the data in this STable.
-
-{/* Connector examples (Java, Python, Go, Rust, Node.js, C#, C) were rendered here via tabs. */}
-
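-As a rough illustration, a minimal Python sketch using the `taospy` connector might look like this (the connection parameters are placeholders):
-
-```python
-import taos
-
-conn = taos.connect(database="power")    # illustrative connection parameters
-result = conn.query("SELECT * FROM meters WHERE voltage > 215 ORDER BY ts DESC LIMIT 2")
-for row in result:                       # the result set can be iterated row by row
-    print(row)
-conn.close()
-```
-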
-:::note
-
-1. The above sample code works with either a REST connection or a native connection.
-2. Please note that `use db` can't be used with a REST connection because REST connections are stateless.
-
-:::
-
-### Asynchronous Query
-
-Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert and query data more efficiently. On comparable hardware and software environments, the async API is 2 to 4 times faster than the sync API. The async API works in non-blocking mode, which means a call can return before the operation finishes, so the calling thread can switch to other work and improve the performance of the whole application. Async APIs perform especially well over poor networks.
-
-Please note that async query can only be used with a native connection.
-
-{/* Connector async-query examples (Python, Node.js, C#, C) were rendered here via tabs. */}
-
diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx
deleted file mode 100644
index 1aea5783fc8116a4e02a4b5345d341707cd399ea..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/05-continuous-query.mdx
+++ /dev/null
@@ -1,83 +0,0 @@
----
-sidebar_label: Continuous Query
-description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
-title: "Continuous Query"
----
-
-A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
-
-A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
-
-There are some differences between continuous query in TDengine and time window computation in stream computing:
-
-- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
-- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
-- In a continuous query, if the result is pushed to a client, the client status is not cached on the server side and exactly-once delivery is not guaranteed by the server. If the client program crashes, a new time window will be generated starting from the time when the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed to be valid and continuous.
-
-## Syntax
-
-```sql
-[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
- FROM {tb_name_list}
- [WHERE where_condition]
- [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
-
-```
-
-INTERVAL: the duration of the time window over which the continuous query is performed
-
-SLIDING: the step by which the time window moves forward each time
-
-## How to Use
-
-In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
-
-```sql
-create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
-create table D1001 using meters tags ("California.SanFrancisco", 2);
-create table D1002 using meters tags ("California.LosAngeles", 2);
-```
-
-The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
-
-```sql
-select avg(voltage) from meters interval(1m) sliding(30s);
-```
-
-Whenever the above SQL statement is executed, all the existing data will be computed again. If, instead, the computation needs to be performed automatically every 30 seconds on the data of the past one minute, the above SQL statement needs to be revised as below, in which `{startTime}` stands for the beginning timestamp of the latest time window.
-
-```sql
-select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
-```
-
-An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.
-
-```sql
-create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
-```
-
-A table named `avg_vol` will be created automatically; then, every 30 seconds, the `select` statement will be executed on the data of the past 1 minute, i.e. the latest time window, and the result will be written into table `avg_vol`. The client program just needs to query table `avg_vol`. For example:
-
-```sql
-taos> select * from avg_vol;
- ts | avg_voltage_ |
-===================================================
- 2020-07-29 13:37:30.000 | 222.0000000 |
- 2020-07-29 13:38:00.000 | 221.3500000 |
- 2020-07-29 13:38:30.000 | 220.1700000 |
- 2020-07-29 13:39:00.000 | 223.0800000 |
-```
-
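-For instance, a client application could poll the result table periodically with any connector. A minimal Python sketch (assuming the `taospy` connector; the connection parameters and database name are placeholders) might look like this:
-
-```python
-import time
-
-import taos
-
-conn = taos.connect(database="power")    # illustrative; avg_vol lives in the same database as meters
-while True:
-    for row in conn.query("SELECT * FROM avg_vol"):
-        print(row)                       # each row is one closed 1-minute window, advanced every 30 seconds
-    time.sleep(30)                       # poll at the SLIDING interval
-```
-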
-Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
-
-It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
-
-```sql
-create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
-```
-
-`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To reduce the impact of late-arriving data as much as possible, the actual computation in a continuous query is started after a small delay. That means, once a time window closes, the computation does not start immediately. The results are normally available within a short time, typically less than one minute, after the time window closes.
-
-## How to Manage
-
-The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and the `kill stream` command can be used to terminate a continuous query.
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
deleted file mode 100644
index 782fcdbaf221419dd231bd10958e26b8f4f856e5..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/06-subscribe.mdx
+++ /dev/null
@@ -1,259 +0,0 @@
----
-sidebar_label: Data Subscription
-description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
-title: Data Subscription
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import Java from "./_sub_java.mdx";
-import Python from "./_sub_python.mdx";
-import Go from "./_sub_go.mdx";
-import Rust from "./_sub_rust.mdx";
-import Node from "./_sub_node.mdx";
-import CSharp from "./_sub_cs.mdx";
-import CDemo from "./_sub_c.mdx";
-
-## Introduction
-
-Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
-
-A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
-
-There are 3 major APIs related to subscription provided in the TDengine client driver.
-
-```c
-taos_subscribe
-taos_consume
-taos_unsubscribe
-```
-
-For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
-
-If we want to get a notification and take some action when the current of certain meters exceeds a threshold, like 10A, there are two ways:
-
-The first way is to query each subtable and record the last timestamp matching the criteria. Then, after some time, query the data later than the recorded timestamp and repeat this process. The SQL statements for this approach are as below.
-
-```sql
-select * from D1001 where ts > {last_timestamp1} and current > 10;
-select * from D1002 where ts > {last_timestamp2} and current > 10;
-...
-```
-
-The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large enough.
-
-A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below:
-
-```sql
-select * from meters where ts > {last_timestamp} and current > 10;
-```
-
-However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
-
-All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
-
-The first step is to create subscription using `taos_subscribe`.
-
-```c
-TAOS_SUB* tsub = NULL;
-if (async) {
- // create an asynchronous subscription, the callback function will be called every 1s
- tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
-} else {
- // create an synchronous subscription, need to call 'taos_consume' manually
- tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
-}
-```
-
-The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input and is then used to create either an async or a sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, while async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and passes it to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time-consuming operations in the callback function.
-
-The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.
-
-The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
-
-```sql
-select * from meters where current > 10;
-```
-
-Please note that, all the data will be processed because no start time is specified. If we only want to process data for the past day, a time related condition can be added:
-
-```sql
-select * from meters where ts > now - 1d and current > 10;
-```
-
-The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
-
-If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken.
-
-If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
-
-The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified in `taos_subscribe`, `taos_consume` will block until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
-
-The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is ignored in sync mode.
-
-After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
-
-```c
-if (async) {
- getchar();
-} else while(1) {
- TAOS_RES* res = taos_consume(tsub);
- if (res == NULL) {
- printf("failed to consume data.");
- break;
- } else {
- print_result(res, blockFetch);
- getchar();
- }
-}
-```
-
-In the above sample code, in the else condition, there is an infinite loop. Each time carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
-
-```c
-void print_result(TAOS_RES* res, int blockFetch) {
- TAOS_ROW row = NULL;
- int num_fields = taos_num_fields(res);
- TAOS_FIELD* fields = taos_fetch_fields(res);
- int nRows = 0;
- if (blockFetch) {
- nRows = taos_fetch_block(res, &row);
- for (int i = 0; i < nRows; i++) {
- char temp[256];
- taos_print_row(temp, row + i, fields, num_fields);
- puts(temp);
- }
- } else {
- while ((row = taos_fetch_row(res))) {
- char temp[256];
- taos_print_row(temp, row, fields, num_fields);
- puts(temp);
- nRows++;
- }
- }
- printf("%d rows consumed.\n", nRows);
-}
-```
-
-In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
-
-In async mode, consuming data is simpler as shown below.
-
-```c
-void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
- print_result(res, *(int*)param);
-}
-```
-
-`taos_unsubscribe` can be invoked to terminate a subscription.
-
-```c
-taos_unsubscribe(tsub, keep);
-```
-
-The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription. (Note: The default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on Windows, so you need to change the `DataDir` value to an existing directory.) The subscription will be restarted from the beginning if the corresponding progress file is removed.
-
-Now let's see the effect of the above sample code, assuming below prerequisites have been done.
-
-- The sample code has been downloaded to local system
-- TDengine has been installed and launched properly on same system
-- The database, STable, and subtables required in the sample code are ready
-
-Launch the command below in the directory where the sample code resides to compile and start the program.
-
-```bash
-make
-./subscribe -sql='select * from meters where current > 10;'
-```
-
-After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
-
-```sql
-use test;
-insert into D1001 values(now, 12, 220, 1);
-```
-
-Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.
-
-## Examples
-
-The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
-
-### Prepare Data
-
-```bash
-# create database "power"
-taos> create database power;
-# use "power" as the database in following operations
-taos> use power;
-# create super table "meters"
-taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
-# create tables using the schema defined by super table "meters"
-taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
-taos> create table d1002 using meters tags ("California.LosAngeles", 2);
-# insert some rows
-taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
-taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
-# filter out the rows in which current is bigger than 10A
-taos> select * from meters where current > 10;
- ts | current | voltage | phase | location | groupid |
-===========================================================================================================
- 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 |
- 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 |
- 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
- 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
- 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
-Query OK, 5 row(s) in set (0.004896s)
-```
-
-### Example Programs
-
-{/* Connector subscription examples (Java, Python, Go, Rust, Node.js, C#, C) were rendered here via tabs; some were commented out in the original source. */}
-
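-As a rough illustration, a minimal Python sketch is shown below. It assumes the `taospy` connector's native `subscribe`/`consume` API; the connection parameters and the topic name are placeholders.
-
-```python
-import taos
-
-conn = taos.connect(database="power")    # illustrative connection parameters
-# subscribe to rows whose current exceeds 10A, polling every 1000 ms
-sub = conn.subscribe(True, "current_over_10", "select * from meters where current > 10;", 1000)
-try:
-    while True:
-        result = sub.consume()           # returns the rows added since the last consume
-        for row in result:
-            print(row)
-finally:
-    sub.close(True)                      # True keeps the subscription progress on the client side
-```
-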
-### Run the Examples
-
-The example programs first consume all historical data matching the criteria.
-
-```bash
-ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
-ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
-ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
-ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
-ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
-```
-
-Next, use TDengine CLI to insert a new row.
-
-```
-# taos
-taos> use power;
-taos> insert into d1001 values(now, 12.4, 220, 1);
-```
-
-Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
-
-```
-ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
-```
diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md
deleted file mode 100644
index 743452faff6a2be8466318a7dab61a44e33c3664..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/07-cache.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-sidebar_label: Cache
-title: Cache
-description: "The latest row of each table is kept in cache to provide high performance query of latest state."
----
-
-The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as insert driven cache management policy and it is different from read driven cache management, which is more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest data in cache to disk, when the cache usage reaches a threshold. In IoT use cases, it is the current state i.e. the latest or most recent data that is important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
-
-Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as a caching system without deploying another separate caching system. This simplifies the system architecture and minimizes operational costs. The cache is emptied after TDengine is restarted. TDengine does not reload data from disk into cache, like a key-value caching system.
-
-The memory space used by the TDengine cache is fixed in size and configurable. It should be allocated based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine. There is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
-
-The memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache` and the number of blocks for each vnode is determined by the parameter `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records, to be efficient.
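-
-For example, with the hypothetical settings below (the values are illustrative, not recommendations), each vnode would get 96 MB of cache shared by all of its tables:
-
-```txt
-# illustrative per-vnode cache settings in taos.cfg
-cache  16    # size of one cache block, in MB
-blocks 6     # number of cache blocks per vnode
-# total cache per vnode = cache * blocks = 16 MB * 6 = 96 MB
-```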
-
-The `last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on a monitoring screen. For example, the SQL statement below retrieves the latest voltage of all meters in San Francisco, California.
-
-```sql
-select last_row(voltage) from meters where location='California.SanFrancisco';
-```
diff --git a/docs-en/07-develop/08-udf.md b/docs-en/07-develop/08-udf.md
deleted file mode 100644
index 49bc95bd91a4c31d42d2b21ef05d69225f1bd963..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/08-udf.md
+++ /dev/null
@@ -1,240 +0,0 @@
----
-sidebar_label: UDF
-title: User Defined Functions(UDF)
-description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand query capability"
----
-
-In some use cases, built-in functions are not adequate for the query capability required by application programs. With UDF, the functions developed by users can be utilized by the query framework to meet business and application requirements. UDF normally takes one column of data as input, but can also support the result of a sub-query as input.
-
-UDFs written in C/C++ are supported by TDengine from version 2.2.0.0.
-
-
-## Types of UDF
-
-Two kinds of functions can be implemented by UDF: scalar functions and aggregate functions.
-
-Scalar functions return multiple rows and aggregate functions return either 0 or 1 row.
-
-In the case of a scalar function you only have to implement the "normal" function template.
-
-In the case of an aggregate function, in addition to the "normal" function, you also need to implement the "merge" and "finalize" function templates even if the implementation is empty. This will become clear in the sections below.
-
-### Scalar Function
-
-As mentioned earlier, a scalar UDF only has to implement the "normal" function template. The function template below can be used to define your own scalar function.
-
-`void udfNormalFunc(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`
-
-`udfNormalFunc` is the placeholder for a function name. A function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between a UDF and TDengine.
-
-- Definitions of the parameters:
-
-  - data: input data
-  - itype: the type of input data; for details please refer to [type definition in column_meta](/reference/rest-api/), for example 4 represents INT
-  - ibytes: the number of bytes consumed by each value in the input data
-  - otype: the type of output data, similar to itype
-  - obytes: the number of bytes consumed by each value in the output data
-  - numOfRows: the number of rows in the input data
-  - ts: the column of timestamps corresponding to the input data
-  - dataOutput: the buffer for output data, total size is `obytes * numOfRows`
-  - interBuf: the buffer for an intermediate result. Its size is specified by the `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not the same as the final result. This buffer is allocated and freed by TDengine.
-  - tsOutput: the column of timestamps corresponding to the output data; it can be used to output timestamps together with the output data if it's not NULL
-  - numOfOutput: the number of rows in the output data
-  - buf: for the state exchange between UDF and TDengine
-
- [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of a very simple UDF implementation, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a passed in column, which can be filtered using the `where` clause, and outputs the result.
-
-### Aggregate Function
-
-For aggregate UDF, as mentioned earlier you must implement a "normal" function template (described above) and also implement the "merge" and "finalize" templates.
-
-#### Merge Function Template
-
-The function template below can be used to define your own merge function for an aggregate UDF.
-
-`void udfMergeFunc(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
-
-`udfMergeFunc` is the placeholder for a function name. The function implemented with the above template is used to aggregate intermediate results and can only be used in an aggregate query on a STable.
-
-Definitions of the parameters:
-
-- data: the array of intermediate output data; if interBuf is used, it is an array of interBuf
-- numOfRows: number of rows in `data`
-- dataOutput: the buffer for output data, the size is the same as that of the final result; if the result is not final, it can be put in the interBuf, i.e. `data`.
-- numOfOutput: number of rows in the output data
-- buf: for the state exchange between UDF and TDengine
-
-#### Finalize Function Template
-
-The function template below can be used to finalize the result of your own UDF, normally used when interBuf is used.
-
-`void udfFinalizeFunc(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
-
-`udfFinalizeFunc` is the placeholder for a function name; the parameters are defined as below:
-
-- dataOutput: buffer for output data
-- interBuf: buffer for the intermediate result, which can be used as input for the next processing step
-- numOfOutput: number of output rows, which can only be 0 or 1 for an aggregate function
-- buf: for state exchange between UDF and TDengine
-
-### Example abs_max.c
-
-[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an example of a user defined aggregate function to get the maximum from the absolute values of a column.
-
-The internal processing happens as follows. The results of the select statement are divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate results for each sub table. Then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate and generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc`, i.e. `abs_max_finalize` in this example, to generate the final result, which contains either 0 or 1 row.
-
-Other typical aggregation functions, such as covariance, can also be implemented using aggregate UDF.
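-
-As a hypothetical illustration of this flow, once `abs_max` has been registered (see "Create and Use UDF" below) it can be invoked in an aggregate query on a STable just like a built-in aggregate function; the `meters` STable and its `voltage` column are assumed here only for the example.
-
-```sql
-SELECT abs_max(voltage) FROM meters;
-```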
-
-## UDF Naming Conventions
-
-The naming convention for the 3 kinds of function templates required by UDF is as follows:
- - udfNormalFunc, udfMergeFunc, and udfFinalizeFunc are required to have the same prefix, i.e. the actual name of udfNormalFunc. The udfNormalFunc doesn't need a suffix following the function name.
- - udfMergeFunc should be udfNormalFunc followed by `_merge`.
- - udfFinalizeFunc should be udfNormalFunc followed by `_finalize`.
-
-The naming convention is part of TDengine's UDF framework. TDengine follows this convention to invoke the corresponding actual functions.
-
-Depending on whether you are creating a scalar UDF or aggregate UDF, the functions that you need to implement are different.
-
-- Scalar function: udfNormalFunc is required.
-- Aggregate function: udfNormalFunc, udfMergeFunc (if querying a STable) and udfFinalizeFunc are required.
-
-For clarity, assuming we want to implement a UDF named "foo":
-- If the function is a scalar function, we only need to implement the "normal" function template and it should be named simply `foo`.
-- If the function is an aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. Note that for an aggregate UDF, even if one of the three functions is not actually needed, it must still be provided as an empty implementation.
-
-## Compile UDF
-
-The source code of a UDF in C can't be utilized by TDengine directly. A UDF can only be loaded into TDengine after being compiled into a dynamically linked library (DLL).
-
-For example, the UDF `add_one.c` mentioned earlier can be compiled into a DLL using the command below in a Linux shell.
-
-```bash
-gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
-```
-
-The generated DLL file `add_one.so` can be used later when creating a UDF. It's recommended to use GCC not older than 7.5.
-
-## Create and Use UDF
-
-When a UDF is created in a TDengine instance, it is available across the databases in that instance.
-
-### Create UDF
-
-A SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through the REST interface or web console. Once created, any client of the current TDengine instance can use these UDF functions in its SQL commands. UDFs are stored in the management node of TDengine and remain available after TDengine is restarted.
-
-When creating a UDF, its type, i.e. scalar function or aggregate function, must be specified. If the specified type is wrong, the SQL statements using the function will fail with errors. The input type and output type don't need to be the same for a UDF, but the input data type and output data type must be consistent with the UDF definition.
-
-- Create Scalar Function
-
-```sql
-CREATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE output_type [BUFSIZE B];
-```
-
-- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
-- path: the absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
-- output_type: the output data type, the value is the literal string of a supported TDengine data type.
-- B: the size of the intermediate buffer, in bytes; it is an optional parameter and the range is [0,512].
-
-For example, the SQL statement below can be used to create a UDF from `add_one.so`.
-
-```sql
-CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT;
-```
-
-- Create Aggregate Function
-
-```sql
-CREATE AGGREGATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE output_type [ BUFSIZE B ];
-```
-
-- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
-- path: the absolute path of the DLL file including the name of the shared object file (.so). The path needs to be quoted with single or double quotes.
-- output_type: the output data type, the value is the literal string of a supported TDengine data type.
-- B: the size of the intermediate buffer, in bytes; it's an optional parameter and the range is [0,512].
-
-For details about how to use intermediate result, please refer to example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).
-
-For example, the SQL statement below can be used to create a UDF from `demo.so`.
-
-```sql
-CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE bufsize 14;
-```
-
-### Manage UDF
-
-- Delete UDF
-
-```sql
-DROP FUNCTION ids(X);
-```
-
-- ids(X): the function name, same as that used in the corresponding `CREATE FUNCTION` statement
-
-```sql
-DROP FUNCTION add_one;
-```
-
-- Show Available UDF
-
-```sql
-SHOW FUNCTIONS;
-```
-
-### Use UDF
-
-The function name specified when creating a UDF can be used directly in SQL statements, just like built-in functions.
-
-```sql
-SELECT X(c) FROM table/STable;
-```
-
-The above SQL statement invokes function X for column c.
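-
-For instance, continuing the earlier examples, after the scalar UDF `add_one` has been created it could be applied to a column of the assumed `meters` table as sketched below.
-
-```sql
-SELECT add_one(voltage) FROM meters;
-```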
-
-## Restrictions for UDF
-
-In the current version there are some restrictions on UDF:
-
-1. Only Linux is supported when creating and invoking UDF, on both the client side and the server side
-2. UDF can't be mixed with built-in functions
-3. Only one UDF can be used in a SQL statement
-4. Only a single column is supported as input for UDF
-5. Once created successfully, a UDF is persisted in the MNode of TDengine
-6. UDF can't be created through the REST interface
-7. The function name used when creating UDF in SQL must be consistent with the function name defined in the DLL, i.e. the name defined by `udfNormalFunc`
-8. The name of a UDF should not conflict with any of TDengine's built-in functions
-
-## Examples
-
-### Scalar function example [add_one](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
-
-
-add_one.c
-
-```c
-{{#include tests/script/sh/add_one.c}}
-```
-
-
-
-### Aggregate function example [abs_max](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
-
-
-abs_max.c
-
-```c
-{{#include tests/script/sh/abs_max.c}}
-```
-
-
-
-### Example for using intermediate result [demo](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c)
-
-
-demo.c
-
-```c
-{{#include tests/script/sh/demo.c}}
-```
-
-
diff --git a/docs-en/07-develop/_category_.yml b/docs-en/07-develop/_category_.yml
deleted file mode 100644
index 6f0d66351a5c326eb2dced998e29e668d11cd1ca..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Developer Guide
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_c.mdx b/docs-en/07-develop/_sub_c.mdx
deleted file mode 100644
index 95fef0042d0a277f9136e6e6f8c15558487232f9..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs-examples/c/subscribe_demo.c}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_cs.mdx b/docs-en/07-develop/_sub_cs.mdx
deleted file mode 100644
index 80934aa4d014a076896dce7f41e520f06ffd735d..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_cs.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs-examples/csharp/SubscribeDemo.cs}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_go.mdx b/docs-en/07-develop/_sub_go.mdx
deleted file mode 100644
index cd908fc12c3a35f49ca108ee56c3951c5388a95f..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_go.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs-examples/go/sub/main.go}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_java.mdx b/docs-en/07-develop/_sub_java.mdx
deleted file mode 100644
index e65bc576ebed030d935ced6a4572289cd367ffac..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_java.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-```java
-{{#include docs-examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
-```
-:::note
-For now the Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve a similar purpose.
-
-:::
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_node.mdx b/docs-en/07-develop/_sub_node.mdx
deleted file mode 100644
index c93ad627ce9a77ca71a014b41d571089e6c1727b..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_node.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs-examples/node/nativeexample/subscribe_demo.js}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_python.mdx b/docs-en/07-develop/_sub_python.mdx
deleted file mode 100644
index b817deeba6e283a3ba16fee0d580d3823c999536..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_python.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs-examples/python/subscribe_demo.py}}
-```
\ No newline at end of file
diff --git a/docs-en/07-develop/_sub_rust.mdx b/docs-en/07-develop/_sub_rust.mdx
deleted file mode 100644
index 4750cf7a3b871db48c9e5a26b22ab4b8a03f11be..0000000000000000000000000000000000000000
--- a/docs-en/07-develop/_sub_rust.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rs
-{{#include docs-examples/rust/nativeexample/examples/subscribe_demo.rs}}
-```
\ No newline at end of file
diff --git a/docs-en/07-third-party/01-grafana.md b/docs-en/07-third-party/01-grafana.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9a8e7c40d6db91ba35d10780380fd2388f758aa
--- /dev/null
+++ b/docs-en/07-third-party/01-grafana.md
@@ -0,0 +1 @@
+# Grafana
\ No newline at end of file
diff --git a/docs-en/20-third-party/index.md b/docs-en/07-third-party/index.md
similarity index 94%
rename from docs-en/20-third-party/index.md
rename to docs-en/07-third-party/index.md
index 87bd9e075133d1182ee93d1c1c43617c766755b9..8c7c18257dd0603c78c7fe62b234dd41f844cbb6 100644
--- a/docs-en/20-third-party/index.md
+++ b/docs-en/07-third-party/index.md
@@ -1,6 +1,4 @@
----
-title: Third Party Tools
----
+# Third Party Tools
 Since TDengine supports standard SQL commands, common database connector standards (e.g., JDBC), ORM, and other popular time-series database writing protocols (e.g., InfluxDB Line Protocol, OpenTSDB JSON, OpenTSDB Telnet, etc.), it is very easy to integrate TDengine with other third party tools. You only need to provide simple configuration, and the integration can be done without a single line of code.
diff --git a/docs-en/08-operation/index.md b/docs-en/08-operation/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..02fe776ab2cbbb68beb0eb2a58d9be67054aef23
--- /dev/null
+++ b/docs-en/08-operation/index.md
@@ -0,0 +1 @@
+# Administration
\ No newline at end of file
diff --git a/docs-en/14-reference/index.md b/docs-en/09-reference/index.md
similarity index 93%
rename from docs-en/14-reference/index.md
rename to docs-en/09-reference/index.md
index f350eebfc1a1ca2feaedc18c4b4fa798742e31b4..63a35ec732dafde9b9e8d89455752cb58117dea5 100644
--- a/docs-en/14-reference/index.md
+++ b/docs-en/09-reference/index.md
@@ -1,6 +1,4 @@
----
-title: Reference
----
+# Reference
The reference guide is a detailed introduction to TDengine including various TDengine connectors in different languages, and the tools that come with TDengine.
diff --git a/docs-en/09-reference/java-connector.md b/docs-en/09-reference/java-connector.md
new file mode 100644
index 0000000000000000000000000000000000000000..e62d5f08d1c7fd639b758fffbdea5007cf20c886
--- /dev/null
+++ b/docs-en/09-reference/java-connector.md
@@ -0,0 +1 @@
+# Java Connector
\ No newline at end of file
diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
deleted file mode 100644
index 200da1be3f8185818bd21dd3fcdc78c124a36831..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/01-deploy.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: Deployment
----
-
-## Prerequisites
-
-### Step 1
-
-The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host, for example by using the `ping` command.
-
-To get the hostname of any host, the command `hostname -f` can be executed. The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other.
-
-:::note
-
-- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
-
-- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
-
-:::
-
-### Step 2
-
-If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/*` assuming the `dataDir` is configured as `/var/lib/taos`.
-
-:::note
-
-As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).
-
-:::
-
-### Step 3
-
-Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted for the end point of an existing TDengine cluster, simply press Enter to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
-
-### Step 4
-
-Now each physical node (hereinafter referred to as a `dnode`, which is an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't necessarily correspond to one host. Multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicts. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.
-
-```c
-// firstEp is the end point to connect to when any dnode starts
-firstEp h1.taosdata.com:6030
-
-// must be configured to the FQDN of the host where the dnode is launched
-fqdn h1.taosdata.com
-
-// the port used by the dnode, default is 6030
-serverPort 6030
-
-// only necessary when replica is configured to an even number
-#arbitrator ha.taosdata.com:6042
-```
-
-`firstEp` and `fqdn` must be configured properly. In the `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource-related parameters are not conflicting.
-
-For all the dnodes in a TDengine cluster, the parameters below must be configured exactly the same; any node whose configuration is different from the dnodes already in the cluster can't join the cluster.
-
-| **#** | **Parameter** | **Definition** |
-| ----- | ------------------ | --------------------------------------------------------------------------------- |
-| 1 | numOfMnodes | The number of management nodes in the cluster |
-| 2     | mnodeEqualVnodeNum | The ratio of resources consumed by an mnode to those consumed by a vnode           |
-| 3     | offlineThreshold   | The time threshold for a dnode to be offline; once it is exceeded the dnode is considered down |
-| 4     | statusInterval     | The interval by which dnode reports its status to mnode                            |
-| 5     | arbitrator         | End point of the arbitrator component in the cluster                               |
-| 6     | timezone           | Timezone                                                                            |
-| 7     | balance            | Whether automatic load balancing is enabled                                         |
-| 8     | maxTablesPerVnode  | Maximum number of tables that can be created in each vnode                         |
-| 9     | maxVgroupsPerDb    | Maximum number of vgroups that can be used by each DB                              |
-
-:::note
-Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
-
-:::
-
-## Start Cluster
-
-In the following example we assume that the first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.
-
-### Start The First DNODE
-
-The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute the command `show dnodes`; the output is as follows, for example:
-
-```
-Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
-
-
-Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
-
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time |
-=====================================================================================
- 1 | h1.taosdata.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
-Query OK, 1 row(s) in set (0.006385s)
-
-taos>
-```
-
-The above output shows that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.
-
-### Start Other DNODEs
-
-There are a few steps necessary to add other dnodes in the cluster.
-
-Let's assume we are starting the second dnode with FQDN h2.taosdata.com. First we make sure the configuration is correct.
-
-```c
-// firstEp is the end point to connect to when any dnode starts
-firstEp h1.taosdata.com:6030
-
-// must be configured to the FQDN of the host where the dnode is launched
-fqdn h2.taosdata.com
-
-// the port used by the dnode, default is 6030
-serverPort 6030
-
-```
-
-Second, we can start `taosd` as instructed in [Get Started](/get-started/).
-
-Then, on the first dnode, i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command to add the end point of the new dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.
-
-```sql
-CREATE DNODE "h2.taos.com:6030";
-```
-
-Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to check whether the second dnode has been added to the cluster successfully or not.
-
-```sql
-SHOW DNODES;
-```
-
-If the status of the newly added dnode is offline, please check:
-
-- Whether the `taosd` process is running properly or not
-- The log file `taosdlog.0` to see whether the fqdn and port are correct
-
-The above process can be repeated to add more dnodes in the cluster.
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
deleted file mode 100644
index 674c92e2766a4eb304079140af19c8efea72d55e..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ /dev/null
@@ -1,213 +0,0 @@
----
-sidebar_label: Operation
-title: Manage DNODEs
----
-
-The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.
-
-:::note
-All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
-
-:::
-
-## Show DNODEs
-
-The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
-
-```sql
-SHOW DNODES;
-```
-
-Below is the example output of this command.
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.008298s)
-```
-
-## Show VGROUPs
-
-To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
-
-Launch TDengine CLI `taos` and execute the command below:
-
-```sql
-USE SOME_DATABASE;
-SHOW VGROUPS;
-```
-
-The example output is below:
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.008298s)
-
-taos> use db;
-Database changed.
-
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
-==========================================================================================
- 14 | 38000 | ready | 1 | 1 | master | 0 |
- 15 | 38000 | ready | 1 | 1 | master | 0 |
- 16 | 38000 | ready | 1 | 1 | master | 0 |
- 17 | 38000 | ready | 1 | 1 | master | 0 |
- 18 | 37001 | ready | 1 | 1 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
-Query OK, 8 row(s) in set (0.001154s)
-```
-
-## Add DNODE
-
-Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
-
-```sql
-CREATE DNODE "fqdn:port";
-```
-
-The example output is as below:
-
-```
-taos> create dnode "localhost:7030";
-Query OK, 0 of 0 row(s) in database (0.008203s)
-
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
-Query OK, 2 row(s) in set (0.001017s)
-```
-
-It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
-Query OK, 2 row(s) in set (0.001316s)
-```
-
-## Drop DNODE
-
-Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
-
-```sql
-DROP DNODE "fqdn:port";
-```
-
-or
-
-```sql
-DROP DNODE dnodeId;
-```
-
-The example output is below:
-
-```
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
- 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
-Query OK, 2 row(s) in set (0.001017s)
-
-taos> drop dnode 2;
-Query OK, 0 of 0 row(s) in database (0.000518s)
-
-taos> show dnodes;
- id | end_point | vnodes | cores | status | role | create_time | offline reason |
-======================================================================================================================================
- 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
-Query OK, 1 row(s) in set (0.001137s)
-```
-
-In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
-
-:::note
-
-- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
-- Please note that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
-- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
-- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
-
-:::
-
-## Move VNODE
-
-A vnode can be manually moved from one dnode to another.
-
-Launch TDengine CLI `taos` and execute the command below:
-
-```sql
-ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
-```
-
-In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, and `dest-dnodeId` specifies the target dnode. `vgId` (vgroup ID) can be shown by `SHOW VGROUPS`.
-
-First `show vgroups` is executed to show the vgroup distribution.
-
-```
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
-==========================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 |
- 18 | 37001 | ready | 1 | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
-Query OK, 8 row(s) in set (0.001314s)
-```
-
-It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`:
-
-```
-taos> alter dnode 3 balance "vnode:18-dnode:1";
-
-DB error: Balance already enabled (0.00755
-```
-
-However, the operation fails with the error message shown above, which means automatic load balancing has been enabled in the current database, so manual load balancing can't be performed.
-
-Shut down the cluster, configure the `balance` parameter to 0 in all the dnodes, then restart the cluster, and execute `alter dnode` and `show vgroups` as below.
-
-```
-taos> alter dnode 3 balance "vnode:18-dnode:1";
-Query OK, 0 row(s) in set (0.000575s)
-
-taos> show vgroups;
- vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
-=================================================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 18 | 37001 | ready | 2 | 1 | slave | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
-Query OK, 8 row(s) in set (0.001242s)
-```
-
-It can be seen from above output that vgId 18 has been moved from dnode 3 to dnode 1.
-
-:::note
-
-- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it's in offline, unsynced or syncing status.
-- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
-
-:::
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
deleted file mode 100644
index bd718eef9f8dc181628132de831dbca2af59d158..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-sidebar_label: HA & LB
-title: High Availability and Load Balancing
----
-
-## High Availability of Vnode
-
-High availability of vnode and mnode can be achieved through replicas in TDengine.
-
-A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
-
-```sql
-CREATE DATABASE demo replica 3;
-```
-
-The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
-
-There may be data for multiple DBs in a dnode, so when a dnode is down, multiple DBs may be affected. While in theory the cluster can still provide data access for reading or inserting as long as over half of the vnodes in each vgroup are online, because of the possibly complex mapping between vnodes and dnodes, having over half of the dnodes online does not by itself guarantee that the cluster will work properly.
-
-## High Availability of Mnode
-
-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`. The valid range for `numOfMnodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
-
-There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
-
-```sql
-SHOW MNODES;
-```
-
-The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
-
-For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
-
-:::note
-If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
-
-:::
-
-## Load Balancing
-
-Load balancing will be triggered in 3 cases without manual intervention.
-
-- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
-- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
-- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
-
-:::tip
-Automatic load balancing is controlled by the parameter `balance`, 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
-
-:::
-
-## Dnode Offline
-
-When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
-
-- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
-
-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
-
-:::note
-If all the vnodes in a vgroup (or mnodes in an mnode group) are in offline or unsynced status, the master node can only be elected after all the vnodes or mnodes in the group become online and can exchange status information. Following this, the vgroup (or mnode group) is able to provide service.
-
-:::
-
-## Arbitrator
-
-The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2,4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2,4 etc.
-
-To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnodes or mnodes in a group, including the arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.
-
-Normally, it's prudent to configure the replica number for each DB or system parameter `numOfMNodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
-
-The arbitrator component is installed with the server package. For details about how to install it, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.
-
-In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. Arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
-
-The arbitrator can be shown by executing the command below in TDengine CLI `taos`; its role is shown as "arb".
-
-```sql
-SHOW DNODES;
-```
diff --git a/docs-en/10-cluster/_category_.yml b/docs-en/10-cluster/_category_.yml
deleted file mode 100644
index 141fd7832631d69efed214293c69cee336bc854d..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: Cluster
diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md
deleted file mode 100644
index 5a45a2ce7b08c67322265cf1bbd54ef66cbfc027..0000000000000000000000000000000000000000
--- a/docs-en/10-cluster/index.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Cluster
-keywords: ["cluster", "high availability", "load balance", "scale out"]
----
-
-TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
-
-This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.
-
-```mdx-code-block
-import DocCardList from '@theme/DocCardList';
-import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
-
-
-```
diff --git a/docs-en/10-faq.md b/docs-en/10-faq.md
new file mode 100644
index 0000000000000000000000000000000000000000..32cce9075d6bdc38f5389e97b109b8163e31b622
--- /dev/null
+++ b/docs-en/10-faq.md
@@ -0,0 +1 @@
+# FAQ
\ No newline at end of file
diff --git a/docs-en/12-taos-sql/01-data-type.md b/docs-en/12-taos-sql/01-data-type.md
deleted file mode 100644
index 3f5a49e3135771c6c1e62bcf158a99ee30f1ed9d..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/01-data-type.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: Data Types
-description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
----
-
-When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when creating and inserting data rows, and it must follow the rules below:
-
-- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
-- Internal function `now` can be used to get the current timestamp on the client side
-- The current timestamp of the client side is applied when `now` is used to insert data
-- Epoch time: a timestamp can also be a long integer number, representing the number of seconds, milliseconds or nanoseconds since 1970-01-01 00:00:00.000 (UTC/GMT), depending on the time precision
-- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
-
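-As a brief sketch of these rules, the two hypothetical INSERT statements below write a row using the `now` function and an epoch timestamp in milliseconds respectively; the table `d1001`, assumed to have a timestamp column and one value column, is used for illustration only.
-
-```sql
-INSERT INTO d1001 VALUES (now, 10.3);
-INSERT INTO d1001 VALUES (1626164208000, 10.5);
-```
-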
-Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
-
-```sql
-CREATE DATABASE db_name PRECISION 'ns';
-```
-
-In TDengine, the data types below can be used when specifying a column or tag.
-
-| # | **type** | **Bytes** | **Description** |
-| --- | :-------: | --------- | ------------------------- |
-| 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported |
-| 2 | INT | 4 | Integer, the value range is [-2^31+1, 2^31-1], while -2^31 is treated as NULL |
-| 3 | BIGINT | 8 | Long integer, the value range is [-2^63+1, 2^63-1], while -2^63 is treated as NULL |
-| 4 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
-| 5 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
-| 6 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. The literal single quote inside the string must be preceded with a backslash like `\'` |
-| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
-| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
-| 9 | BOOL | 1 | Bool, the value range is {true, false} |
-| 10 | NCHAR | User Defined | Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
-| 11 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type |
-
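-As an illustrative sketch, the hypothetical table below combines several of these types; note that a length must be given for the BINARY and NCHAR columns.
-
-```sql
-CREATE TABLE sensor_data (ts TIMESTAMP, val DOUBLE, ok BOOL, note BINARY(64), city NCHAR(32));
-```
-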
-:::tip
-TDengine is case insensitive and treats any characters in an SQL command as lower case by default; case sensitive strings must be quoted with single quotes.
-
-:::
-
-:::note
-Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
-
-:::
-
-:::note
-Numeric values in SQL statements will be determined to be integer or float type according to whether there is a decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered a legal float number.
-
-:::
diff --git a/docs-en/12-taos-sql/02-database.md b/docs-en/12-taos-sql/02-database.md
deleted file mode 100644
index 80581b2f1bc7ce9cd046c18873d3f22b6804d8cf..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/02-database.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-sidebar_label: Database
-title: Database
-description: "create and drop database, show or change database parameters"
----
-
-## Create Database
-
-```
-CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
-```
-
-:::info
-
-1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
-2. UPDATE specifies whether the data can be updated and how the data can be updated.
- 1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
- 2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
- 3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
-3. The maximum length of database name is 33 bytes.
-4. The maximum length of a SQL statement is 65,480 bytes.
-5. Below are the parameters that can be used when creating a database
- - cache: [Description](/reference/config/#cache)
- - blocks: [Description](/reference/config/#blocks)
- - days: [Description](/reference/config/#days)
- - keep: [Description](/reference/config/#keep)
- - minRows: [Description](/reference/config/#minrows)
- - maxRows: [Description](/reference/config/#maxrows)
- - wal: [Description](/reference/config/#wallevel)
- - fsync: [Description](/reference/config/#fsync)
- - update: [Description](/reference/config/#update)
- - cacheLast: [Description](/reference/config/#cachelast)
- - replica: [Description](/reference/config/#replica)
- - quorum: [Description](/reference/config/#quorum)
- - maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- - comp: [Description](/reference/config/#comp)
- - precision: [Description](/reference/config/#precision)
-6. Please note that all of the parameters mentioned in this section are configured in configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default. To override default parameters, they must be specified in the `create database` statement.
-
-:::
-
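-As a sketch of the syntax above, the statement below creates a hypothetical database that keeps data for 365 days, stores data in files covering 10 days each, and allows whole-row updates; the name and values are examples only.
-
-```sql
-CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
-```
-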
-## Show Current Configuration
-
-```
-SHOW VARIABLES;
-```
-
-## Specify The Database In Use
-
-```
-USE db_name;
-```
-
-:::note
-This way is not applicable when using a REST connection. In a REST connection the database name must be specified before a table or STable name. For example, to query the STable "meters" in database "test", the query would be "SELECT count(*) FROM test.meters".
-
-:::
-
-## Drop Database
-
-```
-DROP DATABASE [IF EXISTS] db_name;
-```
-
-:::note
-All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.
-
-:::
-
-## Change Database Configuration
-
-Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
-
-```
-ALTER DATABASE db_name COMP 2;
-```
-
-COMP parameter specifies whether the data is compressed and how the data is compressed.
-
-```
-ALTER DATABASE db_name REPLICA 2;
-```
-
-REPLICA parameter specifies the number of replicas of the database.
-
-```
-ALTER DATABASE db_name KEEP 365;
-```
-
-KEEP parameter specifies the number of days for which the data will be kept.
-
-```
-ALTER DATABASE db_name QUORUM 2;
-```
-
-QUORUM parameter specifies the necessary number of confirmations to determine whether the data is written successfully.
-
-```
-ALTER DATABASE db_name BLOCKS 100;
-```
-
-BLOCKS parameter specifies the number of memory blocks used by each VNODE.
-
-```
-ALTER DATABASE db_name CACHELAST 0;
-```
-
-CACHELAST parameter specifies whether and how the latest data of a sub table is cached.
-
-:::tip
-The above parameters can be changed using `ALTER DATABASE` command without restarting. For more details of all configuration parameters please refer to [Configuration Parameters](/reference/config/).
-
-:::
-
-## Show All Databases
-
-```
-SHOW DATABASES;
-```
-
-## Show The Create Statement of A Database
-
-```
-SHOW CREATE DATABASE db_name;
-```
-
-This command is useful when migrating the data from one TDengine cluster to another. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.
diff --git a/docs-en/12-taos-sql/03-table.md b/docs-en/12-taos-sql/03-table.md
deleted file mode 100644
index f065a8e2396583bb7a512446b513ed60056ad55e..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/03-table.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-sidebar_label: Table
-title: Table
-description: create super table, normal table and sub table, drop tables and change tables
----
-
-## Create Table
-
-```
-CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
-```
-
-:::info
-
-1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
-2. The maximum length of the table name is 192 bytes.
-3. The maximum length of each row is 48k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
-4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
-5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
-6. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords; the above rules are bypassed when the escape character is used on table names, but the upper limit for the name length still applies. Table names specified using the escape character are case sensitive. Only ASCII visible characters can be used with the escape character.
-   For example, \`aBc\` and \`abc\` are different table names, but `abc` and `aBc` are the same table name because they are both converted to `abc` internally.
-
-:::
-
-### Create Subtable Using STable As Template
-
-```
-CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
-```
-
-The above command creates a subtable using the specified super table as a template and the specified tag values.
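-
-For example, assuming a super table `meters` with tags `location` and `groupid` (as used elsewhere in these docs), a hypothetical subtable for one device could be created as below.
-
-```sql
-CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ("California.SanFrancisco", 2);
-```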
-
-### Create Subtable Using STable As Template With A Subset of Tags
-
-```
-CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
-```
-
-The tags for which no value is specified will be set to NULL.
-
-### Create Tables in Batch
-
-```
-CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
-```
-
-This can be used to create a large number of tables in a single SQL statement, which makes table creation much faster.
-
-:::info
-
-- Creating tables in batch must use a super table as a template.
-- The length of a single statement is suggested to be between 1,000 and 3,000 bytes for best performance.
-
-:::
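-
-Continuing the hypothetical `meters` example, a batch creation statement might look like the sketch below; the table names and tag values are illustrative only.
-
-```sql
-CREATE TABLE IF NOT EXISTS d1002 USING meters TAGS ("California.SanFrancisco", 2) d1003 USING meters TAGS ("California.LosAngeles", 3);
-```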
-
-## Drop Tables
-
-```
-DROP TABLE [IF EXISTS] tb_name;
-```
-
-## Show All Tables In Current Database
-
-```
-SHOW TABLES [LIKE tb_name_wildcard];
-```
-
-## Show Create Statement of A Table
-
-```
-SHOW CREATE TABLE tb_name;
-```
-
-This is useful when migrating the data in one TDengine cluster to another one because it can be used to create the exact same tables in the target database.
-
-## Show Table Definition
-
-```
-DESCRIBE tb_name;
-```
-
-## Change Table Definition
-
-### Add A Column
-
-```
-ALTER TABLE tb_name ADD COLUMN field_name data_type;
-```
-
-:::info
-
-1. The maximum number of columns is 4096, the minimum number of columns is 2.
-2. The maximum length of a column name is 64 bytes.
-
-:::
-
-### Remove A Column
-
-```
-ALTER TABLE tb_name DROP COLUMN field_name;
-```
-
-:::note
-If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
-
-:::
-
-### Change Column Length
-
-```
-ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
-```
-
-If the type of a column is variable length, like BINARY or NCHAR, this command can be used to change the length of the column.
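-
-A hedged sketch, assuming a hypothetical normal table `t1` whose `note` column was created as BINARY(20) and should be widened to 40 bytes:
-
-```
-ALTER TABLE t1 MODIFY COLUMN note BINARY(40);  -- t1 and note are hypothetical names used for illustration
-```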
-
-:::note
-If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
-
-:::
-
-### Change Tag Value Of Sub Table
-
-```
-ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
-```
-
-This command can be used to change the tag value if the table is created using a super table as template.
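-
-For example, assuming the subtable `d1001` was created from the `meters` super table used in this documentation:
-
-```
-ALTER TABLE d1001 SET TAG location='California.LosAngeles';
-```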
diff --git a/docs-en/12-taos-sql/04-stable.md b/docs-en/12-taos-sql/04-stable.md
deleted file mode 100644
index b8a608792ab327a81129d29ddd0ff44d7af6e6c5..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/04-stable.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-sidebar_label: STable
-title: Super Table
----
-
-:::note
-
-Keyword `STable`, abbreviated for super table, is supported since version 2.0.15.
-
-:::
-
-## Create STable
-
-```
-CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
-```
-
-The SQL statement of creating a STable is similar to that of creating a table, but a special column set named `TAGS` must be specified with the names and types of the tags.
-
-:::info
-
-1. A tag can be of type timestamp, since version 2.1.3.0, but its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
-2. The tag names specified in TAGS should NOT be the same as other columns.
-3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/).)
-4. The maximum number of tags specified in TAGS is 128, and there must be at least one tag. The total length of all tag columns should NOT exceed 16 KB.
-
-:::
-
-## Drop STable
-
-```
-DROP STable [IF EXISTS] stb_name;
-```
-
-All the subtables created using the deleted STable will be deleted automatically.
-
-## Show All STables
-
-```
-SHOW STableS [LIKE tb_name_wildcard];
-```
-
-This command can be used to display the information of all STables in the current database, including name, creation time, number of columns, number of tags, and number of tables created using this STable.
-
-## Show The Create Statement of A STable
-
-```
-SHOW CREATE STable stb_name;
-```
-
-This command is useful in migrating data from one TDengine cluster to another because it can be used to create the exact same STable in the target database.
-
-## Get STable Definition
-
-```
-DESCRIBE stb_name;
-```
-
-## Change Columns Of STable
-
-### Add A Column
-
-```
-ALTER STable stb_name ADD COLUMN field_name data_type;
-```
-
-### Remove A Column
-
-```
-ALTER STable stb_name DROP COLUMN field_name;
-```
-
-### Change Column Length
-
-```
-ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
-```
-
-This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
-
-## Change Tags of A STable
-
-### Add A Tag
-
-```
-ALTER STable stb_name ADD TAG new_tag_name tag_type;
-```
-
-This command is used to add a new tag for a STable and specify the tag type.
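-
-A hedged sketch, assuming the `meters` super table and a hypothetical new tag name:
-
-```
-ALTER STable meters ADD TAG deviceModel BINARY(20);  -- deviceModel is a hypothetical tag name used for illustration
-```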
-
-### Remove A Tag
-
-```
-ALTER STable stb_name DROP TAG tag_name;
-```
-
-Once a tag is removed from a super table, it will be removed automatically from all the subtables created using that super table as template.
-
-### Change A Tag
-
-```
-ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
-```
-
-Once a tag name is changed on a super table, it will be changed automatically for all the subtables created using that super table as template.
-
-### Change Tag Length
-
-```
-ALTER STable stb_name MODIFY TAG tag_name data_type(length);
-```
-
-This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.
-
-:::note
-Changing tag values can only be applied to subtables. All other tag operations, such as adding or removing a tag, can only be applied to a STable. If a new tag is added to a STable, the tag will be added with a NULL value for all its subtables.
-
-:::
diff --git a/docs-en/12-taos-sql/05-insert.md b/docs-en/12-taos-sql/05-insert.md
deleted file mode 100644
index 1336cd7238a19190583ea9d268a64df242ffd3c9..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/05-insert.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-title: Insert
----
-
-## Syntax
-
-```sql
-INSERT INTO
- tb_name
- [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
- [(field1_name, ...)]
- VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
- [tb2_name
- [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
- [(field1_name, ...)]
- VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
- ...];
-```
-
-## Insert Single or Multiple Rows
-
-Single row or multiple rows specified with VALUES can be inserted into a specific table. For example:
-
-A single row is inserted using the below statement.
-
-```sql
-INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
-```
-
-Two rows are inserted using the below statement.
-
-```sql
-INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
-```
-
-:::note
-
-1. In the second example above, different formats are used in the two rows to be inserted. In the first row, the timestamp is a date-time string, which is interpreted directly from its string value. In the second row, the timestamp is a long integer, which is interpreted based on the database's time precision.
-2. When trying to insert multiple rows in a single statement, NOW should be used for at most one row; otherwise there may be duplicate timestamps among the rows and the result may be unexpected, because NOW is interpreted as the time when the statement is executed.
-3. The oldest timestamp that is allowed is the current time minus the KEEP parameter.
-4. The newest timestamp that is allowed is the current time plus the DAYS parameter.
-
-:::
-
-## Insert Into Specific Columns
-
-Data can be inserted into specific columns, either as a single row or as multiple rows; the unspecified columns will be set to NULL.
-
-```
-INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
-```
-
-:::info
-If no columns are explicitly specified, values must be provided for all columns; this is called "all column mode". The insert performance of all column mode is much better than that of specifying a subset of columns, so it's encouraged to use all column mode and provide NULL explicitly for the columns for which no actual value is available (see the example after this note).
-
-:::
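-
-For example, assuming the `d1001` table schema (ts, current, voltage, phase) used elsewhere in this documentation, a row with an unknown voltage could be written in all column mode as:
-
-```
-INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.255', 10.3, NULL, 0.31);
-```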
-
-## Insert Into Multiple Tables
-
-One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns.
-
-```sql
-INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
- d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
-```
-
-## Automatically Create Table When Inserting
-
-If it's unknown whether the table already exists, the table can be created automatically while inserting using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided.
-
-```sql
-INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
-```
-
-It's not necessary to provide values for all tags when creating tables automatically; the tags without values provided will be set to NULL.
-
-```sql
-INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
-```
-
-Multiple rows can also be inserted into the same table in a single SQL statement.
-
-```sql
-INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
- d21002 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:34.255', 10.15, 217, 0.33)
- d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
-```
-
-:::info
-Prior to version 2.0.20.5, when using `INSERT` to create tables automatically and specifying the columns, the column names had to follow the table name immediately. From version 2.0.20.5, the column names can either follow the table name immediately or be put between `TAGS` and `VALUES`. In the same SQL statement, however, these two ways of specifying column names can't be mixed.
-:::
-
-## Insert Rows From A File
-
-Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with comma as separator and each field value quoted by single quotes. Table definition is not required in the CSV file. For example, if file "/tmp/csvfile.csv" contains the below data:
-
-```
-'2021-07-13 14:07:34.630', '10.2', '219', '0.32'
-'2021-07-13 14:07:35.779', '10.15', '217', '0.33'
-```
-
-Then data in this file can be inserted by the SQL statement below:
-
-```sql
-INSERT INTO d1001 FILE '/tmp/csvfile.csv';
-```
-
-## Create Tables Automatically and Insert Rows From File
-
-From version 2.1.5.0, tables can be automatically created using a super table as template when inserting data from a CSV file, like below:
-
-```sql
-INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
-```
-
-Multiple tables can be automatically created and inserted in a single SQL statement, like below:
-
-```sql
-INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
- d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
-```
-
-## More About Insert
-
-For SQL statements like `insert`, a stream parsing strategy is applied. That means that when an error is found and the execution is aborted, the part of the statement prior to the error point has already been executed. Below is an experiment to help understand the behavior.
-
-First, a super table is created.
-
-```sql
-CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
-```
-
-`SHOW STableS` can be used to verify that the super table has been created, while `SHOW TABLES` shows that no table exists yet.
-
-```
-taos> SHOW STableS;
- name | created_time | columns | tags | tables |
-============================================================================================
- meters | 2020-08-06 17:50:27.831 | 4 | 2 | 0 |
-Query OK, 1 row(s) in set (0.001029s)
-
-taos> SHOW TABLES;
-Query OK, 0 row(s) in set (0.000946s)
-```
-
-Then, try to create table d1001 automatically when inserting data into it.
-
-```sql
-INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('a');
-```
-
-The output shows the value to be inserted is invalid. But `SHOW TABLES` proves that the table has been created automatically by the `INSERT` statement.
-
-```
-DB error: invalid SQL: 'a' (invalid timestamp) (0.039494s)
-
-taos> SHOW TABLES;
- table_name | created_time | columns | STable_name |
-======================================================================================================
- d1001 | 2020-08-06 17:52:02.097 | 4 | meters |
-Query OK, 1 row(s) in set (0.001091s)
-```
-
-From the above experiment, we can see that even though the value to be inserted is invalid, the table is still created.
diff --git a/docs-en/12-taos-sql/06-select.md b/docs-en/12-taos-sql/06-select.md
deleted file mode 100644
index 8a017cf92e40aa4a854dcd531b7df291a9243515..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/06-select.md
+++ /dev/null
@@ -1,449 +0,0 @@
----
-title: Select
----
-
-## Syntax
-
-```SQL
-SELECT select_expr [, select_expr ...]
- FROM {tb_name_list}
- [WHERE where_condition]
- [SESSION(ts_col, tol_val)]
- [STATE_WINDOW(col)]
- [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
- [FILL(fill_mod_and_val)]
- [GROUP BY col_list]
- [ORDER BY col_list { DESC | ASC }]
- [SLIMIT limit_val [SOFFSET offset_val]]
- [LIMIT limit_val [OFFSET offset_val]]
- [>> export_file];
-```
-
-## Wildcard
-
-Wildcard \* can be used to specify all columns. The result includes only data columns for normal tables.
-
-```
-taos> SELECT * FROM d1001;
- ts | current | voltage | phase |
-======================================================================================
- 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
- 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
- 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
-Query OK, 3 row(s) in set (0.001165s)
-```
-
-The result includes both data columns and tag columns for super table.
-
-```
-taos> SELECT * FROM meters;
- ts | current | voltage | phase | location | groupid |
-=====================================================================================================================================
- 2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | California.LosAngeles | 2 |
- 2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | California.LosAngeles | 2 |
- 2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | California.LosAngeles | 3 |
- 2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | California.LosAngeles | 3 |
- 2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | California.SanFrancisco | 3 |
- 2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | California.SanFrancisco | 3 |
- 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | California.SanFrancisco | 2 |
- 2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | California.SanFrancisco | 2 |
- 2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | California.SanFrancisco | 2 |
-Query OK, 9 row(s) in set (0.002022s)
-```
-
-Wildcard can be used with table name as prefix. Both SQL statements below have the same effect and return all columns.
-
-```SQL
-SELECT * FROM d1001;
-SELECT d1001.* FROM d1001;
-```
-
-In a JOIN query, however, the results are different with or without a table name prefix. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
-
-```
-taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
- ts | current | voltage | phase | ts | current | voltage | phase |
-==================================================================================================================================
- 2018-10-03 14:38:05.000 | 10.30000| 219 | 0.31000 | 2018-10-03 14:38:05.000 | 10.80000| 223 | 0.29000 |
-Query OK, 1 row(s) in set (0.017385s)
-```
-
-```
-taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
- ts | current | voltage | phase |
-======================================================================================
- 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
-Query OK, 1 row(s) in set (0.020443s)
-```
-
-Wildcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
-
-```
-taos> SELECT COUNT(*) FROM d1001;
- count(*) |
-========================
- 3 |
-Query OK, 1 row(s) in set (0.001035s)
-```
-
-```
-taos> SELECT FIRST(*) FROM d1001;
- first(ts) | first(current) | first(voltage) | first(phase) |
-=========================================================================================
- 2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
-Query OK, 1 row(s) in set (0.000849s)
-```
-
-## Tags
-
-Starting from version 2.0.14, tag columns can be selected together with data columns when querying subtables. Please note, however, that the wildcard \* cannot be used to represent any tag column; tag columns must be specified explicitly, as in the example below.
-
-```
-taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
- location | groupid | current |
-======================================================================
- California.SanFrancisco | 2 | 10.30000 |
- California.SanFrancisco | 2 | 12.60000 |
-Query OK, 2 row(s) in set (0.003112s)
-```
-
-## Get distinct values
-
-`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table. It can also be used to get all the unique values of data columns from a table or subtable.
-
-```sql
-SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
-SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
-```
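-
-For example, assuming the `meters` super table, the distinct locations could be listed as:
-
-```sql
-SELECT DISTINCT location FROM meters;
-```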
-
-:::info
-
-1. The configuration parameter `maxNumOfDistinctRes` in `taos.cfg` controls the number of rows to output. The minimum configurable value is 100,000, the maximum is 100,000,000, and the default is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
-2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision errors in floating point numbers.
-3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.
-
-:::
-
-## Columns Names of Result Set
-
-When using `SELECT`, the column names in the result set will be the same as those in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example:
-
-```
-taos> SELECT ts, ts AS primary_key_ts FROM d1001;
- ts | primary_key_ts |
-====================================================
- 2018-10-03 14:38:05.000 | 2018-10-03 14:38:05.000 |
- 2018-10-03 14:38:15.000 | 2018-10-03 14:38:15.000 |
- 2018-10-03 14:38:16.800 | 2018-10-03 14:38:16.800 |
-Query OK, 3 row(s) in set (0.001191s)
-```
-
-`AS` can't be used together with `first(*)`, `last(*)`, or `last_row(*)`.
-
-## Implicit Columns
-
-`Select_exprs` can be column names of a table, or function expressions or arithmetic expressions on columns. The maximum number of allowed column names and expressions is 256. Timestamp and the corresponding tag names will be returned in the result set if `interval` or `group by tags` are used, and timestamp will always be the first column in the result set.
-
-## Table List
-
-`FROM` can be followed by a number of tables or super tables, or can be followed by a sub-query. If no database is specified as the current database in use, table names must be prefixed with the database name, like `power.d1001`.
-
-```SQL
-SELECT * FROM power.d1001;
-```
-
-has the same effect as
-
-```SQL
-USE power;
-SELECT * FROM d1001;
-```
-
-## Special Query
-
-Some special query functions can be invoked without `FROM` sub-clause. For example, the statement below can be used to get the current database in use.
-
-```
-taos> SELECT DATABASE();
- database() |
-=================================
- power |
-Query OK, 1 row(s) in set (0.000079s)
-```
-
-If no database is specified upon logging in and no database is specified with `USE` after login, NULL will be returned by `select database()`.
-
-```
-taos> SELECT DATABASE();
- database() |
-=================================
- NULL |
-Query OK, 1 row(s) in set (0.000184s)
-```
-
-The statement below can be used to get the version of client or server.
-
-```
-taos> SELECT CLIENT_VERSION();
- client_version() |
-===================
- 2.0.0.0 |
-Query OK, 1 row(s) in set (0.000070s)
-
-taos> SELECT SERVER_VERSION();
- server_version() |
-===================
- 2.0.0.0 |
-Query OK, 1 row(s) in set (0.000077s)
-```
-
-The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
-
-```
-taos> SELECT SERVER_STATUS();
- server_status() |
-==================
- 1 |
-Query OK, 1 row(s) in set (0.000074s)
-
-taos> SELECT SERVER_STATUS() AS status;
- status |
-==============
- 1 |
-Query OK, 1 row(s) in set (0.000081s)
-```
-
-## \_block_dist
-
-**Description**: Get the data block distribution of a table or STable.
-
-```SQL title="Syntax"
-SELECT _block_dist() FROM { tb_name | stb_name }
-```
-
-**Restrictions**: No argument is allowed, and a WHERE clause is not allowed.
-
-**Sub Query**: Sub queries and nested queries are not supported.
-
-**Return value**: A string which includes the data block distribution of the specified table or STable, i.e. the histogram of rows stored in the data blocks of the table or STable.
-
-```text title="Result"
-summary:
-5th=[392], 10th=[392], 20th=[392], 30th=[392], 40th=[792], 50th=[792] 60th=[792], 70th=[792], 80th=[792], 90th=[792], 95th=[792], 99th=[792] Min=[392(Rows)] Max=[800(Rows)] Avg=[666(Rows)] Stddev=[2.17] Rows=[2000], Blocks=[3], Size=[5.440(Kb)] Comp=[0.23] RowsInMem=[0] SeekHeaderTime=[1(us)]
-```
-
-**More explanation about the above example**:
-
-- Histogram about the rows stored in the data blocks of the table or STable: the value of rows for 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, and 99%
-- Minimum number of rows stored in a data block, i.e. Min=[392(Rows)]
-- Maximum number of rows stored in a data block, i.e. Max=[800(Rows)]
-- Average number of rows stored in a data block, i.e. Avg=[666(Rows)]
-- stddev of number of rows, i.e. Stddev=[2.17]
-- Total number of rows, i.e. Rows=[2000]
-- Total number of data blocks, i.e. Blocks=[3]
-- Total disk size consumed, i.e. Size=[5.440(Kb)]
-- Compression ratio, which means the compressed size divided by original size, i.e. Comp=[0.23]
-- Total number of rows in memory, i.e. RowsInMem=[0], which means no rows in memory
-- The time spent on reading head file (to retrieve data block information), i.e. SeekHeaderTime=[1(us)], which means 1 microsecond.
-
-## Special Keywords in TAOS SQL
-
-- `TBNAME`: it is treated as a special tag when selecting on a super table, representing the name of subtables in that super table.
-- `_c0`: represents the first column of a table or super table.
-
-## Tips
-
-To get all the subtables and corresponding tag values from a super table:
-
-```SQL
-SELECT TBNAME, location FROM meters;
-```
-
-To get the number of sub tables in a super table:
-
-```SQL
-SELECT COUNT(TBNAME) FROM meters;
-```
-
-Only filters on `TAGS` are allowed in the `where` clause for the above two query statements. For example:
-
-```
-taos> SELECT TBNAME, location FROM meters;
- tbname | location |
-==================================================================
- d1004 | California.LosAngeles |
- d1003 | California.LosAngeles |
- d1002 | California.SanFrancisco |
- d1001 | California.SanFrancisco |
-Query OK, 4 row(s) in set (0.000881s)
-
-taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
- count(tbname) |
-========================
- 2 |
-Query OK, 1 row(s) in set (0.001091s)
-```
-
-- Wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operations can be performed on columns of numerical types, and columns can be renamed in the result set.
-- Arithmetic operation on columns can't be used in where clause. For example, `where a*2>6;` is not allowed but `where a>6/2;` can be used instead for the same purpose.
-- Arithmetic operation on columns can't be used as the objectives of select statement. For example, `select min(2*a) from t;` is not allowed but `select 2*min(a) from t;` can be used instead.
-- Logical operators can be used in the `WHERE` clause to filter numeric values, and wildcards can be used to filter string values.
-- Result sets are arranged in ascending order of the first column, i.e. timestamp, but the output can be ordered by descending timestamp instead. If `order by` is used on other columns, the result may not be as expected. By the way, \_c0 is used to represent the first column, i.e. timestamp.
-- `LIMIT` parameter is used to control the number of rows to output. `OFFSET` parameter is used to specify from which row to output. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. A simple tip is that `LIMIT 5 OFFSET 2` can be abbreviated as `LIMIT 2, 5`.
-- When `GROUP BY` is used, `LIMIT` controls the number of rows in each group.
-- `SLIMIT` parameter is used to control the number of groups when `GROUP BY` is used. Similar to `LIMIT`, `SLIMIT 5 OFFSET 2` can be abbreviated as `SLIMIT 2, 5`.
-- ">>" can be used to output the result set of `select` statement to the specified file.
-
-## Where
-
-The operators in the table below can be used in the `where` clause to filter the resulting rows.
-
-| **Operation** | **Note** | **Applicable Data Types** |
-| ------------- | ------------------------ | ----------------------------------------- |
-| > | larger than | all types except bool |
-| < | smaller than | all types except bool |
-| >= | larger than or equal to | all types except bool |
-| <= | smaller than or equal to | all types except bool |
-| = | equal to | all types |
-| <\> | not equal to | all types |
-| is [not] null | is null or is not null | all types |
-| between and | within a certain range | all types except bool |
-| in | match any value in a set | all types except first column `timestamp` |
-| like | match a wildcard string | **`binary`** **`nchar`** |
-| match/nmatch | filter regex | **`binary`** **`nchar`** |
-
-**Explanations**:
-
-- Operator `<\>` is equal to `!=`; please note that this operator can't be used on the first column of any table, i.e. the timestamp column.
-- Operator `like` is used together with wildcards to match strings
- - '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
- - `\_` is used to match the \_ in the string.
- - The maximum length of a wildcard string is 100 bytes from version 2.1.6.1 (before that, the maximum length was 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. A very long wildcard string may slow down the execution of the `LIKE` operator.
-- The `AND` keyword can be used to filter multiple columns simultaneously. AND/OR operations can be performed on single or multiple columns from version 2.3.0.0. However, before 2.3.0.0 `OR` couldn't be used on multiple columns.
-- For timestamp column, only one condition can be used; for other columns or tags, `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
- - From version 2.3.0.0, multiple conditions can be used on timestamp column, but the result set can only contain single time range.
-- From version 2.0.17.0, operator `BETWEEN AND` can be used in where clause, for example `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25".
-- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating point precision errors. Only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
-- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`. The regular expression is case insensitive.
-
-## Regular Expression
-
-### Syntax
-
-```SQL
-WHERE (column|tbname) match/MATCH/nmatch/NMATCH regex
-```
-
-### Specification
-
-The regular expression being used must be compliant with POSIX specification, please refer to [Regular Expressions](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html).
-
-### Restrictions
-
-Regular expressions can be used only against table names, i.e. `tbname`, and tags of binary/nchar types; they can't be used against data columns.
-
-The maximum length of a regular expression string is 128 bytes. The configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression length. It's a configuration parameter on the client side, and will take effect after restarting the client.
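-
-A hedged example, assuming the `meters` super table and its subtables `d1001`...`d1004` used in this documentation:
-
-```SQL
-SELECT TBNAME, location FROM meters WHERE tbname MATCH 'd100[1-3]';
-```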
-
-## JOIN
-
-From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, inner joins between tables, between STables, and between sub queries are supported.
-
-Only the primary key, i.e. the timestamp, can be used in the join condition between tables. For example:
-
-```sql
-SELECT *
-FROM temp_tb_1 t1, pressure_tb_1 t2
-WHERE t1.ts = t2.ts
-```
-
-In the join operation between STable and STable, besides the primary key, i.e. timestamp, tags can also be used. For example:
-
-```sql
-SELECT *
-FROM temp_STable t1, temp_STable t2
-WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
-```
-
-Similarly, join operations can be performed on the result set of multiple sub queries.
-
-:::note
-Restrictions on join operation:
-
-- The number of tables or STables in a single join operation can't exceed 10.
-- `FILL` is not allowed in the query statement that includes JOIN operation.
-- Arithmetic operation is not allowed on the result set of join operation.
-- `GROUP BY` is not allowed on a part of tables that participate in join operation.
-- `OR` can't be used in the conditions for a join operation.
-- Join operations can't be performed on data columns; they can only be performed on tags or the primary key, i.e. the timestamp.
-
-:::
-
-## Nested Query
-
-Nested query is also called sub query. This means that in a single SQL statement the result of inner query can be used as the data source of the outer query.
-
-From 2.2.0.0, an unassociated sub query can be used in the `FROM` clause. Unassociated means that the sub query doesn't use parameters from the parent query. More specifically, in the `tb_name_list` of a `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
-
-```SQL
-SELECT ... FROM (SELECT ... FROM ...) ...;
-```
-
-:::info
-
-- Only one layer of nesting is allowed, that means no sub query is allowed within a sub query
-- The result set returned by the inner query will be used as a "virtual table" by the outer query. The "virtual table" can be renamed using `AS` keyword for easy reference in the outer query.
-- Sub query is not allowed in continuous query.
-- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
-- UNION operation is not allowed in either inner query or outer query.
-- The functions that can be used in the inner query are the same as those that can be used in a non-nested query.
- - `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
-- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
- - Functions
- - If the result set returned by the inner query doesn't contain timestamp column, then functions relying on timestamp can't be used in the outer query, like `TOP`, `BOTTOM`, `FIRST`, `LAST`, `DIFF`.
- - Functions that need to scan the data twice can't be used in the outer query, like `STDDEV`, `PERCENTILE`.
- - `IN` operator is not allowed in the outer query but can be used in the inner query.
- - `GROUP BY` is not supported in the outer query.
-
-:::
-
-## UNION ALL
-
-```SQL title=Syntax
-SELECT ...
-UNION ALL SELECT ...
-[UNION ALL SELECT ...]
-```
-
-The `UNION ALL` operator can be used to combine the result sets from multiple select statements, as long as these result sets have exactly the same columns. `UNION ALL` doesn't remove duplicate rows from the combined result sets. In a single SQL statement, at most 100 `UNION ALL` operators are supported.
-
-### Examples
-
-Table `tb1` is created using the SQL statement below:
-
-```SQL
-CREATE TABLE tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50));
-```
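-
-As a hedged example of `UNION ALL`, assuming a second table `tb2` (hypothetical) created with exactly the same columns as `tb1`, the rows of both tables could be combined as follows:
-
-```SQL
-SELECT ts, col1 FROM tb1
-UNION ALL
-SELECT ts, col1 FROM tb2;  -- tb2 is a hypothetical table with the same schema as tb1
-```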
-
-The rows in the past one hour in `tb1` can be selected using below SQL statement:
-
-```SQL
-SELECT * FROM tb1 WHERE ts >= NOW - 1h;
-```
-
-The rows between 2018-06-01 08:00:00.000 and 2018-06-02 08:00:00.000 whose col3 ends with 'nny' can be selected in descending order of timestamp using the SQL statement below:
-
-```SQL
-SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC;
-```
-
-The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose col2 is bigger than 1.2 can be selected and renamed as "complex", skipping the first 5 rows and outputting only 10 rows, with the SQL statement below:
-
-```SQL
-SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
-```
-
-The number of rows in the past 10 minutes whose col2 is bigger than 3.14 is counted and output to the result file `/home/testoutput.csv` with the SQL statement below:
-
-```SQL
-SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
-```
diff --git a/docs-en/12-taos-sql/07-function.md b/docs-en/12-taos-sql/07-function.md
deleted file mode 100644
index 86ff5a58ce31a357d6e247294ffdac791cb0c032..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/07-function.md
+++ /dev/null
@@ -1,1963 +0,0 @@
----
-title: Functions
----
-
-## Aggregate Functions
-
-Aggregate queries are supported in TDengine by the following aggregate functions and selection functions.
-
-### COUNT
-
-```
-SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
-```
-
-**Description**: Get the number of rows or the number of non-null values in a table or a super table.
-
-**Return value type**: Long integer INT64
-
-**Applicable column types**: All
-
-**Applicable table types**: table, super table, sub table
-
-**More explanation**:
-
-- Wildcard (\*) is used to represent all columns. The `COUNT` function is used to get the total number of all rows.
-- The number of non-NULL values will be returned if this function is used on a specific column.
-
-**Examples**:
-
-```
-taos> SELECT COUNT(*), COUNT(voltage) FROM meters;
- count(*) | count(voltage) |
-================================================
- 9 | 9 |
-Query OK, 1 row(s) in set (0.004475s)
-
-taos> SELECT COUNT(*), COUNT(voltage) FROM d1001;
- count(*) | count(voltage) |
-================================================
- 3 | 3 |
-Query OK, 1 row(s) in set (0.001075s)
-```
-
-### AVG
-
-```
-SELECT AVG(field_name) FROM tb_name [WHERE clause];
-```
-
-**Description**: Get the average value of a column in a table or STable
-
-**Return value type**: Double precision floating number
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**Examples**:
-
-```
-taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM meters;
- avg(current) | avg(voltage) | avg(phase) |
-====================================================================================
- 11.466666751 | 220.444444444 | 0.293333333 |
-Query OK, 1 row(s) in set (0.004135s)
-
-taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM d1001;
- avg(current) | avg(voltage) | avg(phase) |
-====================================================================================
- 11.733333588 | 219.333333333 | 0.316666673 |
-Query OK, 1 row(s) in set (0.000943s)
-```
-
-### TWA
-
-```
-SELECT TWA(field_name) FROM tb_name WHERE clause;
-```
-
-**Description**: Time weighted average on a specific column within a time range
-
-**Return value type**: Double precision floating number
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Since version 2.1.3.0, the TWA function can be used on a STable with `GROUP BY`, i.e. on the timelines generated by `GROUP BY tbname`, as shown in the sketch below.
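-
-A hedged sketch of that usage, assuming the `meters` super table:
-
-```
-SELECT TWA(current) FROM meters WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:39:00.000' GROUP BY tbname;
-```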
-
-### IRATE
-
-```
-SELECT IRATE(field_name) FROM tb_name WHERE clause;
-```
-
-**Description**: instantaneous rate on a specific column. The last two samples in the specified time range are used to calculate instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values.
-
-**Return value type**: Double precision floating number
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Since version 2.1.3.0, the IRATE function can be used on a STable with `GROUP BY`, i.e. on the timelines generated by `GROUP BY tbname`.
-
-### SUM
-
-```
-SELECT SUM(field_name) FROM tb_name [WHERE clause];
-```
-
-**Description**: The sum of a specific column in a table or STable
-
-**Return value type**: Double precision floating number or long integer
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**Examples**:
-
-```
-taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM meters;
- sum(current) | sum(voltage) | sum(phase) |
-================================================================================
- 103.200000763 | 1984 | 2.640000001 |
-Query OK, 1 row(s) in set (0.001702s)
-
-taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM d1001;
- sum(current) | sum(voltage) | sum(phase) |
-================================================================================
- 35.200000763 | 658 | 0.950000018 |
-Query OK, 1 row(s) in set (0.000980s)
-```
-
-### STDDEV
-
-```
-SELECT STDDEV(field_name) FROM tb_name [WHERE clause];
-```
-
-**Description**: Standard deviation of a specific column in a table or STable
-
-**Return value type**: Double precision floating number
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable (since version 2.0.15.1)
-
-**Examples**:
-
-```
-taos> SELECT STDDEV(current) FROM d1001;
- stddev(current) |
-============================
- 1.020892909 |
-Query OK, 1 row(s) in set (0.000915s)
-```
-
-### LEASTSQUARES
-
-```
-SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause];
-```
-
-**Description**: The linear regression function of the specified column and the timestamp column (primary key), `start_val` is the initial value and `step_val` is the step value.
-
-**Return value type**: A string in the format of "(slope, intercept)"
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table only
-
-**Examples**:
-
-```
-taos> SELECT LEASTSQUARES(current, 1, 1) FROM d1001;
- leastsquares(current, 1, 1) |
-=====================================================
-{slop:1.000000, intercept:9.733334} |
-Query OK, 1 row(s) in set (0.000921s)
-```
-
-### MODE
-
-```
-SELECT MODE(field_name) FROM tb_name [WHERE clause];
-```
-
-**Description**: The value which has the highest frequency of occurrence. NULL is returned if there are multiple values with the highest frequency of occurrence. It can't be used on the timestamp column or on tags.
-
-**Return value type**: Same as the data type of the column being operated upon
-
-**Applicable column types**: Data types except for timestamp
-
-**More explanations**: Since the size of the returned result set is unpredictable, it's suggested to limit the number of unique values to 100,000; otherwise an error will be returned.
-
-**Applicable version**: Since version 2.6.0.0
-
-**Examples**:
-
-```
-taos> select voltage from d002;
- voltage |
-========================
- 1 |
- 1 |
- 2 |
- 19 |
-Query OK, 4 row(s) in set (0.003545s)
-
-taos> select mode(voltage) from d002;
- mode(voltage) |
-========================
- 1 |
-Query OK, 1 row(s) in set (0.019393s)
-```
-
-### HYPERLOGLOG
-
-```
-SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The cardinality (number of distinct values) of a specific column is estimated using the hyperloglog algorithm.
-
-**Return value type**: Integer
-
-**Applicable column types**: Any data type
-
-**More explanations**: The benefit of using the hyperloglog algorithm is that the memory usage is under control when the data volume is huge. However, when the data volume is very small, the result may not be accurate; it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case.
-
-**Applicable versions**: Since version 2.6.0.0
-
-**Examples**:
-
-```
-taos> select dbig from shll;
- dbig |
-========================
- 1 |
- 1 |
- 1 |
- NULL |
- 2 |
- 19 |
- NULL |
- 9 |
-Query OK, 8 row(s) in set (0.003755s)
-
-taos> select hyperloglog(dbig) from shll;
- hyperloglog(dbig)|
-========================
- 4 |
-Query OK, 1 row(s) in set (0.008388s)
-```
-
-### HISTOGRAM
-
-```
-SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_name [WHERE clause];
-```
-
-**Description**: Returns the count of data points in user-specified ranges.
-
-**Return value type**: Double or INT64, depending on the normalized parameter setting.
-
-**Applicable column type**: Numerical types.
-
-**Applicable versions**: Since version 2.6.0.0.
-
-**Applicable table types**: table, STable
-
-**Explanations**:
-
-1. bin_type: parameter to indicate the bucket type; valid inputs are: "user_input", "linear_bin", "log_bin".
-2. bin_description: parameter that describes how to generate buckets; it can be in the following JSON formats for each bin_type respectively:
-
- - "user_input": "[1, 3, 5, 7]": User specified bin values.
-
- - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
- "start" - bin starting point.
- "width" - bin offset.
- "count" - number of bins generated.
- "infinity" - whether to add(-inf, inf)as start/end point in generated set of bins.
- The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].
-
- - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
- "start" - bin starting point.
- "factor" - exponential factor of bin offset.
- "count" - number of bins generated.
- "infinity" - whether to add(-inf, inf)as start/end point in generated range of bins.
- The above "log_bin" descriptor generates a set of bins:[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
-
-3. normalized: setting to 1/0 to turn on/off result normalization.
-
-**Example**:
-
-```mysql
-taos> SELECT HISTOGRAM(voltage, "user_input", "[1,3,5,7]", 1) FROM meters;
- histogram(voltage, "user_input", "[1,3,5,7]", 1) |
- =======================================================
- {"lower_bin":1, "upper_bin":3, "count":0.333333} |
- {"lower_bin":3, "upper_bin":5, "count":0.333333} |
- {"lower_bin":5, "upper_bin":7, "count":0.333333} |
- Query OK, 3 row(s) in set (0.004273s)
-
-taos> SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 1, "width": 3, "count": 3, "infinity": false}', 0) FROM meters;
- histogram(voltage, 'linear_bin', '{"start": 1, "width": 3, " |
- ===================================================================
- {"lower_bin":1, "upper_bin":4, "count":3} |
- {"lower_bin":4, "upper_bin":7, "count":3} |
- {"lower_bin":7, "upper_bin":10, "count":3} |
- Query OK, 3 row(s) in set (0.004887s)
-
-taos> SELECT HISTOGRAM(voltage, 'log_bin', '{"start": 1, "factor": 3, "count": 3, "infinity": true}', 0) FROM meters;
- histogram(voltage, 'log_bin', '{"start": 1, "factor": 3, "count" |
- ===================================================================
- {"lower_bin":-inf, "upper_bin":1, "count":3} |
- {"lower_bin":1, "upper_bin":3, "count":2} |
- {"lower_bin":3, "upper_bin":9, "count":6} |
- {"lower_bin":9, "upper_bin":27, "count":3} |
- {"lower_bin":27, "upper_bin":inf, "count":1} |
-```
-
-### ELAPSED
-
-```mysql
-SELECT ELAPSED(field_name[, time_unit]) FROM { tb_name | stb_name } [WHERE clause] [INTERVAL(interval [, offset]) [SLIDING sliding]];
-```
-
-**Description**: The `elapsed` function calculates the continuous time length in which there is valid data. If it's used with an `INTERVAL` clause, the returned result is the calculated time length within each time window; if it's used without an `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Please note that the return value of `elapsed` is the number of `time_unit` in the calculated time length. An example is shown after the notes below.
-
-**Return value type**: Double
-
-**Applicable column type**: Timestamp
-
-**Applicable versions**: Since version 2.6.0.0
-
-**Applicable tables**: table, STable, outer query of a nested query
-
-**Explanations**:
-- `field_name` parameter can only be the first column of a table, i.e. timestamp primary key.
-- The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default time unit.
-- It can be used with `INTERVAL` to get the valid time length within each time window. Please note that the return value is the same as the time window length for all time windows except the first and the last.
-- `order by asc/desc` has no effect on the result.
-- `group by tbname` must be used together when `elapsed` is used against a STable.
-- `group by` must NOT be used together when `elapsed` is used against a table or sub table.
-- When used in nested query, it's only applicable when the inner query outputs an implicit timestamp column as the primary key. For example, `select elapsed(ts) from (select diff(value) from sub1)` is legal usage while `select elapsed(ts) from (select * from sub1)` is not.
-- It can't be used with `leastsquares`, `diff`, `derivative`, `top`, `bottom`, `last_row`, `interp`.
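-
-A hedged example, again assuming the `meters` super table:
-
-```
-SELECT ELAPSED(ts, 1s) FROM meters WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:40:00.000' GROUP BY tbname;
-```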
-
-## Selection Functions
-
-When any selection function is used, the timestamp column or tag columns, including `tbname`, can be specified to show which rows the selected values come from.
-
-### MIN
-
-```
-SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
-```
-
-**Description**: The minimum value of a specific column in a table or STable
-
-**Return value type**: Same as the data type of the column being operated upon
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**Examples**:
-
-```
-taos> SELECT MIN(current), MIN(voltage) FROM meters;
- min(current) | min(voltage) |
-======================================
- 10.20000 | 218 |
-Query OK, 1 row(s) in set (0.001765s)
-
-taos> SELECT MIN(current), MIN(voltage) FROM d1001;
- min(current) | min(voltage) |
-======================================
- 10.30000 | 218 |
-Query OK, 1 row(s) in set (0.000950s)
-```
-
-### MAX
-
-```
-SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The maximum value of a specific column of a table or STable
-
-**Return value type**: Same as the data type of the column being operated upon
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**Examples**:
-
-```
-taos> SELECT MAX(current), MAX(voltage) FROM meters;
- max(current) | max(voltage) |
-======================================
- 13.40000 | 223 |
-Query OK, 1 row(s) in set (0.001123s)
-
-taos> SELECT MAX(current), MAX(voltage) FROM d1001;
- max(current) | max(voltage) |
-======================================
- 12.60000 | 221 |
-Query OK, 1 row(s) in set (0.000987s)
-```
-
-### FIRST
-
-```
-SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The first non-null value of a specific column in a table or STable
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Any data type
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- FIRST(\*) can be used to get the first non-null value of all columns
-- NULL will be returned if all the values of the specified column are NULL
-- A result will NOT be returned if the values of all columns in the result set are NULL
-
-**Examples**:
-
-```
-taos> SELECT FIRST(*) FROM meters;
- first(ts) | first(current) | first(voltage) | first(phase) |
-=========================================================================================
-2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 |
-Query OK, 1 row(s) in set (0.004767s)
-
-taos> SELECT FIRST(current) FROM d1002;
- first(current) |
-=======================
- 10.20000 |
-Query OK, 1 row(s) in set (0.001023s)
-```
-
-### LAST
-
-```
-SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The last non-NULL value of a specific column in a table or STable
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Any data type
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- LAST(\*) can be used to get the last non-NULL value of all columns
-- If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are all NULL, no result will be returned.
-- When it's used on a STable, if multiple rows have the same largest timestamp, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times.
-
-**Examples**:
-
-```
-taos> SELECT LAST(*) FROM meters;
- last(ts) | last(current) | last(voltage) | last(phase) |
-========================================================================================
-2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
-Query OK, 1 row(s) in set (0.001452s)
-
-taos> SELECT LAST(current) FROM d1002;
- last(current) |
-=======================
- 10.30000 |
-Query OK, 1 row(s) in set (0.000843s)
-```
-
-### TOP
-
-```
-SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The greatest _k_ values of a specific column in a table or STable. If a value occurs multiple times in the column and including all of its occurrences would exceed the upper limit _k_, a randomly selected subset of those occurrences is returned.
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- _k_ must be in range [1,100]
-- The timestamps associated with the selected values are returned too
-- Can't be used with `FILL`
-
-**Examples**:
-
-```
-taos> SELECT TOP(current, 3) FROM meters;
- ts | top(current, 3) |
-=================================================
-2018-10-03 14:38:15.000 | 12.60000 |
-2018-10-03 14:38:16.600 | 13.40000 |
-2018-10-03 14:38:16.800 | 12.30000 |
-Query OK, 3 row(s) in set (0.001548s)
-
-taos> SELECT TOP(current, 2) FROM d1001;
- ts | top(current, 2) |
-=================================================
-2018-10-03 14:38:15.000 | 12.60000 |
-2018-10-03 14:38:16.800 | 12.30000 |
-Query OK, 2 row(s) in set (0.000810s)
-```
-
-### BOTTOM
-
-```
-SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The least _k_ values of a specific column in a table or STable. If a value occurs multiple times in the column and including all of its occurrences would exceed the upper limit _k_, a randomly selected subset of those occurrences is returned.
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- _k_ must be in range [1,100]
-- The timestamps associated with the selected values are returned too
-- Can't be used with `FILL`
-
-**Examples**:
-
-```
-taos> SELECT BOTTOM(voltage, 2) FROM meters;
- ts | bottom(voltage, 2) |
-===============================================
-2018-10-03 14:38:15.000 | 218 |
-2018-10-03 14:38:16.650 | 218 |
-Query OK, 2 row(s) in set (0.001332s)
-
-taos> SELECT BOTTOM(current, 2) FROM d1001;
- ts | bottom(current, 2) |
-=================================================
-2018-10-03 14:38:05.000 | 10.30000 |
-2018-10-03 14:38:16.800 | 12.30000 |
-Query OK, 2 row(s) in set (0.000793s)
-```
-
-### PERCENTILE
-
-```
-SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause];
-```
-
-**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned.
-
-**Return value type**: Double precision floating point
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table
-
-**More explanations**: _P_ is in range [0,100]. When _P_ is 0, the result is the same as the MIN function; when _P_ is 100, the result is the same as MAX.
-
-**Examples**:
-
-```
-taos> SELECT PERCENTILE(current, 20) FROM d1001;
-percentile(current, 20) |
-============================
- 11.100000191 |
-Query OK, 1 row(s) in set (0.000787s)
-```
-
-### APERCENTILE
-
-```
-SELECT APERCENTILE(field_name, P[, algo_type])
-FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: Similar to `PERCENTILE`, but an approximate result is returned
-
-**Return value type**: Double precision floating point
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**
-
-- _P_ is in range [0,100]. When _P_ is 0, the result is the same as the MIN function; when _P_ is 100, the result is the same as MAX.
-- **algo_type** can only be `default` or `t-digest`. If it's not specified, `default` will be used, i.e. `apercentile(column_name, 50)` is the same as `apercentile(column_name, 50, "default")`.
-- When `t-digest` is specified, t-digest sampling is used for the calculation. It can be used from version 2.2.0.0.
-
-**Nested query**: It can be used in both the outer query and inner query in a nested query.
-
-```
-taos> SELECT APERCENTILE(current, 20) FROM d1001;
-apercentile(current, 20) |
-============================
- 10.300000191 |
-Query OK, 1 row(s) in set (0.000645s)
-
-taos> select apercentile (count, 80, 'default') from stb1;
- apercentile (c0, 80, 'default') |
-==================================
- 601920857.210056424 |
-Query OK, 1 row(s) in set (0.012363s)
-
-taos> select apercentile (count, 80, 't-digest') from stb1;
- apercentile (c0, 80, 't-digest') |
-===================================
- 605869120.966666579 |
-Query OK, 1 row(s) in set (0.011639s)
-```
-
-### LAST_ROW
-
-```
-SELECT LAST_ROW(field_name) FROM { tb_name | stb_name };
-```
-
-**Description**: The last row of a table or STable
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Any data type
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- When it's used against a STable, multiple rows with the same and largest timestamp may exist; in this case one of them is returned randomly, and it's not guaranteed that the result is the same if the query is run multiple times.
-- Can't be used with `INTERVAL`.
-
-**Examples**:
-
-```
- taos> SELECT LAST_ROW(current) FROM meters;
- last_row(current) |
- =======================
- 12.30000 |
- Query OK, 1 row(s) in set (0.001238s)
-
- taos> SELECT LAST_ROW(current) FROM d1002;
- last_row(current) |
- =======================
- 10.30000 |
- Query OK, 1 row(s) in set (0.001042s)
-```
-
-### INTERP [Since version 2.3.1]
-
-```
-SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ RANGE(timestamp1,timestamp2) ] [EVERY(interval)] [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})];
-```
-
-**Description**: The value that matches the specified timestamp range is returned, if existing; or an interpolation value is returned.
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Numeric data types
-
-**Applicable table types**: table, STable, nested query
-
-**More explanations**
-
-- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter.
-- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
-- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. If `RANGE` is not specified, then the timestamp of the first row that matches the filter condition is treated as timestamp1, the timestamp of the last row that matches the filter condition is treated as timestamp2.
-- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter. If the `EVERY` parameter is not used, there is only a single time window starting from timestamp1.
-- Interpolation is performed based on `FILL` parameter. No interpolation is performed if `FILL` is not used, that means either the original data that matches is returned or nothing is returned.
-- `INTERP` can only be used to interpolate within a single timeline. So it must be used with `group by tbname` when it's used on a STable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query.
-- The result of `INTERP` is not influenced by `ORDER BY TIMESTAMP`, which impacts the output order only.
-
-**Examples**: Based on the `meters` schema used throughout the documents
-
-- Single point linear interpolation at "2017-07-14 18:40:00":
-
-```
- taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:40:00','2017-7-14 18:40:00') FILL(LINEAR);
-```
-
-- Get original data every 5 seconds, no interpolation, between "2017-07-14 18:00:00" and "2017-07-14 19:00:00":
-
-```
- taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s);
-```
-
-- Linear interpolation every 5 seconds between "2017-07-14 18:00:00" and "2017-07-14 19:00:00":
-
-```
- taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR);
-```
-
-- Backward interpolation every 5 seconds
-
-```
- taos> SELECT INTERP(current) FROM t1 EVERY(5s) FILL(NEXT);
-```
-
-- Linear interpolation every 5 seconds between "2017-07-14 17:00:00" and "2017-07-14 20:00:00"
-
-```
- taos> SELECT INTERP(current) FROM t1 where ts >= '2017-07-14 17:00:00' and ts <= '2017-07-14 20:00:00' RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR);
-```
-
-### INTERP [Since version 2.0.15.0]
-
-```
-SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL ({ VALUE | PREV | NULL | LINEAR | NEXT})];
-```
-
-**Description**: The value of a specific column that matches the specified time slice
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Numeric data type
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Time slice must be specified. If there is no data matching the specified time slice, interpolation is performed based on the `FILL` parameter. Conditions such as tags or `tbname` can be used in the `WHERE` clause to filter data.
-- The timestamp specified must be within the time range of the data rows of the table or STable. If it is beyond the valid time range, nothing is returned even with `FILL` parameter.
-- `INTERP` can only be used to query a single time point at a time. `INTERP` can be used with `EVERY` to get the interpolated value at each time interval.
-
-**Examples**:
-
-```
- taos> SELECT INTERP(*) FROM meters WHERE ts='2017-7-14 18:40:00.004';
- interp(ts) | interp(current) | interp(voltage) | interp(phase) |
- ==========================================================================================
- 2017-07-14 18:40:00.004 | 9.84020 | 216 | 0.32222 |
- Query OK, 1 row(s) in set (0.002652s)
-```
-
-If there is no data corresponding to the specified timestamp, an interpolation value is returned if interpolation policy is specified by `FILL` parameter; or nothing is returned.
-
-```
- taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005';
- Query OK, 0 row(s) in set (0.004022s)
-
- taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005' FILL(PREV);
- interp(ts) | interp(current) | interp(voltage) | interp(phase) |
- ==========================================================================================
- 2017-07-14 18:40:00.005 | 9.88150 | 217 | 0.32500 |
- Query OK, 1 row(s) in set (0.003056s)
-```
-
-Interpolation is performed every 5 milliseconds between `['2017-7-14 18:40:00', '2017-7-14 18:40:00.014']`
-
-```
- taos> SELECT INTERP(current) FROM d636 WHERE ts>='2017-7-14 18:40:00' AND ts<='2017-7-14 18:40:00.014' EVERY(5a);
- ts | interp(current) |
- =================================================
- 2017-07-14 18:40:00.000 | 10.04179 |
- 2017-07-14 18:40:00.010 | 10.16123 |
- Query OK, 2 row(s) in set (0.003487s)
-```
-
-### TAIL
-
-```
-SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause];
-```
-
-**Description**: The last _k_ rows are returned after skipping the last `offset_val` rows; NULL values are not ignored. `offset_val` is an optional parameter; when it's not specified, the last _k_ rows are returned. When `offset_val` is used, the effect is the same as `order by ts desc LIMIT k OFFSET offset_val`.
-
-**Parameter value range**: k: [1,100] offset_val: [0,100]
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Any data type except for timestamp, i.e. the primary key
-
-**Applicable versions**: Since version 2.6.0.0
-
-**Examples**:
-
-```
-taos> select ts,dbig from tail2;
- ts | dbig |
-==================================================
-2021-10-15 00:31:33.000 | 1 |
-2021-10-17 00:31:31.000 | NULL |
-2021-12-24 00:31:34.000 | 2 |
-2022-01-01 08:00:05.000 | 19 |
-2022-01-01 08:00:06.000 | NULL |
-2022-01-01 08:00:07.000 | 9 |
-Query OK, 6 row(s) in set (0.001952s)
-
-taos> select tail(dbig,2,2) from tail2;
-ts | tail(dbig,2,2) |
-==================================================
-2021-12-24 00:31:34.000 | 2 |
-2022-01-01 08:00:05.000 | 19 |
-Query OK, 2 row(s) in set (0.002307s)
-```
-
-### UNIQUE
-
-```
-SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause];
-```
-
-**Description**: The first occurrence of each value in the specified column. The effect is similar to the `distinct` keyword, but it can also be used on tags or the timestamp column.
-
-**Return value type**: Same as the column or tag being operated upon
-
-**Applicable column types**: Any data types except for timestamp
-
-**Applicable versions**: Since version 2.6.0.0
-
-**More explanations**:
-
-- It can be used against table or STable, but can't be used together with time window, like `interval`, `state_window` or `session_window` .
-- Because the size of the result set is unpredictable, it's suggested to keep the number of distinct values under 100,000 to control memory usage; otherwise an error will be returned.
-
-**Examples**:
-
-```
-taos> select ts,voltage from unique1;
- ts | voltage |
-==================================================
-2021-10-17 00:31:31.000 | 1 |
-2022-01-24 00:31:31.000 | 1 |
-2021-10-17 00:31:31.000 | 1 |
-2021-12-24 00:31:31.000 | 2 |
-2022-01-01 08:00:01.000 | 19 |
-2021-10-17 00:31:31.000 | NULL |
-2022-01-01 08:00:02.000 | NULL |
-2022-01-01 08:00:03.000 | 9 |
-Query OK, 8 row(s) in set (0.003018s)
-
-taos> select unique(voltage) from unique1;
-ts | unique(voltage) |
-==================================================
-2021-10-17 00:31:31.000 | 1 |
-2021-10-17 00:31:31.000 | NULL |
-2021-12-24 00:31:31.000 | 2 |
-2022-01-01 08:00:01.000 | 19 |
-2022-01-01 08:00:03.000 | 9 |
-Query OK, 5 row(s) in set (0.108458s)
-```
-
-## Scalar functions
-
-### DIFF
-
-```sql
-SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHERE clause];
-```
-
-**Description**: The difference of each row from its previous row for a specific column. `ignore_negative` can be specified as 0 or 1; the default value is 1 if it's not specified. `1` means negative values are ignored.
-
-**Return value type**: Same as the column being operated upon
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- The number of result rows is one less than the number of input rows; there is no output for the first row
-- Since version 2.1.3.0, `DIFF` can be used on a STable with `GROUP BY tbname`
-- Since version 2.6.0, the `ignore_negative` parameter is supported
-
-**Examples**:
-
-```sql
-taos> SELECT DIFF(current) FROM d1001;
- ts | diff(current) |
-=================================================
-2018-10-03 14:38:15.000 | 2.30000 |
-2018-10-03 14:38:16.800 | -0.30000 |
-Query OK, 2 row(s) in set (0.001162s)
-```
-
-### DERIVATIVE
-
-```
-SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause];
-```
-
-**Description**: The derivative of a specific column. The time range can be specified by the parameter `time_interval`; the minimum allowed time range is 1 second (1s). The value of `ignore_negative` can be 0 or 1; 1 means negative values are ignored.
-
-**Return value type**: Double precision floating point
-
-**Applicable column types**: Data types except for timestamp, binary, nchar and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- It is available since version 2.1.3.0. The number of result rows is one less than the total number of rows in the time range; there is no output for the first row.
-- It can be used together with `GROUP BY tbname` against a STable.
-
-**Examples**:
-
-```
-taos> select derivative(current, 10m, 0) from t1;
- ts | derivative(current, 10m, 0) |
-========================================================
- 2021-08-20 10:11:22.790 | 0.500000000 |
- 2021-08-20 11:11:22.791 | 0.166666620 |
- 2021-08-20 12:11:22.791 | 0.000000000 |
- 2021-08-20 13:11:22.792 | 0.166666620 |
- 2021-08-20 14:11:22.792 | -0.666666667 |
-Query OK, 5 row(s) in set (0.004883s)
-```
-
-### SPREAD
-
-```
-SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The difference between the max and the min of a specific column
-
-**Return value type**: Double precision floating point
-
-**Applicable column types**: Data types except for binary, nchar, and bool
-
-**Applicable table types**: table, STable
-
-**More explanations**: It can be used on a column of TIMESTAMP type, in which case the result is the size of the time range.
-
-**Examples**:
-
-```
-taos> SELECT SPREAD(voltage) FROM meters;
- spread(voltage) |
-============================
- 5.000000000 |
-Query OK, 1 row(s) in set (0.001792s)
-
-taos> SELECT SPREAD(voltage) FROM d1001;
- spread(voltage) |
-============================
- 3.000000000 |
-Query OK, 1 row(s) in set (0.000836s)
-```
-
-### CEIL
-
-```
-SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The rounded up value of a specific column
-
-**Return value type**: Same as the column being used
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and outer query
-
-**More explanations**:
-
-- Can't be used on any tags of any type
-- Arithmetic operation can be performed on the result of `ceil` function
-- Can't be used with aggregate functions
-
-### FLOOR
-
-```
-SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The rounded down value of a specific column
-
-**More explanations**: The restrictions are the same as those of the `CEIL` function.
-
-### ROUND
-
-```
-SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The rounded value of a specific column.
-
-**More explanations**: The restrictions are the same as those of the `CEIL` function.
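-
-As an illustrative sketch (output omitted), the three rounding functions above can be applied to the same column of the `d1001` table used throughout these examples:
-
-```
-SELECT CEIL(current), FLOOR(current), ROUND(current) FROM d1001;
-```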
-
-### CSUM
-
-```sql
- SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The cumulative sum of each row for a specific column. The number of output rows is same as that of the input rows.
-
-**Return value type**: Long integer for integers; Double for floating points. Timestamp is returned for each row.
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, and bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**More explanations**:
-
-- Can't be used on tags when it's used on STable
-- Arithmetic operation can't be performed on the result of `csum` function
-- Can't be used with aggregate functions
-- `GROUP BY tbname` must be used when it's used on a STable to force the result into a single timeline
-
-**Applicable versions**: Since 2.3.0.x
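-
-**Examples**: An illustrative sketch only (output omitted); the cumulative sum is computed along each table's timeline, so `GROUP BY tbname` is added when querying a STable:
-
-```sql
-SELECT CSUM(current) FROM d1001;
-SELECT CSUM(current) FROM meters GROUP BY tbname;
-```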
-
-### MAVG
-
-```sql
- SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The moving average of continuous _k_ values of a specific column. If the number of input rows is less than _k_, nothing is returned. The applicable range of _k_ is [1,1000].
-
-**Return value type**: Double precision floating point
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, and bool
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Arithmetic operation can't be performed on the result of `MAVG`.
-- Can only be used with data columns, can't be used with tags.
-- Can't be used with aggregate functions.
-- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline.
-
-**Applicable versions**: Since 2.3.0.x
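-
-**Examples**: An illustrative sketch only (output omitted); a 10-point moving average of `current`, per table:
-
-```sql
-SELECT MAVG(current, 10) FROM d1001;
-SELECT MAVG(current, 10) FROM meters GROUP BY tbname;
-```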
-
-### SAMPLE
-
-```sql
- SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: _k_ sampling values of a specific column. The applicable range of _k_ is [1,10000]
-
-**Return value type**: Same as the column being operated plus the associated timestamp
-
-**Applicable data types**: Any data type except for tags of STable
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**More explanations**:
-
-- Arithmetic operation can't be performed on the result of the `SAMPLE` function
-- Must be used with `Group by tbname` when it's used on a STable to force the result on each single timeline
-
-**Applicable versions**: Since 2.3.0.x
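-
-**Examples**: An illustrative sketch only (output omitted); 5 sampled values of `current` from a single table, and per subtable of a STable:
-
-```sql
-SELECT SAMPLE(current, 5) FROM d1001;
-SELECT SAMPLE(current, 5) FROM meters GROUP BY tbname;
-```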
-
-### ASIN
-
-```sql
-SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The arc sine of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### ACOS
-
-```sql
-SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The arc cosine of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### ATAN
-
-```sql
-SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The arc tangent of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
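-
-As an illustrative sketch (output omitted), the three inverse trigonometric functions above can be combined in one query; assuming `phase` stays within [-1, 1], as it does in the sample data, ASIN and ACOS are defined for it:
-
-```sql
-SELECT ASIN(phase), ACOS(phase), ATAN(phase) FROM d1001;
-```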
-
-### SIN
-
-```sql
-SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The sine of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### COS
-
-```sql
-SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The cosine of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### TAN
-
-```sql
-SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The tangent of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
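-
-Similarly, an illustrative sketch combining the three trigonometric functions above (output omitted):
-
-```sql
-SELECT SIN(phase), COS(phase), TAN(phase) FROM d1001;
-```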
-
-### POW
-
-```sql
-SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The power of a specific column with `power` as the exponent
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### LOG
-
-```sql
-SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The logarithm of a specific column with `base` as the base
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
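-
-An illustrative sketch for `POW` and `LOG` (output omitted); the second argument is the exponent for `POW` and the base for `LOG`:
-
-```sql
-SELECT POW(current, 2), LOG(voltage, 10) FROM d1001;
-```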
-
-### ABS
-
-```sql
-SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The absolute value of a specific column
-
-**Return value type**: UBIGINT if the input value is integer; DOUBLE if the input value is FLOAT/DOUBLE
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
-
-### SQRT
-
-```sql
-SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The square root of a specific column
-
-**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Can't be used with tags
-- Can't be used with aggregate functions
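-
-An illustrative sketch for `ABS` and `SQRT` (output omitted):
-
-```sql
-SELECT ABS(phase), SQRT(voltage) FROM d1001;
-```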
-
-### CAST
-
-```sql
-SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: It's used for type casting. The input parameter `expression` can be data columns, constants, scalar functions or arithmetic between them. Can't be used with tags, and can only be used in `select` clause.
-
-**Return value type**: The type specified by parameter `type_name`
-
-**Applicable data types**:
-
-- Parameter `expression` can be any data type except for JSON, more specifically it can be any of BOOL/TINYINT/SMALLINT/INT/BIGINT/FLOAT/DOUBLE/BINARY(M)/TIMESTAMP/NCHAR(M)/TINYINT UNSIGNED/SMALLINT UNSIGNED/INT UNSIGNED/BIGINT UNSIGNED
-- The output data type specified by `type_name` can only be one of BIGINT/BINARY(N)/TIMESTAMP/NCHAR(N)/BIGINT UNSIGNED
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Error will be reported for unsupported type casting
-- NULL will be returned if the input value is NULL
-- Some values of supported data types may not be cast correctly; below are known issues:
- 1) When casting BINARY/NCHAR to BIGINT/BIGINT UNSIGNED, some characters may be treated as illegal, for example "a" may be converted to 0.
- 2) There may be overflow when casting a signed integer or TIMESTAMP to unsigned BIGINT.
- 3) There may be overflow when casting unsigned BIGINT to BIGINT.
- 4) There may be overflow when casting FLOAT/DOUBLE to BIGINT or UNSIGNED BIGINT.
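-
-**Examples**: Illustrative sketches only; the exact output depends on the stored values and the database precision:
-
-```sql
-SELECT CAST(voltage AS BIGINT) FROM d1001;
-SELECT CAST(ts AS NCHAR(32)) FROM d1001;
-```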
-
-### CONCAT
-
-```sql
-SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The concatenation result of two or more strings, the number of strings to be concatenated is at least 2 and at most 8
-
-**Return value type**: Same as the columns being operated upon, BINARY or NCHAR; or NULL if all the inputs are NULL
-
-**Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
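-
-**Examples**: A minimal sketch assuming a hypothetical table `str_tb` with two BINARY data columns `s1` and `s2` (tag columns can't be used here):
-
-```sql
-SELECT CONCAT(s1, '-', s2) FROM str_tb;
-```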
-
-### CONCAT_WS
-
-```
-SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The concatenation result of two or more strings with separator, the number of strings to be concatenated is at least 3 and at most 9
-
-**Return value type**: Same as the columns being operated upon, BINARY or NCHAR; or NULL if all the inputs are NULL
-
-**Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- If the value of `separator` is NULL, the output is NULL. If the value of `separator` is not NULL but the other inputs are all NULL, the output is an empty string.
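-
-**Examples**: Using the same hypothetical `str_tb` table, the separator is inserted between every pair of inputs:
-
-```sql
-SELECT CONCAT_WS('-', s1, s2, 'suffix') FROM str_tb;
-```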
-
-### LENGTH
-
-```
-SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The length in bytes of a string
-
-**Return value type**: Integer
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
-
-### CHAR_LENGTH
-
-```
-SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The length in number of characters of a string
-
-**Return value type**: Integer
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
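-
-The difference between `LENGTH` and `CHAR_LENGTH` only shows up for multi-byte characters. As an illustrative sketch, assuming the hypothetical `str_tb` table also has an NCHAR column `n1`, `LENGTH` counts bytes while `CHAR_LENGTH` counts characters:
-
-```sql
-SELECT LENGTH(n1), CHAR_LENGTH(n1) FROM str_tb;
-```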
-
-### LOWER
-
-```
-SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: Convert the input string to lower case
-
-**Return value type**: Same as input
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
-
-### UPPER
-
-```
-SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: Convert the input string to upper case
-
-**Return value type**: Same as input
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
-
-### LTRIM
-
-```
-SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: Remove the leading blanks of a string
-
-**Return value type**: Same as input
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
-
-### RTRIM
-
-```
-SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: Remove the trailing blanks of a string
-
-**Return value type**: Same as input
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**
-
-- If the input value is NULL, the output is NULL too
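-
-An illustrative sketch combining the four string functions above, again on the hypothetical `str_tb` table (output omitted):
-
-```sql
-SELECT LOWER(s1), UPPER(s1), LTRIM(s1), RTRIM(s1) FROM str_tb;
-```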
-
-### SUBSTR
-
-```
-SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause]
-```
-
-**Description**: The sub-string starting from `pos` with length of `len` from the original string `str`
-
-**Return value type**: Same as input
-
-**Applicable data types**: BINARY or NCHAR, can't be used on tags
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Inner query and Outer query
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- If the input is NULL, the output is NULL
-- Parameter `pos` can be a positive or negative integer; if it's positive, the starting position is counted from the beginning of the string; if it's negative, the starting position is counted from the end of the string.
-- If `len` is not specified, it means from `pos` to the end.
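-
-**Examples**: Illustrative sketches on the hypothetical `str_tb` table; the first takes 3 characters starting from position 1, the second takes everything from the 2nd character counting back from the end:
-
-```sql
-SELECT SUBSTR(s1, 1, 3) FROM str_tb;
-SELECT SUBSTR(s1, -2) FROM str_tb;
-```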
-
-### Arithmetic Operations
-
-```
-SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The sum, difference, product, quotient, or remainder computed from one or more columns and/or numeric constants
-
-**Return value type**: Double precision floating point
-
-**Applicable column types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Arithmetic operations can be performed on two or more columns. Parentheses `()` can be used to control the order of precedence.
-- NULL doesn't participate in the operation, i.e. if one of the operands is NULL then the result is NULL.
-
-**Examples**:
-
-```
-taos> SELECT current + voltage * phase FROM d1001;
-(current+(voltage*phase)) |
-============================
- 78.190000713 |
- 84.540003240 |
- 80.810000718 |
-Query OK, 3 row(s) in set (0.001046s)
-```
-
-### STATECOUNT
-
-```
-SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The number of continuous rows satisfying the specified conditions for a specific column. The result is shown as an extra column for each row. If the specified condition is evaluated as true, the number is increased by 1; otherwise the number is reset to -1. If the input value is NULL, then the corresponding row is skipped.
-
-**Applicable parameter values**:
-
-- oper : Can be one of LT (lower than), GT (greater than), LE (lower than or equal to), GE (greater than or equal to), NE (not equal to), EQ (equal to); the value is case insensitive
-- val : Numeric types
-
-**Return value type**: Integer
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Outer query only
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
-- Can't be used with window operation, like interval/state_window/session_window
-
-**Examples**:
-
-```
-taos> select ts,dbig from statef2;
- ts | dbig |
-========================================================
-2021-10-15 00:31:33.000000000 | 1 |
-2021-10-17 00:31:31.000000000 | NULL |
-2021-12-24 00:31:34.000000000 | 2 |
-2022-01-01 08:00:05.000000000 | 19 |
-2022-01-01 08:00:06.000000000 | NULL |
-2022-01-01 08:00:07.000000000 | 9 |
-Query OK, 6 row(s) in set (0.002977s)
-
-taos> select stateCount(dbig,GT,2) from statef2;
-ts | dbig | statecount(dbig,gt,2) |
-================================================================================
-2021-10-15 00:31:33.000000000 | 1 | -1 |
-2021-10-17 00:31:31.000000000 | NULL | NULL |
-2021-12-24 00:31:34.000000000 | 2 | -1 |
-2022-01-01 08:00:05.000000000 | 19 | 1 |
-2022-01-01 08:00:06.000000000 | NULL | NULL |
-2022-01-01 08:00:07.000000000 | 9 | 2 |
-Query OK, 6 row(s) in set (0.002791s)
-```
-
-### STATEDURATION
-
-```
-SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The length of time range in which all rows satisfy the specified condition for a specific column. The result is shown as an extra column for each row. The length for the first row that satisfies the condition is 0. Next, if the condition is evaluated as true for a row, the time interval between current row and its previous row is added up to the time range; otherwise the time range length is reset to -1. If the value of the column is NULL, the corresponding row is skipped.
-
-**Applicable parameter values**:
-
-- oper : Can be one of LT (lower than), GT (greater than), LE (lower than or equal to), GE (greater than or equal to), NE (not equal to), EQ (equal to); the value is case insensitive
-- val : Numeric types
-- unit: The unit of time interval, can be [1s, 1m, 1h], default is 1s
-
-**Return value type**: Integer
-
-**Applicable data types**: Data types except for timestamp, binary, nchar, bool
-
-**Applicable table types**: table, STable
-
-**Applicable nested query**: Outer query only
-
-**Applicable versions**: From 2.6.0.0
-
-**More explanations**:
-
-- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
-- Can't be used with window operation, like interval/state_window/session_window
-
-**Examples**:
-
-```
-taos> select ts,dbig from statef2;
- ts | dbig |
-========================================================
-2021-10-15 00:31:33.000000000 | 1 |
-2021-10-17 00:31:31.000000000 | NULL |
-2021-12-24 00:31:34.000000000 | 2 |
-2022-01-01 08:00:05.000000000 | 19 |
-2022-01-01 08:00:06.000000000 | NULL |
-2022-01-01 08:00:07.000000000 | 9 |
-Query OK, 6 row(s) in set (0.002407s)
-
-taos> select stateDuration(dbig,GT,2) from statef2;
-ts | dbig | stateduration(dbig,gt,2) |
-===================================================================================
-2021-10-15 00:31:33.000000000 | 1 | -1 |
-2021-10-17 00:31:31.000000000 | NULL | NULL |
-2021-12-24 00:31:34.000000000 | 2 | -1 |
-2022-01-01 08:00:05.000000000 | 19 | 0 |
-2022-01-01 08:00:06.000000000 | NULL | NULL |
-2022-01-01 08:00:07.000000000 | 9 | 2 |
-Query OK, 6 row(s) in set (0.002613s)
-```
-
-## Time Functions
-
-Since version 2.6.0.0, the time-related functions below can be used in TDengine.
-
-### NOW
-
-```sql
-SELECT NOW() FROM { tb_name | stb_name } [WHERE clause];
-SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operator NOW();
-INSERT INTO tb_name VALUES (NOW(), ...);
-```
-
-**Description**: The current time of the client side system
-
-**Return value type**: TIMESTAMP
-
-**Applicable column types**: TIMESTAMP only
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Addition and subtraction operations can be performed, for example NOW() + 1s. The time unit can be:
- b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), w(week)
-- The precision of the returned timestamp is the same as the precision set for the current database in use
-
-**Examples**:
-
-```sql
-taos> SELECT NOW() FROM meters;
- now() |
-==========================
- 2022-02-02 02:02:02.456 |
-Query OK, 1 row(s) in set (0.002093s)
-
-taos> SELECT NOW() + 1h FROM meters;
- now() + 1h |
-==========================
- 2022-02-02 03:02:02.456 |
-Query OK, 1 row(s) in set (0.002093s)
-
-taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < NOW();
- count(voltage) |
-=============================
- 5 |
-Query OK, 5 row(s) in set (0.004475s)
-
-taos> INSERT INTO d1001 VALUES (NOW(), 10.2, 219, 0.32);
-Query OK, 1 of 1 row(s) in database (0.002210s)
-```
-
-### TODAY
-
-```sql
-SELECT TODAY() FROM { tb_name | stb_name } [WHERE clause];
-SELECT select_expr FROM { tb_name | stb_name } WHERE ts_col cond_operator TODAY();
-INSERT INTO tb_name VALUES (TODAY(), ...);
-```
-
-**Description**: The timestamp of 00:00:00 of the current day on the client side system
-
-**Return value type**: TIMESTAMP
-
-**Applicable column types**: TIMESTAMP only
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Addition and subtraction operations can be performed, for example TODAY() + 1s. The time unit can be:
- b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), w(week)
-- The precision of the returned timestamp is the same as the precision set for the current database in use
-
-**Examples**:
-
-```sql
-taos> SELECT TODAY() FROM meters;
- today() |
-==========================
- 2022-02-02 00:00:00.000 |
-Query OK, 1 row(s) in set (0.002093s)
-
-taos> SELECT TODAY() + 1h FROM meters;
- today() + 1h |
-==========================
- 2022-02-02 01:00:00.000 |
-Query OK, 1 row(s) in set (0.002093s)
-
-taos> SELECT COUNT(voltage) FROM d1001 WHERE ts < TODAY();
- count(voltage) |
-=============================
- 5 |
-Query OK, 5 row(s) in set (0.004475s)
-
-taos> INSERT INTO d1001 VALUES (TODAY(), 10.2, 219, 0.32);
-Query OK, 1 of 1 row(s) in database (0.002210s)
-```
-
-### TIMEZONE
-
-```sql
-SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The timezone of the client side system
-
-**Return value type**: BINARY
-
-**Applicable column types**: None
-
-**Applicable table types**: table, STable
-
-**Examples**:
-
-```sql
-taos> SELECT TIMEZONE() FROM meters;
- timezone() |
-=================================
- UTC (UTC, +0000) |
-Query OK, 1 row(s) in set (0.002093s)
-```
-
-### TO_ISO8601
-
-```sql
-SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The ISO8601 date/time format converted from a UNIX timestamp, plus the timezone of the client side system
-
-**Return value type**: BINARY
-
-**Applicable column types**: TIMESTAMP, constant or a column
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- If the input is a UNIX timestamp constant, the precision of the returned value is determined by the digits of the input timestamp
-- If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use
-
-**Examples**:
-
-```sql
-taos> SELECT TO_ISO8601(1643738400) FROM meters;
- to_iso8601(1643738400) |
-==============================
- 2022-02-02T02:00:00+0800 |
-
-taos> SELECT TO_ISO8601(ts) FROM meters;
- to_iso8601(ts) |
-==============================
- 2022-02-02T02:00:00+0800 |
- 2022-02-02T02:00:00+0800 |
- 2022-02-02T02:00:00+0800 |
-```
-
-### TO_UNIXTIMESTAMP
-
-```sql
-SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: UNIX timestamp converted from a string of date/time format
-
-**Return value type**: Long integer
-
-**Applicable column types**: Constant or column of BINARY/NCHAR
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- The input string must be compatible with ISO8601/RFC3339 standard, 0 will be returned if the string can't be converted
-- The precision of the returned timestamp is the same as the precision set for the current database in use
-
-**Examples**:
-
-```sql
-taos> SELECT TO_UNIXTIMESTAMP("2022-02-02T02:00:00.000Z") FROM meters;
-to_unixtimestamp("2022-02-02T02:00:00.000Z") |
-==============================================
- 1643767200000 |
-
-taos> SELECT TO_UNIXTIMESTAMP(col_binary) FROM meters;
- to_unixtimestamp(col_binary) |
-========================================
- 1643767200000 |
- 1643767200000 |
- 1643767200000 |
-```
-
-### TIMETRUNCATE
-
-```sql
-SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: Truncate the input timestamp with unit specified by `time_unit`
-
-**Return value type**: TIMESTAMP
-
-**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of timestamp
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Time unit specified by `time_unit` can be:
- 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day).
-- The precision of the returned timestamp is the same as the precision set for the current database in use
-
-**Examples**:
-
-```sql
-taos> SELECT TIMETRUNCATE(1643738522000, 1h) FROM meters;
- timetruncate(1643738522000, 1h) |
-===================================
- 2022-02-02 02:00:00.000 |
-Query OK, 1 row(s) in set (0.001499s)
-
-taos> SELECT TIMETRUNCATE("2022-02-02 02:02:02", 1h) FROM meters;
- timetruncate("2022-02-02 02:02:02", 1h) |
-===========================================
- 2022-02-02 02:00:00.000 |
-Query OK, 1 row(s) in set (0.003903s)
-
-taos> SELECT TIMETRUNCATE(ts, 1h) FROM meters;
- timetruncate(ts, 1h) |
-==========================
- 2022-02-02 02:00:00.000 |
- 2022-02-02 02:00:00.000 |
- 2022-02-02 02:00:00.000 |
-Query OK, 3 row(s) in set (0.003903s)
-```
-
-### TIMEDIFF
-
-```sql
-SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause];
-```
-
-**Description**: The difference between two timestamps, and rounded to the time unit specified by `time_unit`
-
-**Return value type**: Long Integer
-
-**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of TIMESTAMP type
-
-**Applicable table types**: table, STable
-
-**More explanations**:
-
-- Time unit specified by `time_unit` can be:
- 1u(microsecond),1a(millisecond),1s(second),1m(minute),1h(hour),1d(day).
-- The precision of the returned timestamp is the same as the precision set for the current database in use
-
-**Applicable versions**: Since version 2.6.0.0
-
-**Examples**:
-
-```sql
-taos> SELECT TIMEDIFF(1643738400000, 1643742000000) FROM meters;
- timediff(1643738400000, 1643742000000) |
-=========================================
- 3600000 |
-Query OK, 1 row(s) in set (0.002553s)
-taos> SELECT TIMEDIFF(1643738400000, 1643742000000, 1h) FROM meters;
- timediff(1643738400000, 1643742000000, 1h) |
-=============================================
- 1 |
-Query OK, 1 row(s) in set (0.003726s)
-
-taos> SELECT TIMEDIFF("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) FROM meters;
- timediff("2022-02-02 03:00:00", "2022-02-02 02:00:00", 1h) |
-=============================================================
- 1 |
-Query OK, 1 row(s) in set (0.001937s)
-
-taos> SELECT TIMEDIFF(ts_col1, ts_col2, 1h) FROM meters;
- timediff(ts_col1, ts_col2, 1h) |
-===================================
- 1 |
-Query OK, 1 row(s) in set (0.001937s)
-```
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
deleted file mode 100644
index acfb0de0e1521fd8c6a068497a3df7a17941524c..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/08-interval.md
+++ /dev/null
@@ -1,113 +0,0 @@
----
-sidebar_label: Interval
-title: Aggregate by Time Window
----
-
-Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window.
-Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window.
-
-## Time Window
-
-The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e] and [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
-
-
-
-`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
-
-```
-SELECT * FROM temp_tb_1 INTERVAL(1m);
-```
-
-The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`.
-
-```
-SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
-```
-
-When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. Since version 2.1.5.0, the minimum time range specified by `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to the same value in the `taos.cfg` configuration file on the client side and server side.
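-
-As an illustrative sketch, the query below counts the rows of `temp_tb_1` in 10-minute windows that are re-evaluated every 5 minutes, i.e. adjacent windows overlap by 5 minutes:
-
-```
-SELECT COUNT(*) FROM temp_tb_1 INTERVAL(10m) SLIDING(5m);
-```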
-
-## Status Window
-
-In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
-
-
-
-`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
-
-```
-SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
-```
-
-## Session Window
-
-```sql
-SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
-```
-
-The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 session windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
-
-
-
-If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
-
-## More On Window Aggregate
-
-### Syntax
-
-The full syntax of aggregate by window is as follows:
-
-```sql
-SELECT function_list FROM tb_name
- [WHERE where_condition]
- [SESSION(ts_col, tol_val)]
- [STATE_WINDOW(col)]
- [INTERVAL(interval [, offset]) [SLIDING sliding]]
- [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
-
-SELECT function_list FROM stb_name
- [WHERE where_condition]
- [INTERVAL(interval [, offset]) [SLIDING sliding]]
- [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
- [GROUP BY tags]
-```
-
-### Restrictions
-
-- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used.
-- `LAST_ROW` can't be used together with window aggregate.
-- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
-- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
-- `FILL` clause is used to specify how to fill when there is data missing in any window, including:
-  1. NONE: No fill (the default fill mode)
-  2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`
-  3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
-  4. NULL: Fill with NULL, `FILL(NULL)`
-  5. LINEAR: Fill with the closest non-NULL value, `FILL(LINEAR)`
-  6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
-
-:::info
-
-1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
-2. The result set is in ascending order of timestamp when you aggregate by time window.
-3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
-
-:::
-
-Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query).
-
-## Examples
-
-A table of intelligent meters can be created by the SQL statement below:
-
-```sql
-CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-```
-
-The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values.
-
-```
-SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
- WHERE ts>=NOW-1d and ts<=now
- INTERVAL(10m)
- FILL(PREV);
-```
diff --git a/docs-en/12-taos-sql/09-limit.md b/docs-en/12-taos-sql/09-limit.md
deleted file mode 100644
index db55cdd69e7bd29ca66ee15b61f28991568d9556..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/09-limit.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: Limits & Restrictions
----
-
-## Naming Rules
-
-1. Only characters from the English alphabet, digits and underscore are allowed
-2. Names cannot start with a digit
-3. Case insensitive without escape character "\`"
-4. Identifier with escape character "\`"
- To support more flexible table or column names, a new escape character "\`" is introduced. For more details please refer to [escape](/taos-sql/escape).
-
-## Password Rule
-
-The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
-
-## General Limits
-
-- Maximum length of database name is 32 bytes.
-- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator.
-- Maximum length of each data row is 48K bytes since version 2.1.7.0, before which the limit was 16K bytes. Please note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
-- Maximum length of column name is 64.
-- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
-- Maximum length of tag name is 64.
-- Maximum number of tags is 128. There must be at least 1 tag. The total length of tag values should not exceed 16K bytes.
-- Maximum length of a single SQL statement is 1048576 bytes, i.e. 1 MB. It can be configured with the parameter `maxSQLLength` on the client side; the applicable range is [65480, 1048576].
-- At most 4096 columns (or 1024 prior to 2.1.7.0) can be returned by `SELECT`. Functions in the query statement constitute columns. An error is returned if the limit is exceeded.
-- Maximum numbers of databases, STables, tables are dependent only on the system resources.
-- Maximum length of database name is 32 bytes, and it can't include "." or special characters.
-- Maximum number of replicas for a database is 3.
-- Maximum length of user name is 23 bytes.
-- Maximum length of password is 15 bytes.
-- Maximum number of rows depends only on the storage space.
-- Maximum number of tables depends only on the number of nodes.
-- Maximum number of databases depends only on the number of nodes.
-- Maximum number of vnodes for a single database is 64.
-
-## Restrictions of `GROUP BY`
-
-`GROUP BY` can be performed on tags and `TBNAME`. It can also be performed on data columns, with the restriction that only one data column can be used and the number of unique values in that column must be lower than 100,000. Please note that `GROUP BY` cannot be performed on float or double types.
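-
-For example, both of the queries below are allowed (an illustrative sketch, assuming the `meters` STable defined elsewhere in this documentation, with tag `location`):
-
-```sql
-SELECT COUNT(*) FROM meters GROUP BY location;
-SELECT COUNT(*) FROM meters GROUP BY TBNAME;
-```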
-
-## Restrictions of `IS NOT NULL`
-
-`IS NOT NULL` can be used on any data type of columns. The non-empty string evaluation expression, i.e. `< > ""` can only be used on non-numeric data types.
-
-## Restrictions of `ORDER BY`
-
-- Only one `order by` is allowed for normal table and subtable.
-- At most two `order by` are allowed for STable, and the second one must be `ts`.
-- `order by tag` must be used with `group by tag` on same tag. This rule is also applicable to `tbname`.
-- `order by column` must be used with `group by column` or `top/bottom` on same column. This rule is applicable to table and STable.
-- `order by ts` is applicable to table and STable.
-- If `order by ts` is used with `group by`, the result set is sorted using `ts` in each group.
-
-## Restrictions of Table/Column Names
-
-### Name Restrictions of Table/Column
-
-The name of a table or column can only be composed of ASCII characters, digits and underscore and it cannot start with a digit. The maximum length is 192 bytes. Names are case insensitive. The name mentioned in this rule doesn't include the database name prefix and the separator.
-
-### Name Restrictions After Escaping
-
-To support more flexible table or column names, a new escape character "\`" is introduced in TDengine to avoid conflicts between table names and keywords and to relax the above restrictions on table names. The escape character is not counted in the length of a table name.
-
-With escaping, the string inside the escape characters is case sensitive, i.e. it will not be converted to lower case internally.
-
-For example:
-\`aBc\` and \`abc\` are different table or column names, but "abc" and "aBc" are same names because internally they are all "abc".
-
-:::note
-The characters inside escape characters must be printable characters.
-
-:::
-
-### Applicable Versions
-
-Escape character "\`" is available from version 2.3.0.1.
diff --git a/docs-en/12-taos-sql/10-json.md b/docs-en/12-taos-sql/10-json.md
deleted file mode 100644
index 7460a5e0ba3ce78ee7744569cda460c477cac19c..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/10-json.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: JSON Type
----
-
-## Syntax
-
-1. Tag of type JSON
-
- ```sql
- create STable s1 (ts timestamp, v1 int) tags (info json);
-
- create table s1_1 using s1 tags ('{"k1": "v1"}');
- ```
-
-2. "->" Operator of JSON
-
- ```sql
- select * from s1 where info->'k1' = 'v1';
-
- select info->'k1' from s1;
- ```
-
-3. "contains" Operator of JSON
-
- ```sql
- select * from s1 where info contains 'k2';
-
- select * from s1 where info contains 'k1';
- ```
-
-## Applicable Operations
-
-1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is not null` can be used but `in` can't be used.
-
- ```sql
- select * from s1 where info->'k1' match 'v*';
-
- select * from s1 where info->'k1' like 'v%' and info contains 'k2';
-
- select * from s1 where info is null;
-
- select * from s1 where info->'k1' is not null;
- ```
-
-2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and sub query; for example `group by json->'key'`
-
-3. `Distinct` can be used with a tag of type JSON
-
- ```sql
- select distinct info->'k1' from s1;
- ```
-
-4. Tag Operations
-
- The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this.
-
- The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
-
-## Other Restrictions
-
-- JSON type can only be used for a tag. There can be only one tag of JSON type, and it's exclusive to any other types of tags.
-
-- The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes.
-
-- JSON format:
-
- - The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be non-NULL string, bool or array.
- - object can be {}, and the entire JSON is empty if so. Key can be "", and it's ignored if so.
- - value can be int, double, string, bool or NULL, and it can't be an array. Nesting is not allowed which means that the value of a key can't be JSON.
- - If one key occurs twice in JSON, only the first one is valid.
- - Escape characters are not allowed in JSON.
-
-- NULL is returned when querying a key that doesn't exist in JSON.
-
-- If a tag of JSON is the result of inner query, it can't be parsed and queried in the outer query.
-
-For example, the SQL statements below are not supported.
-
-```sql
-select jtag->'key' from (select jtag from STable);
-select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
-```
diff --git a/docs-en/12-taos-sql/11-escape.md b/docs-en/12-taos-sql/11-escape.md
deleted file mode 100644
index 34ce9f7848a9d60811a23286a6675e8afa4f04fe..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/11-escape.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Escape Characters
----
-
-The table below lists the escape characters used in TDengine.
-
-| Escape Character | **Actual Meaning** |
-| :--------------: | ------------------------ |
-| `\'` | Single quote ' |
-| `\"` | Double quote " |
-| \n | Line Break |
-| \r | Carriage Return |
-| \t | tab |
-| `\\` | Back Slash \ |
-| `\%` | % see below for details |
-| `\_` | \_ see below for details |
-
-:::note
-Escape characters are available from version 2.4.0.4 .
-
-:::
-
-## Restrictions
-
-1. If there are escape characters in identifiers (database name, table name, column name)
- Identifier not quoted with "\`": An error will be returned, because an identifier must consist of digits, ASCII letters or underscores and can't start with a digit
- Identifier quoted with "\`": The original content is kept as-is without escaping
-2. If there are escape characters in values
- - The escape sequences are interpreted according to the above table. If an escape sequence doesn't match any supported one, the backslash "\" is ignored.
- - "%" and "\_" are used as wildcards in `like`. `\%` and `\_` should be used to represent a literal "%" and "\_" in `like`. If `\%` and `\_` are used outside of a `like` context, the evaluation result is "`\%`" and "`\_`", instead of "%" and "\_". See the example below.
diff --git a/docs-en/12-taos-sql/12-keywords.md b/docs-en/12-taos-sql/12-keywords.md
deleted file mode 100644
index 56a82a02a1fada712141f3572b761e0cd18576c6..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/12-keywords.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: Keywords
----
-
-There are about 200 keywords reserved by TDengine. They can't be used as the name of a database, STable or table, regardless of whether they are written in upper case, lower case or mixed case.
-
-**Keywords List**
-
-| | | | | |
-| ----------- | ---------- | --------- | ---------- | ------------ |
-| ABORT | CREATE | IGNORE | NULL | STAR |
-| ACCOUNT | CTIME | IMMEDIATE | OF | STATE |
-| ACCOUNTS | DATABASE | IMPORT | OFFSET | STATEMENT |
-| ADD | DATABASES | IN | OR | STATE_WINDOW |
-| AFTER | DAYS | INITIALLY | ORDER | STORAGE |
-| ALL | DBS | INSERT | PARTITIONS | STREAM |
-| ALTER | DEFERRED | INSTEAD | PASS | STREAMS |
-| AND | DELIMITERS | INT | PLUS | STRING |
-| AS | DESC | INTEGER | PPS | SYNCDB |
-| ASC | DESCRIBE | INTERVAL | PRECISION | TABLE |
-| ATTACH | DETACH | INTO | PREV | TABLES |
-| BEFORE | DISTINCT | IS | PRIVILEGE | TAG |
-| BEGIN | DIVIDE | ISNULL | QTIME | TAGS |
-| BETWEEN | DNODE | JOIN | QUERIES | TBNAME |
-| BIGINT | DNODES | KEEP | QUERY | TIMES |
-| BINARY | DOT | KEY | QUORUM | TIMESTAMP |
-| BITAND | DOUBLE | KILL | RAISE | TINYINT |
-| BITNOT | DROP | LE | REM | TOPIC |
-| BITOR | EACH | LIKE | REPLACE | TOPICS |
-| BLOCKS | END | LIMIT | REPLICA | TRIGGER |
-| BOOL | EQ | LINEAR | RESET | TSERIES |
-| BY | EXISTS | LOCAL | RESTRICT | UMINUS |
-| CACHE | EXPLAIN | LP | ROW | UNION |
-| CACHELAST | FAIL | LSHIFT | RP | UNSIGNED |
-| CASCADE | FILE | LT | RSHIFT | UPDATE |
-| CHANGE | FILL | MATCH | SCORES | UPLUS |
-| CLUSTER | FLOAT | MAXROWS | SELECT | USE |
-| COLON | FOR | MINROWS | SEMI | USER |
-| COLUMN | FROM | MINUS | SESSION | USERS |
-| COMMA | FSYNC | MNODES | SET | USING |
-| COMP | GE | MODIFY | SHOW | VALUES |
-| COMPACT | GLOB | MODULES | SLASH | VARIABLE |
-| CONCAT | GRANTS | NCHAR | SLIDING | VARIABLES |
-| CONFLICT | GROUP | NE | SLIMIT | VGROUPS |
-| CONNECTION | GT | NONE | SMALLINT | VIEW |
-| CONNECTIONS | HAVING | NOT | SOFFSET | VNODES |
-| CONNS | ID | NOTNULL | STABLE | WAL |
-| COPY | IF | NOW | STABLES | WHERE |
-| _C0 | _QSTART | _QSTOP | _QDURATION | _WSTART |
-| _WSTOP | _WDURATION | | | |
-
-## Explanations
-### TBNAME
-`TBNAME` can be considered a special tag that represents the name of a subtable within a STable.
-
-Get the table name and tag values of all subtables in a STable.
-```mysql
-SELECT TBNAME, location FROM meters;
-```
-
-Count the number of subtables in a STable.
-
-```mysql
-SELECT COUNT(TBNAME) FROM meters;
-```
-
-In the above two query statements, only filters on tags can be used in the WHERE clause.
-```mysql
-taos> SELECT TBNAME, location FROM meters;
- tbname | location |
-==================================================================
- d1004 | California.SanFrancisco |
- d1003 | California.SanFrancisco |
- d1002 | California.LosAngeles |
- d1001 | California.LosAngeles |
-Query OK, 4 row(s) in set (0.000881s)
-
-taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
- count(tbname) |
-========================
- 2 |
-Query OK, 1 row(s) in set (0.001091s)
-```
-### _QSTART/_QSTOP/_QDURATION
-The start, stop and duration of a query time window (since version 2.6.0.0).
-
-### _WSTART/_WSTOP/_WDURATION
-The start, stop and duration of an aggregate query by time window, such as interval, session window and state window (since version 2.6.0.0).
-
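-A sketch of selecting these pseudo columns in an interval query on the `meters` STable used above (assuming they can be referenced directly in the select list; the output depends on the data):
-
-```mysql
-SELECT _WSTART, _WSTOP, _WDURATION, COUNT(*) FROM meters INTERVAL(10m);
-```
-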
-### _c0
-The first column of a table or STable.
\ No newline at end of file
diff --git a/docs-en/12-taos-sql/_category_.yml b/docs-en/12-taos-sql/_category_.yml
deleted file mode 100644
index 74a3b6309e0a4ad35feb674f544c689ae1992299..0000000000000000000000000000000000000000
--- a/docs-en/12-taos-sql/_category_.yml
+++ /dev/null
@@ -1 +0,0 @@
-label: TDengine SQL
diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp
deleted file mode 100644
index 82747558e96df752a0010d85be79a4af07e4a1df..0000000000000000000000000000000000000000
Binary files a/docs-en/12-taos-sql/timewindow-1.webp and /dev/null differ
diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp
deleted file mode 100644
index 8f1314ae34f7f5c5cca1d3cb80455f555fad38c3..0000000000000000000000000000000000000000
Binary files a/docs-en/12-taos-sql/timewindow-2.webp and /dev/null differ
diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp
deleted file mode 100644
index 5bd16e68e7fd5da6805551e9765975277cd5d4d9..0000000000000000000000000000000000000000
Binary files a/docs-en/12-taos-sql/timewindow-3.webp and /dev/null differ
diff --git a/docs-en/13-operation/01-pkg-install.md b/docs-en/13-operation/01-pkg-install.md
deleted file mode 100644
index c098002962d62aa0acc7a94462c052303cb2ed90..0000000000000000000000000000000000000000
--- a/docs-en/13-operation/01-pkg-install.md
+++ /dev/null
@@ -1,284 +0,0 @@
----
-title: Install & Uninstall
-description: Install, Uninstall, Start, Stop and Upgrade
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-TDengine community version provides deb and rpm packages for users to choose from, based on their system environment. The deb package supports Debian, Ubuntu and derivative systems. The rpm package supports CentOS, RHEL, SUSE and derivative systems. Furthermore, a tar.gz package is provided for TDengine Enterprise customers.
-
-## Install
-
-
-
-
-1. Download the deb package from the official website, for example TDengine-server-2.4.0.7-Linux-x64.deb
-2. In the directory where the package is located, execute the command below
-
-```bash
-$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
-(Reading database ... 137504 files and directories currently installed.)
-Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
-TDengine is removed successfully!
-Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
-Setting up tdengine (2.4.0.7) ...
-Start to install TDengine...
-
-System hostname is: ubuntu-1804
-
-Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
-OR leave it blank to build one:
-
-Enter your email address for priority support or enter empty to skip:
-Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
-
-To configure TDengine : edit /etc/taos/taos.cfg
-To start TDengine : sudo systemctl start taosd
-To access TDengine : taos -h ubuntu-1804 to login into TDengine server
-
-
-TDengine is installed successfully!
-```
-
-
-
-
-
-1. Download the rpm package from the official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm;
-2. In the directory where the package is located, execute the command below
-
-```
-$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
-Preparing... ################################# [100%]
-Updating / installing...
- 1:tdengine-2.4.0.7-3 ################################# [100%]
-Start to install TDengine...
-
-System hostname is: centos7
-
-Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
-OR leave it blank to build one:
-
-Enter your email address for priority support or enter empty to skip:
-
-Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
-
-To configure TDengine : edit /etc/taos/taos.cfg
-To start TDengine : sudo systemctl start taosd
-To access TDengine : taos -h centos7 to login into TDengine server
-
-
-TDengine is installed successfully!
-```
-
-
-
-
-
-1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
-2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated by decompressing it, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
-
-```bash
-$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-TDengine-enterprise-server-2.4.0.7/
-TDengine-enterprise-server-2.4.0.7/driver/
-TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
-TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
-TDengine-enterprise-server-2.4.0.7/install.sh
-TDengine-enterprise-server-2.4.0.7/examples/
-...
-
-$ ll
-total 43816
-drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
-drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
-drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
--rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-
-$ cd TDengine-enterprise-server-2.4.0.7/
-
- $ ll
-total 40784
-drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
-drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
-drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
-drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
--rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
--rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
-
-$ sudo ./install.sh
-
-Start to update TDengine...
-Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
-Nginx for TDengine is updated successfully!
-
-To configure TDengine : edit /etc/taos/taos.cfg
-To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
-To start TDengine : sudo systemctl start taosd
-To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
-
-TDengine is updated successfully!
-Install taoskeeper as a standalone service
-taoskeeper is installed, enable it by `systemctl enable taoskeeper`
-```
-
-:::info
-Users will be prompted to enter some configuration information while install.sh is running. The interactive mode can be disabled by executing `./install.sh -e no`. `./install.sh -h` shows all parameters with detailed explanations.
-
-:::
-
-
-
-
-:::note
-When installing on the first node in the cluster, at the "Enter FQDN:" prompt, nothing needs to be provided. When installing on subsequent nodes, at the "Enter FQDN:" prompt, you must enter the endpoint of the first dnode in the cluster if it is already up. You can also just ignore it and configure it later after installation is finished.
-
-:::
-
-## Uninstall
-
-
-
-
-Deb package of TDengine can be uninstalled as below:
-
-```bash
-$ sudo dpkg -r tdengine
-(Reading database ... 137504 files and directories currently installed.)
-Removing tdengine (2.4.0.7) ...
-TDengine is removed successfully!
-
-```
-
-
-
-
-
-RPM package of TDengine can be uninstalled as below:
-
-```
-$ sudo rpm -e tdengine
-TDengine is removed successfully!
-```
-
-
-
-
-
-tar.gz package of TDengine can be uninstalled as below:
-
-```
-$ rmtaos
-Nginx for TDengine is running, stopping it...
-TDengine is removed successfully!
-
-taosKeeper is removed successfully!
-```
-
-
-
-
-:::note
-
-- We strongly recommend not using multiple kinds of installation packages for TDengine on a single host.
-- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
-
-```bash
- $ sudo rm -f /var/lib/dpkg/info/tdengine*
-```
-
-- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
-
-```bash
- $ sudo rpm -e --noscripts tdengine
-```
-
-:::
-
-## Installation Directory
-
-TDengine is installed at /usr/local/taos if successful.
-
-```bash
-$ cd /usr/local/taos
-$ ll
-total 28
-drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
-drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
-lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
-drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
-lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
-```
-
-During the installation process:
-
-- Configuration directory, data directory, and log directory are created automatically if they don't exist
-- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg
-- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data
-- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log
-- The executables at /usr/local/taos/bin are linked to /usr/bin
-- The DLL files at /usr/local/taos/driver are linked to /usr/lib
-- The header files at /usr/local/taos/include are linked to /usr/include
-
-:::note
-
-- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered. Please follow data integrity, security, backup or relevant SOPs before deleting any data.
-- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
-
-:::
-
-## Start and Stop
-
-The TDengine server process is `taosd`, which is started automatically when the Linux system boots. System operators can use the Linux service facilities `systemd`, `systemctl` or `service` to start, stop or restart the TDengine server.
-
-For example, if using `systemctl`, the commands to start, stop, restart and check the TDengine server are as below:
-
-- Start server: `systemctl start taosd`
-
-- Stop server: `systemctl stop taosd`
-
-- Restart server: `systemctl restart taosd`
-
-- Check server status: `systemctl status taosd`
-
-From version 2.4.0.0, a new independent component named `taosAdapter` has been included in TDengine. `taosAdapter` should be started and stopped using `systemctl`.
-
-If the server process is OK, the output of `systemctl status` is like below:
-
-```
-Active: active (running)
-```
-
-Otherwise, the output is as below:
-
-```
-Active: inactive (dead)
-```
-
-## Upgrade
-
-There are two aspects in upgrade operation: upgrade installation package and upgrade a running server.
-
-To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
-
-Upgrading a running server is much more complex. First, check the version numbers of the old and new versions. The version number of TDengine consists of 4 sections; only if the first 3 sections match can the old version be upgraded to the new version. The steps for upgrading a running server are as below:
-
-- Stop inserting data
-- Make sure all data is persisted to disk
-- Make some simple queries (such as total rows in STables, tables and so on), note down the values, and follow best practices and relevant SOPs; a sample verification query is shown after this list
-- Stop the cluster of TDengine
-- Uninstall old version and install new version
-- Start the cluster of TDengine
-- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
-- Run some simple data insertion statements to make sure the cluster works well
-- Restore business services
-
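-For example, a simple verification query that could be run before and after the upgrade and compared (the `meters` STable below is illustrative):
-
-```sql
-SELECT COUNT(*) FROM meters;
-```
-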
-:::warning
-
-TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
-
-:::
diff --git a/docs-en/13-operation/02-planning.mdx b/docs-en/13-operation/02-planning.mdx
deleted file mode 100644
index c1baf92dbfa8d93f83174c05c2ea631d1a469739..0000000000000000000000000000000000000000
--- a/docs-en/13-operation/02-planning.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: Resource Planning
----
-
-It is important to plan computing and storage resources when using TDengine to build an IoT, time-series or Big Data platform. This chapter describes how to plan the required CPU, memory and disk resources.
-
-## Memory Requirement of Server Side
-
-By default, the number of vgroups created for each database is the same as the number of CPU cores. This can be configured by the parameter `maxVgroupsPerDb`. Each vnode in a vgroup stores one replica. Each vnode consumes a fixed amount of memory, i.e. `blocks` \* `cache`. In addition, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below:
-
-```
-Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
-```
-
-For example, assuming the default value of `maxVgroupsPerDb` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, and the total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 ≈ 6859M.
-
-In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
-
-```
- taosd_memory = vnode_memory + mnode_memory + query_memory
-```
-
-In the above formula:
-
-1. "vnode_memory" of a `taosd` process is the memory used by all vnodes hosted by this `taosd` process. It can be roughly calculated by firstly adding up the total memory of all DBs whose memory usage can be derived according to the formula for Database Memory Size, mentioned above, then dividing by number of dnodes and multiplying the number of replicas.
-
-```
- vnode_memory = (sum(Database Memory Size) / number_of_dnodes) * replica
-```
-
-2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster".
-
-3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables".
-
-Please note that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to reserve some redundancy beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query.
-
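-For example, a sketch of raising `blocks` for an existing database (the database name `power` is illustrative):
-
-```sql
-ALTER DATABASE power BLOCKS 12;
-```
-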
-## Memory Requirement of Client Side
-
-For client programs that use the TDengine client driver `taosc` to connect to the server side, there is a memory requirement as well.
-
-The memory consumed by a client program is mainly used for the SQL statements of data insertion, caching of table metadata, and some internal use. Assuming the maximum number of tables is N (the memory consumed by the metadata of each table is 256 bytes), the maximum number of threads for parallel insertion is T, and the maximum length of a SQL statement is S (normally 1 MB), the memory in MB required by a client program can be estimated using the formula below:
-
-```
-M = (T * S * 3 + (N / 4096) + 100)
-```
-
-For example, if the number of parallel data insertion threads is 100, total number of tables is 10,000,000, then the minimum memory requirement of a client program is:
-
-```
-100 * 3 + (10000000 / 4096) + 100 ≈ 2841 (MBytes)
-```
-
-So, at least 3GB needs to be reserved for such a client.
-
-## CPU Requirement
-
-The CPU resources required depend on two aspects:
-
-- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resources consumed between inserting 1 row at a time and inserting 10 rows at a time is very small. So, the more rows inserted per request, the higher the efficiency. Inserting in batches also imposes requirements on the client side, which needs to cache rows and insert them in a batch once the number of cached rows reaches a threshold.
-- **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, and other requirements provided by users.
-
-In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
-
-## Disk Requirement
-
-The compression ratio in TDengine is much higher than that in RDBMS. In most cases, the compression ratio in TDengine is greater than 5, and even exceeds 10 in some cases, depending on the characteristics of the original data. The data size before compression can be calculated based on the formula below:
-
-```
-Raw DataSize = numOfTables * rowSizePerTable * rowsPerTable
-```
-
-For example, there are 10,000,000 meters, each meter collects data every 15 minutes, and the data size of each collection is 128 bytes, so the raw data size of one year is: 10000000 \* 128 \* 24 \* 60 / 15 \* 365 = 44.8512 (TB). Assuming the compression ratio is 5, the actual disk size is: 44.8512 / 5 = 8.97024 (TB).
-
-Parameter `keep` can be used to set how long the data will be kept on disk. To further reduce storage cost, multiple storage levels can be enabled in TDengine, with the coldest data stored on the cheapest storage device. This is completely transparent to application programs.
-
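-For example, a sketch of setting the retention period when creating a database (the database name `power` is illustrative; `keep` is in days):
-
-```sql
-CREATE DATABASE power KEEP 3650;
-```
-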
-To increase performance, multiple disks can be set up for parallel reading or writing of data. Please note that an expensive disk array is not necessary because replication is used in TDengine to provide high availability.
-
-## Number of Hosts
-
-A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
-
-**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html).
diff --git a/docs-en/13-operation/03-tolerance.md b/docs-en/13-operation/03-tolerance.md
deleted file mode 100644
index d4d48d7fcdc2c990b6ea0821e2347c70a809ed79..0000000000000000000000000000000000000000
--- a/docs-en/13-operation/03-tolerance.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-sidebar_label: Fault Tolerance
-title: Fault Tolerance & Disaster Recovery
----
-
-## Fault Tolerance
-
-TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
-
-When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally for any reason and then restarted.
-
-There are 2 configuration parameters related to WAL:
-
-- walLevel:
-  - 0: WAL is disabled
-  - 1: WAL is enabled without fsync
-  - 2: WAL is enabled with fsync
-- fsync: This parameter is only valid when walLevel is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, fsync is invoked immediately whenever the WAL is written.
-
-To achieve absolutely no data loss, walLevel should be set to 2 and fsync should be set to 0. There is a performance penalty to the data ingestion rate. However, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the data ingestion performance will still be good enough. Our verification shows that the drop is only 30% when fsync is set to 3,000 milliseconds.
-
-## Disaster Recovery
-
-TDengine uses replication to provide high availability and disaster recovery capability.
-
-A TDengine cluster is managed by mnode. To ensure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
-
-The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to configure the number of replicas. To achieve high availability, `replica` needs to be higher than 1.
-
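-For example, a sketch of creating a database with three replicas (the database name `power` is illustrative):
-
-```sql
-CREATE DATABASE power REPLICA 3;
-```
-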
-The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
-
-As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
diff --git a/docs-en/13-operation/06-admin.md b/docs-en/13-operation/06-admin.md
deleted file mode 100644
index 458a91b88c6d8319fe8b84c2b34d8ff968957910..0000000000000000000000000000000000000000
--- a/docs-en/13-operation/06-admin.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: User Management
----
-
-A system operator can use TDengine CLI `taos` to create or remove users or change passwords. The SQL commands are documented below:
-
-## Create User
-
-```sql
-CREATE USER <user_name> PASS <'password'>;
-```
-
-When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
-
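-For example, the statement below creates a user named `test_user` (both the user name and the password are purely illustrative):
-
-```sql
-CREATE USER test_user PASS 'test123456';
-```
-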
-## Drop User
-
-```sql
-DROP USER <user_name>;
-```
-
-Dropping a user can only be performed by root.
-
-## Change Password
-
-```sql
-ALTER USER <user_name> PASS <'password'>;
-```
-
-To keep the case of the password when changing password, the password needs to be quoted using single quotes.
-
-## Change Privilege
-
-```sql
-ALTER USER <user_name> PRIVILEGE <write|read>;