diff --git a/docs/en/01-index.md b/docs/en/01-index.md
index 9090c6ef1dff349d705a252b4ffac943ddc2aaf9..92dcba0372c5e39d3c866a4853f584aca6dc292d 100644
--- a/docs/en/01-index.md
+++ b/docs/en/01-index.md
@@ -21,10 +21,10 @@ This is the documentation structure for TDengine Cloud.
7. The [TDengine SQL](./taos-sql) section provides comprehensive information about both standard SQL as well as TDengine's extensions for easy time series analysis.
-8. In [Connector](./connector), you can choose between Python, Java, Go, Rust and Node.js, to easily connect to TDengine to ingest and query data in your preferred development language.
+8. In [Connector](./programming/connector), you can choose from Python, Java, Go, Rust and Node.js to easily connect to TDengine and ingest and query data in your preferred development language.
9. The [Tools](./tools) section introduces the Taos CLI which gives you shell access to easily perform ad hoc queries on your instances and databases. Additionally, taosBenchmark is introduced. It is a tool that can help you generate large amounts of data very easily with simple configurations and test the performance of TDengine Cloud.
-10. Finally, in the [FAQ](./faq) section, we try to preemptively answer questions that we anticipate. Of course, we will continue to add to this section all the time.
+
We are very excited that you have chosen TDengine Cloud to be part of your time series platform and look forward to hearing your feedback and ways in which we can improve and be a small part of your success.
diff --git a/docs/en/02-intro.md b/docs/en/02-intro.md
index 6b8f91f866bbafe926c281f98853cfc02c1dde36..dfb4995dc5c2c3736c3eed602c9a6fc29d44f8de 100644
--- a/docs/en/02-intro.md
+++ b/docs/en/02-intro.md
@@ -5,40 +5,77 @@ title: Introduction to TDengine Cloud Service
TDengine Cloud is the fast, elastic, serverless and cost-effective time-series data processing service based on the popular open source time-series database, TDengine. With TDengine Cloud you get the highly optimized, purpose-built time-series platform for IoT for which TDengine is known.
-This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
+This section introduces the major features, competitive advantages and typical use cases to help you get a high-level overview of the TDengine Cloud service.
## Major Features
The major features are listed below:
-1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) among others.
-2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
-3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
-4. Support for [user defined functions](/develop/udf).
-5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
-6. Support for [stream processing](../taos-sql).
-7. Support for [data subscription](../taos-sql) with the capability to specify filter conditions.
-8. High availability is supported by replication including multi-cloud replication.
-9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
-10. Provides many ways to [get data in](../data-in) and [get data out](../data-out) data.
-11. Provides a Dashboard to monitor your running instances of TDengine.
-12. Provides [connectors](../connector/) for [Java](../connector/java), [Python](../connector/python), [Go](../connector/go), [Rust](../connector/rust), and [Node.js](../connector/node).
-13. Provides a [REST API](/reference/rest-api/).
-14. Supports seamless integration with [Grafana](../visual/grafana) for visualization.
-15. Supports seamless integration with Google Data Studio.
-
-For more details on features, please read through the entire documentation.
+1. Data In
+ - Supports [using SQL to insert](../data-in/insert-data).
+ - Supports [Telegraf](../data-in/telegraf/).
+ - Supports [Prometheus](../data-in/prometheus/).
+2. Data Out
+ - Supports standard [SQL](../data-out/query-data/), including nested queries.
+ - Supports exporting data via the [taosDump](../data-out/taosdump/) tool.
+ - Supports writing data to [Prometheus](../data-out/prometheus/).
+ - Supports exporting data via [data subscription](../tmq/).
+3. Data Explorer: browse through databases and even run SQL queries once you log in.
+4. Visualization:
+ - Supports [Grafana](../visual/grafana/)
+ - Supports Google Data Studio (to be released soon)
+ - Supports Grafana Cloud (to be released soon)
+5. [Stream Processing](../stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing; a sketch follows this list.
+6. [Data Subscription](../tmq/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka's, but you can specify filter conditions.
+7. Enterprise
+ - Supports backing up data every day.
+ - Supports replicating a database to another region or cloud.
+ - Supports VPC peering.
+ - Supports an allowed IP list for security.
+8. Tools
+ - Provides an interactive [Command-line Interface (CLI)](../tools/cli/) for management and ad-hoc queries.
+ - Provides a tool [taosBenchmark](../tools/taosbenchmark/) for testing the performance of TDengine.
+9. Programming
+ - Provides [connectors](../programming/connector/) for Java, Python, Go, Rust, Node.js and other programming languages.
+ - Provides a [REST API](../programming/connector/rest-api/).
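+
+As a rough sketch of the stream processing feature mentioned above (assuming a super table `meters` like the one in the Concepts chapter; all names below are illustrative), a continuously updated downsampling stream could be defined as:
+
+```sql
+-- Hypothetical stream: for every 10-second window, write the per-table
+-- maximum current from super table `meters` into table `current_maxes`.
+CREATE STREAM current_stream INTO current_maxes AS
+  SELECT _wstart, MAX(current) AS max_current
+  FROM meters
+  PARTITION BY tbname
+  INTERVAL(10s);
+```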
+
+For more details on features, please read through the entire documentation.
## Competitive Advantages
-By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine Cloud differentiates itself from other time series platforms, with the following advantages.
+By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/) and its cloud native design, TDengine Cloud differentiates itself from other time-series data cloud services with the following advantages.
-- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine Cloud is a fast, elastic, serverless purpose built platform for IoT time-series data. It is the only time-series platform to solve the high cardinality issue to support billions of data collection points while outperforming other time-series platforms for data ingestion, querying and data compression.
+- **Worry Free**: TDengine Cloud is a fast, elastic, serverless, purpose-built cloud platform for time-series data. It provides worry-free operations as a fully managed cloud service, and you pay as you go.
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
-- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds. It is Enterprise ready with backup, multi-cloud replication, VPC peering and IP whitelisting.
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: It is the only time-series platform to solve the high cardinality issue to support billions of data collection points while outperforming other time-series platforms for data ingestion, querying and data compression.
- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine Cloud provides worry-free operations with a fully managed cloud native solution. For developers, it provides a simple interface, simplified solution and seamless integration with third party tools. For data users, it provides SQL support with powerful time series extensions built for data analytics.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
+
+- **Enterprise Ready**: It supports backup, multi-cloud/multi-region database replication, VPC peering and IP whitelisting.
+
+With TDengine Cloud, the **total cost of ownership of your time-series data platform can be greatly reduced**.
+
+1. With its built-in caching, stream processing and data subscription, system complexity and operation costs are greatly reduced.
+2. With SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly.
+3. With the elastic, serverless and fully managed service, operation and maintenance costs are reduced significantly.
+
+## Technical Ecosystem
+
+This is how TDengine is typically situated in a time-series data processing platform:
+
+
+Figure 1. TDengine Technical Ecosystem
+
+On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
+
+## Typical Use Cases
+
+As a high-performance and cloud native time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time-series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data.
diff --git a/docs/en/04-concept/_category_.yml b/docs/en/04-concept/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..12c659a9265e86d0e74d88a751c19d5d715e9fe0
--- /dev/null
+++ b/docs/en/04-concept/_category_.yml
@@ -0,0 +1 @@
+label: Concepts
\ No newline at end of file
diff --git a/docs/en/04-concept/index.md b/docs/en/04-concept/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..986801d01eadc0c67f375a3a3154ed46c391b823
--- /dev/null
+++ b/docs/en/04-concept/index.md
@@ -0,0 +1,175 @@
+---
+title: Concepts
+---
+
+In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time-series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
+
+Table 1. Smart meter example data
+
+Each row contains the device ID, time stamp, collected metrics (current, voltage, phase as above), and static tags (location and groupId in Table 1) associated with the devices. Each smart meter generates a row (measurement) in a pre-defined time interval or when triggered by an external event. The device produces a sequence of measurements with associated time stamps.
+
+## Metric
+
+Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time; the data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics.
+
+## Label/Tag
+
+Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags.
+
+## Data Collection Point
+
+Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
+
+## Table
+
+Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process them with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
+
+To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices D1001, D1002, D1003, and D1004 to store the data collected. This design has several benefits:
+
+1. Since the metric data from different DCPs are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
+2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented as a simple append, which further improves the data writing speed.
+3. The metric data from a DCP is stored contiguously, block by block. So reading data for a period of time greatly reduces random read operations and improves read and query performance by orders of magnitude.
+4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Values of the same metric generally vary less over a time range than values across different metrics, which allows for a higher compression rate.
+
+If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point ensures the best possible performance of insert and query for a single data collection point.**
+
+TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
+
+Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
+
+## Super Table (STable)
+
+The design of one table for one data collection point will require a huge number of tables, which is difficult to manage. Furthermore, applications often need to perform aggregation operations across DCPs, and such aggregations become complicated. To support aggregation over multiple tables efficiently, TDengine introduces the STable (Super Table) concept.
+
+STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
+
+In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
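+
+As a sketch (the column and tag types are illustrative, not a prescribed schema), the `meters` super table could be defined as:
+
+```sql
+-- One super table per type of data collection point: collected metrics
+-- become columns, static attributes become tags.
+CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
+  TAGS (location BINARY(64), groupid INT);
+```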
+
+## Subtable
+
+When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
+
+1. A subtable is a table; all SQL commands that apply to a regular table can be applied to a subtable.
+2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created, while a regular table does not have tags.
+3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
+4. A regular table cannot be converted into a subtable, and vice versa.
+
+The relationship between a STable and the subtables created based on this STable is as follows:
+
+1. A STable contains multiple subtables with the same metric schema but with different tag values.
+2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
+3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
+
+Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation, which reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
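+
+For example, assuming the `meters` super table sketched above, one aggregate query can cover every smart meter in a group:
+
+```sql
+-- Aggregate across all subtables of `meters` whose groupid tag equals 2.
+SELECT AVG(voltage) FROM meters WHERE groupid = 2;
+```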
+
+In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under the super table `meters`.
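+
+Continuing the sketch above (the tag values are illustrative), a subtable for one meter can be created from the super table template:
+
+```sql
+-- d1001 inherits the schema of `meters`; only its tag values are specified.
+CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
+```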
+
+To better understand the data model using metrics, tags, super tables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example.
+
+![Smart meters data model](./supertable.webp)
+
+## Database
+
+A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may differ in collection frequency, retention policy and other aspects, and these characteristics determine how you create and configure the database. For example, the number of days to keep data, the number of replicas, the data block size, and whether data updates are allowed would all be determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
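+
+As an illustrative sketch (the parameter values are arbitrary; consult the TDengine SQL reference for the full list of options), a database tuned to a particular retention policy could be created like this:
+
+```sql
+-- Keep data for 365 days and partition data files in 10-day spans.
+CREATE DATABASE power KEEP 365 DURATION 10;
+```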
+
+In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
+
+## Instance, URL, Token
+
+An instance is a running cluster of TDengine nodes with one or more databases. An instance cannot span multiple regions or multiple clouds, but a single account (organization) can have multiple instances. An account owner may invite multiple users into the organization to share the data, and each user can be configured with different access rights.
+
+TDengine Cloud provides a unique URL for each instance and uses tokens to authenticate access. The token is generated by TDengine Cloud for each user and for each instance. The token has a duration and can be reset by the user for each instance at any time for security purposes.
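+
+For instance, the URL and token together authenticate each REST request. The sketch below assumes the `TDENGINE_CLOUD_URL` and `TDENGINE_CLOUD_TOKEN` environment variables hold the values shown in your cloud console:
+
+```bash
+# Hypothetical example: run a SQL statement against an instance via REST.
+curl -L "$TDENGINE_CLOUD_URL/rest/sql?token=$TDENGINE_CLOUD_TOKEN" \
+  --data "SHOW DATABASES"
+```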
+
+
diff --git a/docs/en/04-concept/supertable.webp b/docs/en/04-concept/supertable.webp
new file mode 100644
index 0000000000000000000000000000000000000000..764b8f3de7ee92a103b2fcd0e75c03773af5ee37
Binary files /dev/null and b/docs/en/04-concept/supertable.webp differ
diff --git a/docs/en/04-get-started.md b/docs/en/05-get-started.md
similarity index 73%
rename from docs/en/04-get-started.md
rename to docs/en/05-get-started.md
index 6d66fc1fd2a3978368bf4d9b5c39ba3bba12ee51..ec3a42dbd9cae7359e3a3ef57e60086eefa7b377 100644
--- a/docs/en/04-get-started.md
+++ b/docs/en/05-get-started.md
@@ -4,4 +4,6 @@ title: Get Started
description: A quick guide for how to access TDengine cloud service
---
-It's very convenient to access TDengine cloud service, just open your browser, connect to [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, activate your account, then you will get a free TDengine cloud service. Enjoy!
\ No newline at end of file
+It's very convenient to access the TDengine cloud service: just open your browser, connect to the [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, and activate your account. You will then get a free TDengine cloud service.
+
+TDengine Cloud runs on AWS, Azure and Google Cloud. You can choose the free plan, standard plan or enterprise plan. Enjoy!
diff --git a/docs/en/06-replication/index.md b/docs/en/06-replication/index.md
deleted file mode 100644
index 42e004950854319c48f17bbed1a79b8dfae22e96..0000000000000000000000000000000000000000
--- a/docs/en/06-replication/index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-sidebar_label: Data Replication
-title: Data Replication
-description: Briefly introduce how to replicate data among TDengine cloud services
----
-
-TDengine provides full support for data replication. You can replicate data from TDengine cloud service to local TDengine, from local TDengine to TDengine cloud service, or from one cloud service to another one and it doesn't matter which cloud or region the two services reside in.
\ No newline at end of file
diff --git a/docs/en/08-data-in/01-insert-data.md b/docs/en/07-data-in/01-insert-data.md
similarity index 88%
rename from docs/en/08-data-in/01-insert-data.md
rename to docs/en/07-data-in/01-insert-data.md
index 891f3cdbf3d3bfc3eb15f7dd4d18cfd835567f12..f9c64601c0722e81d42b8eac345810a6596ceddc 100644
--- a/docs/en/08-data-in/01-insert-data.md
+++ b/docs/en/07-data-in/01-insert-data.md
@@ -1,7 +1,7 @@
---
sidebar_label: SQL
title: Insert Data Using SQL
-description: This section describes how to insert data using TDengine SQL
+description: Insert data using TDengine SQL
---
# Insert Data
@@ -42,10 +42,15 @@ For more details about `INSERT` please refer to [INSERT](https://docs.tdengine.c
## Connector Examples
+:::note
+Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
+
+:::
+
-In this example, we use `execute` method to execute SQL and get affected rows. The variable `conn` is an instance of class `taosrest.TaosRestConnection` we just created at [Connect Tutorial](../../develop/connect/python#connect).
+In this example, we use the `execute` method to execute SQL and get the number of affected rows. The variable `conn` is an instance of class `taosrest.TaosRestConnection` that we just created in the [Connect Tutorial](../../programming/connect/python#connect).
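+
+A minimal sketch of creating such a connection (the environment variable names are illustrative; copy the actual URL and token from your TDengine Cloud console):
+
+```python
+import os
+import taosrest
+
+# Connect to the cloud instance using its URL and an access token.
+conn = taosrest.connect(url=os.environ["TDENGINE_CLOUD_URL"],
+                        token=os.environ["TDENGINE_CLOUD_TOKEN"])
+```
+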
```python
{{#include docs/examples/python/develop_tutorial.py:insert}}
diff --git a/docs/en/08-data-in/02-prometheus.md b/docs/en/07-data-in/02-prometheus.md
similarity index 64%
rename from docs/en/08-data-in/02-prometheus.md
rename to docs/en/07-data-in/02-prometheus.md
index 9c11e0d6099b7840c0200164e5c9bbc7f4f1396f..73b0c44856c41524e76b6c4c87eeebd03f68dcba 100644
--- a/docs/en/08-data-in/02-prometheus.md
+++ b/docs/en/07-data-in/02-prometheus.md
@@ -1,22 +1,24 @@
---
sidebar_label: Prometheus
title: Prometheus for TDengine Cloud
-description: This topic introduces how to write data into TDengine from Prometheus.
+description: Write data into TDengine from Prometheus.
---
Prometheus is a widespread open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, which has a very active developer and user community.
-Prometheus provides `remote_write` and `remote_read` interfaces to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also provides support for these two interfaces.
+Prometheus provides a `remote_write` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing, TDengine supports this interface: with proper configuration, Prometheus data can be stored in TDengine via `remote_write`, taking full advantage of TDengine's efficient storage performance and clustering capabilities for time-series data.
-Prometheus data can be stored in TDengine via the `remote_write` interface with proper configuration. Data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient storage query performance and clustering capabilities for time-series data.
+## Prerequisites
+
+In your TDengine Cloud instance, click "Explorer" on the left panel, then click the "+" button beside "Databases" to create a new database named "prometheus_data". Then execute `show databases` to confirm the database has been created successfully.
## Install Prometheus
Suppose that you use a Linux system with amd64 architecture:
1. Download
- ```
- wget https://github.com/prometheus/prometheus/releases/download/v2.37.0/prometheus-2.37.0.linux-amd64.tar.gz
- ```
+ ```
+ wget https://github.com/prometheus/prometheus/releases/download/v2.37.0/prometheus-2.37.0.linux-amd64.tar.gz
+ ```
2. Decompress and rename
```
tar xvfz prometheus-*.tar.gz && mv prometheus-2.37.0.linux-amd64 prometheus
@@ -28,7 +30,7 @@ Supposed that you use Linux system with architecture amd64:
Then Prometheus is installed in the current directory. For more installation options, please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/).
-## Configure
+## Configure Prometheus
Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (if you followed the previous steps, you can find `prometheus.yml` in the current directory).
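+
+As a minimal sketch, the `remote_write` entry in `prometheus.yml` could look like the following; `<cloud_url>` and `<cloud_token>` are placeholders for the instance URL and token shown in your TDengine Cloud console, and the database name must match the one created in the prerequisites:
+
+```yaml
+# Hypothetical remote_write configuration for TDengine Cloud.
+remote_write:
+  - url: "<cloud_url>/prometheus/v1/remote_write/prometheus_data?token=<cloud_token>"
+```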
@@ -62,15 +64,3 @@ Log in TDengine Cloud, click "Explorer" on the left navigation bar. You will see

-## Verify Remote Read
-
-Lets retrieve some metrics from TDengine Cloud via prometheus web server. Browse to and use the "Graph" tab.
-
-Enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:
-
-```
-rate(prometheus_tsdb_head_chunks_created_total[1m])
-```
-
-
-
diff --git a/docs/en/08-data-in/03-telegraf.md b/docs/en/07-data-in/03-telegraf.md
similarity index 84%
rename from docs/en/08-data-in/03-telegraf.md
rename to docs/en/07-data-in/03-telegraf.md
index 2dc6aa3556598c642b0c58106928a03c2631bac6..6a6ddd77b01fc2bfa0007fdb5a8af9ab84696e4c 100644
--- a/docs/en/08-data-in/03-telegraf.md
+++ b/docs/en/07-data-in/03-telegraf.md
@@ -1,13 +1,17 @@
---
sidebar_label: Telegraf
title: Telegraf for TDengine Cloud
-description: This section explains how to write data into TDengine from telegraf.
+description: Write data into TDengine from telegraf.
---
Telegraf is open-source metrics collection software. Telegraf can collect the operation information of various components without requiring you to write scripts for periodic collection, reducing the difficulty of data acquisition.
Telegraf's data can be written to TDengine by simply adding the Telegraf output configuration pointing to the URL corresponding to taosAdapter and modifying several configuration items. Once in TDengine, Telegraf data can take advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
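+
+As a rough sketch, the relevant output section of `telegraf.conf` could look like this; `<cloud_url>` and `<cloud_token>` are placeholders for the values from your TDengine Cloud console, and `db=telegraf` assumes the database created in the prerequisites below:
+
+```toml
+# Hypothetical Telegraf output writing InfluxDB line protocol to the
+# InfluxDB-compatible endpoint exposed by taosAdapter.
+[[outputs.http]]
+  url = "<cloud_url>/influxdb/v1/write?db=telegraf&token=<cloud_token>"
+  method = "POST"
+  data_format = "influx"
+```
+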
+## Prerequisites
+
+Before Telegraf can write data into the TDengine cloud service, you need to manually create a database first. Log in to TDengine Cloud, click "Explorer" on the left navigation bar, then click the "+" button beside "Databases" to add a database named "telegraf" using all default parameters.
+
## Install Telegraf
Suppose that you use an Ubuntu system:
@@ -63,9 +67,7 @@ telegraf --config telegraf.conf
## Verify
-Log in TDengine Cloud, click "Explorer" on the left navigation bar.
-
-Check weather database "telegraf" exist by executing:
+- Check whether the database "telegraf" exists by executing:
```sql
show databases;
diff --git a/docs/en/08-data-in/emqx/add-action-handler.webp b/docs/en/07-data-in/emqx/add-action-handler.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/add-action-handler.webp
rename to docs/en/07-data-in/emqx/add-action-handler.webp
diff --git a/docs/en/08-data-in/emqx/check-result-in-taos.webp b/docs/en/07-data-in/emqx/check-result-in-taos.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/check-result-in-taos.webp
rename to docs/en/07-data-in/emqx/check-result-in-taos.webp
diff --git a/docs/en/08-data-in/emqx/check-rule-matched.webp b/docs/en/07-data-in/emqx/check-rule-matched.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/check-rule-matched.webp
rename to docs/en/07-data-in/emqx/check-rule-matched.webp
diff --git a/docs/en/08-data-in/emqx/client-num.webp b/docs/en/07-data-in/emqx/client-num.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/client-num.webp
rename to docs/en/07-data-in/emqx/client-num.webp
diff --git a/docs/en/08-data-in/emqx/create-resource.webp b/docs/en/07-data-in/emqx/create-resource.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/create-resource.webp
rename to docs/en/07-data-in/emqx/create-resource.webp
diff --git a/docs/en/08-data-in/emqx/create-rule.webp b/docs/en/07-data-in/emqx/create-rule.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/create-rule.webp
rename to docs/en/07-data-in/emqx/create-rule.webp
diff --git a/docs/en/08-data-in/emqx/edit-action.webp b/docs/en/07-data-in/emqx/edit-action.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/edit-action.webp
rename to docs/en/07-data-in/emqx/edit-action.webp
diff --git a/docs/en/08-data-in/emqx/edit-resource.webp b/docs/en/07-data-in/emqx/edit-resource.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/edit-resource.webp
rename to docs/en/07-data-in/emqx/edit-resource.webp
diff --git a/docs/en/08-data-in/emqx/login-dashboard.webp b/docs/en/07-data-in/emqx/login-dashboard.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/login-dashboard.webp
rename to docs/en/07-data-in/emqx/login-dashboard.webp
diff --git a/docs/en/08-data-in/emqx/rule-engine.webp b/docs/en/07-data-in/emqx/rule-engine.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/rule-engine.webp
rename to docs/en/07-data-in/emqx/rule-engine.webp
diff --git a/docs/en/08-data-in/emqx/rule-header-key-value.webp b/docs/en/07-data-in/emqx/rule-header-key-value.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/rule-header-key-value.webp
rename to docs/en/07-data-in/emqx/rule-header-key-value.webp
diff --git a/docs/en/08-data-in/emqx/run-mock.webp b/docs/en/07-data-in/emqx/run-mock.webp
similarity index 100%
rename from docs/en/08-data-in/emqx/run-mock.webp
rename to docs/en/07-data-in/emqx/run-mock.webp
diff --git a/docs/en/07-data-in/index.md b/docs/en/07-data-in/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..74dc2d76c0dfe087b15564f26423261e4551ddcd
--- /dev/null
+++ b/docs/en/07-data-in/index.md
@@ -0,0 +1,12 @@
+---
+sidebar_label: Data In
+title: Write Data Into TDengine Cloud Service
+description: A number of ways for writing data into TDengine.
+---
+
+This chapter introduces a number of ways to write data into TDengine. Users can use TDengine SQL to write data into the TDengine cloud service, or use the [connectors](../programming/connector) provided by TDengine to write data programmatically. TDengine provides [taosBenchmark](../tools/taosbenchmark), a performance testing tool that writes into TDengine, and taosX, a tool provided by the TDengine enterprise edition, to sync data from one TDengine cloud service to another. Furthermore, third-party tools like Telegraf and Prometheus can also be used to write data into TDengine.
+
+:::note
+Because of privilege limitations on the cloud, you need to first create a database in the data explorer on the cloud console before writing data into the TDengine cloud service. This limitation applies to any way of writing data.
+
+:::
diff --git a/docs/en/08-data-in/prometheus_data.webp b/docs/en/07-data-in/prometheus_data.webp
similarity index 100%
rename from docs/en/08-data-in/prometheus_data.webp
rename to docs/en/07-data-in/prometheus_data.webp
diff --git a/docs/en/08-data-in/prometheus_read.webp b/docs/en/07-data-in/prometheus_read.webp
similarity index 100%
rename from docs/en/08-data-in/prometheus_read.webp
rename to docs/en/07-data-in/prometheus_read.webp
diff --git a/docs/en/08-data-in/telegraf-show-databases.webp b/docs/en/07-data-in/telegraf-show-databases.webp
similarity index 100%
rename from docs/en/08-data-in/telegraf-show-databases.webp
rename to docs/en/07-data-in/telegraf-show-databases.webp
diff --git a/docs/en/08-data-in/telegraf-show-stables.webp b/docs/en/07-data-in/telegraf-show-stables.webp
similarity index 100%
rename from docs/en/08-data-in/telegraf-show-stables.webp
rename to docs/en/07-data-in/telegraf-show-stables.webp
diff --git a/docs/en/08-data-in/index.md b/docs/en/08-data-in/index.md
deleted file mode 100644
index 3ad147d2d513a8e965695021cc9f2ff20ea6c4bb..0000000000000000000000000000000000000000
--- a/docs/en/08-data-in/index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-sidebar_label: Data In
-title: Write Data Into TDengine Cloud Service
-description: A number of ways for writing data into TDengine.
----
-
-This chapter introduces a number of ways which can be used to write data into TDengine, users can use 3rd party tools, like telegraf and prometheus, to write data into TDengine cloud service, or use [taosBenchmark](../tools/taosbenchmark) which is a tool provided by TDengine to write data into TDengine cloud service. Users can use taosX, which is also a tool provided by TDengine, to sync data from one TDengine cloud service to another. Furthermore, 3rd party tools, like telegraf and prometheus, can also be used to write data into TDengine.
\ No newline at end of file
diff --git a/docs/en/09-data-out/01-query.data.md b/docs/en/09-data-out/01-query-data.md
similarity index 96%
rename from docs/en/09-data-out/01-query.data.md
rename to docs/en/09-data-out/01-query-data.md
index 3bf94d1fe4c12fde71c349d94c51c3b0304140fb..24ea7ffcdd7cd08266bb0cac8425be51cb00abd3 100644
--- a/docs/en/09-data-out/01-query.data.md
+++ b/docs/en/09-data-out/01-query-data.md
@@ -1,7 +1,7 @@
---
sidebar_label: SQL
title: Query Data Using SQL
-description: This topic introduces how to read data from TDengine using basic SQL.
+description: Read data from TDengine using basic SQL.
---
# Query Data
@@ -123,6 +123,11 @@ For more details please refer to [Aggregate by Window](https://docs.tdengine.com
## Connector Examples
+:::note
+Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
+
+:::
+
diff --git a/docs/en/09-data-out/02-tmq.md b/docs/en/09-data-out/02-tmq.md
deleted file mode 100644
index d3fb4760685b6be8e094a29461408b91a302835b..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/02-tmq.md
+++ /dev/null
@@ -1,879 +0,0 @@
----
-sidebar_label: Subscription
-title: Data Subscritpion
-description: Use data subscription to get data from TDengine.
----
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import Java from "./_sub_java.mdx";
-import Python from "./_sub_python.mdx";
-import Go from "./_sub_go.mdx";
-import Rust from "./_sub_rust.mdx";
-import Node from "./_sub_node.mdx";
-import CSharp from "./_sub_cs.mdx";
-import CDemo from "./_sub_c.mdx";
-
-This topic introduces how to read out data from TDengine using data subscription, which is an advanced feature in TDengine. To access the data in TDengine in data subscription way, you need to create topic, create consumer, subscribe to a topic, and consume data. In this document we will briefly explain these main steps of data subscription.
-
-## Create Topic
-
-A topic can be created on a database, on some selected columns,or on a supertable.
-
-### Topic on Columns
-
-The most common way to create a topic is to create a topic on some specifically selected columns. The Syntax is like below:
-
-```sql
-CREATE TOPIC topic_name as subquery;
-```
-
-You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, cl` are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
-
-- The schema of topics created in this manner is determined by the subscribed data.
-- You cannot modify (`ALTER
MODIFY`) or delete (`ALTER
DROP`) columns or tags that are used in a subscription or calculation.
-- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
-
-For example:
-
-```sql
-CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
-```
-
-### Topic on SuperTable
-
-Syntax:
-
-```sql
-CREATE TOPIC topic_name AS STABLE stb_name;
-```
-
-Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
-
-- The table schema can be modified.
-- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
-- A different table schema may exist for every data block to be processed.
-- The data returned does not include tags.
-
-### Topic on Database
-
-Syntax:
-
-```sql
-CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
-```
-
-This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
-
-## Create Consumer
-
-To create a consumer, you must use the APIs provided by TDengine connectors. Below is the sample code of using connectors of different languages.
-
-
-You configure the following parameters when creating a consumer:
-
-| Parameter | Type | Description | Remarks |
-| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
-| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
-| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
-| `client.id` | string | Client ID | Maximum length: 192. |
-| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
-| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
-| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
-| `enable.heartbeat.background` | boolean | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
-| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSBS | |
-| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages |
-
-The method of specifying these parameters depends on the language used:
-
-
-
-
-```c
-/* Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
- an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass) */
-tmq_conf_t* conf = tmq_conf_new();
-tmq_conf_set(conf, "enable.auto.commit", "true");
-tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
-tmq_conf_set(conf, "group.id", "cgrpName");
-tmq_conf_set(conf, "td.connect.user", "root");
-tmq_conf_set(conf, "td.connect.pass", "taosdata");
-tmq_conf_set(conf, "auto.offset.reset", "earliest");
-tmq_conf_set(conf, "experimental.snapshot.enable", "true");
-tmq_conf_set(conf, "msg.with.table.name", "true");
-tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
-
-tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
-tmq_conf_destroy(conf);
-```
-
-
-
-
-Java programs use the following parameters:
-
-| Parameter | Type | Description | Remarks |
-| ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------- |
-| `bootstrap.servers` | string |Connection address, such as `localhost:6030` |
-| `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type |
-| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
-
-Note: The `bootstrap.servers` parameter is used instead of `td.connect.ip` and `td.connect.port` to provide an interface that is consistent with Kafka.
-
-```java
-Properties properties = new Properties();
-properties.setProperty("enable.auto.commit", "true");
-properties.setProperty("auto.commit.interval.ms", "1000");
-properties.setProperty("group.id", "cgrpName");
-properties.setProperty("bootstrap.servers", "127.0.0.1:6030");
-properties.setProperty("td.connect.user", "root");
-properties.setProperty("td.connect.pass", "taosdata");
-properties.setProperty("auto.offset.reset", "earliest");
-properties.setProperty("msg.with.table.name", "true");
-properties.setProperty("value.deserializer", "com.taos.example.MetersDeserializer");
-
-TaosConsumer consumer = new TaosConsumer<>(properties);
-
-/* value deserializer definition. */
-import com.taosdata.jdbc.tmq.ReferenceDeserializer;
-
-public class MetersDeserializer extends ReferenceDeserializer {
-}
-```
-
-
-
-
-
-```go
-config := tmq.NewConfig()
-defer config.Destroy()
-err = config.SetGroupID("test")
-if err != nil {
- panic(err)
-}
-err = config.SetAutoOffsetReset("earliest")
-if err != nil {
- panic(err)
-}
-err = config.SetConnectIP("127.0.0.1")
-if err != nil {
- panic(err)
-}
-err = config.SetConnectUser("root")
-if err != nil {
- panic(err)
-}
-err = config.SetConnectPass("taosdata")
-if err != nil {
- panic(err)
-}
-err = config.SetConnectPort("6030")
-if err != nil {
- panic(err)
-}
-err = config.SetMsgWithTableName(true)
-if err != nil {
- panic(err)
-}
-err = config.EnableHeartBeat()
-if err != nil {
- panic(err)
-}
-err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
- if result.ErrCode != 0 {
- errStr := wrapper.TMQErr2Str(result.ErrCode)
- err := errors.NewError(int(result.ErrCode), errStr)
- panic(err)
- }
-})
-if err != nil {
- panic(err)
-}
-```
-
-
-
-
-
-```rust
-let mut dsn: Dsn = "taos://".parse()?;
-dsn.set("group.id", "group1");
-dsn.set("client.id", "test");
-dsn.set("auto.offset.reset", "earliest");
-
-let tmq = TmqBuilder::from_dsn(dsn)?;
-
-let mut consumer = tmq.build()?;
-```
-
-
-
-
-
-Python programs use the following parameters:
-
-| Parameter | Type | Description | Remarks |
-| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
-| `td_connect_ip` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td_connect_user` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td_connect_pass` | string | Used in establishing a connection; same as `taos_connect` | |
-| `td_connect_port` | string | Used in establishing a connection; same as `taos_connect` | |
-| `group_id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
-| `client_id` | string | Client ID | Maximum length: 192. |
-| `auto_offset_reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
-| `enable_auto_commit` | string | Commit automatically | Specify `true` or `false`. |
-| `auto_commit_interval_ms` | string | Interval for automatic commits, in milliseconds |
-| `enable_heartbeat_background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false`. |
-| `experimental_snapshot_enable` | string | Specify whether to consume messages from the WAL or from TSBS | Specify `true` or `false`. |
-| `msg_with_table_name` | string | Specify whether to deserialize table names from messages | Specify `true` or `false`.
-| `timeout` | int | Consumer pull timeout | |
-
-
-
-
-
-```js
-// Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
-// an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass)
-
-let consumer = taos.consumer({
- 'enable.auto.commit': 'true',
- 'auto.commit.interval.ms','1000',
- 'group.id': 'tg2',
- 'td.connect.user': 'root',
- 'td.connect.pass': 'taosdata',
- 'auto.offset.reset','earliest',
- 'msg.with.table.name': 'true',
- 'td.connect.ip','127.0.0.1',
- 'td.connect.port','6030'
- });
-```
-
-
-
-
-
-```csharp
-using TDengineTMQ;
-
-// Create consumer groups on demand (GourpID) and enable automatic commits (EnableAutoCommit),
-// an automatic commit interval (AutoCommitIntervalMs), and a username (TDConnectUser) and password (TDConnectPasswd)
-var cfg = new ConsumerConfig
- {
- EnableAutoCommit = "true"
- AutoCommitIntervalMs = "1000"
- GourpId = "TDengine-TMQ-C#",
- TDConnectUser = "root",
- TDConnectPasswd = "taosdata",
- AutoOffsetReset = "earliest"
- MsgWithTableName = "true",
- TDConnectIp = "127.0.0.1",
- TDConnectPort = "6030"
- };
-
-var consumer = new ConsumerBuilder(cfg).Build();
-
-```
-
-
-
-
-
-A consumer group is automatically created when multiple consumers are configured with the same consumer group ID.
-
-## Subscribe to a Topic
-
-A single consumer can subscribe to multiple topics.
-
-
-
-
-```c
-// Create a list of subscribed topics
-tmq_list_t* topicList = tmq_list_new();
-tmq_list_append(topicList, "topicName");
-// Enable subscription
-tmq_subscribe(tmq, topicList);
-tmq_list_destroy(topicList);
-
-```
-
-
-
-
-```java
-List topics = new ArrayList<>();
-topics.add("tmq_topic");
-consumer.subscribe(topics);
-```
-
-
-
-
-```go
-consumer, err := tmq.NewConsumer(config)
-if err != nil {
- panic(err)
-}
-err = consumer.Subscribe([]string{"example_tmq_topic"})
-if err != nil {
- panic(err)
-}
-```
-
-
-
-
-```rust
-consumer.subscribe(["tmq_meters"]).await?;
-```
-
-
-
-
-
-```python
-consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
-```
-
-
-
-
-
-```js
-// Create a list of subscribed topics
-let topics = ['topic_test']
-
-// Enable subscription
-consumer.subscribe(topics);
-```
-
-
-
-
-
-```csharp
-// Create a list of subscribed topics
-List topics = new List();
-topics.add("tmq_topic");
-// Enable subscription
-consumer.Subscribe(topics);
-```
-
-
-
-
-
-## Consume messages
-
-The following code demonstrates how to consume the messages in a queue.
-
-
-
-
-```c
-## Consume data
-while (running) {
- TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
- msg_process(msg);
-}
-```
-
-The `while` loop obtains a message each time it calls `tmq_consumer_poll()`. This message is exactly the same as the result returned by a query, and the same deserialization API can be used on it.
-
-
-
-
-```java
-while(running){
- ConsumerRecords meters = consumer.poll(Duration.ofMillis(100));
- for (Meters meter : meters) {
- processMsg(meter);
- }
-}
-```
-
-
-
-
-
-```go
-for {
- result, err := consumer.Poll(time.Second)
- if err != nil {
- panic(err)
- }
- fmt.Println(result)
- consumer.Commit(context.Background(), result.Message)
- consumer.FreeMessage(result.Message)
-}
-```
-
-
-
-
-
-```rust
-{
- let mut stream = consumer.stream();
-
- while let Some((offset, message)) = stream.try_next().await? {
- // get information from offset
-
- // the topic
- let topic = offset.topic();
- // the vgroup id, like partition id in kafka.
- let vgroup_id = offset.vgroup_id();
- println!("* in vgroup id {vgroup_id} of topic {topic}\n");
-
- if let Some(data) = message.into_data() {
- while let Some(block) = data.fetch_raw_block().await? {
- // one block for one table, get table name if needed
- let name = block.table_name();
- let records: Vec = block.deserialize().try_collect()?;
- println!(
- "** table: {}, got {} records: {:#?}\n",
- name.unwrap(),
- records.len(),
- records
- );
- }
- }
- consumer.commit(offset).await?;
- }
-}
-```
-
-
-
-
-```python
-for msg in consumer:
- for row in msg:
- print(row)
-```
-
-
-
-
-
-```js
-while(true){
- msg = consumer.consume(200);
- // process message(consumeResult)
- console.log(msg.topicPartition);
- console.log(msg.block);
- console.log(msg.fields)
-}
-```
-
-
-
-
-
-```csharp
-## Consume data
-while (true)
-{
- var consumerRes = consumer.Consume(100);
- // process ConsumeResult
- ProcessMsg(consumerRes);
- consumer.Commit(consumerRes);
-}
-```
-
-
-
-
-
-## Subscribe to a Topic
-
-A single consumer can subscribe to multiple topics.
-
-
-
-
-```c
-// Create a list of subscribed topics
-tmq_list_t* topicList = tmq_list_new();
-tmq_list_append(topicList, "topicName");
-// Enable subscription
-tmq_subscribe(tmq, topicList);
-tmq_list_destroy(topicList);
-
-```
-
-
-
-
-```java
-List topics = new ArrayList<>();
-topics.add("tmq_topic");
-consumer.subscribe(topics);
-```
-
-
-
-
-```go
-consumer, err := tmq.NewConsumer(config)
-if err != nil {
- panic(err)
-}
-err = consumer.Subscribe([]string{"example_tmq_topic"})
-if err != nil {
- panic(err)
-}
-```
-
-
-
-
-```rust
-consumer.subscribe(["tmq_meters"]).await?;
-```
-
-
-
-
-
-```python
-consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
-```
-
-
-
-
-
-```js
-// Create a list of subscribed topics
-let topics = ['topic_test']
-
-// Enable subscription
-consumer.subscribe(topics);
-```
-
-
-
-
-
-```csharp
-// Create a list of subscribed topics
-List topics = new List();
-topics.add("tmq_topic");
-// Enable subscription
-consumer.Subscribe(topics);
-```
-
-
-
-
-
-
-## Consume Data
-
-The following code demonstrates how to consume the messages in a queue.
-
-
-
-
-```c
-## Consume data
-while (running) {
- TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
- msg_process(msg);
-}
-```
-
-The `while` loop obtains a message each time it calls `tmq_consumer_poll()`. This message is exactly the same as the result returned by a query, and the same deserialization API can be used on it.
-
-
-
-
-```java
-while(running){
- ConsumerRecords meters = consumer.poll(Duration.ofMillis(100));
- for (Meters meter : meters) {
- processMsg(meter);
- }
-}
-```
-
-
-
-
-
-```go
-for {
- result, err := consumer.Poll(time.Second)
- if err != nil {
- panic(err)
- }
- fmt.Println(result)
- consumer.Commit(context.Background(), result.Message)
- consumer.FreeMessage(result.Message)
-}
-```
-
-
-
-
-
-```rust
-{
- let mut stream = consumer.stream();
-
- while let Some((offset, message)) = stream.try_next().await? {
- // get information from offset
-
- // the topic
- let topic = offset.topic();
- // the vgroup id, like partition id in kafka.
- let vgroup_id = offset.vgroup_id();
- println!("* in vgroup id {vgroup_id} of topic {topic}\n");
-
- if let Some(data) = message.into_data() {
- while let Some(block) = data.fetch_raw_block().await? {
- // one block for one table, get table name if needed
- let name = block.table_name();
- let records: Vec = block.deserialize().try_collect()?;
- println!(
- "** table: {}, got {} records: {:#?}\n",
- name.unwrap(),
- records.len(),
- records
- );
- }
- }
- consumer.commit(offset).await?;
- }
-}
-```
-
-
-
-
-```python
-for msg in consumer:
- for row in msg:
- print(row)
-```
-
-
-
-
-
-```js
-while(true){
- msg = consumer.consume(200);
- // process message(consumeResult)
- console.log(msg.topicPartition);
- console.log(msg.block);
- console.log(msg.fields)
-}
-```
-
-
-
-
-
-```csharp
-## Consume data
-while (true)
-{
- var consumerRes = consumer.Consume(100);
- // process ConsumeResult
- ProcessMsg(consumerRes);
- consumer.Commit(consumerRes);
-}
-```
-
-
-
-
-
-## Close the consumer
-
-After message consumption is finished, the consumer is unsubscribed.
-
-
-
-
-```c
-/* Unsubscribe */
-tmq_unsubscribe(tmq);
-
-/* Close consumer object */
-tmq_consumer_close(tmq);
-```
-
-
-
-
-```java
-/* Unsubscribe */
-consumer.unsubscribe();
-
-/* Close consumer */
-consumer.close();
-```
-
-
-
-
-
-```go
-consumer.Close()
-```
-
-
-
-
-
-```rust
-consumer.unsubscribe().await;
-```
-
-
-
-
-
-```py
-# Unsubscribe
-consumer.unsubscribe()
-# Close consumer
-consumer.close()
-```
-
-
-
-
-```js
-consumer.unsubscribe();
-consumer.close();
-```
-
-
-
-
-
-```csharp
-// Unsubscribe
-consumer.Unsubscribe();
-
-// Close consumer
-consumer.Close();
-```
-
-
-
-
-
-
-## Close Consumer
-
-After message consumption is finished, the consumer is unsubscribed.
-
-
-
-
-```c
-/* Unsubscribe */
-tmq_unsubscribe(tmq);
-
-/* Close consumer object */
-tmq_consumer_close(tmq);
-```
-
-
-
-
-```java
-/* Unsubscribe */
-consumer.unsubscribe();
-
-/* Close consumer */
-consumer.close();
-```
-
-
-
-
-
-```go
-consumer.Close()
-```
-
-
-
-
-
-```rust
-consumer.unsubscribe().await;
-```
-
-
-
-
-
-```py
-# Unsubscribe
-consumer.unsubscribe()
-# Close consumer
-consumer.close()
-```
-
-
-
-
-```js
-consumer.unsubscribe();
-consumer.close();
-```
-
-
-
-
-
-```csharp
-// Unsubscribe
-consumer.Unsubscribe();
-
-// Close consumer
-consumer.Close();
-```
-
-
-
-
-
-## Delete Topic
-
-Once a topic becomes useless, it can be deleted.
-
-You can delete topics that are no longer useful. Note that you must unsubscribe all consumers from a topic before deleting it.
-
-```sql
-/* Delete topic/
-DROP TOPIC topic_name;
-```
-
-## Check Status
-
-At any time, you can check the status of existing topics and consumers.
-
-1. Query all existing topics.
-
-```sql
-SHOW TOPICS;
-```
-
-2. Query the status and subscribed topics of all consumers.
-
-```sql
-SHOW CONSUMERS;
-```
\ No newline at end of file
diff --git a/docs/en/09-data-out/04-taosdump.md b/docs/en/09-data-out/04-taosdump.md
index 4df87001702dab3d7e84ce0cae13751a1406d650..3a4198e53bf0e67b33ab8ed81dc834c3eb2e7548 100644
--- a/docs/en/09-data-out/04-taosdump.md
+++ b/docs/en/09-data-out/04-taosdump.md
@@ -1,7 +1,7 @@
---
sidebar_label: taosDump
title: Dump Data Using taosDump
-description: Introduces how to dump data from TDengine into files using taosDump
+description: Dump data from TDengine into files using taosDump
---
# taosDump
@@ -18,11 +18,7 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
## Installation
-There are two ways to install taosdump:
-
-- Install the taosTools official installer. Please find taosTools from [All download links](https://www.tdengine.com/all-downloads) page and download and install it.
-
-- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
+Please refer to [Install taosTools](https://docs.tdengine.com/cloud/tools/taosdump/#installation).
## Common usage scenarios
@@ -32,7 +28,7 @@ There are two ways to install taosdump:
2. backup multiple specified databases: use `-D db1,db2,... ` parameters;
3. back up some super or normal tables in the specified database: use `-dbname stbname1 stbname2 tbname1 tbname2 ... ` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is the data that TDengine runs itself, and the taosdump will not back up the log database by default. If users need to back up the log database, users can use the `-a` or `-allow-sys` command-line parameter.
-5. Loose mode backup: taosdump version 1.4.1 onwards provides `-n` and `-L` parameters for backing up data without using escape characters and "loose" mode, which can reduce the number of backups if table names, column names, tag names do not use escape characters. This can also reduce the backup data time and backup data footprint. If you are unsure about using `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
+5. Loose mode backup: taosdump version 1.4.1 onwards provides `-n` and `-L` parameters for backing up data without using escape characters and "loose" mode, which can reduce the number of backups if table names, column names, tag names do not use escape characters. This can also reduce the backup data time and backup data footprint. If you are unsure about using `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escaped characters.
:::tip
diff --git a/docs/en/09-data-out/05-prometheus.md b/docs/en/09-data-out/05-prometheus.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed7ae626261f9ba754f5104f89041ec7ae421c52
--- /dev/null
+++ b/docs/en/09-data-out/05-prometheus.md
@@ -0,0 +1,34 @@
+---
+sidebar_label: Prometheus
+title: Prometheus remote read
+description: Prometheus remote_read from TDengine cloud server
+---
+
+Prometheus is a widely used open-source monitoring and alerting system. It joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, and it has a very active developer and user community.
+
+Prometheus provides a `remote_read` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient querying, TDengine also supports this interface, so that data stored in TDengine can be queried via `remote_read`, taking full advantage of TDengine's efficient query performance and clustering capabilities for time-series data.
+
+## Install Prometheus
+
+Please refer to [Install Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus#install-prometheus).
+
+## Configure Prometheus
+
+Please refer to [Configure Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#configure-prometheus).
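+
+As a rough sketch, the relevant part of `prometheus.yml` typically looks like the snippet below. The exact endpoint URL and token are shown on the linked page; the path used here is only an assumption following the taosAdapter convention, and the placeholders must be replaced with real values:
+
+```yaml
+# prometheus.yml (sketch): point remote_read at the TDengine endpoint
+remote_read:
+  - url: "<cloud_url>/prometheus/v1/remote_read/<database>?token=<cloud_token>"
+    read_recent: true  # also serve recent samples from remote storage
+```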
+
+## Start Prometheus
+
+Please refer to [Start Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#start-prometheus).
+
+## Verify Remote Read
+
+Let's retrieve some metrics from TDengine Cloud via the Prometheus web server. Browse to the Prometheus web UI and use the "Graph" tab.
+
+Enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:
+
+```
+rate(prometheus_tsdb_head_chunks_created_total[1m])
+```
+
+![TDengine Cloud remote read](prometheus_read.webp)
+
+![Prometheus data](prometheus_data.webp)
diff --git a/docs/en/09-data-out/_sub_cs.mdx b/docs/en/09-data-out/_sub_cs.mdx
index a435ea0273c94cbe75eaf7431e1a9c39d49d92e3..694970e7eb8415d991389b68727012592113af4c 100644
--- a/docs/en/09-data-out/_sub_cs.mdx
+++ b/docs/en/09-data-out/_sub_cs.mdx
@@ -1,3 +1,3 @@
```csharp
-{{#include docs/examples/csharp/SubscribeDemo.cs}}
+// {{#include docs/examples/csharp/SubscribeDemo.cs}}
```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_rust.mdx b/docs/en/09-data-out/_sub_rust.mdx
index 0021666a7024a9b63d6b9c38bf8a57b6eded6d66..eb06c8f18c3e0f2e908a2d8d9fad9b0e73b866a2 100644
--- a/docs/en/09-data-out/_sub_rust.mdx
+++ b/docs/en/09-data-out/_sub_rust.mdx
@@ -1,3 +1,3 @@
```rust
-{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
+{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
diff --git a/docs/en/09-data-out/index.md b/docs/en/09-data-out/index.md
index 08363491edebf8e098a5d256d369c525363274e4..5c620b1f38f6f045a5bb7e79cf2cd609401f7015 100644
--- a/docs/en/09-data-out/index.md
+++ b/docs/en/09-data-out/index.md
@@ -4,4 +4,4 @@ title: Get Data Out of TDengine
description: A number of ways getting data out of TDengine.
---
-This chapter introduces how to get data out of TDengine cloud service. Besides normal query using SQL, users can use data subscription which is provided by the message queue component inside TDengine to access the data stored in TDengine. `taosdump`, which is a tool provided by TDengine, can be used to dump the data stored in TDengine cloud service into files. `taosX`, which is another tool provided by TDengine, can be used to sync up the data in one TDengine cloud service into another.
\ No newline at end of file
+This chapter introduces how to get data out of TDengine cloud service. Besides normal queries using SQL, you can use [data subscription](../tmq), which is provided by the message queue component inside TDengine, to access the data stored in TDengine. TDengine provides [connectors](../programming/connector) for application programmers to access the data stored in TDengine, and tools such as [taosdump](../tools/taosdump), which dumps the data stored in TDengine cloud service into files, and `taosX`, which syncs the data in one TDengine cloud service into another. Furthermore, third-party tools like Prometheus can also be used to read data out of TDengine.
diff --git a/docs/en/09-data-out/prometheus_data.webp b/docs/en/09-data-out/prometheus_data.webp
new file mode 100644
index 0000000000000000000000000000000000000000..760890656daae09b9127d52c03486ce9b2bb0913
Binary files /dev/null and b/docs/en/09-data-out/prometheus_data.webp differ
diff --git a/docs/en/09-data-out/prometheus_read.webp b/docs/en/09-data-out/prometheus_read.webp
new file mode 100644
index 0000000000000000000000000000000000000000..2c91aa6fb8df897effddb4bfab3d522b6975ed1a
Binary files /dev/null and b/docs/en/09-data-out/prometheus_read.webp differ
diff --git a/docs/en/10-visual/index.md b/docs/en/10-visual/index.md
index a69e2be8fe60bc4bf1c62c1115061e6bf9033990..a995315a5a2938e8197118b9df6778ed304ee267 100644
--- a/docs/en/10-visual/index.md
+++ b/docs/en/10-visual/index.md
@@ -1,6 +1,6 @@
---
sidebar_label: Visualization
-sidebar_title: Visualization
+title: Visualization
description: View TDengine in visual ways.
---
diff --git a/docs/en/11-tmq.md b/docs/en/11-tmq.md
new file mode 100644
index 0000000000000000000000000000000000000000..8208925fea1ee1b219d1738ff607bddece968f3d
--- /dev/null
+++ b/docs/en/11-tmq.md
@@ -0,0 +1,158 @@
+---
+sidebar_label: Subscription
+title: Data Subscription
+description: Use data subscription to get data from TDengine.
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+This topic introduces how to read data out of TDengine using data subscription, an advanced feature of TDengine. To access the data in TDengine through data subscription, you need to create a topic, create a consumer, subscribe the consumer to the topic, and consume data. In this document we briefly explain these main steps of data subscription.
+
+## Create Topic
+
+A topic can be created on a database, on some selected columns, or on a supertable.
+
+### Topic on Columns
+
+The most common way to create a topic is to create it on some specifically selected columns. The syntax is as follows:
+
+```sql
+CREATE TOPIC topic_name AS subquery;
+```
+
+You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
+
+- The schema of topics created in this manner is determined by the subscribed data.
+- You cannot modify (`ALTER MODIFY`) or delete (`ALTER DROP`) columns or tags that are used in a subscription or calculation.
+- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
+
+For example:
+
+```sql
+CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
+```
+
+### Topic on SuperTable
+
+Syntax:
+
+```sql
+CREATE TOPIC topic_name AS STABLE stb_name;
+```
+
+Creating a topic in this manner differs from a `SELECT * FROM stb_name` statement as follows:
+
+- The table schema can be modified.
+- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
+- A different table schema may exist for every data block to be processed.
+- The data returned does not include tags.
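+
+For example, a topic on the supertable `tmqdb.stb` used in the previous example could be created like this (the topic name is illustrative):
+
+```sql
+CREATE TOPIC topic_stb AS STABLE tmqdb.stb;
+```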
+
+### Topic on Database
+
+Syntax:
+
+```sql
+CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
+```
+
+This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
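+
+For example (the topic and database names are illustrative):
+
+```sql
+CREATE TOPIC topic_db WITH META AS DATABASE tmqdb;
+```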
+
+## Programming Model
+
+To subscribe to the data of a created topic, the client program needs to follow the programming model described in this section.
+
+1. Create a consumer
+
+To create a consumer, you must use the APIs provided by TDengine connectors. See the sample code below for the connectors of different languages.
+
+2. Subscribe to a topic
+
+A single consumer can subscribe to multiple topics.
+
+3. Consume data
+
+The consumer polls messages from the subscribed topics and processes them, as sketched after this list.
+
+4. Close the consumer
+
+After message consumption is finished, the consumer unsubscribes and is closed.
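+
+As a minimal sketch of this model, the Python snippet below strings the four steps together using the `TaosConsumer` API shown in the samples; the import path, topic name, and group id are assumptions to be adapted to your connector version:
+
+```python
+from taos.tmq import TaosConsumer  # assumed import path for the taospy TMQ consumer
+
+# steps 1 and 2: create a consumer and subscribe it to a topic
+consumer = TaosConsumer('tmq_meters', group_id='g1')
+
+# step 3: consume data; each message yields the subscribed rows
+for msg in consumer:
+    for row in msg:
+        print(row)
+
+# step 4: unsubscribe and close once consumption is finished
+consumer.unsubscribe()
+consumer.close()
+```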
+
+## Sample Code
+
+
+
+
+
+Will be available soon
+
+
+
+
+Will be available soon
+
+
+
+
+
+Will be available soon
+
+
+
+
+
+```rust
+{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
+```
+
+
+
+
+
+Will be available soon
+
+
+
+
+
+Will be available soon
+
+
+
+
+
+Will be available soon
+
+
+
+
+
+## Delete Topic
+
+You can delete topics that are no longer needed. Note that you must unsubscribe all consumers from a topic before deleting it.
+
+```sql
+DROP TOPIC topic_name;
+```
+
+## Check Status
+
+At any time, you can check the status of existing topics and consumers.
+
+1. Query all existing topics.
+
+```sql
+SHOW TOPICS;
+```
+
+2. Query the status and subscribed topics of all consumers.
+
+```sql
+SHOW CONSUMERS;
+```
diff --git a/docs/en/05-develop/04-stream.md b/docs/en/12-stream.md
similarity index 99%
rename from docs/en/05-develop/04-stream.md
rename to docs/en/12-stream.md
index 36f903ee9a4f2d210e63d0b79e702bc199f790ed..1f44a44e24b4c7e54eaa0195d9ea98bb10a79736 100644
--- a/docs/en/05-develop/04-stream.md
+++ b/docs/en/12-stream.md
@@ -22,7 +22,6 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subq
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
- IGNORE EXPIRED [0 | 1]
}
```
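
For example, a stream that continuously computes one-minute averages could be created like the sketch below (the table, column, and stream names are illustrative, and the `_wstart` window pseudocolumn is assumed):

```sql
CREATE STREAM avg_vol_s TRIGGER AT_ONCE INTO avg_vol AS
  SELECT _wstart, count(*), avg(voltage)
  FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```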
diff --git a/docs/en/13-replication/index.md b/docs/en/13-replication/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd50feacdaf60b57d3502113285e3bc9948582f7
--- /dev/null
+++ b/docs/en/13-replication/index.md
@@ -0,0 +1,9 @@
+---
+sidebar_label: Data Replication
+title: Data Replication
+description: Replicate data between TDengine cloud services
+---
+
+TDengine provides full support for data replication. You can replicate data from TDengine cloud to a private TDengine instance, from a private TDengine instance to TDengine cloud, or from one cloud platform to another, regardless of which cloud or region the two services reside in.
+
+TDengine also provides database backup for the enterprise plan.
diff --git a/docs/en/05-develop/01-connect/01-python.md b/docs/en/15-programming/01-connect/01-python.md
similarity index 74%
rename from docs/en/05-develop/01-connect/01-python.md
rename to docs/en/15-programming/01-connect/01-python.md
index 9781bfe1ece242112ccffd8b91ad9dfad1bd8323..7f23d93e92966f09b58af251d01c5fa3bbad918e 100644
--- a/docs/en/05-develop/01-connect/01-python.md
+++ b/docs/en/15-programming/01-connect/01-python.md
@@ -1,14 +1,17 @@
---
sidebar_label: Python
title: Connect with Python Connector
+description: Connect to TDengine cloud service using Python connector
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Install Connector
-First, you need to install the `taospy` module version >= `2.3.3`. Run the command below in your terminal.
+First, you need to install the `taospy` module version >= `2.6.2`. Run the command below in your terminal.
@@ -78,3 +81,7 @@ Copy code bellow to your editor and run it.
```python
{{#include docs/examples/python/develop_tutorial.py:connect}}
```
+
+For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
diff --git a/docs/en/05-develop/01-connect/02-java.md b/docs/en/15-programming/01-connect/02-java.md
similarity index 67%
rename from docs/en/05-develop/01-connect/02-java.md
rename to docs/en/15-programming/01-connect/02-java.md
index 651de9e553529b4cce01c45fcae45a56d0d85a6f..113311bc7a45b924e201390ff1a4451e1b7434ed 100644
--- a/docs/en/05-develop/01-connect/02-java.md
+++ b/docs/en/15-programming/01-connect/02-java.md
@@ -1,11 +1,14 @@
---
sidebar_label: Java
title: Connect with Java Connector
+description: Connect to TDengine cloud service using Java connector
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Add Dependency
@@ -13,11 +16,7 @@ import TabItem from '@theme/TabItem';
```xml title="pom.xml"
-
- com.taosdata.jdbc
- taos-jdbcdriver
- 2.0.39
-
+{{#include docs/examples/java/pom.xml:dep}}
```
@@ -25,7 +24,7 @@ import TabItem from '@theme/TabItem';
```groovy title="build.gradle"
dependencies {
- implementation 'com.taosdata.jdbc:taos-jdbcdriver:2.0.39'
+ implementation 'com.taosdata.jdbc:taos-jdbcdriver:3.0.0.0'
}
```
@@ -67,7 +66,7 @@ Alternatively, you can set environment variable in your IDE's run configurations
:::note
Replace with the real JDBC URL; it will look like: `jdbc:TAOS-RS://example.com?usessl=true&token=xxxx`.
-To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Connector" and then select "Java".
+To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Data Insert" on the left menu.
:::
## Connect
@@ -78,3 +77,6 @@ Code bellow get JDBC URL from environment variables first and then create a `Con
{{#include docs/examples/java/src/main/java/com/taos/example/ConnectCloudExample.java:connect}}
```
+The client connection is then established. For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
diff --git a/docs/en/05-develop/01-connect/03-go.md b/docs/en/15-programming/01-connect/03-go.md
similarity index 69%
rename from docs/en/05-develop/01-connect/03-go.md
rename to docs/en/15-programming/01-connect/03-go.md
index 3caf7ca7babcc5f345010dca50ed9e431cc612de..c377366affb8c358b8faac42db5ec6a70fe2041b 100644
--- a/docs/en/05-develop/01-connect/03-go.md
+++ b/docs/en/15-programming/01-connect/03-go.md
@@ -1,11 +1,14 @@
---
sidebar_label: Go
title: Connect with Go Connector
+description: Connect to TDengine cloud service using Go connector
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Initialize Module
```
@@ -52,7 +55,7 @@ $env:TDENGINE_GO_DSN=""
:::note
Replace with the real value; the format should be `https()/?token=`.
-To obtain the value of `goDSN`, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Connector" and then select "Go".
+To obtain the value of `goDSN`, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Data In" on the left menu.
:::
@@ -76,3 +79,7 @@ Finally, test the connection:
```
go run main.go
```
+
+The client connection is then established. For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
diff --git a/docs/en/05-develop/01-connect/04-rust.md b/docs/en/15-programming/01-connect/04-rust.md
similarity index 60%
rename from docs/en/05-develop/01-connect/04-rust.md
rename to docs/en/15-programming/01-connect/04-rust.md
index 2c195d5d58448244669228199570c5d64e9efd82..23e51383e5042766b1decf428aebbea380d75489 100644
--- a/docs/en/05-develop/01-connect/04-rust.md
+++ b/docs/en/15-programming/01-connect/04-rust.md
@@ -1,10 +1,13 @@
---
sidebar_label: Rust
title: Connect with Rust Connector
+description: Connect to TDengine cloud service using Rust connector
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Create Project
```
@@ -15,7 +18,15 @@ cargo new --bin cloud-example
Add dependency to `Cargo.toml`.
```toml title="Cargo.toml"
-{{#include docs/examples/rust/cloud-example/Cargo.toml}}
+[package]
+name = "cloud-example"
+version = "0.1.0"
+edition = "2021"
+
+[dependencies]
+taos = { version = "*", default-features = false, features = ["ws"] }
+tokio = { version = "1", features = ["full"]}
+anyhow = "1.0.0"
```
## Config
@@ -61,5 +72,7 @@ Copy following code to `main.rs`.
{{#include docs/examples/rust/cloud-example/src/main.rs}}
```
-Then you can execute `cargo run` to test the connection.
+Then you can execute `cargo run` to test the connection. For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
diff --git a/docs/en/05-develop/01-connect/05-node.md b/docs/en/15-programming/01-connect/05-node.md
similarity index 69%
rename from docs/en/05-develop/01-connect/05-node.md
rename to docs/en/15-programming/01-connect/05-node.md
index 5f13df643cef9c0833f180871f44ab920297bc12..efd781fc460dbe9a26f2ab350de6b63e80974220 100644
--- a/docs/en/05-develop/01-connect/05-node.md
+++ b/docs/en/15-programming/01-connect/05-node.md
@@ -1,14 +1,17 @@
---
sidebar_label: Node.js
title: Connect with Node.js Connector
+description: Connect to TDengine cloud service using Node.JS connector
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Install Connector
```bash
-npm i td2.0-rest-connector
+npm install @tdengine/rest
```
## Config
@@ -54,4 +57,8 @@ To obtain the value of cloud token and URL, please log in [TDengine Cloud](https
```javascript
{{#include docs/examples/node/connect.js}}
-```
\ No newline at end of file
+```
+
+For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
diff --git a/docs/en/15-programming/01-connect/06-csharp.md b/docs/en/15-programming/01-connect/06-csharp.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c67a345b1ae2a180c84c3ce55ee06cc6d5e7709
--- /dev/null
+++ b/docs/en/15-programming/01-connect/06-csharp.md
@@ -0,0 +1,69 @@
+---
+sidebar_label: C#
+title: Connect with C# Connector
+description: Connect to TDengine cloud service using C# connector
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+## Create Project
+
+```bash
+dotnet new console -o example
+```
+
+## Add C# TDengine Driver class lib
+
+```bash
+cd example
+dotnet add package TDengine.Connector
+```
+
+## Config
+
+Run this command in your terminal to save the TDengine cloud DSN as an environment variable:
+
+
+
+
+```bash
+export TDENGINE_CLOUD_DSN=""
+```
+
+
+
+
+```bash
+set TDENGINE_CLOUD_DSN=""
+```
+
+
+
+
+```powershell
+$env:TDENGINE_CLOUD_DSN=""
+```
+
+
+
+
+
+
+:::note
+Replace with the real TDengine cloud DSN. To obtain its value, please log in [TDengine Cloud](https://cloud.tdengine.com), click "Connector", and then select "C#".
+
+:::
+
+
+## Connect
+
+
+```C#
+{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
+```
+
+The client connection is then established. For how to write data and query data, please refer to the Insert Data and Query Data sections.
+
+For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
diff --git a/docs/en/05-develop/01-connect/09-rest-api.md b/docs/en/15-programming/01-connect/09-rest-api.md
similarity index 89%
rename from docs/en/05-develop/01-connect/09-rest-api.md
rename to docs/en/15-programming/01-connect/09-rest-api.md
index f64a47dcc58bc073e2b80a7d46ee9404bdf427be..e6da5318fec04992fb9c600cb5fc500e7b13570b 100644
--- a/docs/en/05-develop/01-connect/09-rest-api.md
+++ b/docs/en/15-programming/01-connect/09-rest-api.md
@@ -1,11 +1,14 @@
---
sidebar_label: REST API
title: REST API
+description: Connect to TDengine Cloud Service through RESTful API
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
## Config
Run this command in your terminal to save the TDengine cloud token and URL as variables:
diff --git a/docs/en/05-develop/01-connect/_category_.yml b/docs/en/15-programming/01-connect/_category_.yml
similarity index 100%
rename from docs/en/05-develop/01-connect/_category_.yml
rename to docs/en/15-programming/01-connect/_category_.yml
diff --git a/docs/en/15-programming/01-connect/index.md b/docs/en/15-programming/01-connect/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..5937cd371618878b554cc57f9a074e9c86942ad7
--- /dev/null
+++ b/docs/en/15-programming/01-connect/index.md
@@ -0,0 +1,14 @@
+---
+sidebar_label: Quick Start
+title: Connect to TDengine Cloud Service
+description: Quick start of using TDengine connectors to connect to TDengine cloud service
+---
+
+This section briefly describes how to connect to TDengine cloud service using the connectors provided by TDengine so that programmers can get started quickly.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+
+```
diff --git a/docs/en/05-develop/02-model.md b/docs/en/15-programming/02-model.md
similarity index 99%
rename from docs/en/05-develop/02-model.md
rename to docs/en/15-programming/02-model.md
index 86853aaaa3f7285fe042a892e2ec903d57894111..99730707cdb69fb4fee15c4ff33ff99e4a6c177f 100644
--- a/docs/en/05-develop/02-model.md
+++ b/docs/en/15-programming/02-model.md
@@ -1,5 +1,6 @@
---
title: Data Model
+description: Typical Data Model used in TDengine
---
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
diff --git a/docs/en/15-programming/03-insert.md b/docs/en/15-programming/03-insert.md
new file mode 100644
index 0000000000000000000000000000000000000000..4cb731e681fa71dc1baee4ae9206d3a2fac1af57
--- /dev/null
+++ b/docs/en/15-programming/03-insert.md
@@ -0,0 +1,7 @@
+---
+sidebar_label: Insert
+title: Insert Data Into TDengine
+description: Programming Guide for Inserting Data into TDengine
+---
+
+To quickly get started with writing data into TDengine, please refer to [Insert Data](../../data-in/insert-data).
\ No newline at end of file
diff --git a/docs/en/15-programming/04-query.md b/docs/en/15-programming/04-query.md
new file mode 100644
index 0000000000000000000000000000000000000000..91ddae8fbbe5154c81f9276e3aa7c1e8f284dedb
--- /dev/null
+++ b/docs/en/15-programming/04-query.md
@@ -0,0 +1,7 @@
+---
+sidebar_label: Query
+title: Query Data From TDengine
+description: Programming Guide for Querying Data
+---
+
+To quickly get started with querying data from TDengine, please refer to [Query Data](../../data-out/query-data).
\ No newline at end of file
diff --git a/docs/en/15-connector/01-python.md b/docs/en/15-programming/06-connector/01-python.md
similarity index 88%
rename from docs/en/15-connector/01-python.md
rename to docs/en/15-programming/06-connector/01-python.md
index b26bf06187c863952c9999cc6389cf0b2c679706..7875f1ca1c6e3f5a4b6a13a5db233d7aae315dbb 100644
--- a/docs/en/15-connector/01-python.md
+++ b/docs/en/15-programming/06-connector/01-python.md
@@ -1,6 +1,7 @@
---
sidebar_label: Python
title: TDengine Python Connector
+description: Detailed guide for Python Connector
---
`taospy` is the official Python connector for TDengine. `taospy` wraps the [REST interface](/reference/rest-api) of TDengine. Additionally `taospy` provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
@@ -78,7 +79,11 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
| Connector version | Important Update | Release date |
| ----------------- | ----------------------------------------- | ------------ |
-| 2.4.0 | add execute method to TaosRestConnection | 2022-07-18 |
+| 2.6.2 | fix ci script | 2022-08-18 |
+| 2.5.2 | fix taos-ws-py python version dependency | 2022-08-12 |
+| 2.5.1 | (rest): add timezone option | 2022-08-11 |
+| 2.5.0 | add taosws module | 2022-08-10 |
+| 2.4.0 | add execute method to TaosRestConnection | 2022-07-18 |
| 2.3.3 | support connect to TDengine Cloud Service | 2022-06-06 |
[**Release Notes**](https://github.com/taosdata/taos-connector-python/releases)
diff --git a/docs/en/15-connector/02-java.md b/docs/en/15-programming/06-connector/02-java.md
similarity index 99%
rename from docs/en/15-connector/02-java.md
rename to docs/en/15-programming/06-connector/02-java.md
index 788285cb0980307fc3303c0a8354d8ce8b464399..d16b21621b40c68069609a4c5aa4b672d212a420 100644
--- a/docs/en/15-connector/02-java.md
+++ b/docs/en/15-programming/06-connector/02-java.md
@@ -3,6 +3,7 @@ toc_max_heading_level: 4
sidebar_position: 2
sidebar_label: Java
title: TDengine Java Connector
+description: Detailed guide for Java Connector
---
import Tabs from '@theme/Tabs';
diff --git a/docs/en/15-connector/03-go.md b/docs/en/15-programming/06-connector/03-go.md
similarity index 98%
rename from docs/en/15-connector/03-go.md
rename to docs/en/15-programming/06-connector/03-go.md
index 62a0486f1a4b60e7262b943e8c956f422d3b4782..9de2b3141237a8dd1f3867312d3914f25f449875 100644
--- a/docs/en/15-connector/03-go.md
+++ b/docs/en/15-programming/06-connector/03-go.md
@@ -1,6 +1,7 @@
---
sidebar_label: Go
title: TDengine Go Connector
+description: Detailed guide for Go Connector
---
`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.
diff --git a/docs/en/15-connector/04-rust.md b/docs/en/15-programming/06-connector/04-rust.md
similarity index 98%
rename from docs/en/15-connector/04-rust.md
rename to docs/en/15-programming/06-connector/04-rust.md
index 8aead88139f6fed095d3ef8b9365cc16e664a0e6..99765329f488b7650e119dd07e1bd8199b673a4d 100644
--- a/docs/en/15-connector/04-rust.md
+++ b/docs/en/15-programming/06-connector/04-rust.md
@@ -3,6 +3,7 @@ toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
+description: Detailed guide for Rust Connector
---
diff --git a/docs/en/15-connector/05-node.md b/docs/en/15-programming/06-connector/05-node.md
similarity index 84%
rename from docs/en/15-connector/05-node.md
rename to docs/en/15-programming/06-connector/05-node.md
index 91ab64136c53b679ead6e40135ae50fbdd87b677..096d65c255eef632424002cb04abbc02e90fda04 100644
--- a/docs/en/15-connector/05-node.md
+++ b/docs/en/15-programming/06-connector/05-node.md
@@ -1,6 +1,7 @@
---
-sidebar_label: Node.js
-title: TDengine Node.js Connector
+sidebar_label: Node.JS
+title: TDengine Node.JS Connector
+description: Detailed guide for Node.JS Connector
---
`td2.0-rest-connector` is the official Node.js language connector for TDengine. Node.js developers can use it to develop applications that access TDengine instance data. `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST API.
diff --git a/docs/en/15-programming/06-connector/06-csharp.md b/docs/en/15-programming/06-connector/06-csharp.md
new file mode 100644
index 0000000000000000000000000000000000000000..b745a9c970321020caec8b9482376c82980cb422
--- /dev/null
+++ b/docs/en/15-programming/06-connector/06-csharp.md
@@ -0,0 +1,101 @@
+---
+sidebar_label: C#
+title: TDengine C# Connector
+description: Detailed guide for C# Connector
+---
+
+`TDengine.Connector` is the official C# connector for TDengine. C# developers can develop applications to access TDengine instance data.
+
+The source code for `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
+
+## Installation
+
+### Pre-installation
+
+Install the .NET deployment SDK.
+
+### Add TDengine.Connector through Nuget
+
+```bash
+dotnet add package TDengine.Connector
+```
+
+## Establishing a connection
+
+``` XML
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net5.0</TargetFramework>
+    <Nullable>enable</Nullable>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="TDengine.Connector" Version="3.0.1" />
+  </ItemGroup>
+
+</Project>
+```
+
+``` C#
+{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
+```
+
+## Usage examples
+
+### Basic Insert and Query
+
+``` XML
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net5.0</TargetFramework>
+    <Nullable>enable</Nullable>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="TDengine.Connector" Version="3.0.1" />
+  </ItemGroup>
+
+</Project>
+```
+
+```C#
+{{#include docs/examples/csharp/cloud-example/usage/Program.cs}}
+```
+
+### STMT Insert
+
+``` XML
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net5.0</TargetFramework>
+    <Nullable>enable</Nullable>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="TDengine.Connector" Version="3.0.1" />
+  </ItemGroup>
+
+</Project>
+```
+
+```C#
+{{#include docs/examples/csharp/cloud-example/stmt/Program.cs}}
+```
+
+## Important Updates
+
+| TDengine.Connector | Description |
+| ------------------------- | ---------------------------------------------------------------- |
+| 3.0.1 | Supports connecting to TDengine cloud service |
+
+## API Reference
+
+[API Reference](https://docs.taosdata.com/api/connector-csharp/html/860d2ac1-dd52-39c9-e460-0829c4e5a40b.htm)
diff --git a/docs/en/15-connector/09-rest-api.md b/docs/en/15-programming/06-connector/09-rest-api.md
similarity index 99%
rename from docs/en/15-connector/09-rest-api.md
rename to docs/en/15-programming/06-connector/09-rest-api.md
index 3904c957bb44ad689ff35cc056c5a5961cdd3e35..b51d25843c099b5472fb2e2eb05f9148a64d2e0d 100644
--- a/docs/en/15-connector/09-rest-api.md
+++ b/docs/en/15-programming/06-connector/09-rest-api.md
@@ -1,6 +1,7 @@
---
sidebar_label: REST API
title: REST API
+description: Detailed guide for REST API
---
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
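
For example, a query could be sent like the hypothetical sketch below; the `/rest/sql` endpoint is standard, while the `token` query parameter and the placeholders follow the convention used elsewhere in these docs and must be replaced with real values:

```bash
# send a SQL statement in the POST body of the request
curl -L -d "SELECT server_version()" \
  "https://<cloud_url>/rest/sql?token=<cloud_token>"
```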
diff --git a/docs/en/15-connector/index.md b/docs/en/15-programming/06-connector/index.md
similarity index 50%
rename from docs/en/15-connector/index.md
rename to docs/en/15-programming/06-connector/index.md
index 635b8923dd68a694db3b33d90708ca8ff6a2a0b2..f37bf329a84cef8dfae1efb8bffaa0001a9c3d37 100644
--- a/docs/en/15-connector/index.md
+++ b/docs/en/15-programming/06-connector/index.md
@@ -1,5 +1,10 @@
-# Connector
+---
+sidebar_label: Connector
+title: Connector Reference
+description: 'Reference guide for connectors'
+---
+This section is a detailed reference guide of the connectors provided by TDengine.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
diff --git a/docs/en/05-develop/index.md b/docs/en/15-programming/index.md
similarity index 92%
rename from docs/en/05-develop/index.md
rename to docs/en/15-programming/index.md
index b7df5f1c3bea43a53c079dce92736edf2d09d1fd..03770cc9595017485056d2c4f8e0903be3d462d5 100644
--- a/docs/en/05-develop/index.md
+++ b/docs/en/15-programming/index.md
@@ -15,7 +15,7 @@ To develop an application to process time-series data using TDengine, we recomme
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
-This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](../connector/). For more ways to writing data into TDengine, please refer to [Data In](../data-in), for more ways to read data out of TDengine, please refer to [Data Out](../data-out).
+This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](./connector/). For more ways to writing data into TDengine, please refer to [Data In](../data-in), for more ways to read data out of TDengine, please refer to [Data Out](../data-out).
If you encounter any problems during the development process, please click ["Submit an issue"](https://github.com/taosdata/TDengine/issues/new/choose) at the bottom of each page and submit it on GitHub right away.
diff --git a/docs/en/12-taos-sql/01-data-type.md b/docs/en/17-taos-sql/01-data-type.md
similarity index 100%
rename from docs/en/12-taos-sql/01-data-type.md
rename to docs/en/17-taos-sql/01-data-type.md
diff --git a/docs/en/12-taos-sql/02-database.md b/docs/en/17-taos-sql/02-database.md
similarity index 100%
rename from docs/en/12-taos-sql/02-database.md
rename to docs/en/17-taos-sql/02-database.md
diff --git a/docs/en/12-taos-sql/03-table.md b/docs/en/17-taos-sql/03-table.md
similarity index 100%
rename from docs/en/12-taos-sql/03-table.md
rename to docs/en/17-taos-sql/03-table.md
diff --git a/docs/en/12-taos-sql/04-stable.md b/docs/en/17-taos-sql/04-stable.md
similarity index 99%
rename from docs/en/12-taos-sql/04-stable.md
rename to docs/en/17-taos-sql/04-stable.md
index 6a0a0922cce7d9f831f333e4999789798be8d867..dbeda21c2b369abe16fa6560aefc0120854f7bc8 100644
--- a/docs/en/12-taos-sql/04-stable.md
+++ b/docs/en/17-taos-sql/04-stable.md
@@ -1,6 +1,7 @@
---
sidebar_label: Supertable
title: Supertable
+description: Operations about Super Tables.
---
## Create a Supertable
diff --git a/docs/en/12-taos-sql/05-insert.md b/docs/en/17-taos-sql/05-insert.md
similarity index 99%
rename from docs/en/12-taos-sql/05-insert.md
rename to docs/en/17-taos-sql/05-insert.md
index e7d56fb3c734affa92c8c71c190b1132cd89e335..e798d1f804a7a9501a67acbfdf2a1fc081b89fe9 100644
--- a/docs/en/12-taos-sql/05-insert.md
+++ b/docs/en/17-taos-sql/05-insert.md
@@ -1,6 +1,7 @@
---
sidebar_label: Insert
title: Insert
+description: Insert data into TDengine
---
## Syntax
diff --git a/docs/en/12-taos-sql/06-select.md b/docs/en/17-taos-sql/06-select.md
similarity index 99%
rename from docs/en/12-taos-sql/06-select.md
rename to docs/en/17-taos-sql/06-select.md
index 1dd0caed38235d3d10813b2cd74fec6446c5ec24..cf35c5b7554fda3178da3107955eb421cd8a0c84 100644
--- a/docs/en/12-taos-sql/06-select.md
+++ b/docs/en/17-taos-sql/06-select.md
@@ -1,6 +1,7 @@
---
sidebar_label: Select
title: Select
+description: Query Data from TDengine.
---
## Syntax
diff --git a/docs/en/12-taos-sql/08-delete-data.mdx b/docs/en/17-taos-sql/08-delete-data.mdx
similarity index 100%
rename from docs/en/12-taos-sql/08-delete-data.mdx
rename to docs/en/17-taos-sql/08-delete-data.mdx
diff --git a/docs/en/12-taos-sql/10-function.md b/docs/en/17-taos-sql/10-function.md
similarity index 99%
rename from docs/en/12-taos-sql/10-function.md
rename to docs/en/17-taos-sql/10-function.md
index d35fd3109998608475e4e0429265c8ac7274f57d..f19e13f63835147e4ede078612dc24d58df1d5ae 100644
--- a/docs/en/12-taos-sql/10-function.md
+++ b/docs/en/17-taos-sql/10-function.md
@@ -2,6 +2,7 @@
sidebar_label: Functions
title: Functions
toc_max_heading_level: 4
+description: TDengine Built-in Functions.
---
## Single Row Functions
diff --git a/docs/en/12-taos-sql/12-distinguished.md b/docs/en/17-taos-sql/12-distinguished.md
similarity index 99%
rename from docs/en/12-taos-sql/12-distinguished.md
rename to docs/en/17-taos-sql/12-distinguished.md
index 707089abe54fc12bb09de47c1c51af1a32b8cbcd..e816fe615efc544f7d7672bd3e80e0cb3c868f90 100644
--- a/docs/en/12-taos-sql/12-distinguished.md
+++ b/docs/en/17-taos-sql/12-distinguished.md
@@ -1,6 +1,7 @@
---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
+description: TimeSeries Data Specific Queries.
---
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
diff --git a/docs/en/12-taos-sql/13-tmq.md b/docs/en/17-taos-sql/13-tmq.md
similarity index 97%
rename from docs/en/12-taos-sql/13-tmq.md
rename to docs/en/17-taos-sql/13-tmq.md
index befab4f4f01e595564e93ffcfbb0723e13294af0..33122e770ee734015f59ebe734bfb465375c76ac 100644
--- a/docs/en/12-taos-sql/13-tmq.md
+++ b/docs/en/17-taos-sql/13-tmq.md
@@ -1,6 +1,7 @@
---
sidebar_label: Data Subscription
title: Data Subscription
+description: Subscribe Data from TDengine.
---
The information in this document is related to the TDengine data subscription feature.
diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/17-taos-sql/14-stream.md
similarity index 99%
rename from docs/en/12-taos-sql/14-stream.md
rename to docs/en/17-taos-sql/14-stream.md
index fcd78765104af17285b43749969821ceb98da33b..d26adc9c7f5f1606b0649b48336117cbd4f7f2fc 100644
--- a/docs/en/12-taos-sql/14-stream.md
+++ b/docs/en/17-taos-sql/14-stream.md
@@ -1,6 +1,7 @@
---
sidebar_label: Stream Processing
title: Stream Processing
+description: Built-in Stream Processing.
---
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.
diff --git a/docs/en/12-taos-sql/16-operators.md b/docs/en/17-taos-sql/16-operators.md
similarity index 99%
rename from docs/en/12-taos-sql/16-operators.md
rename to docs/en/17-taos-sql/16-operators.md
index c426e2879342e430c61c4f8133aa9f8186888941..8dd1cef5ca8f4d221f946fa112f823645439f232 100644
--- a/docs/en/12-taos-sql/16-operators.md
+++ b/docs/en/17-taos-sql/16-operators.md
@@ -1,6 +1,7 @@
---
sidebar_label: Operators
title: Operators
+description: TDengine Supported Operators
---
## Arithmetic Operators
diff --git a/docs/en/12-taos-sql/17-json.md b/docs/en/17-taos-sql/17-json.md
similarity index 98%
rename from docs/en/12-taos-sql/17-json.md
rename to docs/en/17-taos-sql/17-json.md
index 77f774303316b466a15226f548f84da69be8f92d..1f08197dad683659e09c0be3591f3998565ce538 100644
--- a/docs/en/12-taos-sql/17-json.md
+++ b/docs/en/17-taos-sql/17-json.md
@@ -1,6 +1,7 @@
---
sidebar_label: JSON Type
title: JSON Type
+description: JSON Data Type
---
diff --git a/docs/en/12-taos-sql/18-escape.md b/docs/en/17-taos-sql/18-escape.md
similarity index 97%
rename from docs/en/12-taos-sql/18-escape.md
rename to docs/en/17-taos-sql/18-escape.md
index a2ae40de98be677e599e83a634952a39faeaafbf..872397b29a65967b19699fea8e57ea80a5c3e88a 100644
--- a/docs/en/12-taos-sql/18-escape.md
+++ b/docs/en/17-taos-sql/18-escape.md
@@ -1,5 +1,6 @@
---
title: Escape Characters
+description: How to use Escape
---
## Escape Characters
diff --git a/docs/en/12-taos-sql/19-limit.md b/docs/en/17-taos-sql/19-limit.md
similarity index 98%
rename from docs/en/12-taos-sql/19-limit.md
rename to docs/en/17-taos-sql/19-limit.md
index 0486ea30940cdcb5d034bb730d12c0c120a59cd1..b63cf469b89bf212447ad8dc7dd9eaf2adbcb05b 100644
--- a/docs/en/12-taos-sql/19-limit.md
+++ b/docs/en/17-taos-sql/19-limit.md
@@ -1,6 +1,7 @@
---
-sidebar_label: Name and Size Limits
-title: Name and Size Limits
+sidebar_label: Limits
+title: Limits
+description: Naming Limits
---
## Naming Rules
diff --git a/docs/en/12-taos-sql/20-keywords.md b/docs/en/17-taos-sql/20-keywords.md
similarity index 96%
rename from docs/en/12-taos-sql/20-keywords.md
rename to docs/en/17-taos-sql/20-keywords.md
index 6f166c8034382b0613845d18470556622106e673..9d8d7c47678cc9720a5fbfa03268ca4ae071034e 100644
--- a/docs/en/12-taos-sql/20-keywords.md
+++ b/docs/en/17-taos-sql/20-keywords.md
@@ -1,6 +1,7 @@
---
-sidebar_label: Reserved Keywords
+sidebar_label: Keywords
title: Reserved Keywords
+description: Reserved Keywords in TDengine SQL
---
## Keyword List
diff --git a/docs/en/12-taos-sql/26-udf.md b/docs/en/17-taos-sql/26-udf.md
similarity index 98%
rename from docs/en/12-taos-sql/26-udf.md
rename to docs/en/17-taos-sql/26-udf.md
index 03251067adb3c19aefcf271390c16c939effa432..71603c8804161ce3da4290043b8e611585c12158 100644
--- a/docs/en/12-taos-sql/26-udf.md
+++ b/docs/en/17-taos-sql/26-udf.md
@@ -1,6 +1,7 @@
---
-sidebar_label: User-Defined Functions
+sidebar_label: UDF
title: User-Defined Functions (UDF)
+description: User Defined Functions
---
You can create user-defined functions and import them into TDengine.
diff --git a/docs/en/12-taos-sql/27-index.md b/docs/en/17-taos-sql/27-index.md
similarity index 97%
rename from docs/en/12-taos-sql/27-index.md
rename to docs/en/17-taos-sql/27-index.md
index 7d09bc43ab06932b82019923d4a8fda48cd99c97..7215c26f6a902335607130f91f5d4e576fb69381 100644
--- a/docs/en/12-taos-sql/27-index.md
+++ b/docs/en/17-taos-sql/27-index.md
@@ -1,6 +1,7 @@
---
sidebar_label: Index
title: Using Indices
+description: Use Index to Accelerate Query.
---
TDengine supports SMA and FULLTEXT indexing.
diff --git a/docs/en/12-taos-sql/_category_.yml b/docs/en/17-taos-sql/_category_.yml
similarity index 100%
rename from docs/en/12-taos-sql/_category_.yml
rename to docs/en/17-taos-sql/_category_.yml
diff --git a/docs/en/12-taos-sql/index.md b/docs/en/17-taos-sql/index.md
similarity index 100%
rename from docs/en/12-taos-sql/index.md
rename to docs/en/17-taos-sql/index.md
diff --git a/docs/en/12-taos-sql/timewindow-1.webp b/docs/en/17-taos-sql/timewindow-1.webp
similarity index 100%
rename from docs/en/12-taos-sql/timewindow-1.webp
rename to docs/en/17-taos-sql/timewindow-1.webp
diff --git a/docs/en/12-taos-sql/timewindow-2.webp b/docs/en/17-taos-sql/timewindow-2.webp
similarity index 100%
rename from docs/en/12-taos-sql/timewindow-2.webp
rename to docs/en/17-taos-sql/timewindow-2.webp
diff --git a/docs/en/12-taos-sql/timewindow-3.webp b/docs/en/17-taos-sql/timewindow-3.webp
similarity index 100%
rename from docs/en/12-taos-sql/timewindow-3.webp
rename to docs/en/17-taos-sql/timewindow-3.webp
diff --git a/docs/en/17-tools/01-cli.md b/docs/en/19-tools/01-cli.md
similarity index 89%
rename from docs/en/17-tools/01-cli.md
rename to docs/en/19-tools/01-cli.md
index 0abe68c6b033c4210d2a9987ed24600c44e66fb6..e6018ce71805978263ac9a810c0c1ec7405b083c 100644
--- a/docs/en/17-tools/01-cli.md
+++ b/docs/en/19-tools/01-cli.md
@@ -4,14 +4,17 @@ sidebar_label: TDengine CLI
description: Instructions and tips for using the TDengine CLI to connect TDengine Cloud
---
+
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+
+
The TDengine command-line interface (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
## Installation
-To run TDengine CLI to access TDengine cloud, please install [TDengine client installation package](https://gcp.cloud.tdengine.com/download/TDengine-client-2.7.0.0-Linux-x64.tar.gz) first.
+To run TDengine CLI to access TDengine cloud, please install [TDengine client installation package](https://tdengine.com/assets-download/cloud/TDengine-client-3.0.0.1202209031045-Linux-x64.tar.gz) first.
## Config
@@ -97,10 +100,10 @@ taos -E $TDENGINE_CLOUD_DSN
## Using TDengine CLI
-TDengine CLI will display a welcome message and version information if it successfully connected to the TDengine service. If it fails, TDengine CLI will print an error message. See [FAQ](/train-faq/faq) to solve the problem of terminal connection failure to the server. The TDengine CLI prompts as follows:
+TDengine CLI will display a welcome message and version information if it successfully connects to the TDengine service. If it fails, TDengine CLI will print an error message. The TDengine CLI prompts as follows:
```
-Welcome to the TDengine shell from Linux, Client Version:2.6.0.4
+Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Successfully connect to cloud.tdengine.com:8085 in restful mode
diff --git a/docs/en/17-tools/03-taosbenchmark.md b/docs/en/19-tools/03-taosbenchmark.md
similarity index 58%
rename from docs/en/17-tools/03-taosbenchmark.md
rename to docs/en/19-tools/03-taosbenchmark.md
index 07de52c7349f270abd8c7843728f3585d75039e0..0482685a7fbfca4c6906d20b432fcb00d4fd62a3 100644
--- a/docs/en/17-tools/03-taosbenchmark.md
+++ b/docs/en/19-tools/03-taosbenchmark.md
@@ -1,50 +1,36 @@
---
title: taosBenchmark
sidebar_label: taosBenchmark
+toc_max_heading_level: 4
+description: "taosBenchmark (once called taosdemo) is a tool for testing the performance of TDengine."
---
## Introduction
taosBenchmark (formerly taosdemo ) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user defined databases, supertables, subtables, and the time series data to populate these for performance benchmarking. taosBenchmark is highly configurable and some of the configurations include the time interval for inserting data, the number of working threads and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
-## Installation
-
-There are two ways to install taosBenchmark:
+**Please note that in the context of TDengine cloud service, a non-privileged user cannot create a database using any tool, including taosBenchmark. The database needs to be created first in the data explorer of the TDengine cloud service console. Wherever this document describes creating a database, ignore that step and create the database manually inside the TDengine cloud service.**
-- Installing the official TDengine installer will automatically install taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
+## Installation
-- Compile taos-tools separately and install them. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
+To use taosBenchmark, you need to download and install [taosTools](https://tdengine.com/assets-download/cloud/taosTools-2.1.3-Linux-x64.tar.gz). Before installing taosTools, please first download and install [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).
+Decompress the package and install it:
+```bash
+tar -xzf taosTools-2.1.3-Linux-x64.tar.gz
+# cd into the extracted directory (not the tarball itself)
+cd taosTools-2.1.3
+sudo ./install-taostools.sh
+```
## Run
### Configuration and running methods
taosBenchmark needs to be executed in a terminal of the operating system. It supports two configuration methods: [Command-line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f ` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration, but not the `-f` parameter. In addition, taosBenchmark offers a special way of running without parameters.
-taosBenchmark supports the complete performance testing of TDengine by providing functionally to write, query, and subscribe. These three functions are mutually exclusive, users can only select one of them each time taosBenchmark runs. The query and subscribe functionalities are only configurable using a json configuration file by specifying the parameter `filetype`, while write can be performed through both the command-line and a configuration file.
+taosBenchmark supports the complete performance testing of TDengine by providing functionality to write, query, and subscribe. These three functions are mutually exclusive, and users can only select one of them each time taosBenchmark runs. The query and subscribe functionalities are only configurable using a JSON configuration file by specifying the parameter `filetype`, while writing can be performed through both the command-line and a configuration file. If you want to test the performance of queries or data subscription, configure taosBenchmark with the configuration file. You can modify the value of the `filetype` parameter to specify the function that you want to test.
**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
-### Run without command-line arguments
-
-Execute the following commands to quickly experience taosBenchmark's default configuration-based write performance testing of TDengine.
-
-```bash
-taosBenchmark
-```
-
-When run without parameters, taosBenchmark connects to the TDengine cluster specified in `/etc/taos` by default and creates a database named test in TDengine, a super table named `meters` under the test database, and 10,000 tables under the super table with 10,000 records written to each table. Note that if there is already a test database, this command will delete it first and create a new test database.
-
-### Run with command-line configuration parameters
-
-The `-f ` argument cannot be used when running taosBenchmark with command-line parameters. Users must specify all configuration parameters from the command-line. The following is an example of testing taosBenchmark writing performance using the command-line approach.
-
-```bash
-taosBenchmark -I stmt -n 200 -t 100
-```
-
-In the above command, `taosBenchmark` will create the default database named `test`, create the default super table named `meters`, create 100 subtables in the super table and insert 200 records for each subtable using parameter binding.
-
### Run with the configuration file
A sample configuration file is provided in the taosBenchmark installation package under `/examples/taosbenchmark-json`.
@@ -52,108 +38,252 @@ A sample configuration file is provided in the taosBenchmark installation packag
Use the following command-line to run taosBenchmark and control its behavior via a configuration file.
```bash
-taosBenchmark -f
+taosBenchmark -f json-file
```
-#### Configuration file examples
-##### Example of inserting a scenario JSON configuration file
+**Sample configuration files**
-
-insert.json
+#### Configuration file examples
```json
-{{#include /taos-tools/example/insert.json}}
-```
-
-
+{
+ "filetype": "insert",
+ "cfgdir": "/etc/taos",
+ "host": "127.0.0.1",
+ "port": 6030,
+ "user": "root",
+ "password": "taosdata",
+ "connection_pool_size": 8,
+ "thread_count": 4,
+ "create_table_thread_count": 7,
+ "result_file": "./insert_res.txt",
+ "confirm_parameter_prompt": "no",
+ "insert_interval": 0,
+ "interlace_rows": 100,
+ "num_of_records_per_req": 100,
+ "prepared_rand": 10000,
+ "chinese": "no",
+ "databases": [
+ {
+ "dbinfo": {
+ "name": "test",
+ "drop": "no",
+ "replica": 1,
+ "precision": "ms",
+ "keep": 3650,
+ "minRows": 100,
+ "maxRows": 4096,
+ "comp": 2
+ },
+ "super_tables": [
+ {
+ "name": "meters",
+ "child_table_exists": "no",
+ "childtable_count": 10000,
+ "childtable_prefix": "d",
+ "escape_character": "yes",
+ "auto_create_table": "no",
+ "batch_create_tbl_num": 5,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "non_stop_mode": "no",
+ "line_protocol": "line",
+ "insert_rows": 10000,
+ "childtable_limit": 10,
+ "childtable_offset": 100,
+ "interlace_rows": 0,
+ "insert_interval": 0,
+ "partial_col_num": 0,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "2020-10-01 00:00:00.000",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "use_sample_ts": "no",
+ "tags_file": "",
+ "columns": [
+ {
+ "type": "FLOAT",
+ "name": "current",
+ "count": 1,
+ "max": 12,
+ "min": 8
+ },
+ { "type": "INT", "name": "voltage", "max": 225, "min": 215 },
+ { "type": "FLOAT", "name": "phase", "max": 1, "min": 0 }
+ ],
+ "tags": [
+ {
+ "type": "TINYINT",
+ "name": "groupid",
+ "max": 10,
+ "min": 1
+ },
+ {
+ "name": "location",
+ "type": "BINARY",
+ "len": 16,
+ "values": ["San Francisco", "Los Angles", "San Diego",
+ "San Jose", "Palo Alto", "Campbell", "Mountain View",
+ "Sunnyvale", "Santa Clara", "Cupertino"]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
-##### Query Scenario JSON Profile Example
+```
-
-query.json
+#### Query Scenario JSON Profile Example
```json
-{{#include /taos-tools/example/query.json}}
-```
-
-
+{
+ "filetype": "query",
+ "cfgdir": "/etc/taos",
+ "host": "127.0.0.1",
+ "port": 6030,
+ "user": "root",
+ "password": "taosdata",
+ "confirm_parameter_prompt": "no",
+ "databases": "test",
+ "query_times": 2,
+ "query_mode": "taosc",
+ "specified_table_query": {
+ "query_interval": 1,
+ "concurrent": 3,
+ "sqls": [
+ {
+ "sql": "select last_row(*) from meters",
+ "result": "./query_res0.txt"
+ },
+ {
+ "sql": "select count(*) from d0",
+ "result": "./query_res1.txt"
+ }
+ ]
+ },
+ "super_table_query": {
+ "stblname": "meters",
+ "query_interval": 1,
+ "threads": 3,
+ "sqls": [
+ {
+ "sql": "select last_row(ts) from xxxx",
+ "result": "./query_res2.txt"
+ }
+ ]
+ }
+}
-##### Subscription JSON configuration example
+```
-
-subscribe.json
+#### Subscription scenario JSON configuration example
```json
-{{#include /taos-tools/example/subscribe.json}}
-```
+{
+ "filetype": "subscribe",
+ "cfgdir": "/etc/taos",
+ "host": "127.0.0.1",
+ "port": 6030,
+ "user": "root",
+ "password": "taosdata",
+ "databases": "test",
+ "specified_table_query": {
+ "concurrent": 1,
+ "mode": "sync",
+ "interval": 1000,
+ "restart": "yes",
+ "keepProgress": "yes",
+ "resubAfterConsume": 10,
+ "sqls": [
+ {
+ "sql": "select avg(current) from meters where location = 'beijing';",
+ "result": "./subscribe_res0.txt"
+ }
+ ]
+ },
+ "super_table_query": {
+ "stblname": "meters",
+ "threads": 1,
+ "mode": "sync",
+ "interval": 1000,
+ "restart": "yes",
+ "keepProgress": "yes",
+ "sqls": [
+ {
+ "sql": "select phase from xxxx where groupid > 3;",
+ "result": "./subscribe_res1.txt"
+ }
+ ]
+ }
+}
-
+```
-## Command-line arguments in detail
+## Command-line arguments in detail
- **-f/--file ** :
specify the configuration file to use. This file includes all parameters. Users should not use this parameter together with other command-line parameters. There is no default value.
-- **-W/--cloud_dsn=** : The dsn to connect TDengine cloud service.
- **-c/--config-dir ** :
- specify the directory of the TDengine cluster configuration file. the default path is `/etc/taos`.
+ specify the directory where the TDengine cluster configuration file is located. The default path is `/etc/taos`.
- **-h/--host ** :
- specify the FQDN of the TDengine server to connect to. The default value is localhost.
+ Specify the FQDN of the TDengine server to connect to. The default value is localhost.
- **-P/--port ** :
- specify the port number of the TDengine server to connect to, the default value is 6030.
+ The port number of the TDengine server to connect to. The default value is 6030.
- **-I/--interface ** :
- specify the insert mode. Options are taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface writing, parameter binding interface writing, schemaless interface writing, RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
+ Insert mode. Options are taosc, rest, stmt, sml, and sml-rest, corresponding to normal writing, RESTful interface writing, parameter binding interface writing, schemaless interface writing, and RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
- **-u/--user ** :
- specify the user name to connect to the TDengine server, the default is root.
+ User name to connect to the TDengine server. Default is root.
- **-p/--password ** :
- specify the password to connect to the TDengine server, the default is `taosdata`.
+ The password used to connect to the TDengine server. The default value is `taosdata`.
- **-o/--output ** :
specify the path of the result output file. The default value is `./output.txt`.
- **-T/--thread ** :
- specify the number of threads to insert data, the default value is 8.
+ The number of threads to insert data. Default is 8.
- **-B/--interlace-rows ** :
- enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
+ Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
- **-i/--insert-interval ** :
- specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. That means that after inserting interlaced rows for each child table, the data insertion with multiple threads will wait for the interval specified by this value before proceeding to the next round of writes.
+ Specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
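+
+ As an illustrative sketch (all values here are arbitrary), the following command combines the two interleaving options: each round writes 50 rows into every sub-table, then pauses 100 ms before the next round.
+
+```bash
+# Sketch: interleaved insertion across 100 sub-tables, 10000 rows each,
+# 50 rows per sub-table per round, 100 ms pause between rounds
+taosBenchmark -t 100 -n 10000 -B 50 -i 100
+```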
- **-r/--rec-per-req ** :
- specify the number of rows to write per request, the default value is 30000.
+ The number of rows written per request to TDengine. The default value is 30000.
- **-t/--tables ** :
- specify the number of subtables to create, the default value is 10000.
+ Specify the number of sub-tables. The default is 10000.
- **-S/--timestampstep ** :
- specify the timestamp step between records when inserting data in each child table in ms, the default value is 1.
+ The timestamp step in ms between consecutive records in each child table. The default is 1.
- **-n/--records ** :
- specify the number of records inserted into each sub-table, the default value is 10000.
-
-- **-d/--database ** :
- specify the name of the database used, the default value is `test`.
+ The number of records inserted into each sub-table. The default value is 10000.
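+
+ As a hedged example (the values are arbitrary), the following command combines `-t`, `-n`, and `-S` to size a write test:
+
+```bash
+# Sketch: 1,000 sub-tables, 5,000 rows each, 100 ms between records
+taosBenchmark -t 1000 -n 5000 -S 100
+```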
- **-b/--data-type ** :
- specify the data column types of the super table. The default values are three columns of type FLOAT, INT, and FLOAT.
+ specify the data column types of the super table. The default is three columns of type FLOAT, INT, and FLOAT.
- **-l/--columns ** :
- specify the number of columns in the super table. If both this parameter and `-b/--data-type` are set, the resulting number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT, for example: `-l 5 -b float,double`, then the column types are `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number of columns specified by `-b/--data-type`, then the columns specified by `-b/--data-type` will be used. e.g.: `-l 3 -b float,double,float,bigint` will result in the column types `FLOAT,DOUBLE,FLOAT,BIGINT`.
+ specify the number of columns in the super table. If both this parameter and `-b/--data-type` are set, the resulting number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT. For example, `-l 5 -b float,double` results in the column types `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number specified by `-b/--data-type`, the columns and types specified by `-b/--data-type` are used. For example, `-l 3 -b float,double,float,bigint` results in the column types `FLOAT,DOUBLE,FLOAT,BIGINT`.
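+
+ For example, a minimal sketch of the interaction described above:
+
+```bash
+# Sketch: 5 data columns but only two types given, so the remaining
+# three default to INT -> FLOAT,DOUBLE,INT,INT,INT
+taosBenchmark -l 5 -b float,double
+```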
- **-A/--tag-type ** :
- specify the tag column types of the super table. nchar and binary types can both set the length, for example:
+ The tag column types of the super table. For nchar and binary types, the length can also be set, for example:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
```
-If the user does not set the tag type, the default is two tags, whose types are INT and BINARY(16).
+If the user does not set the tag types, the default is two tags of types INT and BINARY(16).
Note: In some shells, such as bash, "()" needs to be escaped, so the above command should be
```
@@ -161,48 +291,45 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
```
- **-w/--binwidth **:
- specify the default length for nchar and binary types, the default value is 64.
+ specify the default length for nchar and binary types. The default value is 64.
- **-m/--table-prefix ** :
- specify the prefix of the sub-table names, the default value is "d".
+ The prefix of the sub-table names. The default value is "d".
- **-E/--escape-character** :
- specify whether to use escape characters in the super table and sub-table names, the default is no.
+ Switch parameter specifying whether to use escape characters in the super table and sub-table names. It is not used by default.
- **-C/--chinese** :
specify whether to use Unicode Chinese characters in nchar and binary. The default is no.
- **-N/--normal-table** :
- specify whether taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used if the insert mode is taosc, stmt, and rest.
+ This parameter indicates that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used when the insert mode is taosc, stmt, or rest.
- **-M/--random** :
- specify whether taosBenchmark will generate random values. The default is false. When true, for tag/data columns of numeric type, the value is a random value within the range of values of that type. For NCHAR and BINARY type tag/data columns, the value is a random string within the specified length range.
+ This parameter indicates writing data with random values. The default is false. When this parameter is set, taosBenchmark generates random values: for tag/data columns of numeric types, the value is a random value within the range of that type; for NCHAR and BINARY tag/data columns, the value is a random string within the specified length range.
- **-x/--aggr-func** :
- specify whether to query aggregation function after insertion. The default value is false.
+ Switch parameter that makes taosBenchmark query aggregation functions after insertion. The default is false.
- **-y/--answer-yes** :
- specify whether to require the user to confirm at the prompt to continue. The default value is false.
+ Switch parameter that requires the user to confirm at the prompt to continue. The default value is false.
- **-O/--disorder ** :
- specify the percentage probability of disordered data, with a value range of [0,50]. The default value is 0, i.e., there is no disordered data.
+ Specify the percentage probability of disordered data, with a value range of [0,50]. The default is 0, i.e., there is no disordered data.
- **-R/--disorder-range ** :
- specify the timestamp range for the disordered data. The disordered timestamp data will be out of order by the ordered timestamp minus a random value in this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
+ Specify the timestamp range for the disordered data. A disordered timestamp is generated by subtracting a random value in this range from the ordered timestamp. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
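+
+ A minimal sketch combining the two disorder parameters (the percentage and range are arbitrary illustration values):
+
+```bash
+# Sketch: 10% of generated rows are disordered, with timestamps
+# shifted back by up to 1000 ms from their ordered values
+taosBenchmark -O 10 -R 1000
+```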
- **-F/--prepare_rand ** :
- specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
-
-- **-a/--replica ** :
- specify the number of replicas when creating the database. The default value is 1.
+ Specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
- **-V/--version** :
- Show version information only. Users should not use this with other parameters.
+ Show version information only. Users should not use it with other parameters.
- **-? /--help** :
Show help information and exit. Users should not use it with other parameters.
-## Configuration file parameters in detail
+## Configuration file parameters in detail
### General configuration parameters
@@ -211,66 +338,47 @@ The parameters listed in this section apply to all function modes.
- **filetype** : The function to be tested, with optional values `insert`, `query` and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
- **cfgdir**: specify the directory of the TDengine cluster configuration file. The default path is /etc/taos.
-- **host**: specify the FQDN of the TDengine server to connect to. The default value is `localhost`.
+- **host**: Specify the FQDN of the TDengine server to connect to. The default value is `localhost`.
-- **port**: specify the port number of the TDengine server to connect to, the default value is `6030`.
+- **port**: The port number of the TDengine server to connect to. The default value is `6030`.
-- **user**: specify the user name to connect to the TDengine server, the default is `root`.
+- **user**: The user name used to connect to the TDengine server. The default is `root`.
-- **password**: specify the password to connect to the TDengine server, the default value is `taosdata`.
+- **password**: The password used to connect to the TDengine server. The default value is `taosdata`.
### Insert scenario configuration parameters
-`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#general-configuration-parameters)
-
-#### Database related configuration parameters
-
-The parameters related to database creation are configured in `dbinfo` in the json configuration file, as follows. These parameters correspond to the database parameters specified when `create database` in TDengine.
-
-- **name**: specify the name of the database.
-
-- **drop**: indicate whether to delete the database before inserting. The default is true.
-
-- **replica**: specify the number of replicas when creating the database.
+`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#general-configuration-parameters).
-- **days**: specify the time span for storing data in a single data file. The default is 10.
+#### Stream processing related configuration parameters
-- **cache**: specify the size of the cache blocks in MB. The default value is 16.
+The parameters for creating streams are configured in `stream` in the json configuration file, as shown below.
-- **blocks**: specify the number of cache blocks in each vnode. The default is 6.
+- **stream_name**: Name of the stream. Mandatory.
-- **precision**: specify the database time precision. The default value is "ms".
+- **stream_stb**: Name of the supertable for the stream. Mandatory.
-- **keep**: specify the number of days to keep the data. The default value is 3650.
+- **stream_sql**: SQL statement for the stream to process. Mandatory.
-- **minRows**: specify the minimum number of records in the file block. The default value is 100.
+- **trigger_mode**: Triggering mode for stream processing. Optional.
-- **maxRows**: specify the maximum number of records in the file block. The default value is 4096.
+- **watermark**: Watermark for stream processing. Optional.
-- **comp**: specify the file compression level. The default value is 2.
-
-- **walLevel** : specify WAL level, default is 1.
-
-- **cacheLast**: indicate whether to allow the last record of each table to be kept in memory. The default value is 0. The value can be 0, 1, 2, or 3.
-
-- **quorum**: specify the number of writing acknowledgments in multi-replica mode. The default value is 1.
-
-- **fsync**: specify the interval of fsync in ms when users set WAL to 2. The default value is 3000.
-
-- **update** : indicate whether to support data update, default value is 0, values can be 0, 1, 2.
+- **drop**: Whether to create the stream. Specify "yes" to create the stream or "no" not to create it.
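+
+A minimal sketch of a `stream` block built from the parameters above; the stream name, supertable name, and SQL are hypothetical, and `at_once` is assumed as the trigger mode:
+
+```json
+"stream": [
+  {
+    "stream_name": "stream_demo",
+    "stream_stb": "stream_stb_demo",
+    "stream_sql": "select count(*) from test.meters interval(10s)",
+    "trigger_mode": "at_once",
+    "drop": "yes"
+  }
+]
+```
+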
#### Super table related configuration parameters
The parameters for creating super tables are configured in `super_tables` in the json configuration file, as shown below.
- **name**: Super table name, mandatory, no default value.
-- **child_table_exists** : whether the child table already exists, default value is "no", values can be "yes" or "no".
+
+- **child_table_exists** : whether the child tables already exist. The default value is "no"; possible values are "yes" and "no".
- **child_table_count** : The number of child tables, the default value is 10.
- **child_table_prefix** : The prefix of the child table name, mandatory configuration item, no default value.
-- **escape_character**: specify whether the super table and child table names containing escape characters. By default is "no". The value can be "yes" or "no".
+- **escape_character**: specify whether the super table and child table names contain escape characters. The value can be "yes" or "no". The default is "no".
- **auto_create_table**: takes effect only when insert_mode is taosc, rest, or stmt, and child_table_exists is "no". "yes" means taosBenchmark will automatically create non-existent tables when inserting data; "no" means that taosBenchmark will create all tables before inserting.
@@ -280,7 +388,7 @@ The parameters for creating super tables are configured in `super_tables` in the
- **insert_mode**: insertion mode with options taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface write, parameter binding interface write, schemaless interface write, restful schemaless interface write (provided by taosAdapter). The default value is taosc.
-- **non_stop_mode**: Specify whether to keep writing. If "yes", insert_rows will be disabled, and writing will not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark will stop the writing after the specified number of rows are written. Note: insert_rows must be configured as a non-zero positive integer even if it is disabled in continuous write mode.
+- **non_stop_mode**: Specify whether to keep writing. If "yes", insert_rows will be disabled, and writing will not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark will stop writing after the specified number of rows are written. Note: insert_rows must be configured as a non-zero positive integer even though it is disabled in continuous write mode.
- **line_protocol**: Insert data using line protocol. Only works when insert_mode is sml or sml-rest. The value can be `line`, `telnet`, or `json`.
@@ -314,6 +422,22 @@ The parameters for creating super tables are configured in `super_tables` in the
- **tags_file** : only works when insert_mode is taosc, rest. The final tag value is related to the childtable_count. Suppose the tag data rows in the CSV file are smaller than the given number of child tables. In that case, taosBenchmark will read the CSV file data cyclically until the number of child tables specified by childtable_count is generated. Otherwise, taosBenchmark will read the childtable_count rows of tag data only. The final number of child tables generated is the smaller of the two.
+#### TSMA configuration parameters
+
+The configuration parameters for specifying TSMAs are in `tsmas` in `super_tables`.
+
+- **name**: Specifies TSMA name. Mandatory.
+
+- **function**: Specifies TSMA function. Mandatory.
+
+- **interval**: Specifies TSMA interval. Mandatory.
+
+- **sliding**: Specifies time offset for TSMA window. Mandatory.
+
+- **custom**: Specifies custom configurations to attach to the end of the TSMA creation statement. Optional.
+
+- **start_when_inserted**: Specifies the number of inserted rows after which TSMA is started. Optional. The default value is 0.
+
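+A minimal sketch of a `tsmas` entry inside `super_tables`; the name, function, and window values are illustrative assumptions only:
+
+```json
+"tsmas": [
+  {
+    "name": "tsma1",
+    "function": "avg(current)",
+    "interval": "10s",
+    "sliding": "5s",
+    "start_when_inserted": 1000
+  }
+]
+```
+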
#### Tag and Data Column Configuration Parameters
The configuration parameters for specifying super table tag columns and data columns are in `tags` and `columns` in `super_tables`, respectively.
@@ -333,6 +457,8 @@ The configuration parameters for specifying super table tag columns and data col
- **values**: The value field of the nchar/binary column/tag; the value will be chosen randomly from this list.
+- **sma**: Insert the column into the BSMA. Enter `yes` or `no`. The default is `no`.
+
#### insertion behavior configuration parameters
- **thread_count**: specify the number of threads to insert data. Default is 8.
@@ -345,21 +471,21 @@ The configuration parameters for specifying super table tag columns and data col
- **confirm_parameter_prompt**: The switch parameter requires the user to confirm after the prompt to continue. The default value is false.
-- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables are inserted. The default value is 0, which means that data will be inserted into the following child table only after data is inserted into one child table.
+- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
- **insert_interval** :
- Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. Only works if `-B/--interlace-rows` is greater than 0. It means that after inserting interlace rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
- This parameter can also be configured in `super_tables`, and if configured, the configuration in `super_tables` takes high priority, overriding the global setting.
+ Specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
+ This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
- **num_of_records_per_req** :
- The number of rows of data to be written per request to TDengine, the default value is 30000. When it is set too large, the TDengine client driver will return the corresponding error message, so you need to lower the setting of this parameter to meet the writing requirements.
+ The number of rows written per request to TDengine. The default value is 30000. If it is set too large, the TDengine client driver returns an error message, and you need to lower this parameter to meet the writing requirements.
-- **prepare_rand**: The number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
+- **prepare_rand**: The number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
### Query scenario configuration parameters
-`filetype` must be set to `query` in the query scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this parameter and other general parameters
+`filetype` must be set to `query` in the query scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### Configuration parameters for executing the specified query statement
@@ -383,14 +509,14 @@ The configuration parameters of the super table query are set in `super_table_qu
- **threads**: The number of threads to execute the query SQL, the default value is 1.
-- **sqls** : The default value is 1.
+- **sqls**:
- **sql**: The SQL command to be executed. For the query SQL of super table, keep "xxxx" in the SQL command. The program will automatically replace it with all the sub-table names of the super table.
- **result**: The file to save the query result. If not specified, taosBenchmark will not save the result.
### Subscription scenario configuration parameters
-`filetype` must be set to `subscribe` in the subscription scenario. See [General Configuration Parameters](#genera-configuration-parameters) for details of this and other general parameters
+`filetype` must be set to `subscribe` in the subscription scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this and other general parameters.
#### Configuration parameters for executing the specified subscription statement
@@ -406,9 +532,9 @@ The configuration parameters for subscribing to a sub-table or a generic table a
- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
-- **sqls** : The default value is "no".
+- **sqls**:
- **sql** : The SQL command to be executed, required.
- - **result** : The file to save the query result, unspecified is not saved.
+ - **result**: The file to save the query result. If not specified, taosBenchmark will not save the result.
#### Configuration parameters for subscribing to supertables
@@ -426,7 +552,7 @@ The configuration parameters for subscribing to a super table are set in `super_
- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
-- **sqls** : The default value is "no".
- - **sql**: SQL command to be executed, required; for the query SQL of the super table, keep "xxxx" in the SQL command, and the program will replace it with all the sub-table names of the super table automatically.
+- **sqls**:
+ - **sql**: The SQL command to be executed. For the query SQL of super table, keep "xxxx" in the SQL command. The program will automatically replace it with all the sub-table names of the super table.
- - **result**: The file to save the query result, if not specified, it will not be saved.
+ - **result**: The file to save the query result. If not specified, taosBenchmark will not save the result.
diff --git a/docs/en/19-tools/06-taosdump.md b/docs/en/19-tools/06-taosdump.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f4d4c06efa4d7cf6eb344da86bdf4ea8df4878b
--- /dev/null
+++ b/docs/en/19-tools/06-taosdump.md
@@ -0,0 +1,134 @@
+---
+title: taosdump
+description: "taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster."
+---
+
+## Introduction
+
+taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster.
+
+taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up data records in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for data backup. If you do not specify a directory, taosdump backs up the data to the current directory by default.
+
+If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means that the same path can only be used for one backup.
+
+Please be careful if you see this prompt, and ensure that you follow best practices and relevant SOPs for data integrity, backup, and data security.
+
+Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
+
+## Installation
+
+To use taosdump, you need to download and install [taosTools](https://tdengine.com/assets-download/cloud/taosTools-2.1.3-Linux-x64.tar.gz). Before installing taosTools, please first download and install the [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).
+
+Decompress the package and install.
+```
+tar -xzf taosTools-2.1.3-Linux-x64.tar.gz
+cd taosTools-2.1.3-Linux-x64  # extracted directory name may vary by package version
+sudo ./install-taostools.sh
+```
+
+Set the `TDENGINE_CLOUD_DSN` environment variable to the DSN of your TDengine Cloud instance.
+
+```bash
+export TDENGINE_CLOUD_DSN=""
+```
+
+## Common usage scenarios
+
+### taosdump backup data
+
+1. Back up all databases: specify the `-A` or `--all-databases` parameter.
+2. Back up multiple specified databases: use the `-D db1,db2,...` parameter, as shown in the sketch after this list.
+3. Back up specific super tables or normal tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` arguments. Note that the first argument is the database name, and only one database is supported. The second and subsequent arguments are the names of super tables or normal tables in that database, separated by spaces.
+4. Back up the system log database: TDengine clusters usually contain a system database named `log`, which holds data generated by TDengine's own operation. taosdump does not back up the log database by default. To back it up, use the `-a` or `--allow-sys` command-line parameter.
+5. Loose mode backup: taosdump 1.4.1 and later provides the `-n` and `-L` parameters for backing up data without escape characters, in "loose" mode. If table names, column names, and tag names do not use escape characters, this can reduce backup time and the backup data footprint. If you are unsure whether the `-n` and `-L` conditions apply, use the default parameters for a "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escape characters.
+
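+The following commands sketch scenarios 1 through 3 above; all database, table, and path names are hypothetical:
+
+```bash
+# Sketch: back up all databases to ./dump_all
+taosdump -A -o ./dump_all
+
+# Sketch: back up two specific databases
+taosdump -D db1,db2 -o ./dump_dbs
+
+# Sketch: back up one super table and one normal table from database "power"
+taosdump power meters d1001 -o ./dump_power
+```
+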
+
+:::tip
+- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump parses only the schema.
+- Backups created with taosdump 1.4.2 and later use the batch count specified by the `-B` parameter; the default value is 16384. If low network speed or disk performance in some environments causes "Error actual dump ... batch ...", try setting the `-B` parameter to a smaller value.
+- The export of taosdump does not support resuming from an interruption. Therefore, if the taosdump process terminates unexpectedly, delete all related files that have been exported or generated.
+- The import of taosdump supports resuming from an interruption, but when the process resumes, you will receive some "table already exists" messages, which can be ignored.
+
+:::
+
+
+
+### taosdump recover data
+
+Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As noted above, you should not use the same directory to back up different data sets, and you should not back up the same data set multiple times in the same path; otherwise, the backup data will either be overwritten or backed up multiple times.
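+
+For example, a hedged sketch restoring from a hypothetical backup directory, lowering the write batch size as discussed in the tip below:
+
+```bash
+# Sketch: restore from ./dump_all; -B reduces the per-request batch size
+taosdump -i ./dump_all -B 4096
+```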
+
+
+:::tip
+taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
+
+:::
+
+
+## Detailed command-line parameter list
+
+The following is a detailed list of taosdump command-line arguments.
+
+```
+Usage: taosdump [OPTION...] dbname [tbname ...]
+ or: taosdump [OPTION...] --databases db1,db2,...
+ or: taosdump [OPTION...] --all-databases
+ or: taosdump [OPTION...] -i inpath
+ or: taosdump [OPTION...] -o outpath
+
+ -h, --host=HOST Server host from which to dump data. Default is
+ localhost.
+ -p, --password User password to connect to server. Default is
+ taosdata.
+ -P, --port=PORT Port to connect
+ -u, --user=USER User name used to connect to server. Default is
+ root.
+ -c, --config-dir=CONFIG_DIR Configure directory. Default is /etc/taos
+ -i, --inpath=INPATH Input file path.
+ -o, --outpath=OUTPATH Output file path.
+ -r, --resultFile=RESULTFILE DumpOut/In Result file path and name.
+ -a, --allow-sys Allow to dump system database
+ -A, --all-databases Dump all databases.
+ -D, --databases=DATABASES Dump listed databases. Use comma to separate
+ database names.
+ -N, --without-property Dump database without its properties.
+ -s, --schemaonly Only dump table schemas.
+ -y, --answer-yes Input yes for prompt. It will skip data file
+ checking!
+ -d, --avro-codec=snappy Choose an avro codec among null, deflate, snappy,
+ and lzma.
+ -S, --start-time=START_TIME Start time to dump. Either epoch or
+ ISO8601/RFC3339 format is acceptable. ISO8601
+ format example: 2017-10-01T00:00:00.000+0800 or
+ 2017-10-0100:00:00:000+0800 or '2017-10-01
+ 00:00:00.000+0800'
+ -E, --end-time=END_TIME End time to dump. Either epoch or ISO8601/RFC3339
+ format is acceptable. ISO8601 format example:
+ 2017-10-01T00:00:00.000+0800 or
+ 2017-10-0100:00:00.000+0800 or '2017-10-01
+ 00:00:00.000+0800'
+ -B, --data-batch=DATA_BATCH Number of data per query/insert statement when
+ backup/restore. Default value is 16384. If you see
+ 'error actual dump .. batch ..' when backup or if
+ you see 'WAL size exceeds limit' error when
+ restore, please adjust the value to a smaller one
+ and try. The workable value is related to the
+ length of the row and type of table schema.
+ -I, --inspect inspect avro file content and print on screen
+ -L, --loose-mode Use loose mode if the table name and column name
+ use letter and number only. Default is NOT.
+ -n, --no-escape No escape char '`'. Default is using it.
+ -T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
+ 8.
+ -C, --cloud=CLOUD_DSN specify a DSN to access TDengine cloud service
+ -R, --restful Use RESTful interface to connect TDengine
+ -t, --timeout=SECONDS The timeout seconds for websocket to interact.
+ -g, --debug Print debug info.
+ -?, --help Give this help list
+ --usage Give a short usage message
+ -V, --version Print program version
+
+Mandatory or optional arguments to long options are also mandatory or optional
+for any corresponding short options.
+
+Report bugs to .
+```
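+
+As a final hedged sketch, the `-S`/`-E` options listed above can restrict a dump to a time window; the database name and timestamps below are illustrative only:
+
+```bash
+# Sketch: dump only rows within a time window from database "test"
+taosdump -D test -S 2022-01-01T00:00:00.000+0800 -E 2022-06-30T23:59:59.999+0800 -o ./dump_window
+```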
diff --git a/docs/en/17-tools/index.md b/docs/en/19-tools/index.md
similarity index 100%
rename from docs/en/17-tools/index.md
rename to docs/en/19-tools/index.md
diff --git a/docs/en/20-faq.md b/docs/en/20-faq.md
deleted file mode 100644
index 32cce9075d6bdc38f5389e97b109b8163e31b622..0000000000000000000000000000000000000000
--- a/docs/en/20-faq.md
+++ /dev/null
@@ -1 +0,0 @@
-# FAQ
\ No newline at end of file
diff --git a/docs/en/eco_system.webp b/docs/en/eco_system.webp
new file mode 100644
index 0000000000000000000000000000000000000000..1bc754db97e4bc976805ad41050d642cb3c424a7
Binary files /dev/null and b/docs/en/eco_system.webp differ
diff --git a/docs/examples/csharp/.gitignore b/docs/examples/csharp/.gitignore
index b3aff79f3706e23aa74199a7f521f7912d2b0e45..627e2d891bb373f27bf77455c8cc12f7bd9eac37 100644
--- a/docs/examples/csharp/.gitignore
+++ b/docs/examples/csharp/.gitignore
@@ -1,4 +1,7 @@
-bin
-obj
-.vs
-*.sln
\ No newline at end of file
+cloud-example/connect/bin
+cloud-example/connect/obj
+cloud-example/usage/bin
+cloud-example/usage/obj
+cloud-example/stmt/bin
+cloud-example/stmt/obj
+.vs
\ No newline at end of file
diff --git a/docs/examples/csharp/cloud-example/cloud-example.sln b/docs/examples/csharp/cloud-example/cloud-example.sln
new file mode 100644
index 0000000000000000000000000000000000000000..a870f4e60217f62f534a5a6c9f22f763cc0dc247
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/cloud-example.sln
@@ -0,0 +1,34 @@
+
+Microsoft Visual Studio Solution File, Format Version 12.00
+# Visual Studio Version 16
+VisualStudioVersion = 16.0.30114.105
+MinimumVisualStudioVersion = 10.0.40219.1
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "connect", "connect\connect.csproj", "{4006CF0C-17BE-4508-9682-A85298F8C92D}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "usage", "usage\usage.csproj", "{243C420F-FC47-4F21-B81E-83CDE91F2D47}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "stmt", "stmt\stmt.csproj", "{B6907CB6-41CB-4644-AEE1-551456EADE12}"
+EndProject
+Global
+ GlobalSection(SolutionConfigurationPlatforms) = preSolution
+ Debug|Any CPU = Debug|Any CPU
+ Release|Any CPU = Release|Any CPU
+ EndGlobalSection
+ GlobalSection(SolutionProperties) = preSolution
+ HideSolutionNode = FALSE
+ EndGlobalSection
+ GlobalSection(ProjectConfigurationPlatforms) = postSolution
+ {4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.Build.0 = Release|Any CPU
+ {243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.Build.0 = Release|Any CPU
+ {B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.Build.0 = Release|Any CPU
+ EndGlobalSection
+EndGlobal
diff --git a/docs/examples/csharp/cloud-example/connect/Program.cs b/docs/examples/csharp/cloud-example/connect/Program.cs
new file mode 100644
index 0000000000000000000000000000000000000000..83d1d9fabeb680441253c293dc68ddffaaca3a5e
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/connect/Program.cs
@@ -0,0 +1,34 @@
+using System;
+using TDengineWS.Impl;
+
+namespace Cloud.Examples
+{
+ public class ConnectExample
+ {
+ static void Main(string[] args)
+ {
+ string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
+ Connect(dsn);
+ }
+
+ public static void Connect(string dsn)
+ {
+ // get connect
+ IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
+ if (conn == IntPtr.Zero)
+ {
+ throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
+ }
+ else
+ {
+ Console.WriteLine("Establish connect success.");
+ }
+
+ // do something ...
+
+ // close connect
+ LibTaosWS.WSClose(conn);
+
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/csharp/cloud-example/connect/connect.csproj b/docs/examples/csharp/cloud-example/connect/connect.csproj
new file mode 100644
index 0000000000000000000000000000000000000000..2a4903745b0d93a8bc5f39a27ef3d1b52e6001a2
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/connect/connect.csproj
@@ -0,0 +1,13 @@
+<Project Sdk="Microsoft.NET.Sdk">
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net5.0</TargetFramework>
+    <Nullable>enable</Nullable>
+  </PropertyGroup>
+  <ItemGroup>
+    <!-- Package reference markup was lost in extraction; name and version assumed -->
+    <PackageReference Include="TDengine.Connector" Version="3.0.*" />
+  </ItemGroup>
+</Project>
diff --git a/docs/examples/csharp/cloud-example/stmt/Program.cs b/docs/examples/csharp/cloud-example/stmt/Program.cs
new file mode 100644
index 0000000000000000000000000000000000000000..7f986b7e764adb22e20e425094f2440860b775a7
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/stmt/Program.cs
@@ -0,0 +1,83 @@
+using System;
+using TDengineWS.Impl;
+using TDengineDriver;
+using System.Runtime.InteropServices;
+
+namespace Cloud.Examples
+{
+ public class STMTExample
+ {
+ static void Main(string[] args)
+ {
+ string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
+ IntPtr conn = Connect(dsn);
+ // assume table has been created.
+ // CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)
+ string insert = "insert into ? using test.meters tags(?,?) values(?,?,?,?)";
+
+ // Init STMT
+ IntPtr stmt = LibTaosWS.WSStmtInit(conn);
+
+ if (stmt != IntPtr.Zero)
+ {
+ // Prepare SQL
+ int code = LibTaosWS.WSStmtPrepare(stmt, insert);
+ ValidSTMTStep(code, stmt, "WSInit()");
+
+ // Bind child table name and tags
+ TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { WSMultiBind.WSBindBinary(new string[] { "California.LosAngeles" }), WSMultiBind.WSBindInt(new int?[] { 6 }) };
+ code = LibTaosWS.WSStmtSetTbnameTags(stmt, "test.d1005",tags, 2);
+ ValidSTMTStep(code, stmt, "WSStmtSetTbnameTags()");
+
+ // bind column value
+ TAOS_MULTI_BIND[] data = new TAOS_MULTI_BIND[4];
+ data[0] = WSMultiBind.WSBindTimestamp(new long[] { 1538551000000, 1538552000000, 1538553000000, 1538554000000, 1538555000000 });
+ data[1] = WSMultiBind.WSBindFloat(new float?[] { 10.30000F, 10.30000F, 11.30000F, 10.30000F, 10.80000F });
+ data[2] = WSMultiBind.WSBindInt(new int?[] { 218, 219, 221, 222, 223 });
+ data[3] = WSMultiBind.WSBindFloat(new float?[] { 0.28000F, 0.29000F, 0.30000F, 0.31000F, 0.32000F });
+ code = LibTaosWS.WSStmtBindParamBatch(stmt, data, 4);
+ ValidSTMTStep(code, stmt, "WSStmtBindParamBatch");
+
+ // capture the return code so the following check validates this step
+ code = LibTaosWS.WSStmtAddBatch(stmt);
+ ValidSTMTStep(code, stmt, "WSStmtAddBatch");
+
+ IntPtr affectRowPtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Int32)));
+ code = LibTaosWS.WSStmtExecute(stmt, affectRowPtr);
+ ValidSTMTStep(code, stmt, "WSStmtExecute");
+ Console.WriteLine("STMT affect rows:{0}", Marshal.ReadInt32(affectRowPtr));
+
+ LibTaosWS.WSStmtClose(stmt);
+
+ // Free allocated memory
+ Marshal.FreeHGlobal(affectRowPtr);
+ WSMultiBind.WSFreeTaosBind(tags);
+ WSMultiBind.WSFreeTaosBind(data);
+ }
+ // close connect
+ LibTaosWS.WSClose(conn);
+ }
+
+ public static IntPtr Connect(string dsn)
+ {
+ // get connect
+ IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
+ if (conn == IntPtr.Zero)
+ {
+ throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
+ }
+ return conn;
+ }
+
+ public static void ValidSTMTStep(int code, IntPtr wsStmt, string method)
+ {
+ if (code != 0)
+ {
+ throw new Exception($"{method} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
+ }
+ else
+ {
+ Console.WriteLine("{0} success", method);
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/csharp/cloud-example/stmt/stmt.csproj b/docs/examples/csharp/cloud-example/stmt/stmt.csproj
new file mode 100644
index 0000000000000000000000000000000000000000..4a7c03f79c941769f9e4d7c4be46cb0769c084ff
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/stmt/stmt.csproj
@@ -0,0 +1,13 @@
+<Project Sdk="Microsoft.NET.Sdk">
+  <PropertyGroup>
+    <OutputType>Exe</OutputType>
+    <TargetFramework>net5.0</TargetFramework>
+    <Nullable>enable</Nullable>
+  </PropertyGroup>
+  <ItemGroup>
+    <!-- Package reference markup was lost in extraction; name and version assumed -->
+    <PackageReference Include="TDengine.Connector" Version="3.0.*" />
+  </ItemGroup>
+</Project>
diff --git a/docs/examples/csharp/cloud-example/usage/Program.cs b/docs/examples/csharp/cloud-example/usage/Program.cs
new file mode 100644
index 0000000000000000000000000000000000000000..8279d435a1fbae068358ce04a1471a0011656f3c
--- /dev/null
+++ b/docs/examples/csharp/cloud-example/usage/Program.cs
@@ -0,0 +1,92 @@
+using System;
+using TDengineDriver;
+using TDengineWS.Impl;
+using System.Collections.Generic;
+
+namespace Cloud.Examples
+{
+ public class UsageExample
+ {
+ static void Main(string[] args)
+ {
+ string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
+ IntPtr conn = Connect(dsn);
+ InsertData(conn);
+ SelectData(conn);
+ // close connect
+ LibTaosWS.WSClose(conn);
+ }
+
+ public static IntPtr Connect(string dsn)
+ {
+ // get connect
+ IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
+ if (conn == IntPtr.Zero)
+ {
+ throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
+ }
+ return conn;
+ }
+
+
+ public static void InsertData(IntPtr conn)
+ {
+ string createTable = "CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)";
+ string insertData = "INSERT INTO " +
+ "test.d1001 USING test.meters TAGS('California.SanFrancisco', 1) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)" +
+ "test.d1002 USING test.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)" +
+ "test.d1003 USING test.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)" +
+ "test.d1004 USING test.meters TAGS('California.LosAngeles', 4) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ";
+
+ // create the super table 'meters' under the database named 'test'
+ IntPtr res = LibTaosWS.WSQuery(conn, createTable);
+ ValidQueryExecution(res);
+ // Free the query result once it is no longer needed.
+ LibTaosWS.WSFreeResult(res);
+
+ // insert data into the table created in previous step.
+ res = LibTaosWS.WSQuery(conn, insertData);
+ ValidQueryExecution(res);
+ // Free the query result once it is no longer needed.
+ LibTaosWS.WSFreeResult(res);
+ }
+ public static void SelectData(IntPtr conn)
+ {
+ string selectTable = "select * from test.meters";
+ IntPtr res = LibTaosWS.WSQueryTimeout(conn, selectTable, 5000);
+ ValidQueryExecution(res);
+
+ // print meta
+ List<TDengineMeta> metas = LibTaosWS.WSGetFields(res);
+ foreach (var meta in metas)
+ {
+ Console.Write("{0} {1}({2})\t|", meta.name, meta.TypeName(), meta.size);
+ }
+ Console.WriteLine("");
+ List