diff --git a/docs/en/01-index.md b/docs/en/01-index.md
index a5b87b5fcbe42da101fdfc417b6a41428efc7067..9090c6ef1dff349d705a252b4ffac943ddc2aaf9 100644
--- a/docs/en/01-index.md
+++ b/docs/en/01-index.md
@@ -1,5 +1,30 @@
---
title: TDengine Cloud Service Documentation
-sidebar_label: Documentation Home
+sidebar_label: Home
slug: /
---
+TDengine Cloud is a fast, elastic, serverless, and cost-effective time-series data processing service based on the popular open-source time-series database, TDengine. With TDengine Cloud, IoT and big data developers have access, on various cloud providers, to the same robustness, speed, and scalability for which TDengine is known. TDengine Cloud delivers a comprehensive, serverless time-series platform that allows customers to focus on solving business needs, not only by freeing them from operations and maintenance, but also by providing features such as caching, stream processing, and pub/sub in one integrated platform, thus reducing complexity. At the same time, customers can rest assured that ubiquitous third-party tools such as Prometheus, Telegraf, Grafana, and MQTT brokers are supported. Naturally, TDengine Cloud provides connectors for Python, Java, Go, Rust, and Node.js, allowing developers to work in their language of choice. With support for SQL as well as schemaless ingestion, TDengine Cloud is adaptable to the needs of all developers. TDengine Cloud also provides additional functions specifically for time-series analysis, which makes data analysis and visualization much simpler.
+
+This is the documentation structure for TDengine Cloud.
+
+1. The [Introduction](./intro) provides an overview of the features, capabilities and competitive advantages of TDengine Cloud.
+
+2. In [Get Started](./get-started), you will find a tutorial that introduces some of the novel concepts in TDengine, its architecture, and some information on the sample database that you can use to get an idea of the sheer speed of TDengine Cloud. Note that the sample database has 100 million rows to reflect a real-world database. Please read the concepts section carefully, since TDengine uses these concepts as the foundation for extremely high-performing IoT and big data time-series applications.
+
+3. The [Developer Guide](./develop) is a must-read if you are developing IoT or big data applications for time-series data. In this section we introduce database connection, data modeling, data ingestion, queries, stream processing, caching, data subscription, user-defined functions (coming soon), and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can copy and paste the sample code, make a few changes to accommodate your application, and it will work.
+
+4. In the [Data In](./data-in) section we show you a number of ways for you to get your data into TDengine.
+
+5. TDengine Cloud believes in giving you extremely easy access to your data and in the [Data Out](./data-out) section we show you a number of ways to get data out of TDengine and into your analysis and visualization applications.
+
+6. The [Visualization](./visual) section shows you how you can visualize the data that you store in TDengine, as well as how you can visualize and monitor the status of your TDengine Cloud instance(s) and databases.
+
+7. The [TDengine SQL](./taos-sql) section provides comprehensive information about both standard SQL as well as TDengine's extensions for easy time series analysis.
+
+8. In [Connector](./connector), you can choose between Python, Java, Go, Rust and Node.js, to easily connect to TDengine to ingest and query data in your preferred development language.
+
+9. The [Tools](./tools) section introduces the Taos CLI which gives you shell access to easily perform ad hoc queries on your instances and databases. Additionally, taosBenchmark is introduced. It is a tool that can help you generate large amounts of data very easily with simple configurations and test the performance of TDengine Cloud.
+
+10. Finally, in the [FAQ](./faq) section, we try to preemptively answer questions that we anticipate. Of course, we will continue to add to this section all the time.
+
+We are very excited that you have chosen TDengine Cloud to be part of your time series platform and look forward to hearing your feedback and ways in which we can improve and be a small part of your success.
diff --git a/docs/en/02-intro.md b/docs/en/02-intro.md
index d5347e48abda2fcf4e2cac8051d967a28491bc63..6b8f91f866bbafe926c281f98853cfc02c1dde36 100644
--- a/docs/en/02-intro.md
+++ b/docs/en/02-intro.md
@@ -1,4 +1,44 @@
---
sidebar_label: Introduction
-title: TDengine Cloud Service
+title: Introduction to TDengine Cloud Service
---
+
+TDengine Cloud is a fast, elastic, serverless, and cost-effective time-series data processing service based on the popular open-source time-series database, TDengine. With TDengine Cloud, you get the highly optimized, purpose-built time-series platform for IoT for which TDengine is known.
+
+This section introduces the major features, competitive advantages, typical use cases, and benchmarks to help you get a high-level overview of TDengine Cloud.
+
+## Major Features
+
+The major features are listed below:
+
+1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB Line](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON](/develop/insert-data/opentsdb-json), among others.
+2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), and [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
+3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
+4. Support for [user defined functions](/develop/udf).
+5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
+6. Support for [stream processing](../taos-sql).
+7. Support for [data subscription](../taos-sql) with the capability to specify filter conditions.
+8. High availability is supported by replication including multi-cloud replication.
+9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
+10. Provides many ways to [get data in](../data-in) and [get data out](../data-out).
+11. Provides a Dashboard to monitor your running instances of TDengine.
+12. Provides [connectors](../connector/) for [Java](../connector/java), [Python](../connector/python), [Go](../connector/go), [Rust](../connector/rust), and [Node.js](../connector/node).
+13. Provides a [REST API](/reference/rest-api/).
+14. Supports seamless integration with [Grafana](../visual/grafana) for visualization.
+15. Supports seamless integration with Google Data Studio.
+
+For more details on features, please read through the entire documentation.
+
+## Competitive Advantages
+
+By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine Cloud differentiates itself from other time series platforms, with the following advantages.
+
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine Cloud is a fast, elastic, serverless, purpose-built platform for IoT time-series data. It is the only time-series platform to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series platforms in data ingestion, querying, and data compression.
+
+- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+
+- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment, and full observability, TDengine is a cloud-native time-series database that can be deployed on public, private, or hybrid clouds. It is enterprise-ready with backup, multi-cloud replication, VPC peering, and IP whitelisting.
+
+- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine Cloud provides worry-free operations with a fully managed cloud native solution. For developers, it provides a simple interface, simplified solution and seamless integration with third party tools. For data users, it provides SQL support with powerful time series extensions built for data analytics.
+
+- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
diff --git a/docs/en/04-get-started.md b/docs/en/04-get-started.md
index 66a84d50d4f03085e48e37b17006b96959a5a5c3..6d66fc1fd2a3978368bf4d9b5c39ba3bba12ee51 100644
--- a/docs/en/04-get-started.md
+++ b/docs/en/04-get-started.md
@@ -1 +1,7 @@
-# Get Started
\ No newline at end of file
+---
+sidebar_label: Get Started
+title: Get Started
+description: A quick guide for how to access TDengine cloud service
+---
+
+It's easy to access TDengine cloud service: open your browser, go to the [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, and activate the account. You will then have a free TDengine cloud service at your disposal. Enjoy!
\ No newline at end of file
diff --git a/docs/en/05-develop/04-stream.md b/docs/en/05-develop/04-stream.md
new file mode 100644
index 0000000000000000000000000000000000000000..36f903ee9a4f2d210e63d0b79e702bc199f790ed
--- /dev/null
+++ b/docs/en/05-develop/04-stream.md
@@ -0,0 +1,113 @@
+---
+sidebar_label: Stream Processing
+description: "The TDengine stream processing engine combines data inserts, preprocessing, analytics, real-time computation, and alerting into a single component."
+title: Stream Processing
+---
+
+Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. In a traditional time-series solution, this generally requires the deployment of stream processing systems such as Kafka or Flink. However, the complexity of such systems increases the cost of development and maintenance.
+
+With the stream processing engine built into TDengine, you can process incoming data streams in real time and define stream transformations in SQL. Incoming data is automatically processed, and the results are pushed to specified tables based on triggering rules that you define. This is a lightweight alternative to complex processing engines that returns computation results in milliseconds even in high throughput scenarios.
+
+The stream processing engine includes data filtering, scalar function computation (including user-defined functions), and window aggregation, with support for sliding windows, session windows, and event windows. Stream processing can write data to supertables from other supertables, standard tables, or subtables. When you create a stream, the target supertable is automatically created. New data is then processed and written to that supertable according to the rules defined for the stream. You can use PARTITION BY statements to partition the data by table name or tag. Separate partitions are then written to different subtables within the target supertable.
+
+TDengine stream processing supports the aggregation of supertables that are deployed across multiple vnodes. It can also handle out-of-order writes and includes a watermark mechanism that determines the extent to which out-of-order data is accepted by the system. You can configure whether to drop or reprocess out-of-order data through the **ignore expired** parameter.
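As a rough mental model of the watermark mechanism described above (a toy sketch, not TDengine's actual implementation), the engine tracks the maximum event time seen so far, and a record is treated as expired when it lags that maximum by more than the watermark duration:

```python
# Toy sketch of watermark-based handling of out-of-order data.
# Illustrative only; the drop_expired flag loosely models IGNORE EXPIRED.

def is_expired(event_ts, max_seen_ts, watermark):
    """A record is expired if it lags the newest timestamp by more than the watermark."""
    return event_ts < max_seen_ts - watermark

def accept(records, watermark, drop_expired=True):
    """Feed (timestamp, value) records in arrival order; return the accepted ones."""
    accepted, max_seen = [], float("-inf")
    for ts, value in records:
        max_seen = max(max_seen, ts)
        if drop_expired and is_expired(ts, max_seen, watermark):
            continue  # out of order beyond the watermark: dropped
        accepted.append((ts, value))
    return accepted

# With a 5-second watermark, a record 2s behind the newest data is kept,
# while one 10s behind is dropped.
print(accept([(100.0, "a"), (103.0, "b"), (101.0, "slightly-late"), (93.0, "too-late")],
             watermark=5.0))
```

With `drop_expired=False`, the too-late record would be kept for reprocessing instead of dropped, loosely mirroring the two settings of the **ignore expired** parameter.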
+
+For more information, see [Stream Processing](../../taos-sql/stream).
+
+
+## Create a Stream
+
+```sql
+CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
+stream_options: {
+ TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
+ WATERMARK time
+ IGNORE EXPIRED [0 | 1]
+}
+```
+
+For more information, see [Stream Processing](../../taos-sql/stream).
+
+## Usage Scenario 1
+
+It is common for smart electrical meter systems used by businesses to generate millions of data points that are widely dispersed and arrive out of order. The time required to clean and convert this data makes efficient, real-time processing impossible for traditional solutions. This scenario shows how you can configure TDengine stream processing to drop data points with voltage over 220 V, find the maximum current in 5-second windows, and output this data to a table.
+
+### Create a Database for Raw Data
+
+A database including one supertable and four subtables is created as follows:
+
+```sql
+DROP DATABASE IF EXISTS power;
+CREATE DATABASE power;
+USE power;
+
+CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+
+CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
+CREATE TABLE d1002 USING meters TAGS ("California.SanFrancisco", 3);
+CREATE TABLE d1003 USING meters TAGS ("California.LosAngeles", 2);
+CREATE TABLE d1004 USING meters TAGS ("California.LosAngeles", 3);
+```
+
+### Create a Stream
+
+```sql
+create stream current_stream into current_stream_output_stb as select _wstart as start, _wend as end, max(current) as max_current from meters where voltage <= 220 interval (5s);
+```
+
+### Write Data
+```sql
+insert into d1001 values("2018-10-03 14:38:05.000", 10.30000, 219, 0.31000);
+insert into d1001 values("2018-10-03 14:38:15.000", 12.60000, 218, 0.33000);
+insert into d1001 values("2018-10-03 14:38:16.800", 12.30000, 221, 0.31000);
+insert into d1002 values("2018-10-03 14:38:16.650", 10.30000, 218, 0.25000);
+insert into d1003 values("2018-10-03 14:38:05.500", 11.80000, 221, 0.28000);
+insert into d1003 values("2018-10-03 14:38:16.600", 13.40000, 223, 0.29000);
+insert into d1004 values("2018-10-03 14:38:05.000", 10.80000, 223, 0.29000);
+insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
+```
+
+### Query the Results
+
+```sql
+taos> select start, end, max_current from current_stream_output_stb;
+ start | end | max_current |
+===========================================================================
+ 2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
+ 2018-10-03 14:38:15.000 | 2018-10-03 14:38:20.000 | 12.60000 |
+Query OK, 2 rows in database (0.018762s)
+```
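These two rows can be reproduced with a small batch model of the stream in plain Python (a toy illustration of the filter and the 5-second tumbling windows, not how TDengine actually executes the stream):

```python
# Toy model of the stream: keep rows with voltage <= 220, then take the
# maximum current per 5-second tumbling window (epoch-aligned).
from collections import defaultdict

def max_current_per_window(rows, window_s=5):
    """rows: (ts_seconds, current, voltage) -> {window_start: max_current}"""
    windows = defaultdict(list)
    for ts, current, voltage in rows:
        if voltage > 220:
            continue  # dropped by the WHERE voltage <= 220 filter
        windows[int(ts // window_s) * window_s].append(current)
    return {start: max(vals) for start, vals in sorted(windows.items())}

# Seconds-within-the-minute of the sample rows written above (14:38:xx)
rows = [
    (5.0, 10.3, 219), (15.0, 12.6, 218), (16.8, 12.3, 221),  # d1001
    (16.65, 10.3, 218),                                      # d1002
    (5.5, 11.8, 221), (16.6, 13.4, 223),                     # d1003
    (5.0, 10.8, 223), (6.5, 11.5, 221),                      # d1004
]
print(max_current_per_window(rows))  # {5: 10.3, 15: 12.6}
```

The two window starts correspond to 14:38:05 and 14:38:15, matching the query results above.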
+
+## Usage Scenario 2
+
+In this scenario, the active power and reactive power are determined from the data gathered in the previous scenario. The location and name of each meter are concatenated with a period (.) between them, and the data set is partitioned by meter name and written to a new database.
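The power expressions used in this scenario are the standard AC formulas: active power = current × voltage × cos(phase), and reactive power = current × voltage × sin(phase). A quick check in plain Python reproduces the stream's output for the first d1004 sample row:

```python
import math

def active_power(current, voltage, phase):
    # current * voltage * cos(phase), matching the stream's SELECT expression
    return current * voltage * math.cos(phase)

def reactive_power(current, voltage, phase):
    # current * voltage * sin(phase)
    return current * voltage * math.sin(phase)

# d1004 at 14:38:05.000: current=10.8 A, voltage=223 V, phase=0.29 rad
print(round(active_power(10.8, 223, 0.29), 6))    # 2307.834596
print(round(reactive_power(10.8, 223, 0.29), 6))  # 688.687332
```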
+
+### Create a Database for Raw Data
+
+The procedure from the previous scenario is used to create the database.
+
+### Create a Stream
+
+```sql
+create stream power_stream into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
+```
+
+### Write data
+
+The procedure from the previous scenario is used to write the data.
+
+### Query the Results
+```sql
+taos> select ts, meter_location, active_power, reactive_power from power_stream_output_stb;
+ ts | meter_location | active_power | reactive_power |
+===================================================================================================================
+ 2018-10-03 14:38:05.000 | California.LosAngeles.d1004 | 2307.834596289 | 688.687331847 |
+ 2018-10-03 14:38:06.500 | California.LosAngeles.d1004 | 2387.415754896 | 871.474763418 |
+ 2018-10-03 14:38:05.500 | California.LosAngeles.d1003 | 2506.240411679 | 720.680274962 |
+ 2018-10-03 14:38:16.600 | California.LosAngeles.d1003 | 2863.424274422 | 854.482390839 |
+ 2018-10-03 14:38:05.000 | California.SanFrancisco.d1001 | 2148.178871730 | 688.120784090 |
+ 2018-10-03 14:38:15.000 | California.SanFrancisco.d1001 | 2598.589176205 | 890.081451418 |
+ 2018-10-03 14:38:16.800 | California.SanFrancisco.d1001 | 2588.728381186 | 829.240910475 |
+ 2018-10-03 14:38:16.650 | California.SanFrancisco.d1002 | 2175.595991997 | 555.520860397 |
+Query OK, 8 rows in database (0.014753s)
+```
diff --git a/docs/en/06-replication/index.md b/docs/en/06-replication/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..42e004950854319c48f17bbed1a79b8dfae22e96
--- /dev/null
+++ b/docs/en/06-replication/index.md
@@ -0,0 +1,7 @@
+---
+sidebar_label: Data Replication
+title: Data Replication
+description: Briefly introduce how to replicate data among TDengine cloud services
+---
+
+TDengine provides full support for data replication. You can replicate data from a TDengine cloud service to a local TDengine deployment, from a local TDengine deployment to a TDengine cloud service, or from one cloud service to another, regardless of which cloud or region the two services reside in.
\ No newline at end of file
diff --git a/docs/en/08-data-in/06-taosx.md b/docs/en/08-data-in/06-taosx.md
deleted file mode 100644
index ea502b74c0b97514857c41cd2abf15600b9695d2..0000000000000000000000000000000000000000
--- a/docs/en/08-data-in/06-taosx.md
+++ /dev/null
@@ -1 +0,0 @@
-# taosX
\ No newline at end of file
diff --git a/docs/en/09-data-out/02-tmq.md b/docs/en/09-data-out/02-tmq.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..d3fb4760685b6be8e094a29461408b91a302835b 100644
--- a/docs/en/09-data-out/02-tmq.md
+++ b/docs/en/09-data-out/02-tmq.md
@@ -0,0 +1,879 @@
+---
+sidebar_label: Subscription
+title: Data Subscription
+description: Use data subscription to get data from TDengine.
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+import Java from "./_sub_java.mdx";
+import Python from "./_sub_python.mdx";
+import Go from "./_sub_go.mdx";
+import Rust from "./_sub_rust.mdx";
+import Node from "./_sub_node.mdx";
+import CSharp from "./_sub_cs.mdx";
+import CDemo from "./_sub_c.mdx";
+
+This topic introduces how to read data out of TDengine using data subscription, an advanced feature of TDengine. To access data through data subscription, you need to create a topic, create a consumer, subscribe the consumer to the topic, and then consume the data. In this document, we briefly explain each of these main steps.
+
+## Create Topic
+
+A topic can be created on a database, on some selected columns, or on a supertable.
+
+### Topic on Columns
+
+The most common way to create a topic is to create it on specifically selected columns. The syntax is as follows:
+
+```sql
+CREATE TOPIC topic_name as subquery;
+```
+
+You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
+
+- The schema of topics created in this manner is determined by the subscribed data.
+- You cannot modify (`ALTER MODIFY`) or delete (`ALTER DROP`) columns or tags that are used in a subscription or calculation.
+- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
+
+For example:
+
+```sql
+CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
+```
+
+### Topic on SuperTable
+
+Syntax:
+
+```sql
+CREATE TOPIC topic_name AS STABLE stb_name;
+```
+
+Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
+
+- The table schema can be modified.
+- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
+- A different table schema may exist for every data block to be processed.
+- The data returned does not include tags.
+
+### Topic on Database
+
+Syntax:
+
+```sql
+CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
+```
+
+This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
+
+## Create Consumer
+
+To create a consumer, you must use the APIs provided by TDengine connectors. Below is sample code that uses the connectors for different languages.
+
+
+You configure the following parameters when creating a consumer:
+
+| Parameter | Type | Description | Remarks |
+| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
+| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
+| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
+| `client.id` | string | Client ID | Maximum length: 192. |
+| `auto.offset.reset`            | enum    | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
+| `enable.auto.commit`           | boolean | Commit automatically | Specify `true` or `false`. |
+| `auto.commit.interval.ms`      | integer | Interval for automatic commits, in milliseconds | |
+| `enable.heartbeat.background`  | boolean | Background heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
+| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSDB | |
+| `msg.with.table.name`          | boolean | Specify whether to deserialize table names from messages | |
+
+The method of specifying these parameters depends on the language used:
+
+
+
+
+```c
+/* Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
+ an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass) */
+tmq_conf_t* conf = tmq_conf_new();
+tmq_conf_set(conf, "enable.auto.commit", "true");
+tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
+tmq_conf_set(conf, "group.id", "cgrpName");
+tmq_conf_set(conf, "td.connect.user", "root");
+tmq_conf_set(conf, "td.connect.pass", "taosdata");
+tmq_conf_set(conf, "auto.offset.reset", "earliest");
+tmq_conf_set(conf, "experimental.snapshot.enable", "true");
+tmq_conf_set(conf, "msg.with.table.name", "true");
+tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
+
+tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
+tmq_conf_destroy(conf);
+```
+
+
+
+
+Java programs use the following parameters:
+
+| Parameter | Type | Description | Remarks |
+| ----------------------------- | ------ | ----------- | ------- |
+| `bootstrap.servers`           | string | Connection address, such as `localhost:6030` | |
+| `value.deserializer`          | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type | |
+| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
+
+Note: The `bootstrap.servers` parameter is used instead of `td.connect.ip` and `td.connect.port` to provide an interface that is consistent with Kafka.
+
+```java
+Properties properties = new Properties();
+properties.setProperty("enable.auto.commit", "true");
+properties.setProperty("auto.commit.interval.ms", "1000");
+properties.setProperty("group.id", "cgrpName");
+properties.setProperty("bootstrap.servers", "127.0.0.1:6030");
+properties.setProperty("td.connect.user", "root");
+properties.setProperty("td.connect.pass", "taosdata");
+properties.setProperty("auto.offset.reset", "earliest");
+properties.setProperty("msg.with.table.name", "true");
+properties.setProperty("value.deserializer", "com.taos.example.MetersDeserializer");
+
+TaosConsumer<Meters> consumer = new TaosConsumer<>(properties);
+
+/* value deserializer definition. */
+import com.taosdata.jdbc.tmq.ReferenceDeserializer;
+
+public class MetersDeserializer extends ReferenceDeserializer<Meters> {
+}
+```
+
+
+
+
+
+```go
+config := tmq.NewConfig()
+defer config.Destroy()
+err = config.SetGroupID("test")
+if err != nil {
+ panic(err)
+}
+err = config.SetAutoOffsetReset("earliest")
+if err != nil {
+ panic(err)
+}
+err = config.SetConnectIP("127.0.0.1")
+if err != nil {
+ panic(err)
+}
+err = config.SetConnectUser("root")
+if err != nil {
+ panic(err)
+}
+err = config.SetConnectPass("taosdata")
+if err != nil {
+ panic(err)
+}
+err = config.SetConnectPort("6030")
+if err != nil {
+ panic(err)
+}
+err = config.SetMsgWithTableName(true)
+if err != nil {
+ panic(err)
+}
+err = config.EnableHeartBeat()
+if err != nil {
+ panic(err)
+}
+err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
+ if result.ErrCode != 0 {
+ errStr := wrapper.TMQErr2Str(result.ErrCode)
+ err := errors.NewError(int(result.ErrCode), errStr)
+ panic(err)
+ }
+})
+if err != nil {
+ panic(err)
+}
+```
+
+
+
+
+
+```rust
+let mut dsn: Dsn = "taos://".parse()?;
+dsn.set("group.id", "group1");
+dsn.set("client.id", "test");
+dsn.set("auto.offset.reset", "earliest");
+
+let tmq = TmqBuilder::from_dsn(dsn)?;
+
+let mut consumer = tmq.build()?;
+```
+
+
+
+
+
+Python programs use the following parameters:
+
+| Parameter | Type | Description | Remarks |
+| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
+| `td_connect_ip` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td_connect_user` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td_connect_pass` | string | Used in establishing a connection; same as `taos_connect` | |
+| `td_connect_port` | string | Used in establishing a connection; same as `taos_connect` | |
+| `group_id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
+| `client_id` | string | Client ID | Maximum length: 192. |
+| `auto_offset_reset`            | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
+| `enable_auto_commit`           | string | Commit automatically | Specify `true` or `false`. |
+| `auto_commit_interval_ms`      | string | Interval for automatic commits, in milliseconds | |
+| `enable_heartbeat_background`  | string | Background heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false`. |
+| `experimental_snapshot_enable` | string | Specify whether to consume messages from the WAL or from TSDB | Specify `true` or `false`. |
+| `msg_with_table_name`          | string | Specify whether to deserialize table names from messages | Specify `true` or `false`. |
+| `timeout` | int | Consumer pull timeout | |
+
+
+
+
+
+```js
+// Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
+// an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass)
+
+let consumer = taos.consumer({
+  'enable.auto.commit': 'true',
+  'auto.commit.interval.ms': '1000',
+  'group.id': 'tg2',
+  'td.connect.user': 'root',
+  'td.connect.pass': 'taosdata',
+  'auto.offset.reset': 'earliest',
+  'msg.with.table.name': 'true',
+  'td.connect.ip': '127.0.0.1',
+  'td.connect.port': '6030'
+});
+```
+
+
+
+
+
+```csharp
+using TDengineTMQ;
+
+// Create consumer groups on demand (GourpID) and enable automatic commits (EnableAutoCommit),
+// an automatic commit interval (AutoCommitIntervalMs), and a username (TDConnectUser) and password (TDConnectPasswd)
+var cfg = new ConsumerConfig
+{
+    EnableAutoCommit = "true",
+    AutoCommitIntervalMs = "1000",
+    GourpId = "TDengine-TMQ-C#",
+    TDConnectUser = "root",
+    TDConnectPasswd = "taosdata",
+    AutoOffsetReset = "earliest",
+    MsgWithTableName = "true",
+    TDConnectIp = "127.0.0.1",
+    TDConnectPort = "6030"
+};
+
+var consumer = new ConsumerBuilder(cfg).Build();
+
+```
+
+
+
+
+
+A consumer group is automatically created when multiple consumers are configured with the same consumer group ID.
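As a rough illustration of the sharing semantics (a toy model; TDengine assigns vgroups to consumers server-side, and the real algorithm is not shown here), members of one group split the topic's vgroups among themselves, while a different group ID receives the full set independently:

```python
# Toy sketch of consumer-group semantics: within a group, vgroups are divided
# among members; each group consumes the topic independently of other groups.

def assign_vgroups(vgroups, members):
    """Round-robin assignment of vgroups to the members of one consumer group."""
    assignment = {m: [] for m in members}
    for i, vg in enumerate(vgroups):
        assignment[members[i % len(members)]].append(vg)
    return assignment

vgroups = [1, 2, 3, 4]
# Two consumers sharing group "g1" split the vgroups...
print(assign_vgroups(vgroups, ["c1", "c2"]))  # {'c1': [1, 3], 'c2': [2, 4]}
# ...while a lone consumer in group "g2" receives all of them.
print(assign_vgroups(vgroups, ["c3"]))        # {'c3': [1, 2, 3, 4]}
```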
+
+## Subscribe to a Topic
+
+A single consumer can subscribe to multiple topics.
+
+
+
+
+```c
+// Create a list of subscribed topics
+tmq_list_t* topicList = tmq_list_new();
+tmq_list_append(topicList, "topicName");
+// Enable subscription
+tmq_subscribe(tmq, topicList);
+tmq_list_destroy(topicList);
+
+```
+
+
+
+
+```java
+List<String> topics = new ArrayList<>();
+topics.add("tmq_topic");
+consumer.subscribe(topics);
+```
+
+
+
+
+```go
+consumer, err := tmq.NewConsumer(config)
+if err != nil {
+ panic(err)
+}
+err = consumer.Subscribe([]string{"example_tmq_topic"})
+if err != nil {
+ panic(err)
+}
+```
+
+
+
+
+```rust
+consumer.subscribe(["tmq_meters"]).await?;
+```
+
+
+
+
+
+```python
+consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
+```
+
+
+
+
+
+```js
+// Create a list of subscribed topics
+let topics = ['topic_test']
+
+// Enable subscription
+consumer.subscribe(topics);
+```
+
+
+
+
+
+```csharp
+// Create a list of subscribed topics
+List<string> topics = new List<string>();
+topics.Add("tmq_topic");
+// Enable subscription
+consumer.Subscribe(topics);
+```
+
+
+
+
+
+## Consume messages
+
+The following code demonstrates how to consume the messages in a queue.
+
+
+
+
+```c
+// Consume data
+while (running) {
+ TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
+ msg_process(msg);
+}
+```
+
+The `while` loop obtains a message each time it calls `tmq_consumer_poll()`. This message is exactly the same as the result returned by a query, and the same deserialization API can be used on it.
+
+
+
+
+```java
+while(running){
+    ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
+ for (Meters meter : meters) {
+ processMsg(meter);
+ }
+}
+```
+
+
+
+
+
+```go
+for {
+ result, err := consumer.Poll(time.Second)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println(result)
+ consumer.Commit(context.Background(), result.Message)
+ consumer.FreeMessage(result.Message)
+}
+```
+
+
+
+
+
+```rust
+{
+ let mut stream = consumer.stream();
+
+ while let Some((offset, message)) = stream.try_next().await? {
+ // get information from offset
+
+ // the topic
+ let topic = offset.topic();
+ // the vgroup id, like partition id in kafka.
+ let vgroup_id = offset.vgroup_id();
+ println!("* in vgroup id {vgroup_id} of topic {topic}\n");
+
+ if let Some(data) = message.into_data() {
+ while let Some(block) = data.fetch_raw_block().await? {
+ // one block for one table, get table name if needed
+ let name = block.table_name();
+ let records: Vec<Record> = block.deserialize().try_collect()?;
+ println!(
+ "** table: {}, got {} records: {:#?}\n",
+ name.unwrap(),
+ records.len(),
+ records
+ );
+ }
+ }
+ consumer.commit(offset).await?;
+ }
+}
+```
+
+
+
+
+```python
+for msg in consumer:
+ for row in msg:
+ print(row)
+```
+
+
+
+
+
+```js
+while(true){
+ msg = consumer.consume(200);
+ // process message(consumeResult)
+ console.log(msg.topicPartition);
+ console.log(msg.block);
+ console.log(msg.fields)
+}
+```
+
+
+
+
+
+```csharp
+// Consume data
+while (true)
+{
+ var consumerRes = consumer.Consume(100);
+ // process ConsumeResult
+ ProcessMsg(consumerRes);
+ consumer.Commit(consumerRes);
+}
+```
+
+
+
+
+
+## Close the Consumer
+
+After message consumption is finished, unsubscribe the consumer and close it to release its resources.
+
+
+
+
+```c
+/* Unsubscribe */
+tmq_unsubscribe(tmq);
+
+/* Close consumer object */
+tmq_consumer_close(tmq);
+```
+
+
+
+
+```java
+/* Unsubscribe */
+consumer.unsubscribe();
+
+/* Close consumer */
+consumer.close();
+```
+
+
+
+
+
+```go
+consumer.Close()
+```
+
+
+
+
+
+```rust
+consumer.unsubscribe().await;
+```
+
+
+
+
+
+```py
+# Unsubscribe
+consumer.unsubscribe()
+# Close consumer
+consumer.close()
+```
+
+
+
+
+```js
+consumer.unsubscribe();
+consumer.close();
+```
+
+
+
+
+
+```csharp
+// Unsubscribe
+consumer.Unsubscribe();
+
+// Close consumer
+consumer.Close();
+```
+
+
+
+
+
+
+
+## Delete Topic
+
+You can delete topics that are no longer needed. Note that you must unsubscribe all consumers from a topic before deleting it.
+
+```sql
+/* Delete topic */
+DROP TOPIC topic_name;
+```
+
+## Check Status
+
+At any time, you can check the status of existing topics and consumers.
+
+1. Query all existing topics.
+
+```sql
+SHOW TOPICS;
+```
+
+2. Query the status and subscribed topics of all consumers.
+
+```sql
+SHOW CONSUMERS;
+```
\ No newline at end of file
diff --git a/docs/en/09-data-out/05-taosx.md b/docs/en/09-data-out/05-taosx.md
deleted file mode 100644
index ea502b74c0b97514857c41cd2abf15600b9695d2..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/05-taosx.md
+++ /dev/null
@@ -1 +0,0 @@
-# taosX
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_c.mdx b/docs/en/09-data-out/_sub_c.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..b0667268e9978533e84e68ea3fe5f285538df762
--- /dev/null
+++ b/docs/en/09-data-out/_sub_c.mdx
@@ -0,0 +1,3 @@
+```c
+{{#include docs/examples/c/tmq_example.c}}
+```
diff --git a/docs/en/09-data-out/_sub_cs.mdx b/docs/en/09-data-out/_sub_cs.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..a435ea0273c94cbe75eaf7431e1a9c39d49d92e3
--- /dev/null
+++ b/docs/en/09-data-out/_sub_cs.mdx
@@ -0,0 +1,3 @@
+```csharp
+{{#include docs/examples/csharp/SubscribeDemo.cs}}
+```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_go.mdx b/docs/en/09-data-out/_sub_go.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..34b2aefd92c5eef75b59fbbba96b83da091722a7
--- /dev/null
+++ b/docs/en/09-data-out/_sub_go.mdx
@@ -0,0 +1,3 @@
+```go
+{{#include docs/examples/go/sub/main.go}}
+```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_java.mdx b/docs/en/09-data-out/_sub_java.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..d14b5fd6095dd90f89dd2c2e828858585cfddff9
--- /dev/null
+++ b/docs/en/09-data-out/_sub_java.mdx
@@ -0,0 +1,11 @@
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
+```
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
+```
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
+```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_node.mdx b/docs/en/09-data-out/_sub_node.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..3eeff0922a31a478dd34a77c6cb6471f51a57a8c
--- /dev/null
+++ b/docs/en/09-data-out/_sub_node.mdx
@@ -0,0 +1,3 @@
+```js
+{{#include docs/examples/node/nativeexample/subscribe_demo.js}}
+```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_python.mdx b/docs/en/09-data-out/_sub_python.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..1309da5b416799492a6b85aae4b775e227c0ad6e
--- /dev/null
+++ b/docs/en/09-data-out/_sub_python.mdx
@@ -0,0 +1,3 @@
+```py
+{{#include docs/examples/python/tmq_example.py}}
+```
diff --git a/docs/en/09-data-out/_sub_rust.mdx b/docs/en/09-data-out/_sub_rust.mdx
new file mode 100644
index 0000000000000000000000000000000000000000..0021666a7024a9b63d6b9c38bf8a57b6eded6d66
--- /dev/null
+++ b/docs/en/09-data-out/_sub_rust.mdx
@@ -0,0 +1,3 @@
+```rust
+{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
+```
diff --git a/docs/examples/R/connect_native.r b/docs/examples/R/connect_native.r
new file mode 100644
index 0000000000000000000000000000000000000000..3c5c9e199b61263b785e86238d277bef70070b28
--- /dev/null
+++ b/docs/examples/R/connect_native.r
@@ -0,0 +1,16 @@
+if (! "RJDBC" %in% installed.packages()[, "Package"]) {
+ install.packages('RJDBC', repos='http://cran.us.r-project.org')
+}
+
+# ANCHOR: demo
+library("DBI")
+library("rJava")
+library("RJDBC")
+
+args<- commandArgs(trailingOnly = TRUE)
+driver_path = args[1] # path to jdbc-driver for example: "/root/taos-jdbcdriver-3.0.0-dist.jar"
+driver = JDBC("com.taosdata.jdbc.TSDBDriver", driver_path)
+conn = dbConnect(driver, "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata")
+dbGetQuery(conn, "SELECT server_version()")
+dbDisconnect(conn)
+# ANCHOR_END: demo
diff --git a/docs/examples/R/connect_rest.r b/docs/examples/R/connect_rest.r
new file mode 100644
index 0000000000000000000000000000000000000000..5ceec572fc26575dfc597983eeac3233bc4488ab
--- /dev/null
+++ b/docs/examples/R/connect_rest.r
@@ -0,0 +1,12 @@
+if (! "RJDBC" %in% installed.packages()[, "Package"]) {
+ install.packages('RJDBC', repos='http://cran.us.r-project.org')
+}
+
+library("DBI")
+library("rJava")
+library("RJDBC")
+driver_path = "/home/debug/build/lib/taos-jdbcdriver-2.0.38-dist.jar"
+driver = JDBC("com.taosdata.jdbc.rs.RestfulDriver", driver_path)
+conn = dbConnect(driver, "jdbc:TAOS-RS://localhost:6041?user=root&password=taosdata")
+dbGetQuery(conn, "SELECT server_version()")
+dbDisconnect(conn)
\ No newline at end of file
diff --git a/docs/examples/c/.gitignore b/docs/examples/c/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..afe974314989a1e3aa4eee703738a9a960c18577
--- /dev/null
+++ b/docs/examples/c/.gitignore
@@ -0,0 +1,3 @@
+*
+!*.c
+!.gitignore
diff --git a/docs/examples/c/async_query_example.c b/docs/examples/c/async_query_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..b370420b124a21b05f8e0b4041fb1461b1e2478a
--- /dev/null
+++ b/docs/examples/c/async_query_example.c
@@ -0,0 +1,195 @@
+// compile with:
+// gcc -o async_query_example async_query_example.c -ltaos
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include "taos.h"
+
+typedef int16_t VarDataLenT;
+
+#define TSDB_NCHAR_SIZE sizeof(int32_t)
+#define VARSTR_HEADER_SIZE sizeof(VarDataLenT)
+
+#define GET_FLOAT_VAL(x) (*(float *)(x))
+#define GET_DOUBLE_VAL(x) (*(double *)(x))
+
+#define varDataLen(v) ((VarDataLenT *)(v))[0]
+
+int printRow(char *str, TAOS_ROW row, TAOS_FIELD *fields, int numFields) {
+ int len = 0;
+ char split = ' ';
+
+ for (int i = 0; i < numFields; ++i) {
+ if (i > 0) {
+ str[len++] = split;
+ }
+
+ if (row[i] == NULL) {
+ len += sprintf(str + len, "%s", "NULL");
+ continue;
+ }
+
+ switch (fields[i].type) {
+ case TSDB_DATA_TYPE_TINYINT:
+ len += sprintf(str + len, "%d", *((int8_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UTINYINT:
+ len += sprintf(str + len, "%u", *((uint8_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_SMALLINT:
+ len += sprintf(str + len, "%d", *((int16_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_USMALLINT:
+ len += sprintf(str + len, "%u", *((uint16_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_INT:
+ len += sprintf(str + len, "%d", *((int32_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UINT:
+ len += sprintf(str + len, "%u", *((uint32_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_BIGINT:
+ len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UBIGINT:
+ len += sprintf(str + len, "%" PRIu64, *((uint64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_FLOAT: {
+ float fv = 0;
+ fv = GET_FLOAT_VAL(row[i]);
+ len += sprintf(str + len, "%f", fv);
+ } break;
+
+ case TSDB_DATA_TYPE_DOUBLE: {
+ double dv = 0;
+ dv = GET_DOUBLE_VAL(row[i]);
+ len += sprintf(str + len, "%lf", dv);
+ } break;
+
+ case TSDB_DATA_TYPE_BINARY:
+ case TSDB_DATA_TYPE_NCHAR: {
+ int32_t charLen = varDataLen((char *)row[i] - VARSTR_HEADER_SIZE);
+ memcpy(str + len, row[i], charLen);
+ len += charLen;
+ } break;
+
+ case TSDB_DATA_TYPE_TIMESTAMP:
+ len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_BOOL:
+ len += sprintf(str + len, "%d", *((int8_t *)row[i]));
+ default:
+ break;
+ }
+ }
+
+ return len;
+}
+
+void printHeader(TAOS_RES *res) {
+ int numFields = taos_num_fields(res);
+ TAOS_FIELD *fields = taos_fetch_fields(res);
+ char header[256] = {0};
+ int len = 0;
+ for (int i = 0; i < numFields; ++i) {
+ len += sprintf(header + len, "%s ", fields[i].name);
+ }
+ puts(header);
+}
+
+// ANCHOR: demo
+
+/**
+ * @brief call back function of taos_fetch_row_a
+ *
+ * @param param : the third parameter you passed to taos_fetch_row_a
+ * @param res : pointer of TAOS_RES
+ * @param numOfRow : number of rows fetched in this batch. will be 0 if there is no more data.
+ * @return void*
+ */
+void *fetch_row_callback(void *param, TAOS_RES *res, int numOfRow) {
+ printf("numOfRow = %d \n", numOfRow);
+ int numFields = taos_num_fields(res);
+ TAOS_FIELD *fields = taos_fetch_fields(res);
+ TAOS *_taos = (TAOS *)param;
+ if (numOfRow > 0) {
+ for (int i = 0; i < numOfRow; ++i) {
+ TAOS_ROW row = taos_fetch_row(res);
+ char temp[256] = {0};
+ printRow(temp, row, fields, numFields);
+ puts(temp);
+ }
+ taos_fetch_rows_a(res, fetch_row_callback, _taos);
+ } else {
+ printf("no more data, close the connection.\n");
+ taos_free_result(res);
+ taos_close(_taos);
+ taos_cleanup();
+ }
+}
+
+/**
+ * @brief callback function of taos_query_a
+ *
+ * @param param: the fourth parameter you passed to taos_query_a
+ * @param res : the result set
+ * @param code : status code
+ * @return void*
+ */
+void *select_callback(void *param, TAOS_RES *res, int code) {
+ printf("query callback ...\n");
+ TAOS *_taos = (TAOS *)param;
+ if (code == 0 && res) {
+ printHeader(res);
+ taos_fetch_rows_a(res, fetch_row_callback, _taos);
+ } else {
+ printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(_taos);
+ taos_cleanup();
+ exit(EXIT_FAILURE);
+ }
+}
+
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", "power", 6030);
+ if (taos == NULL) {
+ puts("failed to connect to server");
+ exit(EXIT_FAILURE);
+ }
+ // param one is the connection returned by taos_connect.
+ // param two is the SQL to execute.
+ // param three is the callback function.
+ // param four can be any pointer. It will be passed to your callback function as the first parameter. We use taos
+ // here, because we want to close the connection after getting the data.
+ taos_query_a(taos, "SELECT * FROM meters", select_callback, taos);
+ sleep(1);
+}
+
+// output:
+// query callback ...
+// ts current voltage phase location groupid
+// numOfRow = 8
+// 1538548685500 11.800000 221 0.280000 california.losangeles 2
+// 1538548696600 13.400000 223 0.290000 california.losangeles 2
+// 1538548685000 10.800000 223 0.290000 california.losangeles 3
+// 1538548686500 11.500000 221 0.350000 california.losangeles 3
+// 1538548685000 10.300000 219 0.310000 california.sanfrancisco 2
+// 1538548695000 12.600000 218 0.330000 california.sanfrancisco 2
+// 1538548696800 12.300000 221 0.310000 california.sanfrancisco 2
+// 1538548696650 10.300000 218 0.250000 california.sanfrancisco 3
+// numOfRow = 0
+// no more data, close the connection.
+// ANCHOR_END: demo
\ No newline at end of file
diff --git a/docs/examples/c/connect_example.c b/docs/examples/c/connect_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..1a23df4806d7ff986898734e1971f6e0cd7c5360
--- /dev/null
+++ b/docs/examples/c/connect_example.c
@@ -0,0 +1,24 @@
+// compile with
+// gcc connect_example.c -o connect_example -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include "taos.h"
+
+int main() {
+ const char *host = "localhost";
+ const char *user = "root";
+ const char *passwd = "taosdata";
+ // if you don't want to connect to a default db, set it to NULL or ""
+ const char *db = NULL;
+ uint16_t port = 0; // 0 means use the default port
+ TAOS *taos = taos_connect(host, user, passwd, db, port);
+ if (taos == NULL) {
+ int errCode = taos_errno(NULL);
+ char *msg = taos_errstr(NULL);
+ printf("%d, %s\n", errCode, msg);
+ } else {
+ printf("connected\n");
+ taos_close(taos);
+ }
+ taos_cleanup();
+}
diff --git a/docs/examples/c/error_handle_example.c b/docs/examples/c/error_handle_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..e7dedb263df250f6634aa15fab2729cbaf4e5972
--- /dev/null
+++ b/docs/examples/c/error_handle_example.c
@@ -0,0 +1,24 @@
+// compile with
+// gcc error_handle_example.c -o error_handle_example -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include "taos.h"
+
+int main() {
+ const char *host = "localhost";
+ const char *user = "root";
+ const char *passwd = "taosdata";
+ // if you don't want to connect to a default db, set it to NULL or ""
+ const char *db = "notexist";
+ uint16_t port = 0; // 0 means use the default port
+ TAOS *taos = taos_connect(host, user, passwd, db, port);
+ if (taos == NULL) {
+ int errCode = taos_errno(NULL);
+ char *msg = taos_errstr(NULL);
+ printf("%d, %s\n", errCode, msg);
+ } else {
+ printf("connected\n");
+ taos_close(taos);
+ }
+ taos_cleanup();
+}
diff --git a/docs/examples/c/insert_example.c b/docs/examples/c/insert_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..ce8fdc5b9372aec7b02d3c9254ec25c4c4f62adc
--- /dev/null
+++ b/docs/examples/c/insert_example.c
@@ -0,0 +1,51 @@
+// compile with
+// gcc -o insert_example insert_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include "taos.h"
+
+
+/**
+ * @brief execute sql and print affected rows.
+ *
+ * @param taos
+ * @param sql
+ */
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("Error code: %d; Message: %s\n", code, taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ int affectedRows = taos_affected_rows(res);
+ printf("affected rows %d\n", affectedRows);
+ taos_free_result(res);
+}
+
+
+
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "CREATE DATABASE power");
+ executeSQL(taos, "USE power");
+ executeSQL(taos, "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
+ executeSQL(taos, "INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
+ " d1002 USING meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
+ " d1003 USING meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
+ " d1004 USING meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
+ taos_close(taos);
+ taos_cleanup();
+}
+
+// output:
+// affected rows 0
+// affected rows 0
+// affected rows 0
+// affected rows 8
\ No newline at end of file
diff --git a/docs/examples/c/json_protocol_example.c b/docs/examples/c/json_protocol_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..9d276127a64c3d74322e30587ab2e319c29cbf65
--- /dev/null
+++ b/docs/examples/c/json_protocol_example.c
@@ -0,0 +1,52 @@
+// compile with
+// gcc -o json_protocol_example json_protocol_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("%s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ taos_free_result(res);
+}
+
+// ANCHOR: main
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", "", 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "DROP DATABASE IF EXISTS test");
+ executeSQL(taos, "CREATE DATABASE test");
+ executeSQL(taos, "USE test");
+ char *line =
+ "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": "
+ "\"California.SanFrancisco\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
+ "\"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
+ "\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": "
+ "2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": "
+ "\"California.LosAngeles\", \"groupid\": 1}}]";
+
+ char *lines[] = {line};
+ TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
+ if (taos_errno(res) != 0) {
+ printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
+ } else {
+ int affectedRow = taos_affected_rows(res);
+ printf("successfully inserted %d rows\n", affectedRow);
+ }
+ taos_free_result(res);
+ taos_close(taos);
+ taos_cleanup();
+}
+// output:
+// successfully inserted 4 rows
+// ANCHOR_END: main
diff --git a/docs/examples/c/line_example.c b/docs/examples/c/line_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..ce39f8d9df744082a450ce246529bf56adebd1e0
--- /dev/null
+++ b/docs/examples/c/line_example.c
@@ -0,0 +1,47 @@
+// compile with
+// gcc -o line_example line_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("%s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ taos_free_result(res);
+}
+
+// ANCHOR: main
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", "", 0);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "DROP DATABASE IF EXISTS test");
+ executeSQL(taos, "CREATE DATABASE test");
+ executeSQL(taos, "USE test");
+ char *lines[] = {"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
+ TAOS_RES *res = taos_schemaless_insert(taos, lines, 4, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
+ if (taos_errno(res) != 0) {
+ printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
+ } else {
+ int affectedRows = taos_affected_rows(res);
+ printf("successfully inserted %d rows\n", affectedRows);
+ }
+ taos_free_result(res);
+ taos_close(taos);
+ taos_cleanup();
+}
+// output:
+// successfully inserted 4 rows
+// ANCHOR_END: main
\ No newline at end of file
diff --git a/docs/examples/c/multi_bind_example.c b/docs/examples/c/multi_bind_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..02e6568e9e88ac8703a4993ed406e770d23c2438
--- /dev/null
+++ b/docs/examples/c/multi_bind_example.c
@@ -0,0 +1,147 @@
+// compile with
+// gcc -o multi_bind_example multi_bind_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+/**
+ * @brief execute sql only and ignore result set
+ *
+ * @param taos
+ * @param sql
+ */
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("%s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ taos_free_result(res);
+}
+
+/**
+ * @brief exit program when error occur.
+ *
+ * @param stmt
+ * @param code
+ * @param msg
+ */
+void checkErrorCode(TAOS_STMT *stmt, int code, const char *msg) {
+ if (code != 0) {
+ printf("%s. error: %s\n", msg, taos_stmt_errstr(stmt));
+ taos_stmt_close(stmt);
+ exit(EXIT_FAILURE);
+ }
+}
+
+/**
+ * @brief insert data using stmt API
+ *
+ * @param taos
+ */
+void insertData(TAOS *taos) {
+ // init
+ TAOS_STMT *stmt = taos_stmt_init(taos);
+ // prepare
+ const char *sql = "INSERT INTO ? USING meters TAGS(?, ?) values(?, ?, ?, ?)";
+ int code = taos_stmt_prepare(stmt, sql, 0);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
+ // bind table name and tags
+ TAOS_BIND tags[2];
+ char *location = "California.SanFrancisco";
+ int groupId = 2;
+ tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
+ tags[0].buffer_length = strlen(location);
+ tags[0].length = &tags[0].buffer_length;
+ tags[0].buffer = location;
+ tags[0].is_null = NULL;
+
+ tags[1].buffer_type = TSDB_DATA_TYPE_INT;
+ tags[1].buffer_length = sizeof(int);
+ tags[1].length = &tags[1].buffer_length;
+ tags[1].buffer = &groupId;
+ tags[1].is_null = NULL;
+
+ code = taos_stmt_set_tbname_tags(stmt, "d1001", tags);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_set_tbname_tags");
+
+ // highlight-start
+ // insert two rows with multi binds
+ TAOS_MULTI_BIND params[4];
+ // values to bind
+ int64_t ts[] = {1648432611249, 1648432611749};
+ float current[] = {10.3, 12.6};
+ int voltage[] = {219, 218};
+ float phase[] = {0.31, 0.33};
+ // is_null array
+ char is_null[2] = {0};
+ // length array
+ int32_t int64Len[2] = {sizeof(int64_t)};
+ int32_t floatLen[2] = {sizeof(float)};
+ int32_t intLen[2] = {sizeof(int)};
+
+ params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
+ params[0].buffer_length = sizeof(int64_t);
+ params[0].buffer = ts;
+ params[0].length = int64Len;
+ params[0].is_null = is_null;
+ params[0].num = 2;
+
+ params[1].buffer_type = TSDB_DATA_TYPE_FLOAT;
+ params[1].buffer_length = sizeof(float);
+ params[1].buffer = current;
+ params[1].length = floatLen;
+ params[1].is_null = is_null;
+ params[1].num = 2;
+
+ params[2].buffer_type = TSDB_DATA_TYPE_INT;
+ params[2].buffer_length = sizeof(int);
+ params[2].buffer = voltage;
+ params[2].length = intLen;
+ params[2].is_null = is_null;
+ params[2].num = 2;
+
+ params[3].buffer_type = TSDB_DATA_TYPE_FLOAT;
+ params[3].buffer_length = sizeof(float);
+ params[3].buffer = phase;
+ params[3].length = floatLen;
+ params[3].is_null = is_null;
+ params[3].num = 2;
+
+ code = taos_stmt_bind_param_batch(stmt, params); // bind batch
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_bind_param_batch");
+ code = taos_stmt_add_batch(stmt); // add batch
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_add_batch");
+ // highlight-end
+ // execute
+ code = taos_stmt_execute(stmt);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_execute");
+ int affectedRows = taos_stmt_affected_rows(stmt);
+ printf("successfully inserted %d rows\n", affectedRows);
+ // close
+ taos_stmt_close(stmt);
+}
+
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "DROP DATABASE IF EXISTS power");
+ executeSQL(taos, "CREATE DATABASE power");
+ executeSQL(taos, "USE power");
+ executeSQL(taos,
+ "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), "
+ "groupId INT)");
+ insertData(taos);
+ taos_close(taos);
+ taos_cleanup();
+}
+
+// output:
+// successfully inserted 2 rows
\ No newline at end of file
diff --git a/docs/examples/c/query_example.c b/docs/examples/c/query_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..fcae95bcd45a282eaa3ae911b4115e6300c6af8e
--- /dev/null
+++ b/docs/examples/c/query_example.c
@@ -0,0 +1,143 @@
+// compile with:
+// gcc -o query_example query_example.c -ltaos
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+typedef int16_t VarDataLenT;
+
+#define TSDB_NCHAR_SIZE sizeof(int32_t)
+#define VARSTR_HEADER_SIZE sizeof(VarDataLenT)
+
+#define GET_FLOAT_VAL(x) (*(float *)(x))
+#define GET_DOUBLE_VAL(x) (*(double *)(x))
+
+#define varDataLen(v) ((VarDataLenT *)(v))[0]
+
+int printRow(char *str, TAOS_ROW row, TAOS_FIELD *fields, int numFields) {
+ int len = 0;
+ char split = ' ';
+
+ for (int i = 0; i < numFields; ++i) {
+ if (i > 0) {
+ str[len++] = split;
+ }
+
+ if (row[i] == NULL) {
+ len += sprintf(str + len, "%s", "NULL");
+ continue;
+ }
+
+ switch (fields[i].type) {
+ case TSDB_DATA_TYPE_TINYINT:
+ len += sprintf(str + len, "%d", *((int8_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UTINYINT:
+ len += sprintf(str + len, "%u", *((uint8_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_SMALLINT:
+ len += sprintf(str + len, "%d", *((int16_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_USMALLINT:
+ len += sprintf(str + len, "%u", *((uint16_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_INT:
+ len += sprintf(str + len, "%d", *((int32_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UINT:
+ len += sprintf(str + len, "%u", *((uint32_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_BIGINT:
+ len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_UBIGINT:
+ len += sprintf(str + len, "%" PRIu64, *((uint64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_FLOAT: {
+ float fv = 0;
+ fv = GET_FLOAT_VAL(row[i]);
+ len += sprintf(str + len, "%f", fv);
+ } break;
+
+ case TSDB_DATA_TYPE_DOUBLE: {
+ double dv = 0;
+ dv = GET_DOUBLE_VAL(row[i]);
+ len += sprintf(str + len, "%lf", dv);
+ } break;
+
+ case TSDB_DATA_TYPE_BINARY:
+ case TSDB_DATA_TYPE_NCHAR: {
+ int32_t charLen = varDataLen((char *)row[i] - VARSTR_HEADER_SIZE);
+ memcpy(str + len, row[i], charLen);
+ len += charLen;
+ } break;
+
+ case TSDB_DATA_TYPE_TIMESTAMP:
+ len += sprintf(str + len, "%" PRId64, *((int64_t *)row[i]));
+ break;
+
+ case TSDB_DATA_TYPE_BOOL:
+ len += sprintf(str + len, "%d", *((int8_t *)row[i]));
+ default:
+ break;
+ }
+ }
+
+ return len;
+}
+
+/**
+ * @brief print column name and values of each row
+ *
+ * @param res
+ */
+static void printResult(TAOS_RES *res) {
+ int numFields = taos_num_fields(res);
+ TAOS_FIELD *fields = taos_fetch_fields(res);
+ char header[256] = {0};
+ int len = 0;
+ for (int i = 0; i < numFields; ++i) {
+ len += sprintf(header + len, "%s ", fields[i].name);
+ }
+ puts(header);
+
+ TAOS_ROW row = NULL;
+ while ((row = taos_fetch_row(res))) {
+ char temp[256] = {0};
+ printRow(temp, row, fields, numFields);
+ puts(temp);
+ }
+}
+
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", "power", 6030);
+ if (taos == NULL) {
+ puts("failed to connect to server");
+ exit(EXIT_FAILURE);
+ }
+ TAOS_RES *res = taos_query(taos, "SELECT * FROM meters LIMIT 2");
+ if (taos_errno(res) != 0) {
+ printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+ exit(EXIT_FAILURE);
+ }
+ printResult(res);
+ taos_free_result(res);
+ taos_close(taos);
+ taos_cleanup();
+}
+
+// output:
+// ts current voltage phase location groupid
+// 1648432611249 10.300000 219 0.310000 California.SanFrancisco 2
+// 1648432611749 12.600000 218 0.330000 California.SanFrancisco 2
\ No newline at end of file
diff --git a/docs/examples/c/stmt_example.c b/docs/examples/c/stmt_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..28dae5f9d5ea2faec0aa3c0a784d39e252651c65
--- /dev/null
+++ b/docs/examples/c/stmt_example.c
@@ -0,0 +1,141 @@
+// compile with
+// gcc -o stmt_example stmt_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+/**
+ * @brief execute an SQL statement, discarding the result set; exits on error.
+ *
+ * @param taos
+ * @param sql
+ */
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("%s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ taos_free_result(res);
+}
+
+/**
+ * @brief check the return status and exit the program when an error occurs.
+ *
+ * @param stmt
+ * @param code
+ * @param msg
+ */
+void checkErrorCode(TAOS_STMT *stmt, int code, const char* msg) {
+ if (code != 0) {
+ printf("%s. error: %s\n", msg, taos_stmt_errstr(stmt));
+ taos_stmt_close(stmt);
+ exit(EXIT_FAILURE);
+ }
+}
+
+typedef struct {
+ int64_t ts;
+ float current;
+ int voltage;
+ float phase;
+} Row;
+
+/**
+ * @brief insert data using stmt API
+ *
+ * @param taos
+ */
+void insertData(TAOS *taos) {
+ // init
+ TAOS_STMT *stmt = taos_stmt_init(taos);
+ // prepare
+ const char *sql = "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)";
+ int code = taos_stmt_prepare(stmt, sql, 0);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
+ // bind table name and tags
+ TAOS_BIND tags[2];
+ char* location = "California.SanFrancisco";
+ int groupId = 2;
+ tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
+ tags[0].buffer_length = strlen(location);
+ tags[0].length = &tags[0].buffer_length;
+ tags[0].buffer = location;
+ tags[0].is_null = NULL;
+
+ tags[1].buffer_type = TSDB_DATA_TYPE_INT;
+ tags[1].buffer_length = sizeof(int);
+ tags[1].length = &tags[1].buffer_length;
+ tags[1].buffer = &groupId;
+ tags[1].is_null = NULL;
+
+ code = taos_stmt_set_tbname_tags(stmt, "d1001", tags);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_set_tbname_tags");
+
+ // insert two rows
+ Row rows[2] = {
+ {1648432611249, 10.3, 219, 0.31},
+ {1648432611749, 12.6, 218, 0.33},
+ };
+
+ TAOS_BIND values[4];
+ values[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
+ values[0].buffer_length = sizeof(int64_t);
+ values[0].length = &values[0].buffer_length;
+ values[0].is_null = NULL;
+
+ values[1].buffer_type = TSDB_DATA_TYPE_FLOAT;
+ values[1].buffer_length = sizeof(float);
+ values[1].length = &values[1].buffer_length;
+ values[1].is_null = NULL;
+
+ values[2].buffer_type = TSDB_DATA_TYPE_INT;
+ values[2].buffer_length = sizeof(int);
+ values[2].length = &values[2].buffer_length;
+ values[2].is_null = NULL;
+
+ values[3].buffer_type = TSDB_DATA_TYPE_FLOAT;
+ values[3].buffer_length = sizeof(float);
+ values[3].length = &values[3].buffer_length;
+ values[3].is_null = NULL;
+
+ for (int i = 0; i < 2; ++i) {
+ values[0].buffer = &rows[i].ts;
+ values[1].buffer = &rows[i].current;
+ values[2].buffer = &rows[i].voltage;
+ values[3].buffer = &rows[i].phase;
+ code = taos_stmt_bind_param(stmt, values); // bind param
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_bind_param");
+ code = taos_stmt_add_batch(stmt); // add batch
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_add_batch");
+ }
+ // execute
+ code = taos_stmt_execute(stmt);
+ checkErrorCode(stmt, code, "failed to execute taos_stmt_execute");
+ int affectedRows = taos_stmt_affected_rows(stmt);
+ printf("successfully inserted %d rows\n", affectedRows);
+ // close
+ taos_stmt_close(stmt);
+}
+
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "CREATE DATABASE power");
+ executeSQL(taos, "USE power");
+ executeSQL(taos, "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
+ insertData(taos);
+ taos_close(taos);
+ taos_cleanup();
+}
+
+
+// output:
+// successfully inserted 2 rows
\ No newline at end of file
diff --git a/docs/examples/c/subscribe_demo.c b/docs/examples/c/subscribe_demo.c
new file mode 100644
index 0000000000000000000000000000000000000000..2fe62c24eb92d2f57c24b40fc16f47d62ea5e378
--- /dev/null
+++ b/docs/examples/c/subscribe_demo.c
@@ -0,0 +1,66 @@
+// A simple demo for asynchronous subscription.
+// compile with:
+// gcc -o subscribe_demo subscribe_demo.c -ltaos
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+int nTotalRows;
+
+/**
+ * @brief callback function of subscription.
+ *
+ * @param tsub
+ * @param res
+ * @param param. the additional parameter passed to taos_subscribe
+ * @param code. error code
+ */
+void subscribe_callback(TAOS_SUB* tsub, TAOS_RES* res, void* param, int code) {
+ if (code != 0) {
+ printf("error: %d\n", code);
+ exit(EXIT_FAILURE);
+ }
+
+ TAOS_ROW row = NULL;
+ int num_fields = taos_num_fields(res);
+ TAOS_FIELD* fields = taos_fetch_fields(res);
+ int nRows = 0;
+
+ while ((row = taos_fetch_row(res))) {
+ char buf[4096] = {0};
+ taos_print_row(buf, row, fields, num_fields);
+ puts(buf);
+ nRows++;
+ }
+
+ nTotalRows += nRows;
+ printf("%d rows consumed.\n", nRows);
+}
+
+int main() {
+ TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+
+ int restart = 1; // if the topic already exists, restart consumption from the beginning.
+ const char* topic = "topic-meter-current-bg-10";
+ const char* sql = "select * from power.meters where current > 10";
+ void* param = NULL; // additional parameter.
+ int interval = 2000; // polling interval in milliseconds.
+ TAOS_SUB* tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, NULL, interval);
+
+ // Wait for inserts from other processes. You can open the TDengine CLI to insert some records for testing.
+
+ getchar(); // press Enter to stop
+
+ printf("total rows consumed: %d\n", nTotalRows);
+ int keep = 0; // whether to keep the subscription progress after unsubscribing
+ taos_unsubscribe(tsub, keep);
+
+ taos_close(taos);
+ taos_cleanup();
+}
diff --git a/docs/examples/c/telnet_line_example.c b/docs/examples/c/telnet_line_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..da62da4ba492856b0d73a564c1bf9cdd60b5b742
--- /dev/null
+++ b/docs/examples/c/telnet_line_example.c
@@ -0,0 +1,54 @@
+// compile with
+// gcc -o telnet_line_example telnet_line_example.c -ltaos
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "taos.h"
+
+void executeSQL(TAOS *taos, const char *sql) {
+ TAOS_RES *res = taos_query(taos, sql);
+ int code = taos_errno(res);
+ if (code != 0) {
+ printf("%s\n", taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(EXIT_FAILURE);
+ }
+ taos_free_result(res);
+}
+
+// ANCHOR: main
+int main() {
+ TAOS *taos = taos_connect("localhost", "root", "taosdata", "", 6030);
+ if (taos == NULL) {
+ printf("failed to connect to server\n");
+ exit(EXIT_FAILURE);
+ }
+ executeSQL(taos, "DROP DATABASE IF EXISTS test");
+ executeSQL(taos, "CREATE DATABASE test");
+ executeSQL(taos, "USE test");
+ char *lines[] = {
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
+ };
+ TAOS_RES *res = taos_schemaless_insert(taos, lines, 8, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
+ if (taos_errno(res) != 0) {
+ printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
+ } else {
+ int affectedRow = taos_affected_rows(res);
+ printf("successfully inserted %d rows\n", affectedRow);
+ }
+
+ taos_free_result(res);
+ taos_close(taos);
+ taos_cleanup();
+}
+// output:
+// successfully inserted 8 rows
+// ANCHOR_END: main
diff --git a/docs/examples/c/tmq_example.c b/docs/examples/c/tmq_example.c
new file mode 100644
index 0000000000000000000000000000000000000000..2eaa8ed5eb904a1fa2c00d7068aff06482f9f809
--- /dev/null
+++ b/docs/examples/c/tmq_example.c
@@ -0,0 +1,275 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include "taos.h"
+
+static int running = 1;
+static char dbName[64] = "tmqdb";
+static char stbName[64] = "stb";
+static char topicName[64] = "topicname";
+
+static int32_t msg_process(TAOS_RES* msg) {
+ char buf[1024];
+ int32_t rows = 0;
+
+ const char* topicName = tmq_get_topic_name(msg);
+ const char* dbName = tmq_get_db_name(msg);
+ int32_t vgroupId = tmq_get_vgroup_id(msg);
+
+ printf("topic: %s\n", topicName);
+ printf("db: %s\n", dbName);
+ printf("vgroup id: %d\n", vgroupId);
+
+ while (1) {
+ TAOS_ROW row = taos_fetch_row(msg);
+ if (row == NULL) break;
+
+ TAOS_FIELD* fields = taos_fetch_fields(msg);
+ int32_t numOfFields = taos_field_count(msg);
+ int32_t* length = taos_fetch_lengths(msg);
+ int32_t precision = taos_result_precision(msg);
+ rows++;
+ taos_print_row(buf, row, fields, numOfFields);
+ printf("row content: %s\n", buf);
+ }
+
+ return rows;
+}
+
+static int32_t init_env() {
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ TAOS_RES* pRes;
+ // drop database if exists
+ printf("drop database if exists\n");
+ pRes = taos_query(pConn, "drop database if exists tmqdb");
+ if (taos_errno(pRes) != 0) {
+ printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ // create database
+ printf("create database\n");
+ pRes = taos_query(pConn, "create database tmqdb");
+ if (taos_errno(pRes) != 0) {
+ printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ // create super table
+ printf("create super table\n");
+ pRes = taos_query(
+ pConn, "create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table stb, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ // create sub tables
+ printf("create sub tables\n");
+ pRes = taos_query(pConn, "create table tmqdb.ctb0 using tmqdb.stb tags(0, 'subtable0')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create subtable ctb0, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb1 using tmqdb.stb tags(1, 'subtable1')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create subtable ctb1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb2 using tmqdb.stb tags(2, 'subtable2')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create subtable ctb2, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb3 using tmqdb.stb tags(3, 'subtable3')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create subtable ctb3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ // insert data
+ printf("insert data into sub tables\n");
+ pRes = taos_query(pConn, "insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb2, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ taos_close(pConn);
+ return 0;
+}
+
+int32_t create_topic() {
+ printf("create topic\n");
+ TAOS_RES* pRes;
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ pRes = taos_query(pConn, "use tmqdb");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use tmqdb, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3, tbname from tmqdb.stb where c1 > 1");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ taos_close(pConn);
+ return 0;
+}
+
+void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
+ printf("tmq_commit_cb_print() code: %d, tmq: %p, param: %p\n", code, tmq, param);
+}
+
+tmq_t* build_consumer() {
+ tmq_conf_res_t code;
+ tmq_conf_t* conf = tmq_conf_new();
+ code = tmq_conf_set(conf, "enable.auto.commit", "true");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "group.id", "cgrpName");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "client.id", "user defined name");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "td.connect.user", "root");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
+ if (TMQ_CONF_OK != code) return NULL;
+
+ tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
+
+ tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
+ tmq_conf_destroy(conf);
+ return tmq;
+}
+
+tmq_list_t* build_topic_list() {
+ tmq_list_t* topicList = tmq_list_new();
+ int32_t code = tmq_list_append(topicList, "topicname");
+ if (code) {
+ return NULL;
+ }
+ return topicList;
+}
+
+void basic_consume_loop(tmq_t* tmq) {
+ int32_t totalRows = 0;
+ int32_t msgCnt = 0;
+ int32_t timeout = 5000;
+ while (running) {
+ TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeout);
+ if (tmqmsg) {
+ msgCnt++;
+ totalRows += msg_process(tmqmsg);
+ taos_free_result(tmqmsg);
+ } else {
+ break;
+ }
+ }
+
+ fprintf(stderr, "%d msg(s) consumed, including %d rows\n", msgCnt, totalRows);
+}
+
+int main(int argc, char* argv[]) {
+ int32_t code;
+
+ if (init_env() < 0) {
+ return -1;
+ }
+
+ if (create_topic() < 0) {
+ return -1;
+ }
+
+ tmq_t* tmq = build_consumer();
+ if (NULL == tmq) {
+ fprintf(stderr, "%% build_consumer() fail!\n");
+ return -1;
+ }
+
+ tmq_list_t* topic_list = build_topic_list();
+ if (NULL == topic_list) {
+ return -1;
+ }
+
+ if ((code = tmq_subscribe(tmq, topic_list))) {
+ fprintf(stderr, "%% Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
+ }
+ tmq_list_destroy(topic_list);
+
+ basic_consume_loop(tmq);
+
+ code = tmq_consumer_close(tmq);
+ if (code) {
+ fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code));
+ } else {
+ fprintf(stderr, "%% Consumer closed\n");
+ }
+
+ return 0;
+}
diff --git a/docs/examples/csharp/.gitignore b/docs/examples/csharp/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..b3aff79f3706e23aa74199a7f521f7912d2b0e45
--- /dev/null
+++ b/docs/examples/csharp/.gitignore
@@ -0,0 +1,4 @@
+bin
+obj
+.vs
+*.sln
\ No newline at end of file
diff --git a/docs/examples/csharp/AsyncQueryExample.cs b/docs/examples/csharp/AsyncQueryExample.cs
new file mode 100644
index 0000000000000000000000000000000000000000..0d47325932e2f01fec8d55cfdb64c636258f4a03
--- /dev/null
+++ b/docs/examples/csharp/AsyncQueryExample.cs
@@ -0,0 +1,106 @@
+using System;
+using System.Collections.Generic;
+using System.Runtime.InteropServices;
+using System.Threading;
+using TDengineDriver;
+using TDengineDriver.Impl;
+
+namespace TDengineExample
+{
+ public class AsyncQueryExample
+ {
+ static void Main()
+ {
+ IntPtr conn = GetConnection();
+ QueryAsyncCallback queryAsyncCallback = new QueryAsyncCallback(QueryCallback);
+ TDengine.QueryAsync(conn, "select * from meters", queryAsyncCallback, IntPtr.Zero);
+ Thread.Sleep(2000);
+ TDengine.Close(conn);
+ TDengine.Cleanup();
+ }
+
+ static void QueryCallback(IntPtr param, IntPtr taosRes, int code)
+ {
+ if (code == 0 && taosRes != IntPtr.Zero)
+ {
+ FetchRawBlockAsyncCallback fetchRowAsyncCallback = new FetchRawBlockAsyncCallback(FetchRawBlockCallback);
+ TDengine.FetchRawBlockAsync(taosRes, fetchRowAsyncCallback, param);
+ }
+ else
+ {
+ Console.WriteLine($"async query failed, error code: {code}");
+ }
+ }
+
+ // Iteratively call this interface until "numOfRows" is no greater than 0.
+ static void FetchRawBlockCallback(IntPtr param, IntPtr taosRes, int numOfRows)
+ {
+ if (numOfRows > 0)
+ {
+ Console.WriteLine($"{numOfRows} rows async retrieved");
+ IntPtr pdata = TDengine.GetRawBlock(taosRes);
+ List<TDengineMeta> metaList = TDengine.FetchFields(taosRes);
+ List