diff --git a/docs/en/02-intro.md b/docs/en/02-intro.md
index f0e1bc10b250ebbcc4057c26e46d2960df5bf7fe..accc393029a812c0028c1ed076b7cf02d5602782 100644
--- a/docs/en/02-intro.md
+++ b/docs/en/02-intro.md
@@ -23,8 +23,8 @@ The major features are listed below:
3. Data Explorer: browse through databases and even run SQL queries once you log in.
4. Visualization:
- Supports [Grafana](../visual/grafana/)
- - Supports Google data studio
- - Supports Grafana cloud (to be released soon)
+ - Supports Google Data Studio
+ - Supports Grafana Cloud (to be released soon)
5. [Data Subscription](../data-subscription/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka's, but you can specify filter conditions and share the topic with other users and user groups in TDengine Cloud.
6. [Stream Processing](../stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
7. Enterprise
diff --git a/docs/en/07-data-in/20-telegraf.md b/docs/en/07-data-in/20-telegraf.md
index 51cf08bedc838b3c76c958d454c5c50c18c0b9b7..d29685d0829b4e76ae37d9c251511c011f229f3e 100644
--- a/docs/en/07-data-in/20-telegraf.md
+++ b/docs/en/07-data-in/20-telegraf.md
@@ -55,7 +55,7 @@ Edit section "outputs.http".
{{#include docs/examples/thirdparty/telegraf-conf.toml:null:nrc}}
```
-The resulting configuration will collect CPU and memory data and sends it to TDengine database named "telegraf". Database "telegraf" will be created automatically if it dose not exist in advance.
+The resulting configuration will collect CPU and memory data and send it to a TDengine database named "telegraf". The database "telegraf" must be created first through the TDengine Cloud explorer.
## Start Telegraf
diff --git a/docs/en/09-data-out/04-taosdump.md b/docs/en/09-data-out/04-taosdump.md
index 3a4198e53bf0e67b33ab8ed81dc834c3eb2e7548..481337e5ec30489aac5859dde0dc9cf602a4b277 100644
--- a/docs/en/09-data-out/04-taosdump.md
+++ b/docs/en/09-data-out/04-taosdump.md
@@ -4,7 +4,7 @@ title: Dump Data Using taosDump
description: Dump data from TDengine into files using taosDump
---
-# taosDump
+## Overview
taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster.
diff --git a/docs/en/09-data-out/_sub_c.mdx b/docs/en/09-data-out/_sub_c.mdx
deleted file mode 100644
index b0667268e9978533e84e68ea3fe5f285538df762..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_c.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```c
-{{#include docs/examples/c/tmq_example.c}}
-```
diff --git a/docs/en/09-data-out/_sub_cs.mdx b/docs/en/09-data-out/_sub_cs.mdx
deleted file mode 100644
index a09e91422b12d3bfea1794d73aec28335dea9056..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_cs.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```csharp
-{{#include docs/examples/csharp/native-example/SubscribeDemo.cs}}
-```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_go.mdx b/docs/en/09-data-out/_sub_go.mdx
deleted file mode 100644
index 34b2aefd92c5eef75b59fbbba96b83da091722a7..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_go.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```go
-{{#include docs/examples/go/sub/main.go}}
-```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_java.mdx b/docs/en/09-data-out/_sub_java.mdx
deleted file mode 100644
index d14b5fd6095dd90f89dd2c2e828858585cfddff9..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_java.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
-```java
-{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
-{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
-{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
-```
-```java
-{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
-```
-```java
-{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
-```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_node.mdx b/docs/en/09-data-out/_sub_node.mdx
deleted file mode 100644
index 3eeff0922a31a478dd34a77c6cb6471f51a57a8c..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_node.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```js
-{{#include docs/examples/node/nativeexample/subscribe_demo.js}}
-```
\ No newline at end of file
diff --git a/docs/en/09-data-out/_sub_python.mdx b/docs/en/09-data-out/_sub_python.mdx
deleted file mode 100644
index 1309da5b416799492a6b85aae4b775e227c0ad6e..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_python.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```py
-{{#include docs/examples/python/tmq_example.py}}
-```
diff --git a/docs/en/09-data-out/_sub_rust.mdx b/docs/en/09-data-out/_sub_rust.mdx
deleted file mode 100644
index eb06c8f18c3e0f2e908a2d8d9fad9b0e73b866a2..0000000000000000000000000000000000000000
--- a/docs/en/09-data-out/_sub_rust.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-```rust
-{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
-```
diff --git a/docs/en/10-programming/01-connect/01-python.md b/docs/en/10-programming/01-connect/01-python.md
index f35641d16f14e3c43b63d03988dbee0c1424e9e3..04b34c324135ffc9bdd6f75a84bcdc57f0466a4b 100644
--- a/docs/en/10-programming/01-connect/01-python.md
+++ b/docs/en/10-programming/01-connect/01-python.md
@@ -88,7 +88,7 @@ For more details about how to write or query data via REST API, please check [RE
## Jupyter
-**Step 1: Install**
+### Step 1: Install
For users who are familiar with Jupyter and program in Python, both the TDengine Python connector and Jupyter need to be ready in your environment. If you have not done so yet, please use the commands below to install them.
@@ -113,9 +113,9 @@ conda install -c conda-forge taospy
-**Step 2: Configure**
+### Step 2: Configure
-In order for Jupyter to connect to TDengine cloud service, before launching Jupypter, the environment setting must be performed. We use Linux bash as example.
+Before launching Jupyter, the environment must be configured so that Jupyter can connect to the TDengine cloud service. We use Linux bash as an example.
```bash
export TDENGINE_CLOUD_TOKEN=""
@@ -123,7 +123,7 @@ export TDENGINE_CLOUD_URL=""
jupyter lab
```
-**Step 3: Connect**
+### Step 3: Connect
Once JupyterLab is launched, it is automatically connected and shown in your browser. You can create a new notebook, copy the sample code below, and run it.
diff --git a/docs/en/10-programming/01-connect/02-java.md b/docs/en/10-programming/01-connect/02-java.md
index f9c79f18013142f978b3bd93bdd335bc150aeaff..0f55200d08f886c4e4c1cfbd2ae0f862ca32e753 100644
--- a/docs/en/10-programming/01-connect/02-java.md
+++ b/docs/en/10-programming/01-connect/02-java.md
@@ -71,7 +71,7 @@ To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.td
## Connect
-Code bellow get JDBC URL from environment variables first and then create a `Connection` object, witch is a standard JDBC Connection object.
+The code below first gets the JDBC URL from environment variables and then creates a `Connection` object, which is a standard JDBC Connection object.
```java
{{#include docs/examples/java/src/main/java/com/taos/example/ConnectCloudExample.java:connect}}
diff --git a/docs/en/10-programming/01-connect/04-rust.md b/docs/en/10-programming/01-connect/04-rust.md
index d0e0d013efe4e7c07434f08f75b20fe056b67e4d..7c6f4032b5bbc6e8a558f3e9a137defd40b6ec3b 100644
--- a/docs/en/10-programming/01-connect/04-rust.md
+++ b/docs/en/10-programming/01-connect/04-rust.md
@@ -31,7 +31,7 @@ anyhow = "1.0.0"
## Config
-Run this command in your terminal to save TDengine cloud token as variables:
+Run this command in your terminal to save the TDengine cloud DSN as a variable:
diff --git a/docs/en/10-programming/01-connect/06-csharp.md b/docs/en/10-programming/01-connect/06-csharp.md
index 0fe7e3baa8b0d432c6cbfe4050a803420253de96..46e5699f6795c96d9f1825bbc0430f98b52312b6 100644
--- a/docs/en/10-programming/01-connect/06-csharp.md
+++ b/docs/en/10-programming/01-connect/06-csharp.md
@@ -33,7 +33,9 @@ Add following ItemGroup and Task to your project file.
+```
+```bash
dotnet add package TDengine.Connector
```
diff --git a/docs/en/10-programming/02-model.md b/docs/en/10-programming/02-model.md
index 14f4a0b01e79d39166cd4fcbd21d59e996fa00a1..6223ae84daeb1f800d4d69a230e94e20f87538ca 100644
--- a/docs/en/10-programming/02-model.md
+++ b/docs/en/10-programming/02-model.md
@@ -3,7 +3,9 @@ title: Data Model
description: Typical Data Model used in TDengine
---
-The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
+The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the [STable](/concept/#super-table-stable) (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
+
+Note: before you read this chapter, please make sure you have already read through [Key Concepts](/concept/), since TDengine introduces new concepts like "one table for one [data collection point](/concept/#data-collection-point)" and "[super table](/concept/#super-table-stable)".
## Create Database
diff --git a/docs/en/10-programming/03-insert.md b/docs/en/10-programming/03-insert.md
index b31d38a44b22ccdb019f021254d94350c6c02a3e..86f57102b1093820e87a05cde9129cd9045b799c 100644
--- a/docs/en/10-programming/03-insert.md
+++ b/docs/en/10-programming/03-insert.md
@@ -33,7 +33,7 @@ INSERT INTO test.d101 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10
Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d101" and 1 row into table "d102".
```sql
-INSERT INTO test.d101 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d102 VALUES (1538548696800, 12.3, 221, 0.31);
+INSERT INTO test.d101 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) test.d102 VALUES (1538548696800, 12.3, 221, 0.31);
```
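+
+For instance, a minimal Python sketch that runs this multi-table insert through the connector (the environment variables follow the connect chapter; treat the exact `taosrest` call shapes as assumptions):
+
+```python
+import os
+import taosrest
+
+# Connect with the cloud URL and token, as described in the connect chapter.
+conn = taosrest.connect(url=os.environ["TDENGINE_CLOUD_URL"],
+                        token=os.environ["TDENGINE_CLOUD_TOKEN"])
+
+# One INSERT statement that writes to two tables; returns the rows written.
+affected = conn.execute(
+    "INSERT INTO test.d101 VALUES (1538548685000, 10.3, 219, 0.31) "
+    "(1538548695000, 12.6, 218, 0.33) "
+    "test.d102 VALUES (1538548696800, 12.3, 221, 0.31)"
+)
+print(f"inserted {affected} rows")
+```
+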
For more details about `INSERT` please refer to [INSERT](https://docs.tdengine.com/cloud/taos-sql/insert).
diff --git a/docs/en/10-programming/04-query.md b/docs/en/10-programming/04-query.md
index c7ab775ca8cf4607f15a66f7e112fd74e28ef1c2..4b23ac692978354d1d846d0cbc02b79186ed07d9 100644
--- a/docs/en/10-programming/04-query.md
+++ b/docs/en/10-programming/04-query.md
@@ -42,7 +42,7 @@ For detailed query syntax please refer to [Select](https://docs.tdengine.com/clo
In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
-In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables can not be aggregated.
+In summary, records across subtables can be aggregated with a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated.
### Example 1
@@ -106,6 +106,7 @@ Down sampling can also be used for STable. For example, the below SQL statement
```sql title="SQL"
SELECT _wstart, SUM(current) FROM test.meters where location like "California%" INTERVAL(1s) limit 5;
```
+
```txt title="output"
_wstart | sum(current) |
======================================================
diff --git a/docs/en/10-programming/06-connector/01-python.md b/docs/en/10-programming/06-connector/01-python.md
index 7875f1ca1c6e3f5a4b6a13a5db233d7aae315dbb..ab6da4b263ba539c1f1bbc54c7cd9c4e2f573376 100644
--- a/docs/en/10-programming/06-connector/01-python.md
+++ b/docs/en/10-programming/06-connector/01-python.md
@@ -18,12 +18,12 @@ The source code for the Python connector is hosted on [GitHub](https://github.co
### Install via pip
```
-pip3 install -U taospy
+pip3 install -U taospy[ws]
```
### Install via conda
```
-conda install -c conda-forge taospy
+conda install -c conda-forge taospy taospyws
```
### Installation verification
@@ -75,16 +75,45 @@ The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-ap
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
-## Important Update
+## Other notes
+
+### Exception handling
+
+All errors from database operations are thrown directly as exceptions and the error message from the database is passed up the exception stack. The application is responsible for exception handling. For example:
+
+```python
+import taos
+
+try:
+ conn = taos.connect()
+ conn.execute("CREATE TABLE 123") # wrong sql
+except taos.Error as e:
+ print(e)
+ print("exception class: ", e.__class__.__name__)
+ print("error number:", e.errno)
+ print("error message:", e.msg)
+except BaseException as other:
+ print("exception occur")
+ print(other)
+
+# output:
+# [0x0216]: syntax error near 'Incomplete SQL statement'
+# exception class: ProgrammingError
+# error number: -2147483114
+# error message: syntax error near 'Incomplete SQL statement'
+
+```
+
+[view source code](https://github.com/taosdata/TDengine/blob/3.0/docs/examples/python/handle_exception.py)
+
+### About nanoseconds
-| Connector version | Important Update | Release date |
-| ----------------- | ----------------------------------------- | ------------ |
-| 2.6.2 | fix ci script | 2022-08-18 |
-| 2.5.2 | fix taos-ws-py python version dependency | 2022-08-12 |
-| 2.5.1 | (rest): add timezone option | 2022-08-11 |
-| 2.5.0 | add taosws module | 2022-08-10 |
-| 2.4.0 | add execute method to TaosRestConnection | 2022-07-18 |
-| 2.3.3 | support connect to TDengine Cloud Service | 2022-06-06 |
+Because Python's nanosecond support is still incomplete (see the links below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced at `ms` and `us` precision. Application developers need to handle this on their own; using pandas' `to_datetime()` is recommended. The Python connector may modify the interface in the future if Python officially supports nanoseconds in full.
+
+1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
+2. https://www.python.org/dev/peps/pep-0564/
+
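+A minimal sketch of handling such a value with pandas (the raw integer here is hypothetical; only the integer-to-timestamp conversion is the point):
+
+```python
+import pandas as pd
+
+# A nanosecond-precision column comes back as an integer count of
+# nanoseconds since the epoch rather than as a datetime object.
+raw_ts = 1_538_548_685_000_000_123  # hypothetical raw value
+ts = pd.to_datetime(raw_ts, unit="ns")
+print(ts)  # 2018-10-03 06:38:05.000000123
+```
+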
+## Important Update
[**Release Notes**](https://github.com/taosdata/taos-connector-python/releases)
diff --git a/docs/en/10-programming/06-connector/02-java.md b/docs/en/10-programming/06-connector/02-java.md
index d16b21621b40c68069609a4c5aa4b672d212a420..9cea45d3cc9da5ad8ce828804c9cb7d3b7901a7b 100644
--- a/docs/en/10-programming/06-connector/02-java.md
+++ b/docs/en/10-programming/06-connector/02-java.md
@@ -16,19 +16,19 @@ import TabItem from '@theme/TabItem';
TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows:
-| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version > = 2.0.24) |
-| ----------------- | ---------------------------------- | ------------------------------------ |
-| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
-| INT | java.lang.Integer | java.lang.Integer |
-| BIGINT | java.lang.Long | java.lang.Long |
-| FLOAT | java.lang.Float | java.lang.Float |
-| DOUBLE | java.lang.Double | java.lang.Double |
-| SMALLINT | java.lang.Short | java.lang.Short |
-| TINYINT | java.lang.Byte | java.lang.Byte |
-| BOOL | java.lang.Boolean | java.lang.Boolean |
-| BINARY | java.lang.String | byte array |
-| NCHAR | java.lang.String | java.lang.String |
-| JSON | - | java.lang.String |
+| TDengine DataType | JDBCType |
+| ----------------- | ---------------------------------- |
+| TIMESTAMP | java.sql.Timestamp |
+| INT | java.lang.Integer |
+| BIGINT | java.lang.Long |
+| FLOAT | java.lang.Float |
+| DOUBLE | java.lang.Double |
+| SMALLINT | java.lang.Short |
+| TINYINT | java.lang.Byte |
+| BOOL | java.lang.Boolean |
+| BINARY | byte array |
+| NCHAR | java.lang.String |
+| JSON | java.lang.String |
**Note**: Only TAG supports JSON types
@@ -53,7 +53,7 @@ Add following dependency in the `pom.xml` file of your Maven project:
<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
-    <version>2.0.**</version>
+    <version>3.0.0</version>
</dependency>
```
@@ -68,7 +68,7 @@ cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```
-After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
+After compilation, a jar package named taos-jdbcdriver-3.0.*-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
@@ -76,8 +76,7 @@ After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is genera
## Establish Connection using URL
TDengine's JDBC URL specification format is:
-`jdbc:[TAOS-RS]://[host_name]:[port]/[database_name]?batchfetch={true|false}&useSSL={true|false}&token={token}&httpPoolSize={httpPoolSize}&httpKeepAlive={true|false}]&httpConnectTimeout={httpTimeout}&httpSocketTimeout={socketTimeout}`
-
+`jdbc:TAOS-RS://[host_name]:[port]/[database_name]?batchfetch={true|false}&useSSL={true|false}&token={token}&httpPoolSize={httpPoolSize}&httpKeepAlive={true|false}&httpConnectTimeout={httpTimeout}&httpSocketTimeout={socketTimeout}`
```java
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
@@ -85,7 +84,7 @@ String jdbcUrl = System.getenv("TDENGINE_JDBC_URL");
Connection conn = DriverManager.getConnection(jdbcUrl);
```
-Note:
+:::note
- REST API is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example.
@@ -97,7 +96,7 @@ Note:
```sql
insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
```
-
+:::
### Establish Connection using URL and Properties
@@ -120,7 +119,6 @@ If the configuration parameters are duplicated in the URL, Properties, the `prio
1. JDBC URL parameters, as described above, can be specified in the parameters of the JDBC URL.
2. Properties connProps
-
## Usage Examples
### Create Database and Tables
@@ -141,8 +139,8 @@ int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now
System.out.println("insert " + affectedRows + " rows.");
```
-`now`` is an internal function. The default is the current time of the client's computer.
-
+> `now` is an internal function. The default is the current time of the client's computer.
+> `now + 1s` represents the current time of the client plus 1 second. The number can be followed by any of these time units: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
### Querying data
@@ -188,6 +186,9 @@ There are three types of error codes that the JDBC connector can report:
- Error code of the native connection method (error code between 0x2351 and 0x2400)
- Error code of other TDengine function modules
+For specific error codes, please refer to:
+
+- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
### Closing resources
@@ -197,10 +198,10 @@ stmt.close();
conn.close();
```
-:::note
+:::note
Be sure to close the connection, otherwise, there will be a connection leak.
-
:::
+
### Use with Connection Pool
#### HikariCP
@@ -283,12 +284,51 @@ Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develo
## Recent update logs
-| taos-jdbcdriver version | major changes |
-| :---------------------: | :------------------------------------------: |
-| 2.0.38 | JDBC REST connections add bulk pull function |
-| 2.0.37 | Added support for json tags |
-| 2.0.36 | Add support for schemaless writing |
+| taos-jdbcdriver version | major changes |
+| :---------------------: | :--------------------------------------------: |
+| 3.0.3 | Fix timestamp resolution error for REST connections on JDK 17+ |
+| 3.0.1 - 3.0.2 | Fix occasional incorrect parsing of resultSet data. 3.0.1 is compiled on JDK 11; use 3.0.2 in a JDK 8 environment |
+| 3.0.0 | Support for TDengine 3.0 |
+| 2.0.42 | fix wasNull interface return value in WebSocket connection |
+| 2.0.41 | fix decode method of username and password in REST connection |
+| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
+| 2.0.38 | JDBC REST connections add bulk pull function |
+| 2.0.37 | Support json tags |
+| 2.0.36 | Support schemaless writing |
+
+## Frequently Asked Questions
+
+1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform `batch data writing/update`?
+
+   **Cause**: In TDengine's JDBC implementation, SQL statements submitted by the `addBatch()` method are executed sequentially in the order they are added; this does not reduce the number of interactions with the server and brings no performance improvement.
+
+ **Solution**: 1. splice multiple values in a single insert statement; 2. use multi-threaded concurrent insertion; 3. use parameter-bound writing
+
+2. java.lang.UnsatisfiedLinkError: no taos in java.library.path
+
+ **Cause**: The program did not find the dependent native library `taos`.
+
+   **Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory; on Linux, creating the soft link `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work; on macOS the lib soft link is `/usr/local/lib/libtaos.dylib`.
+
+3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
+
+ **Cause**: Currently, TDengine only supports 64-bit JDK.
+
+ **Solution**: Reinstall the 64-bit JDK.
+
+4. java.lang.NoSuchMethodError: setByteArray
+
+   **Cause**: taos-jdbcdriver 3.* only supports TDengine 3.0 and later.
+
+ **Solution**: Use taos-jdbcdriver 2.* with your TDengine 2.* deployment.
+
+5. java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; ... taos-jdbcdriver-3.0.1.jar
+
+   **Cause**: taos-jdbcdriver 3.0.1 is compiled on JDK 11.
+
+   **Solution**: Use taos-jdbcdriver 3.0.2.
+
+For additional troubleshooting, see [FAQ](../../../train-faq/faq).
## API Reference
diff --git a/docs/en/10-programming/06-connector/03-go.md b/docs/en/10-programming/06-connector/03-go.md
index 9de2b3141237a8dd1f3867312d3914f25f449875..e59423b422048f13f6776df3d821a4b18e6b506f 100644
--- a/docs/en/10-programming/06-connector/03-go.md
+++ b/docs/en/10-programming/06-connector/03-go.md
@@ -10,26 +10,40 @@ This article describes how to install `driver-go` and connect to TDengine cluste
The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
-## Installation steps
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support)
+
+## Installation Steps
+
+### Pre-installation preparation
+
+* Install the Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
+* If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps
+
+Configure the environment variables and verify them with the following commands:
+
+* `go env`
+* `gcc -v`
+
### Use go get to install
-```
-go get -u github.com/taosdata/driver-go/v2@develop
-```
+```bash
+go get -u github.com/taosdata/driver-go/v3@latest
+```
+
### Manage with go mod
1. Initialize the project with the `go mod` command.
```text
go mod init taos-demo
- ```
+ ```
2. Introduce taosSql
```go
import (
"database/sql"
- _ "github.com/taosdata/driver-go/v2/taosSql"
+ _ "github.com/taosdata/driver-go/v3/taosSql"
)
```
@@ -37,7 +51,7 @@ go get -u github.com/taosdata/driver-go/v2@develop
```text
go mod tidy
- ```
+ ```
4. Run the program with `go run taos-demo` or compile the binary with the `go build` command.
@@ -46,7 +60,7 @@ go get -u github.com/taosdata/driver-go/v2@develop
go build
```
-## Create a connection
+## Establishing a connection
### Data source name (DSN)
@@ -73,7 +87,10 @@ Use `taosRestful` as `driverName` and use a correct [DSN](#DSN) as `dataSourceNa
## Sample programs
-* [sample program](https://github.com/taosdata/TDengine/tree/develop/examples/go)
+### More sample programs
+
+* [sample program](https://github.com/taosdata/driver-go/tree/3.0/examples)
+
* [Video tutorial](https://www.taosdata.com/blog/2020/11/11/1951.html).
## Usage limitations
@@ -92,7 +109,7 @@ import (
"fmt"
"time"
- _ "github.com/taosdata/driver-go/v2/taosRestful"
+ _ "github.com/taosdata/driver-go/v3/taosRestful"
)
func main() {
@@ -187,7 +204,6 @@ This API is created successfully without checking permissions, but only when you
`sql.Open` Built-in method to execute query statements.
-
## API Reference
-Full API see [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
\ No newline at end of file
+For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v3)
\ No newline at end of file
diff --git a/docs/en/10-programming/06-connector/04-rust.md b/docs/en/10-programming/06-connector/04-rust.md
index 99765329f488b7650e119dd07e1bd8199b673a4d..2e52afb6b0789a386f3b42890a7768c7da1ffbbe 100644
--- a/docs/en/10-programming/06-connector/04-rust.md
+++ b/docs/en/10-programming/06-connector/04-rust.md
@@ -3,13 +3,19 @@ toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
-description: Detailed guide for Rust Connector
+description: This document describes the TDengine Rust connector.
---
+[taos on crates.io](https://crates.io/crates/taos) · [API docs on docs.rs](https://docs.rs/taos)
+
+`taos` is the official Rust connector for TDengine. Rust developers can develop applications to access the TDengine instance data.
-`libtaos` is the official Rust language connector for TDengine. Rust developers can develop applications to access the TDengine instance data.
+The source code for the Rust connectors is located on [GitHub](https://github.com/taosdata/taos-connector-rust).
-The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/libtaos-rs).
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support)
+
+The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 3.0 or higher to avoid known issues.
## Installation
@@ -17,74 +23,76 @@ The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/
Install the Rust development toolchain.
-### Adding libtaos dependencies
-
-```toml
-[dependencies]
-# use rest feature
-libtaos = { version = "*", features = ["rest"]}
-```
+### Adding taos dependencies
-### Using connection pools
+Depending on the connection method, add the [taos][taos] dependency in your Rust project as follows:
-Please enable the `r2d2` feature in `Cargo.toml`.
+In `Cargo.toml`, add [taos][taos]:
```toml
[dependencies]
-libtaos = { version = "*", features = ["rest", "r2d2"] }
+# use default feature
+taos = "*"
```
-## Create a connection
+## Establishing a connection
-Create `TaosCfg` from TDengine cloud DSN. The DSN should be in form of `://[:port]?token=`.
+[TaosBuilder] creates a connection constructor through the DSN connection description string.
+The DSN should be in the form `<protocol>://<host>[:port]?token=<token>`.
```rust
-use libtaos::*;
-let cfg = TaosCfg::from_dsn(DSN)?;
+let builder = TaosBuilder::from_dsn(DSN)?;
```
You can now use this object to create the connection.
```rust
-let conn = cfg.connect()? ;
+let conn = builder.build()?;
```
More than one connection can be created from the same builder.
```rust
-let conn = cfg.connect()? ;
-let conn2 = cfg.connect()? ;
-```
-
-You can use connection pools in applications.
-
-```rust
-let pool = r2d2::Pool::builder()
- .max_size(10000) // max connections
- .build(cfg)? ;
-
-// ...
-// Use pool to get connection
-let conn = pool.get()? ;
+let conn1 = builder.build()?;
+let conn2 = builder.build()?;
```
After that, you can perform the following operations on the database.
```rust
-async fn demo() -> Result<(), Error> {
- // get connection ...
-
- // create database
- conn.exec("create database if not exists demo").await?
- // create table
- conn.exec("create table if not exists demo.tb1 (ts timestamp, v int)").await?
- // insert
- conn.exec("insert into demo.tb1 values(now, 1)").await?
- // query
- let rows = conn.query("select * from demo.tb1").await?
- for row in rows.rows {
- println!("{}", row.into_iter().join(","));
+async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
+ // prepare database
+ taos.exec_many([
+ format!("DROP DATABASE IF EXISTS `{db}`"),
+ format!("CREATE DATABASE `{db}`"),
+ format!("USE `{db}`"),
+ ])
+ .await?;
+
+ let inserted = taos.exec_many([
+ // create super table
+ "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
+ TAGS (`groupid` INT, `location` BINARY(24))",
+ // create child table
+ "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
+ // insert into child table
+ "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
+ // insert with NULL values
+ "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
+ // insert and automatically create table with tags if not exists
+ "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
+ // insert many records in a single sql
+ "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
+ ]).await?;
+
+ assert_eq!(inserted, 6);
+ let mut result = taos.query("select * from `meters`").await?;
+
+ for field in result.fields() {
+ println!("got field: {}", field.name());
}
+
+    // Iterate over rows. The original example is truncated here; this
+    // completion follows the row-iteration pattern shown later in this doc.
+    let mut rows = result.rows();
+    while let Some(row) = rows.try_next().await? {
+        for (name, value) in row {
+            println!("got value in col `{}`: {}", name, value);
+        }
+    }
+
+    Ok(())
}
```
@@ -92,24 +100,26 @@ async fn demo() -> Result<(), Error> {
### Connection pooling
-In complex applications, we recommend enabling connection pools. Connection pool for [libtaos] is implemented using [r2d2].
+In complex applications, we recommend enabling connection pools. [taos] implements connection pools based on [r2d2].
As follows, a connection pool with default parameters can be generated.
```rust
-let pool = r2d2::Pool::new(cfg)? ;
+let pool = TaosBuilder::from_dsn(dsn)?.pool()?;
```
You can set the same connection pool parameters using the connection pool's constructor.
```rust
- use std::time::Duration;
- let pool = r2d2::Pool::builder()
- .max_size(5000) // max connections
- .max_lifetime(Some(Duration::from_minutes(100))) // lifetime of each connection
- .min_idle(Some(1000)) // minimal idle connections
- .connection_timeout(Duration::from_minutes(2))
- .build(cfg);
+let dsn = std::env::var("TDENGINE_CLOUD_DSN")?;
+
+let opts = PoolBuilder::new()
+ .max_size(5000) // max connections
+ .max_lifetime(Some(Duration::from_secs(60 * 60))) // lifetime of each connection
+ .min_idle(Some(1000)) // minimal idle connections
+ .connection_timeout(Duration::from_secs(2));
+
+let pool = TaosBuilder::from_dsn(dsn)?.with_pool_builder(opts)?;
```
In the application code, use `pool.get()? ` to get a connection object [Taos].
@@ -117,56 +127,99 @@ In the application code, use `pool.get()? ` to get a connection object [Taos].
```rust
let taos = pool.get()?;
```
+## Usage examples
-The [Taos] structure is the connection manager in [libtaos] and provides two main APIs.
+The [Taos][struct.Taos] object provides an API to perform operations on multiple databases.
1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
```rust
- taos.exec().await?
+ let affected_rows = taos.exec("INSERT INTO tb1 VALUES(now, NULL)").await?;
+ ```
+
+2. `exec_many`: Run multiple SQL statements simultaneously or in order.
+
+ ```rust
+ taos.exec_many([
+ "CREATE DATABASE test",
+ "USE test",
+ "CREATE TABLE `tb1` (`ts` TIMESTAMP, `val` INT)",
+ ]).await?;
```
-2. `query`: Execute the query statement and return the [TaosQueryData] object.
+3. `query`: Run a query statement and return a [ResultSet] object.
```rust
- let q = taos.query("select * from log.logs").await?
+ let mut q = taos.query("select * from log.logs").await?;
```
- The [TaosQueryData] object stores the query result data and basic information about the returned columns (column name, type, length).
+   The [ResultSet] object stores the query result data and the names, types, and lengths of the returned columns.
- Column information is stored using [ColumnMeta].
+ You can obtain column information by using [.fields()].
```rust
- let cols = &q.column_meta;
+ let cols = q.fields();
for col in cols {
- println!("name: {}, type: {:?} , bytes: {}", col.name, col.type_, col.bytes);
+ println!("name: {}, type: {:?} , bytes: {}", col.name(), col.ty(), col.bytes());
}
```
It fetches data line by line.
```rust
- for (i, row) in q.rows.iter().enumerate() {
- for (j, cell) in row.iter().enumerate() {
- println!("cell({}, {}) data: {}", i, j, cell);
+ let mut rows = result.rows();
+ let mut nrows = 0;
+ while let Some(row) = rows.try_next().await? {
+ for (col, (name, value)) in row.enumerate() {
+ println!(
+ "[{}] got value in col {} (named `{:>8}`): {}",
+ nrows, col, name, value
+ );
}
+ nrows += 1;
}
```
+ Or use the [serde](https://serde.rs) deserialization framework.
+
+ ```rust
+    #[derive(Debug, Deserialize)]
+    struct Record {
+        // deserialize timestamp to chrono::DateTime<Local>
+        ts: DateTime<Local>,
+        // float to f32
+        current: Option<f32>,
+        // int to i32
+        voltage: Option<i32>,
+        phase: Option<f32>,
+        groupid: i32,
+        // binary/varchar to String
+        location: String,
+    }
+
+    let records: Vec<Record> = taos
+ .query("select * from `meters`")
+ .await?
+ .deserialize()
+ .try_collect()
+ .await?;
+ ```
+
Note that Rust asynchronous functions and an asynchronous runtime are required.
-[Taos] provides a few Rust methods that encapsulate SQL to reduce the frequency of `format!` code blocks.
+[Taos][struct.Taos] provides Rust methods for some SQL statements to reduce the number of `format!`s.
- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
+- `.use_database(database: &str)`: Executes the `USE` statement.
+In addition, this structure is also the entry point for [Parameter Binding](#parameter-binding-interface) and the [Line Protocol Interface](#line-protocol-interface). Please refer to the specific API descriptions for usage.
-Please move to the Rust documentation hosting page for other related structure API usage instructions: .
+For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
-[libtaos]: https://github.com/taosdata/libtaos-rs
-[tdengine]: https://github.com/taosdata/TDengine
+[taos]: https://github.com/taosdata/taos-connector-rust
[r2d2]: https://crates.io/crates/r2d2
-[TaosCfg]: https://docs.rs/libtaos/latest/libtaos/struct.TaosCfg.html
-[Taos]: https://docs.rs/libtaos/latest/libtaos/struct.Taos.html
-[TaosQueryData]: https://docs.rs/libtaos/latest/libtaos/field/struct.TaosQueryData.html
-[Field]: https://docs.rs/libtaos/latest/libtaos/field/enum.Field.html
+[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
+[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html
+[struct.Taos]: https://docs.rs/taos/latest/taos/struct.Taos.html
+[Stmt]: https://docs.rs/taos/latest/taos/struct.Stmt.html
diff --git a/docs/en/10-programming/06-connector/05-node.md b/docs/en/10-programming/06-connector/05-node.md
index 096d65c255eef632424002cb04abbc02e90fda04..9d3a1de1e8e21a847d4681309eb4071a5c0ce666 100644
--- a/docs/en/10-programming/06-connector/05-node.md
+++ b/docs/en/10-programming/06-connector/05-node.md
@@ -4,9 +4,13 @@ title: TDengine Node.JS Connector
description: Detailed guide for Node.JS Connector
---
- `td2.0-rest-connector` are the official Node.js language connectors for TDengine. Node.js developers can develop applications to access TDengine instance data. `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST API.
+`@tdengine/rest` is the official Node.js connector for TDengine. Node.js developers can develop applications to access TDengine instance data. `@tdengine/rest` is a **REST connector** that connects to TDengine instances via the REST API.
-The Node.js connector source code is hosted on [GitHub](https://github.com/taosdata/taos-connector-node).
+The source code for the Node.js connectors is located on [GitHub](https://github.com/taosdata/taos-connector-node/tree/3.0).
+
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support)
## Installation steps
@@ -16,7 +20,7 @@ Install the Node.js development environment
### Install via npm
```bash
-npm i td2.0-rest-connector
+npm install @tdengine/rest
```
## Establishing a connection
@@ -30,13 +34,31 @@ npm i td2.0-rest-connector
{{#include docs/examples/node/reference_example.js:usage}}
```
-## Important Updates
+## Frequently Asked Questions
+
+1. Using REST connections requires starting taosadapter.
+
+ ```bash
+ sudo systemctl start taosadapter
+ ```
+
+2. Node.js versions
+
+ `@tdengine/client` supports Node.js v10.9.0 to 10.20.0 and 12.8.0 to 12.9.1.
+
+3. "Unable to establish connection", "Unable to resolve FQDN"
+ Usually, the root cause is an incorrect FQDN configuration. You can refer to this section in the [FAQ](https://docs.tdengine.com/2.4/train-faq/faq/#2-how-to-handle-unable-to-establish-connection) to troubleshoot.
-| td2.0-rest-connector version | Description |
-| ------------------------- | ---------------------------------------------------------------- |
-| 1.0.5 | Support connect to TDengine cloud service
+## Important update records
+| package name | version | TDengine version | Description |
+|----------------------|---------|---------------------|---------------------------------------------------------------------------|
+| @tdengine/rest | 3.0.0 | 3.0.0 | Supports TDengine 3.0. Not compatible with TDengine 2.x. |
+| td2.0-rest-connector | 1.0.7 | 2.4.x;2.5.x;2.6.x | Removed default port 6041. |
+| td2.0-rest-connector | 1.0.6 | 2.4.x;2.5.x;2.6.x | Fixed affectRows bug with create, insert, update, and alter. |
+| td2.0-rest-connector | 1.0.5 | 2.4.x;2.5.x;2.6.x | Support cloud token |
+| td2.0-rest-connector | 1.0.3 | 2.4.x;2.5.x;2.6.x | Supports connection management, standard queries, system information, error information, and continuous queries |
## API Reference
-[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
\ No newline at end of file
+[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
diff --git a/docs/en/10-programming/06-connector/06-csharp.md b/docs/en/10-programming/06-connector/06-csharp.md
index 1d205f690fa0552e76ea74c689bd1e91af0f4163..2a9be8ef31c9c7f5f7ee3ea8d66e24a545684441 100644
--- a/docs/en/10-programming/06-connector/06-csharp.md
+++ b/docs/en/10-programming/06-connector/06-csharp.md
@@ -6,15 +6,23 @@ description: Detailed guide for C# Connector
`TDengine.Connector` is the official C# connector for TDengine. C# developers can develop applications to access TDengine instance data.
+This article describes how to install `TDengine.Connector` in a Linux or Windows environment and connect to TDengine clusters via `TDengine.Connector` to perform basic operations such as data writing and querying.
+
The source code for `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
+## Version support
+
+Please refer to [version support list](/reference/connector#version-support)
+
## Installation
### Pre-installation
-Install the .NET deployment SDK.
+* Install the [.NET SDK](https://dotnet.microsoft.com/download)
+* [Nuget Client](https://docs.microsoft.com/en-us/nuget/install-nuget-client-tools) (optional installation)
+* Install TDengine client driver, please refer to [Install client driver](/reference/connector/#install-client-driver) for details
-### Add TDengine.Connector through Nuget
+### Add `TDengine.Connector` through Nuget
```bash
dotnet add package TDengine.Connector
@@ -26,7 +34,7 @@ dotnet add package TDengine.Connector
{{#include docs/examples/csharp/cloud-example/connect/connect.csproj}}
```
-``` C#
+``` csharp
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
@@ -55,9 +63,34 @@ dotnet add package TDengine.Connector
## Important Updates
| TDengine.Connector | Description |
-| ------------------------- | ---------------------------------------------------------------- |
-| 3.0.2 | Support .NET Framework 4.5 and above. Support .Net standard 2.0. Nuget package includes dynamic library for WebSocket.|
-| 3.0.1 | Support connect to TDengine cloud service|
+|--------------------|--------------------------------|
+| 3.0.2 | Support .NET Framework 4.5 and above. Support .Net standard 2.0. Nuget package includes dynamic library for WebSocket.|
+| 3.0.1 | Support WebSocket and Cloud, with functions for query, insert, and parameter binding |
+| 3.0.0 | Supports TDengine 3.0.0.0. TDengine 2.x is not supported. Added `TDengine.Impl.GetData()` interface to deserialize query results. |
+| 1.0.7 | Fixed TDengine.Query() memory leak. |
+| 1.0.6 | Fix schemaless bug in 1.0.4 and 1.0.5. |
+| 1.0.5 | Fix Windows sync query Chinese error bug. |
+| 1.0.4 | Add asynchronous query, subscription, and other functions. Fix the binding parameter bug. |
+| 1.0.3 | Add parameter binding, schemaless, JSON tag, etc. |
+| 1.0.2 | Add connection management, synchronous query, error messages, etc. |
+
+## Other descriptions
+
+### Third-party driver
+
+`Taos` is an ADO.NET connector for TDengine, supporting Linux and Windows platforms. It is contributed by community contributor `maikebing`. Please refer to:
+
+* Interface download: <https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos>
+
+## Frequently Asked Questions
+
+1. "Unable to establish connection", "Unable to resolve FQDN"
+
+ Usually, it's caused by an incorrect FQDN configuration. Please refer to this section in the [FAQ](https://docs.tdengine.com/2.4/train-faq/faq/#2-how-to-handle-unable-to-establish-connection) to troubleshoot.
+
+2. Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
+
+   This is usually because the program did not find the dependent client driver. The solution is to copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32\` directory on Windows; on Linux, creating the soft link `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work.
## API Reference
diff --git a/docs/en/10-programming/06-connector/09-rest-api.md b/docs/en/10-programming/06-connector/09-rest-api.md
index b51d25843c099b5472fb2e2eb05f9148a64d2e0d..db466d15882b8c16fbb0dd7da1dafb9e7ca682d8 100644
--- a/docs/en/10-programming/06-connector/09-rest-api.md
+++ b/docs/en/10-programming/06-connector/09-rest-api.md
@@ -4,7 +4,7 @@ title: REST API
description: Detailed guide for REST API
---
-To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
+To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, unlike the REST APIs of other database engines, TDengine allows SQL commands to be placed in the body of an HTTP POST request to operate the database.
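+
+For example, a minimal sketch using Python's `requests` package (the `/rest/sql` endpoint and the environment variables follow examples elsewhere in these docs; treat the exact URL layout as an assumption):
+
+```python
+import os
+import requests
+
+# The SQL command goes in the request body; the cloud token is passed
+# as a query parameter, as in the other cloud examples in these docs.
+url = (os.environ["TDENGINE_CLOUD_URL"]
+       + "/rest/sql?token=" + os.environ["TDENGINE_CLOUD_TOKEN"])
+response = requests.post(url, data="SELECT ts, current FROM test.meters LIMIT 3")
+print(response.json())
+```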
:::note
One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used.
@@ -305,4 +305,3 @@ Description:
"rows": 1
}
```
-
diff --git a/docs/en/11-visual/01-grafana.md b/docs/en/11-visual/01-grafana.md
index a059047ae840eb9b72486a2e7647585839c4efdf..e1f4870f0d08bc96dc62ac0b4ecd34af1a15d97a 100644
--- a/docs/en/11-visual/01-grafana.md
+++ b/docs/en/11-visual/01-grafana.md
@@ -15,7 +15,7 @@ TDengine currently supports Grafana versions 7.5 and above. Users can go to the
### Install with GUI
-The TDengine data source plugin is already published as a signed Grafana plugin. You can easily install it from Grafana Configuration GUI. In any platform you already installed Grafana, you can open the URL http://localhost:3000 then click plugin menu from left panel.
+The TDengine data source plugin is already published as a signed Grafana plugin. You can easily install it from the Grafana configuration GUI. On any platform where Grafana is already installed, you can open the URL `http://localhost:3000` and then click the plugin menu in the left panel.

diff --git a/docs/en/11-visual/02-gds.md b/docs/en/11-visual/02-gds.md
index 13714341602834c584c50eae9dd25e9ae3b81487..32069eadd823f21a3ab58d34561156a6796368ec 100644
--- a/docs/en/11-visual/02-gds.md
+++ b/docs/en/11-visual/02-gds.md
@@ -3,7 +3,7 @@ sidebar_label: Google Data Studio
title: Use Google Data Studio
---
-Using its [partner connector](https://datastudio.google.com/data?search=TDengine), Google Data Studio can quickly access TDengine and create interactive reports and dashboards using its web-based reporting features.The whole process does not require any code development. Share your reports and dashboards with individuals, teams, or the world. Collaborate in real time. Embed your report on any web page.
+Using its [partner connector](https://datastudio.google.com/data?search=TDengine), Google Data Studio can quickly access TDengine and create interactive reports and dashboards using its web-based reporting features. The whole process does not require any code development. Share your reports and dashboards with individuals, teams, or the world. Collaborate in real time. And embed your report on any web page.
Refer to [GitHub](https://github.com/taosdata/gds-connector/blob/master/README.md) for additional information on utilizing the Data Studio with TDengine.
@@ -19,23 +19,12 @@ The current [connector](https://datastudio.google.com/data?search=TDengine) supp
#### URL
-TDengine Cloud URL.
-
-
-
-
-
To obtain the URL, please log in to [TDengine Cloud](https://cloud.tdengine.com), click "Visualize", and then select "Google Data Studio".
#### TDengine Cloud Token
-
-
-
-
-
To obtain the value of the cloud token, please log in to [TDengine Cloud](https://cloud.tdengine.com), click "Visualize", and then select "Google Data Studio".
diff --git a/docs/en/12-data-subscription/index.md b/docs/en/12-data-subscription/index.md
index 80b01810d8f2234607ef1986dda27794ab8612ff..c8c4eca84f3c5754b7cfecd187c3ad56fbcad31c 100644
--- a/docs/en/12-data-subscription/index.md
+++ b/docs/en/12-data-subscription/index.md
@@ -1,7 +1,7 @@
---
sidebar_label: Data Subscription
title: Data Subscription
-description: Using topics to do data subscription and share to others from TDengine.
+description: Using topics to subscribe to data and share it with others from TDengine Cloud.
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
diff --git a/docs/en/12-stream.md b/docs/en/12-stream.md
index 5bda5c0ce9dcfa0c54809c12467b436c2d2fa534..32d22beff440e7028cceceedd515ce5ed690fda6 100644
--- a/docs/en/12-stream.md
+++ b/docs/en/12-stream.md
@@ -33,7 +33,7 @@ It is common that smart electrical meter systems for businesses generate million
### Create a Database for Raw Data
-Create database `power` using explore in cloud console.
+Create the database `power` using the explorer in the TDengine Cloud console.
Then create four subtables as follows:
diff --git a/docs/en/13-replication/index.md b/docs/en/13-replication/index.md
index bd50feacdaf60b57d3502113285e3bc9948582f7..8c8ad3a2a82bdf2bfd3455c2ee462dd912b40df8 100644
--- a/docs/en/13-replication/index.md
+++ b/docs/en/13-replication/index.md
@@ -6,4 +6,4 @@ description: Replicate data between TDengine cloud services
TDengine provides full support for data replication. You can replicate data from TDengine cloud to private TDengine instance, from private TDengine instance to TDengine cloud, or from one cloud platform to another one and it doesn't matter which cloud or region the two services reside in.
-TDengine also provides database backup for enterprise plan.
+TDengine also provides database backup for the enterprise plan.
diff --git a/docs/en/19-tools/01-cli.md b/docs/en/19-tools/01-cli.md
index d807fca80f2be74b9bb30f0d6023089a51ae4d81..8e8cebf5d9e7524ebf4a6d05220f53e5b3467a68 100644
--- a/docs/en/19-tools/01-cli.md
+++ b/docs/en/19-tools/01-cli.md
@@ -7,7 +7,6 @@ description: Instructions and tips for using the TDengine CLI to connect TDengin
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
The TDengine command-line interface (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
@@ -61,22 +60,20 @@ To obtain the value of cloud DSN, please log in [TDengine Cloud](https://cloud.t
:::
-
-
-## Connect
+## Connect
-To access the TDengine Cloud, you can execute `taos` if you already set the environment variable.
+To access the TDengine Cloud instance, you can execute `taos` if you already set the environment variable.
-```
+```bash
taos
```
If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one you set the environment variable for, you can use `taos -E <dsn>` as below.
-```
+```bash
taos -E $TDENGINE_CLOUD_DSN
```
@@ -85,13 +82,13 @@ taos -E $TDENGINE_CLOUD_DSN
To access the TDengine Cloud, you can execute `taos` if you already set the environment variable.
-```
-taos
+```powershell
+taos.exe
```
If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one you set the environment variable for, you can use `taos.exe -E <dsn>` as below.
-```
+```powershell
taos.exe -E $TDENGINE_CLOUD_DSN
```
@@ -100,13 +97,13 @@ taos.exe -E $TDENGINE_CLOUD_DSN
To access the TDengine Cloud, you can execute `taos` if you already set the environment variable.
-```
+```bash
taos
```
If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one you set the environment variable for, you can use `taos -E <dsn>` as below.
-```
+```bash
taos -E $TDENGINE_CLOUD_DSN
```
@@ -117,7 +114,7 @@ taos -E $TDENGINE_CLOUD_DSN
TDengine CLI will display a welcome message and version information if it successfully connected to the TDengine service. If it fails, TDengine CLI will print an error message. The TDengine CLI prompts as follows:
-```
+```text
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
@@ -127,4 +124,3 @@ taos>
```
After entering the TDengine CLI, you can execute various SQL commands, including inserts, queries, or administrative commands. Please see the [official document](https://docs.tdengine.com/reference/taos-shell#execute-sql-script-file) for more details.
-
diff --git a/docs/en/19-tools/03-taosbenchmark.md b/docs/en/19-tools/03-taosbenchmark.md
index d7317f36c75bb47d2b35a96d7c8850560489057b..0d4c22144924d63c20e08fe08fd4c66d6c3a62f4 100644
--- a/docs/en/19-tools/03-taosbenchmark.md
+++ b/docs/en/19-tools/03-taosbenchmark.md
@@ -9,20 +9,20 @@ description: "taosBenchmark (once called taosdemo ) is a tool for testing the pe
taosBenchmark (formerly taosdemo ) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user defined databases, supertables, subtables, and the time series data to populate these for performance benchmarking. taosBenchmark is highly configurable and some of the configurations include the time interval for inserting data, the number of working threads and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
-**Please be noted that in the context of TDengine cloud service, non privileged user can't create database using any tool, including taosBenchmark. The database needs to be firstly created in the data explorer in TDengine cloud service console. For any content about creating database in this document, the user needs to ignore and create the database manually inside TDengine cloud service.**
+:::note
+Please note that in the context of the TDengine cloud service, a non-privileged user cannot create databases with any tool, including taosBenchmark. Databases must first be created in the data explorer of the TDengine cloud service console. Wherever this document describes creating a database, ignore those steps and create the database manually in the TDengine cloud service.
+:::
## Installation
-To use taosBenchmark, you need to download and install [taosTools](https://www.taosdata.com/assets-download/3.0/taosTools-2.2.7-Linux-x64.tar.gz) or any later version of v2.2.7. Before installing taosTools, please firstly download and install [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).
+There are two ways to install taosBenchmark:
-Decompress the package and install.
+- taosBenchmark is installed automatically with the official TDengine installer. Please refer to [TDengine installation](/operation/pkg-install) for details.
+
+- Compile taos-tools separately and install them. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
-```
-tar -xzf taosTools-2.2.7-Linux-x64.tar.gz
-cd taosTools-2.2.7-Linux-x64.tar.gz
-sudo ./install-taostools.sh
-```
## Run
+
### Configuration and running methods
Run this command in your Linux terminal to save the cloud DSN as a variable:
@@ -214,6 +214,10 @@ The parameters listed in this section apply to all function modes.
`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#General Configuration Parameters)
+- **keep_trying**: Keep trying if an insert fails; default is no. Available with v3.0.9+.
+
+- **trying_interval**: Specify the interval between retries. The valid value is a positive number. Only valid when `keep_trying` is enabled. Available with v3.0.9+.
+
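+A minimal sketch of how these parameters might sit in the insert-scenario JSON configuration (the surrounding field and the value types are assumptions; see the taos-tools repository for the authoritative format):
+
+```json
+{
+  "filetype": "insert",
+  "keep_trying": "yes",
+  "trying_interval": 1000
+}
+```
+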
#### Stream processing related configuration parameters
The parameters for creating streams are configured in `stream` in the json configuration file, as shown below.
diff --git a/docs/en/19-tools/06-taosdump.md b/docs/en/19-tools/06-taosdump.md
index 1385197ee4655bf279048f40f8c4374aa9511892..11fc06e0a662c12a6674e195e8d574a8347c09fc 100644
--- a/docs/en/19-tools/06-taosdump.md
+++ b/docs/en/19-tools/06-taosdump.md
@@ -17,16 +17,13 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
## Installation
-To use taosdump, you need to download and install recent version of [taosTools](https://docs.tdengine.com/releases/tools/). Before installing taosTools, please firstly download and install the [TDengine client installation package](https://docs.tdengine.com/releases/tdengine/).
+There are two ways to install taosdump:
-Decompress the package and install.
-```
-tar -xzf taosTools-2.2.7-Linux-x64.tar.gz
-cd taosTools-2.2.7-Linux-x64.tar.gz
-sudo ./install-taostools.sh
-```
+- Install the official taosTools installer. Find taosTools on the [Release History](https://docs.taosdata.com/releases/tools/) page, then download and install it.
+
+- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
-Set environment variable.
+Run the following command to set the environment variable:
```bash
export TDENGINE_CLOUD_DSN=""
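# Hypothetical follow-up: dump a database after the DSN is set. "mydb" and the
# output path are placeholders; -D (databases) and -o (output directory) are
# standard taosdump flags, but verify them with `taosdump --help`.
taosdump -D mydb -o ./dump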
diff --git a/docs/en/22-user-management/01-orgs/index.md b/docs/en/22-user-management/01-orgs/index.md
index ea4116ec453acaa3b5c7fa40f2c10bfab3cca94a..bed47538ca65e534ec4d71dd1559d494e117840d 100644
--- a/docs/en/22-user-management/01-orgs/index.md
+++ b/docs/en/22-user-management/01-orgs/index.md
@@ -4,7 +4,7 @@ title: Organization Management
description: 'Organization management'
---
-TDengine Cloud provides a list page for the user to manage his organizations. On this page, you can get all the organizations which you can have permission to view or edit. In each line of the organization list, you can get the name of the organization, roles which you have in the organization and the actions you can operate.
+TDengine Cloud provides a list page where you can manage your organizations. On this page, you can see every organization that you have permission to view or edit. Each row of the organization list shows the organization's name, your roles in it, and the actions you can perform.

diff --git a/docs/en/22-user-management/index.md b/docs/en/22-user-management/index.md
index 5fea58f7f2eb5e36460cd2ee63a393e4f8140441..10bdc67cf1786c6f27d2dc2a321270ce2470f0ab 100644
--- a/docs/en/22-user-management/index.md
+++ b/docs/en/22-user-management/index.md
@@ -19,10 +19,10 @@ The major features are listed below:
1. [Organization Management](./orgs/): Create new organizations, update their names, and transfer ownership to someone else in the organization.
2. [User Mgmt](./users/): Create, update or delete users or user groups. You can also create/edit/delete customized roles.
- [User](./users/users)
-3. [Admin](./admin/): Create, update or delete users or user groups. You can also create/edit/delete customized roles.
-4. [Database Access Control](./db/): Create, update or delete users or user groups. You can also create/edit/delete customized roles.
+
-## User Stories
+
```mdx-code-block
import DocCardList from '@theme/DocCardList';
diff --git a/docs/examples/c/tmq.c b/docs/examples/c/tmq.c
new file mode 100644
index 0000000000000000000000000000000000000000..eb41ad039a1852bb265165837d69edc3a2835684
--- /dev/null
+++ b/docs/examples/c/tmq.c
@@ -0,0 +1,282 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include "taos.h"
+
+static int running = 1;
+
+static int32_t msg_process(TAOS_RES* msg) {
+ char buf[1024];
+ int32_t rows = 0;
+
+ const char* topicName = tmq_get_topic_name(msg);
+ const char* dbName = tmq_get_db_name(msg);
+ int32_t vgroupId = tmq_get_vgroup_id(msg);
+
+ printf("topic: %s\n", topicName);
+ printf("db: %s\n", dbName);
+ printf("vgroup id: %d\n", vgroupId);
+
+ while (1) {
+ TAOS_ROW row = taos_fetch_row(msg);
+ if (row == NULL) break;
+
+ TAOS_FIELD* fields = taos_fetch_fields(msg);
+ int32_t numOfFields = taos_field_count(msg);
+ // int32_t* length = taos_fetch_lengths(msg);
+ int32_t precision = taos_result_precision(msg);
+ rows++;
+ taos_print_row(buf, row, fields, numOfFields);
+ printf("precision: %d, row content: %s\n", precision, buf);
+ }
+
+ return rows;
+}
+
+static int32_t init_env() {
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ TAOS_RES* pRes;
+ // drop the topic and database if they already exist
+ printf("drop topic and database if they exist\n");
+ pRes = taos_query(pConn, "drop topic topicname");
+ if (taos_errno(pRes) != 0) {
+ printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "drop database if exists tmqdb");
+ if (taos_errno(pRes) != 0) {
+ printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
+ }
+ taos_free_result(pRes);
+
+ // create database
+ pRes = taos_query(pConn, "create database tmqdb precision 'ns'");
+ if (taos_errno(pRes) != 0) {
+ printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ // create super table
+ printf("create super table\n");
+ pRes = taos_query(
+ pConn, "create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table stb, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ // create sub tables
+ printf("create sub tables\n");
+ pRes = taos_query(pConn, "create table tmqdb.ctb0 using tmqdb.stb tags(0, 'subtable0')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table ctb0, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb1 using tmqdb.stb tags(1, 'subtable1')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table ctb1, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb2 using tmqdb.stb tags(2, 'subtable2')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table ctb2, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table tmqdb.ctb3 using tmqdb.stb tags(3, 'subtable3')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table ctb3, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ // insert data
+ printf("insert data into sub tables\n");
+ pRes = taos_query(pConn, "insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
+ goto END;
+ }
+ taos_free_result(pRes);
+ taos_close(pConn);
+ return 0;
+
+END:
+ taos_free_result(pRes);
+ taos_close(pConn);
+ return -1;
+}
+
+int32_t create_topic() {
+ printf("create topic\n");
+ TAOS_RES* pRes;
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ pRes = taos_query(pConn, "use tmqdb");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use tmqdb, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3, tbname from tmqdb.stb where c1 > 1");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ taos_close(pConn);
+ return 0;
+}
+
+void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
+ printf("tmq_commit_cb_print() code: %d, tmq: %p, param: %p\n", code, tmq, param);
+}
+
+tmq_t* build_consumer() {
+ tmq_conf_res_t code;
+ tmq_conf_t* conf = tmq_conf_new();
+ code = tmq_conf_set(conf, "enable.auto.commit", "true");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "group.id", "cgrpName");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "client.id", "user defined name");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "td.connect.user", "root");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
+ if (TMQ_CONF_OK != code) return NULL;
+ code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
+ if (TMQ_CONF_OK != code) return NULL;
+
+ tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
+
+ tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
+ tmq_conf_destroy(conf);
+ return tmq;
+}
+
+tmq_list_t* build_topic_list() {
+ tmq_list_t* topicList = tmq_list_new();
+ int32_t code = tmq_list_append(topicList, "topicname");
+ if (code) {
+ tmq_list_destroy(topicList);
+ return NULL;
+ }
+ return topicList;
+}
+
+void basic_consume_loop(tmq_t* tmq) {
+ int32_t totalRows = 0;
+ int32_t msgCnt = 0;
+ int32_t timeout = 5000;
+ while (running) {
+ TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeout);
+ if (tmqmsg) {
+ msgCnt++;
+ totalRows += msg_process(tmqmsg);
+ taos_free_result(tmqmsg);
+ } else {
+ break;
+ }
+ }
+
+ fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
+}
+
+int main(int argc, char* argv[]) {
+ int32_t code;
+
+ if (init_env() < 0) {
+ return -1;
+ }
+
+ if (create_topic() < 0) {
+ return -1;
+ }
+
+ tmq_t* tmq = build_consumer();
+ if (NULL == tmq) {
+ fprintf(stderr, "build_consumer() fail!\n");
+ return -1;
+ }
+
+ tmq_list_t* topic_list = build_topic_list();
+ if (NULL == topic_list) {
+ return -1;
+ }
+
+ if ((code = tmq_subscribe(tmq, topic_list))) {
+ fprintf(stderr, "Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
+ }
+ tmq_list_destroy(topic_list);
+
+ basic_consume_loop(tmq);
+
+ code = tmq_consumer_close(tmq);
+ if (code) {
+ fprintf(stderr, "Failed to close consumer: %s\n", tmq_err2str(code));
+ } else {
+ fprintf(stderr, "Consumer closed\n");
+ }
+
+ return 0;
+}
diff --git a/docs/examples/csharp/native-example/AsyncQueryExample.cs b/docs/examples/csharp/asyncQuery/Program.cs
similarity index 81%
rename from docs/examples/csharp/native-example/AsyncQueryExample.cs
rename to docs/examples/csharp/asyncQuery/Program.cs
index 0d47325932e2f01fec8d55cfdb64c636258f4a03..864f06a15e5d7c9fb8fcfb25c81915e3f2e13f9d 100644
--- a/docs/examples/csharp/native-example/AsyncQueryExample.cs
+++ b/docs/examples/csharp/asyncQuery/Program.cs
@@ -11,11 +11,17 @@ namespace TDengineExample
static void Main()
{
IntPtr conn = GetConnection();
- QueryAsyncCallback queryAsyncCallback = new QueryAsyncCallback(QueryCallback);
- TDengine.QueryAsync(conn, "select * from meters", queryAsyncCallback, IntPtr.Zero);
- Thread.Sleep(2000);
- TDengine.Close(conn);
- TDengine.Cleanup();
+ try
+ {
+ QueryAsyncCallback queryAsyncCallback = new QueryAsyncCallback(QueryCallback);
+ TDengine.QueryAsync(conn, "select * from meters", queryAsyncCallback, IntPtr.Zero);
+ Thread.Sleep(2000);
+ }
+ finally
+ {
+ TDengine.Close(conn);
+ }
+
}
static void QueryCallback(IntPtr param, IntPtr taosRes, int code)
@@ -27,11 +33,11 @@ namespace TDengineExample
}
else
{
- Console.WriteLine($"async query data failed, failed code {code}");
+ throw new Exception($"async query data failed,code:{code},reason:{TDengine.Error(taosRes)}");
}
}
- // Iteratively call this interface until "numOfRows" is no greater than 0.
+ // Iteratively call this interface until "numOfRows" is no greater than 0.
static void FetchRawBlockCallback(IntPtr param, IntPtr taosRes, int numOfRows)
{
if (numOfRows > 0)
@@ -43,7 +49,7 @@ namespace TDengineExample
for (int i = 0; i < dataList.Count; i++)
{
- if (i != 0 && (i+1) % metaList.Count == 0)
+ if (i != 0 && (i + 1) % metaList.Count == 0)
{
Console.WriteLine("{0}\t|", dataList[i]);
}
@@ -63,7 +69,7 @@ namespace TDengineExample
}
else
{
- Console.WriteLine($"FetchRawBlockCallback callback error, error code {numOfRows}");
+ throw new Exception($"FetchRawBlockCallback callback error, error code {numOfRows}");
}
TDengine.FreeResult(taosRes);
}
@@ -79,8 +85,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
- Console.WriteLine("Connect to TDengine failed");
- Environment.Exit(0);
+ throw new Exception("Connect to TDengine failed");
}
else
{
diff --git a/docs/examples/csharp/native-example/asyncquery.csproj b/docs/examples/csharp/asyncQuery/asyncquery.csproj
similarity index 98%
rename from docs/examples/csharp/native-example/asyncquery.csproj
rename to docs/examples/csharp/asyncQuery/asyncquery.csproj
index 045969edd7febbd11cc6577c8ba958669a5a7e3b..7c5b693f28dfa8832ae08bbaae9aa8a367951c70 100644
--- a/docs/examples/csharp/native-example/asyncquery.csproj
+++ b/docs/examples/csharp/asyncQuery/asyncquery.csproj
@@ -9,7 +9,7 @@
-
+
diff --git a/docs/examples/csharp/native-example/ConnectExample.cs b/docs/examples/csharp/connect/Program.cs
similarity index 90%
rename from docs/examples/csharp/native-example/ConnectExample.cs
rename to docs/examples/csharp/connect/Program.cs
index f3548ee65daab8a59695499339a8f89b0aa33a10..955db40c7c80e60350f9c0e8c6f50e7eb85246c2 100644
--- a/docs/examples/csharp/native-example/ConnectExample.cs
+++ b/docs/examples/csharp/connect/Program.cs
@@ -16,7 +16,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
- Console.WriteLine("Connect to TDengine failed");
+ throw new Exception("Connect to TDengine failed");
}
else
{
diff --git a/docs/examples/csharp/native-example/connect.csproj b/docs/examples/csharp/connect/connect.csproj
similarity index 98%
rename from docs/examples/csharp/native-example/connect.csproj
rename to docs/examples/csharp/connect/connect.csproj
index 3a912f8987ace6ae540726886d901c8d32a7b81b..a08e86d4b42199be44a6551e37da11efb6e06a34 100644
--- a/docs/examples/csharp/native-example/connect.csproj
+++ b/docs/examples/csharp/connect/connect.csproj
@@ -9,7 +9,7 @@
-
+
diff --git a/docs/examples/csharp/csharp.sln b/docs/examples/csharp/csharp.sln
new file mode 100644
index 0000000000000000000000000000000000000000..560dde55cbddd4e7928598e7dd940c2721bd7b9c
--- /dev/null
+++ b/docs/examples/csharp/csharp.sln
@@ -0,0 +1,94 @@
+
+Microsoft Visual Studio Solution File, Format Version 12.00
+# Visual Studio Version 16
+VisualStudioVersion = 16.0.30114.105
+MinimumVisualStudioVersion = 10.0.40219.1
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "asyncquery", "asyncQuery\asyncquery.csproj", "{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "connect", "connect\connect.csproj", "{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "influxdbline", "influxdbLine\influxdbline.csproj", "{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "optsJSON", "optsJSON\optsJSON.csproj", "{6725A961-0C66-4196-AC98-8D3F3D757D6C}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "optstelnet", "optsTelnet\optstelnet.csproj", "{B3B50D25-688B-44D4-8683-482ABC52FFCA}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "query", "query\query.csproj", "{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "stmtinsert", "stmtInsert\stmtinsert.csproj", "{B40D6BED-BE3C-4B44-9B12-28BE441311BA}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "subscribe", "subscribe\subscribe.csproj", "{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsConnect", "wsConnect\wsConnect.csproj", "{51E19494-845E-49ED-97C7-749AE63111BD}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsInsert", "wsInsert\wsInsert.csproj", "{13E2233B-4AFF-40D9-AF42-AB3F01617540}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsQuery", "wsQuery\wsQuery.csproj", "{0F394169-C456-442C-929D-C2D43A0EEC7B}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsStmt", "wsStmt\wsStmt.csproj", "{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "sqlinsert", "sqlInsert\sqlinsert.csproj", "{CD24BD12-8550-4627-A11D-707B446F48C3}"
+EndProject
+Global
+ GlobalSection(SolutionConfigurationPlatforms) = preSolution
+ Debug|Any CPU = Debug|Any CPU
+ Release|Any CPU = Release|Any CPU
+ EndGlobalSection
+ GlobalSection(SolutionProperties) = preSolution
+ HideSolutionNode = FALSE
+ EndGlobalSection
+ GlobalSection(ProjectConfigurationPlatforms) = postSolution
+ {E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Release|Any CPU.Build.0 = Release|Any CPU
+ {CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Release|Any CPU.Build.0 = Release|Any CPU
+ {6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Release|Any CPU.Build.0 = Release|Any CPU
+ {6725A961-0C66-4196-AC98-8D3F3D757D6C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {6725A961-0C66-4196-AC98-8D3F3D757D6C}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {6725A961-0C66-4196-AC98-8D3F3D757D6C}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {6725A961-0C66-4196-AC98-8D3F3D757D6C}.Release|Any CPU.Build.0 = Release|Any CPU
+ {B3B50D25-688B-44D4-8683-482ABC52FFCA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {B3B50D25-688B-44D4-8683-482ABC52FFCA}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {B3B50D25-688B-44D4-8683-482ABC52FFCA}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {B3B50D25-688B-44D4-8683-482ABC52FFCA}.Release|Any CPU.Build.0 = Release|Any CPU
+ {F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Release|Any CPU.Build.0 = Release|Any CPU
+ {B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Release|Any CPU.Build.0 = Release|Any CPU
+ {C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Release|Any CPU.Build.0 = Release|Any CPU
+ {51E19494-845E-49ED-97C7-749AE63111BD}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {51E19494-845E-49ED-97C7-749AE63111BD}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {51E19494-845E-49ED-97C7-749AE63111BD}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {51E19494-845E-49ED-97C7-749AE63111BD}.Release|Any CPU.Build.0 = Release|Any CPU
+ {13E2233B-4AFF-40D9-AF42-AB3F01617540}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {13E2233B-4AFF-40D9-AF42-AB3F01617540}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {13E2233B-4AFF-40D9-AF42-AB3F01617540}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {13E2233B-4AFF-40D9-AF42-AB3F01617540}.Release|Any CPU.Build.0 = Release|Any CPU
+ {0F394169-C456-442C-929D-C2D43A0EEC7B}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {0F394169-C456-442C-929D-C2D43A0EEC7B}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {0F394169-C456-442C-929D-C2D43A0EEC7B}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {0F394169-C456-442C-929D-C2D43A0EEC7B}.Release|Any CPU.Build.0 = Release|Any CPU
+ {27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Release|Any CPU.Build.0 = Release|Any CPU
+ {CD24BD12-8550-4627-A11D-707B446F48C3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {CD24BD12-8550-4627-A11D-707B446F48C3}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {CD24BD12-8550-4627-A11D-707B446F48C3}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {CD24BD12-8550-4627-A11D-707B446F48C3}.Release|Any CPU.Build.0 = Release|Any CPU
+ EndGlobalSection
+EndGlobal
diff --git a/docs/examples/csharp/native-example/InfluxDBLineExample.cs b/docs/examples/csharp/influxdbLine/Program.cs
similarity index 73%
rename from docs/examples/csharp/native-example/InfluxDBLineExample.cs
rename to docs/examples/csharp/influxdbLine/Program.cs
index 7b4453f4ac0b14dd76d166e395bdacb46a5d3fbc..fa3cb21fe04977b5081c922d623dee5514056770 100644
--- a/docs/examples/csharp/native-example/InfluxDBLineExample.cs
+++ b/docs/examples/csharp/influxdbLine/Program.cs
@@ -17,8 +17,7 @@ namespace TDengineExample
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (TDengine.ErrorNo(res) != 0)
{
- Console.WriteLine("SchemalessInsert failed since " + TDengine.Error(res));
- ExitProgram(conn, 1);
+ throw new Exception("SchemalessInsert failed since " + TDengine.Error(res));
}
else
{
@@ -26,7 +25,6 @@ namespace TDengineExample
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
}
TDengine.FreeResult(res);
- ExitProgram(conn, 0);
}
static IntPtr GetConnection()
@@ -39,9 +37,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
- Console.WriteLine("Connect to TDengine failed");
- TDengine.Cleanup();
- Environment.Exit(1);
+ throw new Exception("Connect to TDengine failed");
}
else
{
@@ -55,23 +51,15 @@ namespace TDengineExample
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
if (TDengine.ErrorNo(res) != 0)
{
- Console.WriteLine("failed to create database, reason: " + TDengine.Error(res));
- ExitProgram(conn, 1);
+ throw new Exception("failed to create database, reason: " + TDengine.Error(res));
}
res = TDengine.Query(conn, "USE test");
if (TDengine.ErrorNo(res) != 0)
{
- Console.WriteLine("failed to change database, reason: " + TDengine.Error(res));
- ExitProgram(conn, 1);
+ throw new Exception("failed to change database, reason: " + TDengine.Error(res));
}
}
- static void ExitProgram(IntPtr conn, int exitCode)
- {
- TDengine.Close(conn);
- TDengine.Cleanup();
- Environment.Exit(exitCode);
- }
}
}
diff --git a/docs/examples/csharp/native-example/influxdbline.csproj b/docs/examples/csharp/influxdbLine/influxdbline.csproj
similarity index 98%
rename from docs/examples/csharp/native-example/influxdbline.csproj
rename to docs/examples/csharp/influxdbLine/influxdbline.csproj
index 58bca485088e409fe1d387c6020418bbc2bf871b..4889f8fde9dc0eb75c0547e32355929d1cceb138 100644
--- a/docs/examples/csharp/native-example/influxdbline.csproj
+++ b/docs/examples/csharp/influxdbLine/influxdbline.csproj
@@ -9,7 +9,7 @@
-
+
diff --git a/docs/examples/csharp/native-example/QueryExample.cs b/docs/examples/csharp/native-example/QueryExample.cs
deleted file mode 100644
index d75bb8d6611f5b3899485eb1a63a42ed6995847d..0000000000000000000000000000000000000000
--- a/docs/examples/csharp/native-example/QueryExample.cs
+++ /dev/null
@@ -1,82 +0,0 @@
-using TDengineDriver;
-using TDengineDriver.Impl;
-using System.Runtime.InteropServices;
-
-namespace TDengineExample
-{
- internal class QueryExample
- {
- static void Main()
- {
- IntPtr conn = GetConnection();
- // run query
- IntPtr res = TDengine.Query(conn, "SELECT * FROM meters LIMIT 2");
- if (TDengine.ErrorNo(res) != 0)
- {
- Console.WriteLine("Failed to query since: " + TDengine.Error(res));
- TDengine.Close(conn);
- TDengine.Cleanup();
- return;
- }
-
- // get filed count
- int fieldCount = TDengine.FieldCount(res);
- Console.WriteLine("fieldCount=" + fieldCount);
-
- // print column names
- List<TDengineMeta> metas = LibTaos.GetMeta(res);
- for (int i = 0; i < metas.Count; i++)
- {
- Console.Write(metas[i].name + "\t");
- }
- Console.WriteLine();
-
- // print values
- List
+ */
+public class SQLWriter {
+ final static Logger logger = LoggerFactory.getLogger(SQLWriter.class);
+
+ private Connection conn;
+ private Statement stmt;
+
+ /**
+ * current number of buffered records
+ */
+ private int bufferedCount = 0;
+ /**
+ * Maximum number of buffered records.
+ * Flush action will be triggered if bufferedCount reached this value,
+ */
+ private int maxBatchSize;
+
+
+ /**
+ * Maximum SQL length.
+ */
+ private int maxSQLLength;
+
+ /**
+ * Map from table name to column values. For example:
+ * "tb001" -> "(1648432611249,2.1,114,0.09) (1648432611250,2.2,135,0.2)"
+ */
+ private Map<String, String> tbValues = new HashMap<>();
+
+ /**
+ * Map from table name to tag values in the same order as creating stable.
+ * Used for creating table.
+ */
+ private Map<String, String> tbTags = new HashMap<>();
+
+ public SQLWriter(int maxBatchSize) {
+ this.maxBatchSize = maxBatchSize;
+ }
+
+
+ /**
+ * Get Database Connection
+ *
+ * @return Connection
+ * @throws SQLException
+ */
+ private static Connection getConnection() throws SQLException {
+ String jdbcURL = System.getenv("TDENGINE_JDBC_URL");
+ return DriverManager.getConnection(jdbcURL);
+ }
+
+ /**
+ * Create Connection and Statement
+ *
+ * @throws SQLException
+ */
+ public void init() throws SQLException {
+ conn = getConnection();
+ stmt = conn.createStatement();
+ stmt.execute("use test");
+ ResultSet rs = stmt.executeQuery("show variables");
+ while (rs.next()) {
+ String configName = rs.getString(1);
+ if ("maxSQLLength".equals(configName)) {
+ maxSQLLength = Integer.parseInt(rs.getString(2));
+ logger.info("maxSQLLength={}", maxSQLLength);
+ }
+ }
+ }
+
+ /**
+ * Convert raw data to SQL fragments, group them by table name and cache them in a HashMap.
+ * Trigger writing when the number of buffered records reaches maxBatchSize.
+ *
+ * @param line raw data taken from the task queue, in the format: tbName,ts,current,voltage,phase,location,groupId
+ */
+ public void processLine(String line) throws SQLException {
+ bufferedCount += 1;
+ int firstComma = line.indexOf(',');
+ String tbName = line.substring(0, firstComma);
+ int lastComma = line.lastIndexOf(',');
+ int secondLastComma = line.lastIndexOf(',', lastComma - 1);
+ String value = "(" + line.substring(firstComma + 1, secondLastComma) + ") ";
+ if (tbValues.containsKey(tbName)) {
+ tbValues.put(tbName, tbValues.get(tbName) + value);
+ } else {
+ tbValues.put(tbName, value);
+ }
+ if (!tbTags.containsKey(tbName)) {
+ String location = line.substring(secondLastComma + 1, lastComma);
+ String groupId = line.substring(lastComma + 1);
+ String tagValues = "('" + location + "'," + groupId + ')';
+ tbTags.put(tbName, tagValues);
+ }
+ if (bufferedCount == maxBatchSize) {
+ flush();
+ }
+ }
+
+
+ /**
+ * Assemble INSERT statement using buffered SQL fragments in Map {@link SQLWriter#tbValues} and execute it.
+ * In case of a "Table does not exist" exception, create all tables in the SQL and retry it.
+ */
+ public void flush() throws SQLException {
+ StringBuilder sb = new StringBuilder("INSERT INTO ");
+ for (Map.Entry<String, String> entry : tbValues.entrySet()) {
+ String tableName = entry.getKey();
+ String values = entry.getValue();
+ String q = tableName + " values " + values + " ";
+ if (sb.length() + q.length() > maxSQLLength) {
+ executeSQL(sb.toString());
+ logger.warn("increase maxSQLLength or decrease maxBatchSize to gain better performance");
+ sb = new StringBuilder("INSERT INTO ");
+ }
+ sb.append(q);
+ }
+ executeSQL(sb.toString());
+ tbValues.clear();
+ bufferedCount = 0;
+ }
+
+ private void executeSQL(String sql) throws SQLException {
+ try {
+ stmt.executeUpdate(sql);
+ } catch (SQLException e) {
+ // convert to error code defined in taoserror.h
+ int errorCode = e.getErrorCode() & 0xffff;
+ if (errorCode == 0x362 || errorCode == 0x218) {
+ // Table does not exist
+ createTables();
+ executeSQL(sql);
+ } else {
+ logger.error("Execute SQL: {}", sql);
+ throw e;
+ }
+ } catch (Throwable throwable) {
+ logger.error("Execute SQL: {}", sql);
+ throw throwable;
+ }
+ }
+
+ /**
+ * Create tables in batch using syntax:
+ *
+ * CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
+ *
+ */
+ private void createTables() throws SQLException {
+ StringBuilder sb = new StringBuilder("CREATE TABLE ");
+ for (String tbName : tbValues.keySet()) {
+ String tagValues = tbTags.get(tbName);
+ sb.append("IF NOT EXISTS ").append(tbName).append(" USING meters TAGS ").append(tagValues).append(" ");
+ }
+ String sql = sb.toString();
+ try {
+ stmt.executeUpdate(sql);
+ } catch (Throwable throwable) {
+ logger.error("Execute SQL: {}", sql);
+ throw throwable;
+ }
+ }
+
+ public boolean hasBufferedValues() {
+ return bufferedCount > 0;
+ }
+
+ public int getBufferedCount() {
+ return bufferedCount;
+ }
+
+ public void close() {
+ try {
+ stmt.close();
+ } catch (SQLException e) {
+ }
+ try {
+ conn.close();
+ } catch (SQLException e) {
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java
new file mode 100644
index 0000000000000000000000000000000000000000..8ade06625d708a112c85d5657aa00bcd0e605ff4
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java
@@ -0,0 +1,4 @@
+package com.taos.example.highvolume;
+
+public class StmtWriter {
+}
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..de9e5463d7dc59478f991e4783aacaae527b4c4b
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java
@@ -0,0 +1,58 @@
+package com.taos.example.highvolume;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.BlockingQueue;
+
+class WriteTask implements Runnable {
+ private final static Logger logger = LoggerFactory.getLogger(WriteTask.class);
+ private final int maxBatchSize;
+
+ // the queue from which this writing task gets raw data.
+ private final BlockingQueue<String> queue;
+
+ // A flag indicating whether to continue.
+ private boolean active = true;
+
+ public WriteTask(BlockingQueue<String> taskQueue, int maxBatchSize) {
+ this.queue = taskQueue;
+ this.maxBatchSize = maxBatchSize;
+ }
+
+ @Override
+ public void run() {
+ logger.info("started");
+ String line = null; // the line most recently taken from the queue.
+ SQLWriter writer = new SQLWriter(maxBatchSize);
+ try {
+ writer.init();
+ while (active) {
+ line = queue.poll();
+ if (line != null) {
+ // parse raw data and buffer the data.
+ writer.processLine(line);
+ } else if (writer.hasBufferedValues()) {
+ // write data immediately if no more data in the queue
+ writer.flush();
+ } else {
+ // sleep a while to avoid high CPU usage when the queue is empty and there are no buffered records.
+ Thread.sleep(100);
+ }
+ }
+ if (writer.hasBufferedValues()) {
+ writer.flush();
+ }
+ } catch (Exception e) {
+ String msg = String.format("line=%s, bufferedCount=%s", line, writer.getBufferedCount());
+ logger.error(msg, e);
+ } finally {
+ writer.close();
+ }
+ }
+
+ public void stop() {
+ logger.info("stop");
+ this.active = false;
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/python/conn_native_pandas.py b/docs/examples/python/conn_native_pandas.py
index 56942ef57085766cd128b03cabb7a357587eab16..f3bab15efbe6669a88828fb194682dbfedb382df 100644
--- a/docs/examples/python/conn_native_pandas.py
+++ b/docs/examples/python/conn_native_pandas.py
@@ -1,8 +1,11 @@
import pandas
-from sqlalchemy import create_engine
+from sqlalchemy import create_engine, text
engine = create_engine("taos://root:taosdata@localhost:6030/power")
-df = pandas.read_sql("SELECT * FROM meters", engine)
+conn = engine.connect()
+df = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
+conn.close()
+
# print index
print(df.index)
diff --git a/docs/examples/python/conn_rest_pandas.py b/docs/examples/python/conn_rest_pandas.py
index 0164080cd5a05e72dce40b1d111ea423623ff9b2..1b207d6ff10a353f3473116ce807cc8daf362ca7 100644
--- a/docs/examples/python/conn_rest_pandas.py
+++ b/docs/examples/python/conn_rest_pandas.py
@@ -1,8 +1,10 @@
import pandas
-from sqlalchemy import create_engine
+from sqlalchemy import create_engine, text
engine = create_engine("taosrest://root:taosdata@localhost:6041")
-df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", engine)
+conn = engine.connect()
+df: pandas.DataFrame = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
+conn.close()
# print index
print(df.index)
diff --git a/docs/examples/python/connect_rest_examples.py b/docs/examples/python/connect_rest_examples.py
index 900ec1022ec81ac2db761d918d1ec11c9bb26852..0f8625ae5387a275f7b84948ad80191b8e443862 100644
--- a/docs/examples/python/connect_rest_examples.py
+++ b/docs/examples/python/connect_rest_examples.py
@@ -1,24 +1,25 @@
# ANCHOR: connect
from taosrest import connect, TaosRestConnection, TaosRestCursor
-conn: TaosRestConnection = connect(url="http://localhost:6041",
- user="root",
- password="taosdata",
- timeout=30)
+conn = connect(url="http://localhost:6041",
+ user="root",
+ password="taosdata",
+ timeout=30)
# ANCHOR_END: connect
# ANCHOR: basic
# create STable
-cursor: TaosRestCursor = conn.cursor()
+cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS power")
cursor.execute("CREATE DATABASE power")
-cursor.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
+cursor.execute(
+ "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
-cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
+cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
print("inserted row count:", cursor.rowcount)
# query data
@@ -28,7 +29,7 @@ print("queried row count:", cursor.rowcount)
# get column names from cursor
column_names = [meta[0] for meta in cursor.description]
# get rows
-data: list[tuple] = cursor.fetchall()
+data = cursor.fetchall()
print(column_names)
for row in data:
print(row)
diff --git a/docs/examples/python/connection_usage_native_reference.py b/docs/examples/python/connection_usage_native_reference.py
index 4803511e427bf4d906fd3a14ff6faf5a000da96c..0a23c5f95b9d0f113e861aae07255c46bb5ae0a5 100644
--- a/docs/examples/python/connection_usage_native_reference.py
+++ b/docs/examples/python/connection_usage_native_reference.py
@@ -8,7 +8,7 @@ conn.execute("CREATE DATABASE test")
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
-affected_row: int = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m 24.4)")
+affected_row = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
print("affected_row", affected_row)
# output:
# affected_row 3
@@ -16,10 +16,10 @@ print("affected_row", affected_row)
# ANCHOR: query
# Execute a sql and get its result set. It's useful for SELECT statement
-result: taos.TaosResult = conn.query("SELECT * from weather")
+result = conn.query("SELECT * from weather")
# Get fields from result
-fields: taos.field.TaosFields = result.fields
+fields = result.fields
for field in fields:
print(field) # {name: ts, type: 9, bytes: 8}
@@ -42,4 +42,4 @@ print(data)
# ANCHOR_END: query
-conn.close()
+conn.close()
\ No newline at end of file
diff --git a/docs/examples/python/fast_write_example.py b/docs/examples/python/fast_write_example.py
new file mode 100644
index 0000000000000000000000000000000000000000..626e3310b120b9415952614b4b110ed29f787582
--- /dev/null
+++ b/docs/examples/python/fast_write_example.py
@@ -0,0 +1,225 @@
+# install dependencies:
+# recommend python >= 3.8
+#
+
+import logging
+import math
+import multiprocessing
+import sys
+import time
+import os
+from multiprocessing import Process, Queue
+from mockdatasource import MockDataSource
+from queue import Empty
+from typing import List
+
+logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format="%(asctime)s [%(name)s] - %(message)s")
+
+READ_TASK_COUNT = 1
+WRITE_TASK_COUNT = 1
+TABLE_COUNT = 1000
+QUEUE_SIZE = 1000000
+MAX_BATCH_SIZE = 3000
+
+_DONE_MESSAGE = '__DONE__'
+
+
+def get_connection():
+ """
+ If the variable TDENGINE_FIRST_EP is provided, it will be used; otherwise, firstEP in /etc/taos/taos.cfg is used.
+ You can also override the default username and password by supplying the variables TDENGINE_USER and TDENGINE_PASSWORD.
+ """
+ import taos
+ firstEP = os.environ.get("TDENGINE_FIRST_EP")
+ if firstEP:
+ host, port = firstEP.split(":")
+ else:
+ host, port = None, 0
+ user = os.environ.get("TDENGINE_USER", "root")
+ password = os.environ.get("TDENGINE_PASSWORD", "taosdata")
+ return taos.connect(host=host, port=int(port), user=user, password=password)
+
+
+# ANCHOR: read
+
+def run_read_task(task_id: int, task_queues: List[Queue], infinity):
+ table_count_per_task = TABLE_COUNT // READ_TASK_COUNT
+ data_source = MockDataSource(f"tb{task_id}", table_count_per_task, infinity)
+ try:
+ for batch in data_source:
+ if isinstance(batch, tuple):
+ batch = [batch]
+ for table_id, rows in batch:
+ # hash data to different queue
+ i = table_id % len(task_queues)
+ # block putting forever when the queue is full
+ for row in rows:
+ task_queues[i].put(row)
+ if not infinity:
+ for queue in task_queues:
+ queue.put(_DONE_MESSAGE)
+ except KeyboardInterrupt:
+ pass
+ finally:
+ logging.info('read task over')
+
+
+# ANCHOR_END: read
+
+
+# ANCHOR: write
+def run_write_task(task_id: int, queue: Queue, done_queue: Queue):
+ from sql_writer import SQLWriter
+ log = logging.getLogger(f"WriteTask-{task_id}")
+ writer = SQLWriter(get_connection)
+ lines = None
+ try:
+ while True:
+ over = False
+ lines = []
+ for _ in range(MAX_BATCH_SIZE):
+ try:
+ line = queue.get_nowait()
+ if line == _DONE_MESSAGE:
+ over = True
+ break
+ if line:
+ lines.append(line)
+ except Empty:
+ time.sleep(0.1)
+ if len(lines) > 0:
+ writer.process_lines(lines)
+ if over:
+ done_queue.put(_DONE_MESSAGE)
+ break
+ except KeyboardInterrupt:
+ pass
+ except BaseException as e:
+ log.debug(f"lines={lines}")
+ raise e
+ finally:
+ writer.close()
+ log.debug('write task over')
+
+
+# ANCHOR_END: write
+
+def set_global_config():
+ argc = len(sys.argv)
+ if argc > 1:
+ global READ_TASK_COUNT
+ READ_TASK_COUNT = int(sys.argv[1])
+ if argc > 2:
+ global WRITE_TASK_COUNT
+ WRITE_TASK_COUNT = int(sys.argv[2])
+ if argc > 3:
+ global TABLE_COUNT
+ TABLE_COUNT = int(sys.argv[3])
+ if argc > 4:
+ global QUEUE_SIZE
+ QUEUE_SIZE = int(sys.argv[4])
+ if argc > 5:
+ global MAX_BATCH_SIZE
+ MAX_BATCH_SIZE = int(sys.argv[5])
+
+
+# ANCHOR: monitor
+def run_monitor_process(done_queue: Queue):
+ log = logging.getLogger("DataBaseMonitor")
+ conn = None
+ try:
+ conn = get_connection()
+
+ def get_count():
+ res = conn.query("SELECT count(*) FROM test.meters")
+ rows = res.fetch_all()
+ return rows[0][0] if rows else 0
+
+ last_count = 0
+ while True:
+ try:
+ done = done_queue.get_nowait()
+ if done == _DONE_MESSAGE:
+ break
+ except Empty:
+ pass
+ time.sleep(10)
+ count = get_count()
+ log.info(f"count={count} speed={(count - last_count) / 10}")
+ last_count = count
+ finally:
+ conn.close()
+
+
+# ANCHOR_END: monitor
+# ANCHOR: main
+def main(infinity):
+ set_global_config()
+ logging.info(f"READ_TASK_COUNT={READ_TASK_COUNT}, WRITE_TASK_COUNT={WRITE_TASK_COUNT}, "
+ f"TABLE_COUNT={TABLE_COUNT}, QUEUE_SIZE={QUEUE_SIZE}, MAX_BATCH_SIZE={MAX_BATCH_SIZE}")
+
+ conn = get_connection()
+ conn.execute("DROP DATABASE IF EXISTS test")
+ conn.execute("CREATE DATABASE IF NOT EXISTS test")
+ conn.execute("CREATE STABLE IF NOT EXISTS test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
+ "TAGS (location BINARY(64), groupId INT)")
+ conn.close()
+
+ done_queue = Queue()
+ monitor_process = Process(target=run_monitor_process, args=(done_queue,))
+ monitor_process.start()
+ logging.debug(f"monitor task started with pid {monitor_process.pid}")
+
+ task_queues: List[Queue] = []
+ write_processes = []
+ read_processes = []
+
+ # create task queues
+ for i in range(WRITE_TASK_COUNT):
+ queue = Queue()
+ task_queues.append(queue)
+
+ # create write processes
+ for i in range(WRITE_TASK_COUNT):
+ p = Process(target=run_write_task, args=(i, task_queues[i], done_queue))
+ p.start()
+ logging.debug(f"WriteTask-{i} started with pid {p.pid}")
+ write_processes.append(p)
+
+ # create read processes
+ for i in range(READ_TASK_COUNT):
+ queues = assign_queues(i, task_queues)
+ p = Process(target=run_read_task, args=(i, queues, infinity))
+ p.start()
+ logging.debug(f"ReadTask-{i} started with pid {p.pid}")
+ read_processes.append(p)
+
+ try:
+ monitor_process.join()
+ for p in read_processes:
+ p.join()
+ for p in write_processes:
+ p.join()
+ time.sleep(1)
+ return
+ except KeyboardInterrupt:
+ monitor_process.terminate()
+ [p.terminate() for p in read_processes]
+ [p.terminate() for p in write_processes]
+ [q.close() for q in task_queues]
+
+
+def assign_queues(read_task_id, task_queues):
+ """
+ Compute target queues for a specific read task.
+ """
+ ratio = WRITE_TASK_COUNT / READ_TASK_COUNT
+ from_index = math.floor(read_task_id * ratio)
+ end_index = math.ceil((read_task_id + 1) * ratio)
+ return task_queues[from_index:end_index]
+
+
+if __name__ == '__main__':
+ multiprocessing.set_start_method('spawn')
+ main(False)
+# ANCHOR_END: main
diff --git a/docs/examples/python/kafka_example_common.py b/docs/examples/python/kafka_example_common.py
new file mode 100644
index 0000000000000000000000000000000000000000..566748c94e2542aabe8265ed55c85e4b725d69bb
--- /dev/null
+++ b/docs/examples/python/kafka_example_common.py
@@ -0,0 +1,65 @@
+#! encoding = utf-8
+import taos
+
+LOCATIONS = ['California.SanFrancisco', 'California.LosAngles', 'California.SanDiego', 'California.SanJose',
+ 'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
+ 'California.SantaClara', 'California.Cupertino']
+
+CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1'
+USE_DATABASE_SQL = 'use {}'
+DROP_TABLE_SQL = 'drop table if exists meters'
+DROP_DATABASE_SQL = 'drop database if exists {}'
+CREATE_STABLE_SQL = 'create stable meters (ts timestamp, current float, voltage int, phase float) tags ' \
+ '(location binary(64), groupId int)'
+CREATE_TABLE_SQL = 'create table if not exists {} using meters tags (\'{}\', {})'
+
+
+def create_database_and_tables(host, port, user, password, db, table_count):
+ tags_tables = _init_tags_table_names(table_count=table_count)
+ conn = taos.connect(host=host, port=port, user=user, password=password)
+
+ conn.execute(DROP_DATABASE_SQL.format(db))
+ conn.execute(CREATE_DATABASE_SQL.format(db))
+ conn.execute(USE_DATABASE_SQL.format(db))
+ conn.execute(DROP_TABLE_SQL)
+ conn.execute(CREATE_STABLE_SQL)
+ for tags in tags_tables:
+ location, group_id = _get_location_and_group(tags)
+ tables = tags_tables[tags]
+ for table_name in tables:
+ conn.execute(CREATE_TABLE_SQL.format(table_name, location, group_id))
+ conn.close()
+
+
+def clean(host, port, user, password, db):
+ conn = taos.connect(host=host, port=port, user=user, password=password)
+ conn.execute(DROP_DATABASE_SQL.format(db))
+ conn.close()
+
+
+def _init_tags_table_names(table_count):
+ tags_table_names = {}
+ group_id = 0
+ for i in range(table_count):
+ table_name = 'd{}'.format(i)
+ location_idx = i % len(LOCATIONS)
+ location = LOCATIONS[location_idx]
+ if location_idx == 0:
+ group_id += 1
+ if group_id > 10:
+ group_id -= 10
+ key = _tag_table_mapping_key(location=location, group_id=group_id)
+ if key not in tags_table_names:
+ tags_table_names[key] = []
+ tags_table_names[key].append(table_name)
+
+ return tags_table_names
+
+
+def _tag_table_mapping_key(location, group_id):
+ return '{}_{}'.format(location, group_id)
+
+
+def _get_location_and_group(key):
+ fields = key.split('_')
+ return fields[0], fields[1]
diff --git a/docs/examples/python/kafka_example_consumer.py b/docs/examples/python/kafka_example_consumer.py
new file mode 100644
index 0000000000000000000000000000000000000000..e2d5cf535b3953a3c0ecec9e25cc615948162633
--- /dev/null
+++ b/docs/examples/python/kafka_example_consumer.py
@@ -0,0 +1,231 @@
+#! encoding = utf-8
+import json
+import logging
+import time
+from concurrent.futures import ThreadPoolExecutor, Future
+from json import JSONDecodeError
+from typing import Callable
+
+import taos
+from kafka import KafkaConsumer
+from kafka.consumer.fetcher import ConsumerRecord
+
+import kafka_example_common as common
+
+
+class Consumer(object):
+ DEFAULT_CONFIGS = {
+ 'kafka_brokers': 'localhost:9092', # kafka broker
+ 'kafka_topic': 'tdengine_kafka_practices',
+ 'kafka_group_id': 'taos',
+ 'taos_host': 'localhost', # TDengine host
+ 'taos_port': 6030, # TDengine port
+ 'taos_user': 'root', # TDengine user name
+ 'taos_password': 'taosdata', # TDengine password
+ 'taos_database': 'power', # TDengine database
+ 'message_type': 'json', # message format, 'json' or 'line'
+ 'clean_after_testing': False, # if drop database after testing
+ 'max_poll': 1000, # poll size for batch mode
+ 'workers': 10, # thread count for multi-threading
+ 'testing': False
+ }
+
+ INSERT_SQL_HEADER = "insert into "
+ INSERT_PART_SQL = '{} values (\'{}\', {}, {}, {})'
+
+ def __init__(self, **configs):
+ self.config = self.DEFAULT_CONFIGS
+ self.config.update(configs)
+
+ self.consumer = None
+ if not self.config.get('testing'):
+ self.consumer = KafkaConsumer(
+ self.config.get('kafka_topic'),
+ bootstrap_servers=self.config.get('kafka_brokers'),
+ group_id=self.config.get('kafka_group_id'),
+ )
+
+ self.conns = taos.connect(
+ host=self.config.get('taos_host'),
+ port=self.config.get('taos_port'),
+ user=self.config.get('taos_user'),
+ password=self.config.get('taos_password'),
+ db=self.config.get('taos_database'),
+ )
+ if self.config.get('workers') > 1:
+ self.pool = ThreadPoolExecutor(max_workers=self.config.get('workers'))
+ self.tasks = []
+ # tags-to-tables mapping. key: {location}_{groupId}, value: list of table names
+
+ def consume(self):
+ """
+
+ Consume data from Kafka and write it to TDengine. Based on `message_type`,
+ one of the handler functions `_line_to_taos` or `_json_to_taos` is used.
+ :return:
+ """
+ self.conns.execute(common.USE_DATABASE_SQL.format(self.config.get('taos_database')))
+ try:
+ if self.config.get('message_type') == 'line': # line
+ self._run(self._line_to_taos)
+ if self.config.get('message_type') == 'json': # json
+ self._run(self._json_to_taos)
+ except KeyboardInterrupt:
+ logging.warning("## caught keyboard interrupt, stopping")
+ finally:
+ self.stop()
+
+ def stop(self):
+ """
+
+ stop consuming
+ :return:
+ """
+ # close consumer
+ if self.consumer is not None:
+ self.consumer.commit()
+ self.consumer.close()
+
+ # multi thread
+ if self.config.get('workers') > 1:
+ if self.pool is not None:
+ self.pool.shutdown()
+ for task in self.tasks:
+ while not task.done():
+ time.sleep(0.01)
+
+ # clean data
+ if self.config.get('clean_after_testing'):
+ self.conns.execute(common.DROP_TABLE_SQL)
+ self.conns.execute(common.DROP_DATABASE_SQL.format(self.config.get('taos_database')))
+ # close taos
+ if self.conns is not None:
+ self.conns.close()
+
+ def _run(self, f):
+ """
+
+ run in batch consuming mode
+ :param f:
+ :return:
+ """
+ i = 0 # just for test.
+ while True:
+ messages = self.consumer.poll(timeout_ms=100, max_records=self.config.get('max_poll'))
+ if messages:
+ if self.config.get('workers') > 1:
+ self.pool.submit(f, messages.values())
+ else:
+ f(list(messages.values()))
+ if not messages:
+ i += 1 # just for test.
+ time.sleep(0.1)
+ if i > 3: # just for test.
+ logging.warning('## test over.') # just for test.
+ return # just for test.
+
+ def _json_to_taos(self, messages):
+ """
+
+ convert a batch of json data to sql, and insert into TDengine
+ :param messages:
+ :return:
+ """
+ sql = self._build_sql_from_json(messages=messages)
+ self.conns.execute(sql=sql)
+
+ def _line_to_taos(self, messages):
+ """
+
+ convert a batch of lines data to sql, and insert into TDengine
+ :param messages:
+ :return:
+ """
+ lines = []
+ for partition_messages in messages:
+ for message in partition_messages:
+ lines.append(message.value.decode())
+ sql = self.INSERT_SQL_HEADER + ' '.join(lines)
+ self.conns.execute(sql=sql)
+
+ def _build_single_sql_from_json(self, msg_value):
+ try:
+ data = json.loads(msg_value)
+ except JSONDecodeError as e:
+ logging.error('## decode message [%s] error: %s', msg_value, e)
+ return ''
+ # location = data.get('location')
+ # group_id = data.get('groupId')
+ ts = data.get('ts')
+ current = data.get('current')
+ voltage = data.get('voltage')
+ phase = data.get('phase')
+ table_name = data.get('table_name')
+
+ return self.INSERT_PART_SQL.format(table_name, ts, current, voltage, phase)
+
+ def _build_sql_from_json(self, messages):
+ sql_list = []
+ for partition_messages in messages:
+ for message in partition_messages:
+ sql_list.append(self._build_single_sql_from_json(message.value))
+ return self.INSERT_SQL_HEADER + ' '.join(sql_list)
+
+
+def test_json_to_taos(consumer: Consumer):
+ records = [
+ [
+ ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
+ value=json.dumps({'table_name': 'd0',
+ 'ts': '2022-12-06 15:13:38.643',
+ 'current': 3.41,
+ 'voltage': 105,
+ 'phase': 0.02027, }),
+ partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
+ serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
+ ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
+ value=json.dumps({'table_name': 'd1',
+ 'ts': '2022-12-06 15:13:39.643',
+ 'current': 3.41,
+ 'voltage': 102,
+ 'phase': 0.02027, }),
+ partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
+ serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
+ ]
+ ]
+
+ consumer._json_to_taos(messages=records)
+
+
+def test_line_to_taos(consumer: Consumer):
+ records = [
+ [
+ ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
+ value="d0 values('2023-01-01 00:00:00.001', 3.49, 109, 0.02737)".encode('utf-8'),
+ partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
+ serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
+ ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
+ value="d1 values('2023-01-01 00:00:00.002', 6.19, 112, 0.09171)".encode('utf-8'),
+ partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
+ serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
+ ]
+ ]
+ consumer._line_to_taos(messages=records)
+
+
+def consume(kafka_brokers, kafka_topic, kafka_group_id, taos_host, taos_port, taos_user,
+ taos_password, taos_database, message_type, max_poll, workers):
+ c = Consumer(kafka_brokers=kafka_brokers, kafka_topic=kafka_topic, kafka_group_id=kafka_group_id,
+ taos_host=taos_host, taos_port=taos_port, taos_user=taos_user, taos_password=taos_password,
+ taos_database=taos_database, message_type=message_type, max_poll=max_poll, workers=workers)
+ c.consume()
+
+
+if __name__ == '__main__':
+ consumer = Consumer(testing=True)
+ common.create_database_and_tables(host='localhost', port=6030, user='root', password='taosdata', db='py_kafka_test',
+ table_count=10)
+ consumer.conns.execute(common.USE_DATABASE_SQL.format('py_kafka_test'))
+ test_json_to_taos(consumer)
+ test_line_to_taos(consumer)
+ common.clean(host='localhost', port=6030, user='root', password='taosdata', db='py_kafka_test')
diff --git a/docs/examples/python/kafka_example_perform.py b/docs/examples/python/kafka_example_perform.py
new file mode 100644
index 0000000000000000000000000000000000000000..23ae4b48c8fc8139b85cd41b041953e8f55f12b4
--- /dev/null
+++ b/docs/examples/python/kafka_example_perform.py
@@ -0,0 +1,103 @@
+#! encoding=utf-8
+
+import argparse
+import logging
+import multiprocessing
+import time
+from multiprocessing import pool
+
+import kafka_example_common as common
+import kafka_example_consumer as consumer
+import kafka_example_producer as producer
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-kafka-broker', type=str, default='localhost:9092',
+ help='kafka broker host. default is `localhost:9092`')
+ parser.add_argument('-kafka-topic', type=str, default='tdengine-kafka-practices',
+ help='kafka topic. default is `tdengine-kafka-practices`')
+ parser.add_argument('-kafka-group', type=str, default='kafka_practices',
+ help='kafka consumer group. default is `kafka_practices`')
+ parser.add_argument('-taos-host', type=str, default='localhost',
+ help='TDengine host. default is `localhost`')
+ parser.add_argument('-taos-port', type=int, default=6030, help='TDengine port. default is 6030')
+ parser.add_argument('-taos-user', type=str, default='root', help='TDengine username, default is `root`')
+ parser.add_argument('-taos-password', type=str, default='taosdata', help='TDengine password, default is `taosdata`')
+ parser.add_argument('-taos-db', type=str, default='tdengine_kafka_practices',
+ help='TDengine db name, default is `tdengine_kafka_practices`')
+ parser.add_argument('-table-count', type=int, default=100, help='TDengine sub-table count, default is 100')
+ parser.add_argument('-table-items', type=int, default=1000, help='items in per sub-tables, default is 1000')
+ parser.add_argument('-message-type', type=str, default='line',
+ help='kafka message type. `line` or `json`. default is `line`')
+    parser.add_argument('-max-poll', type=int, default=1000, help='max records per poll for the kafka consumer. default is 1000')
+    parser.add_argument('-threads', type=int, default=10, help='thread count for processing messages. default is 10')
+ parser.add_argument('-processes', type=int, default=1, help='process count')
+
+ args = parser.parse_args()
+ total = args.table_count * args.table_items
+
+ logging.warning("## start to prepare testing data...")
+ prepare_data_start = time.time()
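+    # generate the full test data set with 100 producer worker threads (first positional argument)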
+ producer.produce_total(100, args.kafka_broker, args.kafka_topic, args.message_type, total, args.table_count)
+ prepare_data_end = time.time()
+ logging.warning("## prepare testing data finished! spend-[%s]", prepare_data_end - prepare_data_start)
+
+ logging.warning("## start to create database and tables ...")
+ create_db_start = time.time()
+ # create database and table
+ common.create_database_and_tables(host=args.taos_host, port=args.taos_port, user=args.taos_user,
+ password=args.taos_password, db=args.taos_db, table_count=args.table_count)
+ create_db_end = time.time()
+ logging.warning("## create database and tables finished! spend [%s]", create_db_end - create_db_start)
+
+ processes = args.processes
+
+ logging.warning("## start to consume data and insert into TDengine...")
+ consume_start = time.time()
+    if processes > 1:  # run multiple consumer processes
+        multiprocessing.set_start_method("spawn")
+        # name the pool distinctly so it does not shadow the imported `pool` module
+        process_pool = pool.Pool(processes)
+        for _ in range(processes):
+            process_pool.apply_async(func=consumer.consume, args=(
+                args.kafka_broker, args.kafka_topic, args.kafka_group, args.taos_host, args.taos_port, args.taos_user,
+                args.taos_password, args.taos_db, args.message_type, args.max_poll, args.threads))
+        process_pool.close()
+        process_pool.join()
+    else:
+ consumer.consume(kafka_brokers=args.kafka_broker, kafka_topic=args.kafka_topic, kafka_group_id=args.kafka_group,
+ taos_host=args.taos_host, taos_port=args.taos_port, taos_user=args.taos_user,
+ taos_password=args.taos_password, taos_database=args.taos_db, message_type=args.message_type,
+ max_poll=args.max_poll, workers=args.threads)
+ consume_end = time.time()
+ logging.warning("## consume data and insert into TDengine over! spend-[%s]", consume_end - consume_start)
+
+ # print report
+ logging.warning(
+ "\n#######################\n"
+ " Prepare data \n"
+ "#######################\n"
+ "# data_type # %s \n"
+ "# total # %s \n"
+ "# spend # %s s\n"
+ "#######################\n"
+ " Create database \n"
+ "#######################\n"
+ "# stable # 1 \n"
+ "# sub-table # 100 \n"
+ "# spend # %s s \n"
+ "#######################\n"
+ " Consume \n"
+ "#######################\n"
+ "# data_type # %s \n"
+ "# threads # %s \n"
+ "# processes # %s \n"
+ "# total_count # %s \n"
+ "# spend # %s s\n"
+ "# per_second # %s \n"
+ "#######################\n",
+        args.message_type, total, prepare_data_end - prepare_data_start, args.table_count, create_db_end - create_db_start,
+ args.message_type, args.threads, processes, total, consume_end - consume_start,
+ total / (consume_end - consume_start))
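+
+# Example invocation (hypothetical values; assumes Kafka at localhost:9092 and
+# TDengine at localhost:6030):
+#   python3 kafka_example_perform.py -kafka-broker localhost:9092 \
+#       -table-count 100 -table-items 1000 -message-type line -threads 10 -processes 2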
diff --git a/docs/examples/python/kafka_example_producer.py b/docs/examples/python/kafka_example_producer.py
new file mode 100644
index 0000000000000000000000000000000000000000..51468c7e37ab3400bece69fa58e126a789ef9860
--- /dev/null
+++ b/docs/examples/python/kafka_example_producer.py
@@ -0,0 +1,97 @@
+# -*- coding: utf-8 -*-
+import json
+import random
+import threading
+from concurrent.futures import ThreadPoolExecutor, Future
+from datetime import datetime
+
+from kafka import KafkaProducer
+
+locations = ['California.SanFrancisco', 'California.LosAngles', 'California.SanDiego', 'California.SanJose',
+ 'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
+ 'California.SantaClara', 'California.Cupertino']
+
+producers: list[KafkaProducer] = []
+
+lock = threading.Lock()
+start = 1640966400
+
+
+def produce_total(workers, broker, topic, message_type, total, table_count):
+    if len(producers) == 0:
+        # double-checked locking: only the first caller initializes the shared
+        # producers, and `with` releases the lock even if initialization fails
+        with lock:
+            if len(producers) == 0:
+                _init_kafka_producers(broker=broker, count=10)
+ pool = ThreadPoolExecutor(max_workers=workers)
+ futures = []
+ for _ in range(0, workers):
+ futures.append(pool.submit(_produce_total, topic, message_type, int(total / workers), table_count))
+ pool.shutdown()
+ for f in futures:
+ f.result()
+ _close_kafka_producers()
+
+
+def _produce_total(topic, message_type, total, table_count):
+ producer = _get_kafka_producer()
+ for _ in range(total):
+        message = _get_fake_data(message_type=message_type, table_count=table_count)
+ producer.send(topic=topic, value=message.encode(encoding='utf-8'))
+
+
+def _init_kafka_producers(broker, count):
+ for _ in range(count):
+ p = KafkaProducer(bootstrap_servers=broker, batch_size=64 * 1024, linger_ms=300, acks=0)
+ producers.append(p)
+
+
+def _close_kafka_producers():
+ for p in producers:
+ p.close()
+
+
+def _get_kafka_producer():
+ return producers[random.randint(0, len(producers) - 1)]
+
+
+def _get_fake_data(table_count, message_type='json'):
+ if message_type == 'json':
+ return _get_json_message(table_count=table_count)
+ if message_type == 'line':
+ return _get_line_message(table_count=table_count)
+ return ''
+
+
+def _get_json_message(table_count):
+ return json.dumps({
+ 'ts': _get_timestamp(),
+ 'current': random.randint(0, 1000) / 100,
+ 'voltage': random.randint(105, 115),
+ 'phase': random.randint(0, 32000) / 100000,
+ 'location': random.choice(locations),
+ 'groupId': random.randint(1, 10),
+ 'table_name': _random_table_name(table_count)
+ })
+
+
+def _get_line_message(table_count):
+ return "{} values('{}', {}, {}, {})".format(
+ _random_table_name(table_count), # table
+ _get_timestamp(), # ts
+ random.randint(0, 1000) / 100, # current
+ random.randint(105, 115), # voltage
+ random.randint(0, 32000) / 100000, # phase
+ )
+
+
+def _random_table_name(table_count):
+ return 'd{}'.format(random.randint(0, table_count - 1))
+
+
+def _get_timestamp():
+ global start
+ lock.acquire(blocking=True)
+ start += 0.001
+ lock.release()
+ return datetime.fromtimestamp(start).strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
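+
+# A `line` message generated above looks like (hypothetical values; the timestamp
+# is the local-time rendering of the shared `start` counter):
+#   d7 values('2022-01-01 00:00:00.001', 3.45, 110, 0.02981)
+# A `json` message carries the same fields plus `location`, `groupId` and `table_name`.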
diff --git a/docs/examples/python/mockdatasource.py b/docs/examples/python/mockdatasource.py
new file mode 100644
index 0000000000000000000000000000000000000000..9c702936ea6f1bdff3f604d376fd1925b4dc118e
--- /dev/null
+++ b/docs/examples/python/mockdatasource.py
@@ -0,0 +1,61 @@
+import time
+
+
+class MockDataSource:
+ samples = [
+ "8.8,119,0.32,California.LosAngeles,0",
+ "10.7,116,0.34,California.SanDiego,1",
+ "9.9,111,0.33,California.SanJose,2",
+ "8.9,113,0.329,California.Campbell,3",
+ "9.4,118,0.141,California.SanFrancisco,4"
+ ]
+
+ def __init__(self, tb_name_prefix, table_count, infinity=True):
+ self.table_name_prefix = tb_name_prefix + "_"
+ self.table_count = table_count
+ self.max_rows = 10000000
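+        # back-date the first timestamp so that max_rows rows spaced 100 ms apart end near the current time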
+ self.current_ts = round(time.time() * 1000) - self.max_rows * 100
+ # [(tableId, tableName, values),]
+ self.data = self._init_data()
+ self.infinity = infinity
+
+ def _init_data(self):
+ lines = self.samples * (self.table_count // 5 + 1)
+ data = []
+ for i in range(self.table_count):
+ table_name = self.table_name_prefix + str(i)
+ data.append((i, table_name, lines[i])) # tableId, row
+ return data
+
+ def __iter__(self):
+ self.row = 0
+ if not self.infinity:
+ return iter(self._iter_data())
+ else:
+ return self
+
+ def __next__(self):
+ """
+        Produce the next 1000 rows for each table.
+        return: [(tableId, [row, ...])]
+ """
+ return self._iter_data()
+
+ def _iter_data(self):
+ ts = []
+ for _ in range(1000):
+ self.current_ts += 100
+ ts.append(str(self.current_ts))
+ # add timestamp to each row
+ # [(tableId, ["tableName,ts,current,voltage,phase,location,groupId"])]
+ result = []
+ for table_id, table_name, values in self.data:
+ rows = [table_name + ',' + t + ',' + values for t in ts]
+ result.append((table_id, rows))
+ return result
+
+
+if __name__ == '__main__':
+ datasource = MockDataSource('t', 10, False)
+ for data in datasource:
+ print(data)
diff --git a/docs/examples/python/native_insert_example.py b/docs/examples/python/native_insert_example.py
index 94fd00a6e9d1dcd2119693c4b5c862d36c219a3d..cdde7d23d24d12e11c67b6c6acc0e0b089fb5335 100644
--- a/docs/examples/python/native_insert_example.py
+++ b/docs/examples/python/native_insert_example.py
@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection):
# The generated SQL is:
-# INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-# d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-# d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-# d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
+# INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+# d1002 USING meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+# d1003 USING meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+# d1004 USING meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
def get_sql():
global lines
diff --git a/docs/examples/python/sql_writer.py b/docs/examples/python/sql_writer.py
new file mode 100644
index 0000000000000000000000000000000000000000..3456981a7b9a174e38f8795ff7251ab3c675174b
--- /dev/null
+++ b/docs/examples/python/sql_writer.py
@@ -0,0 +1,111 @@
+import logging
+import taos
+
+
+class SQLWriter:
+ log = logging.getLogger("SQLWriter")
+
+ def __init__(self, get_connection_func):
+ self._tb_values = {}
+ self._tb_tags = {}
+ self._conn = get_connection_func()
+ self._max_sql_length = self.get_max_sql_length()
+ self._conn.execute("create database if not exists test")
+ self._conn.execute("USE test")
+
+ def get_max_sql_length(self):
+ rows = self._conn.query("SHOW variables").fetch_all()
+ for r in rows:
+ name = r[0]
+ if name == "maxSQLLength":
+ return int(r[1])
+ return 1024 * 1024
+
+    def process_lines(self, lines: list[str]):
+        """
+        :param lines: ["tbName,ts,current,voltage,phase,location,groupId", ...]
+ """
+ for line in lines:
+ ps = line.split(",")
+ table_name = ps[0]
+ value = '(' + ",".join(ps[1:-2]) + ') '
+ if table_name in self._tb_values:
+ self._tb_values[table_name] += value
+ else:
+ self._tb_values[table_name] = value
+
+ if table_name not in self._tb_tags:
+ location = ps[-2]
+ group_id = ps[-1]
+ tag_value = f"('{location}',{group_id})"
+ self._tb_tags[table_name] = tag_value
+ self.flush()
+
+ def flush(self):
+ """
+ Assemble INSERT statement and execute it.
+ When the sql length grows close to MAX_SQL_LENGTH, the sql will be executed immediately, and a new INSERT statement will be created.
+        In case of a "Table does not exist" error, the missing tables are created and the SQL is re-executed.
+ """
+ sql = "INSERT INTO "
+ sql_len = len(sql)
+ buf = []
+ for tb_name, values in self._tb_values.items():
+ q = tb_name + " VALUES " + values
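+            # if appending this table's values would exceed the server limit, flush the buffer first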
+ if sql_len + len(q) >= self._max_sql_length:
+ sql += " ".join(buf)
+ self.execute_sql(sql)
+ sql = "INSERT INTO "
+ sql_len = len(sql)
+ buf = []
+ buf.append(q)
+ sql_len += len(q)
+ sql += " ".join(buf)
+ self.create_tables()
+ self.execute_sql(sql)
+ self._tb_values.clear()
+
+ def execute_sql(self, sql):
+ try:
+ self._conn.execute(sql)
+ except taos.Error as e:
+ error_code = e.errno & 0xffff
+            # Table does not exist
+            if error_code == 9731:
+                self.create_tables()
+                self._conn.execute(sql)  # retry once after creating the missing tables
+ else:
+ self.log.error("Execute SQL: %s", sql)
+ raise e
+ except BaseException as baseException:
+ self.log.error("Execute SQL: %s", sql)
+ raise baseException
+
+ def create_tables(self):
+ sql = "CREATE TABLE "
+ for tb in self._tb_values.keys():
+ tag_values = self._tb_tags[tb]
+ sql += "IF NOT EXISTS " + tb + " USING meters TAGS " + tag_values + " "
+ try:
+ self._conn.execute(sql)
+ except BaseException as e:
+ self.log.error("Execute SQL: %s", sql)
+ raise e
+
+ def close(self):
+ if self._conn:
+ self._conn.close()
+
+
+if __name__ == '__main__':
+ def get_connection_func():
+ conn = taos.connect()
+ return conn
+
+
+ writer = SQLWriter(get_connection_func=get_connection_func)
+ writer.execute_sql(
+ "create stable if not exists meters (ts timestamp, current float, voltage int, phase float) "
+ "tags (location binary(64), groupId int)")
+ writer.execute_sql(
+ "INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) "
+ "VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32)")
diff --git a/docs/examples/python/tmq_example.py b/docs/examples/python/tmq_example.py
index a4625ca11accfbf7d263f4c1993f712987a136cb..6f7fb87c89ce4cb96793d09a837f60ad54ae69bc 100644
--- a/docs/examples/python/tmq_example.py
+++ b/docs/examples/python/tmq_example.py
@@ -1,58 +1,55 @@
+from taos.tmq import Consumer
import taos
-from taos.tmq import *
-conn = taos.connect()
-print("init")
-conn.execute("drop topic if exists topic_ctb_column")
-conn.execute("drop database if exists py_tmq")
-conn.execute("create database if not exists py_tmq vgroups 2")
-conn.select_db("py_tmq")
-conn.execute(
- "create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags(t1 int)"
-)
-conn.execute("create table if not exists tb1 using stb1 tags(1)")
-conn.execute("create table if not exists tb2 using stb1 tags(2)")
-conn.execute("create table if not exists tb3 using stb1 tags(3)")
-
-print("create topic")
-conn.execute(
- "create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1"
-)
-
-print("build consumer")
-conf = TaosTmqConf()
-conf.set("group.id", "tg2")
-conf.set("td.connect.user", "root")
-conf.set("td.connect.pass", "taosdata")
-conf.set("enable.auto.commit", "true")
-
-
-def tmq_commit_cb_print(tmq, resp, offset, param=None):
- print(f"commit: {resp}, tmq: {tmq}, offset: {offset}, param: {param}")
-
-
-conf.set_auto_commit_cb(tmq_commit_cb_print, None)
-tmq = conf.new_consumer()
-
-print("build topic list")
-
-topic_list = TaosTmqList()
-topic_list.append("topic_ctb_column")
-
-print("basic consume loop")
-tmq.subscribe(topic_list)
-
-sub_list = tmq.subscription()
-
-print("subscribed topics: ", sub_list)
-
-while 1:
- res = tmq.poll(1000)
- if res:
- topic = res.get_topic_name()
- vg = res.get_vgroup_id()
- db = res.get_db_name()
- print(f"topic: {topic}\nvgroup id: {vg}\ndb: {db}")
- for row in res:
- print(row)
+def init_tmq_env(db, topic):
+ conn = taos.connect()
+ conn.execute("drop topic if exists {}".format(topic))
+ conn.execute("drop database if exists {}".format(db))
+ conn.execute("create database if not exists {}".format(db))
+ conn.select_db(db)
+ conn.execute(
+ "create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))")
+ conn.execute("create table if not exists tb1 using stb1 tags(1, 't1')")
+ conn.execute("create table if not exists tb2 using stb1 tags(2, 't2')")
+ conn.execute("create table if not exists tb3 using stb1 tags(3, 't3')")
+ conn.execute("create topic if not exists {} as select ts, c1, c2, c3 from stb1".format(topic))
+ conn.execute("insert into tb1 values (now, 1, 1.0, 'tmq test')")
+ conn.execute("insert into tb2 values (now, 2, 2.0, 'tmq test')")
+ conn.execute("insert into tb3 values (now, 3, 3.0, 'tmq test')")
+
+
+def cleanup(db, topic):
+ conn = taos.connect()
+ conn.execute("drop topic if exists {}".format(topic))
+ conn.execute("drop database if exists {}".format(db))
+
+
+if __name__ == '__main__':
+ init_tmq_env("tmq_test", "tmq_test_topic") # init env
+ consumer = Consumer(
+ {
+ "group.id": "tg2",
+ "td.connect.user": "root",
+ "td.connect.pass": "taosdata",
+ "enable.auto.commit": "true",
+ }
+ )
+ consumer.subscribe(["tmq_test_topic"])
+
+ try:
+ while True:
+ res = consumer.poll(1)
+ if not res:
+ break
+ err = res.error()
+ if err is not None:
+ raise err
+ val = res.value()
+
+ for block in val:
+ print(block.fetchall())
+ finally:
+ consumer.unsubscribe()
+ consumer.close()
+ cleanup("tmq_test", "tmq_test_topic")
diff --git a/docs/examples/python/tmq_websocket_example.py b/docs/examples/python/tmq_websocket_example.py
new file mode 100644
index 0000000000000000000000000000000000000000..e1dcb0086a995c0c20a5d079ed6d8f4d18ea0356
--- /dev/null
+++ b/docs/examples/python/tmq_websocket_example.py
@@ -0,0 +1,31 @@
+#!/usr/bin/python3
+from taosws import Consumer
+
+conf = {
+ "td.connect.websocket.scheme": "ws",
+ "group.id": "0",
+}
+consumer = Consumer(conf)
+
+consumer.subscribe(["test"])
+
+while True:
+ message = consumer.poll(timeout=1.0)
+ if message:
+        vgroup_id = message.vgroup()  # renamed to avoid shadowing the builtin `id`
+ topic = message.topic()
+ database = message.database()
+
+ for block in message:
+ nrows = block.nrows()
+ ncols = block.ncols()
+ for row in block:
+ print(row)
+ values = block.fetchall()
+ print(nrows, ncols)
+
+ # consumer.commit(message)
+ else:
+ break
+
+consumer.close()
diff --git a/docs/zh/01-index.md b/docs/zh/01-index.md
index d2e6706892f3997af115e71d1da455ebce2ecbec..a406fe663c56a1b1641b52fdca09c9e869408476 100644
--- a/docs/zh/01-index.md
+++ b/docs/zh/01-index.md
@@ -1,25 +1,35 @@
---
-title: TDengine Documentation
-sidebar_label: Documentation Home
+title: TDengine Cloud Documentation
+sidebar_label: Home
slug: /
---
-TDengine is a [high-performance](https://www.taosdata.com/fast), [distributed](https://www.taosdata.com/scalable), [SQL-capable](https://www.taosdata.com/sql-support) time-series database. This documentation is the TDengine user manual, covering TDengine's basic concepts, installation, usage, features, developer interfaces, operations and maintenance, and kernel design. It is aimed mainly at architects, developers, and system administrators.
+TDengine Cloud is a fully managed cloud service platform for time-series data processing, built on the open-source time-series database TDengine. Beyond a high-performance time-series database, it provides built-in caching, data subscription, and stream processing, along with convenient and secure data sharing and many enterprise-grade capabilities. It helps enterprises in IoT, Industrial Internet, finance, and IT operations monitoring dramatically reduce the labor and operating costs of managing time-series data.
-TDengine takes full advantage of the characteristics of time-series data and introduces the concepts of "one table per data collection point" and "supertable", with an innovative storage engine that greatly improves write, query, and storage efficiency. To understand and use TDengine correctly, please read the [Concepts](./concept) chapter carefully.
+Customers can also keep using ubiquitous third-party tools such as Prometheus, Telegraf, Grafana, and MQTT brokers with confidence. TDengine Cloud naturally supports connectors for Python, Java, Go, Rust, and Node.js, so developers can work in the language they prefer. With support for SQL as well as schemaless ingestion, TDengine Cloud meets the needs of every developer, and it provides extra capabilities that make the analysis and visualization of time-series data extremely simple.
-If you are a developer, be sure to read the [Developer Guide](./develop) chapter carefully. It covers database connections, data modeling, data ingestion, queries, continuous queries, caching, data subscription, user-defined functions, and more, with sample code in various programming languages. In most cases you can simply copy and paste the sample code and, with minor changes for your own application, get it running.
+The documentation is organized as follows:
-We already live in the era of big data, where scaling up can no longer meet growing business needs; every system must be able to scale out, which makes clustering an indispensable feature of big data and database systems. The TDengine team has not only implemented clustering but also open-sourced this important core capability. For deploying, managing, and maintaining a TDengine cluster, see the [Cluster Management](./cluster) chapter.
+1. [Introduction](./intro) outlines the features, capabilities, and competitive advantages of TDengine Cloud.
-TDengine uses SQL as its query language, which greatly lowers learning and migration costs, while adding extensions for time-series scenarios such as interpolation, downsampling, and time-weighted averages. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists all supported commands and functions.
+2. [Concepts](./concept) introduces how TDengine exploits the characteristics of time-series data to improve compute performance and storage efficiency.
-If you are a system administrator who cares about installation, upgrades, fault tolerance, disaster recovery, data import and export, configuration parameters, how to monitor whether TDengine is running healthily, and how to improve system performance, please read the [Administration](./operation) chapter carefully.
+3. [Data In](./data-in) describes the many ways TDengine Cloud offers to write data into a TDengine instance. With the data sources described there, you can easily write data from a TDengine instance running at the edge or on your own host into any instance in the cloud.
-For more details on TDengine's peripheral tools, the REST API, and connectors for various programming languages, see the [Reference](./reference) chapter.
+4. [Data Out](./data-out) describes the extremely simple ways TDengine Cloud provides to access your time-series data, so that you can easily build data analysis and visualization applications on the data in your TDengine instances.
-If you are interested in TDengine's internal architecture, please read the [Inside TDengine](./tdinternal) chapter, which describes in detail the design of clustering, data partitioning and sharding, writes, reads, queries, and aggregation. If you want to study or even contribute to the TDengine code, be sure to read this chapter.
+5. [Visualization](./visual) describes how to build visualizations on the time-series data stored in TDengine Cloud, for example to monitor and visualize the status of your TDengine Cloud instances and databases.
-Finally, as open-source software, TDengine welcomes your participation. If you find any errors or unclear descriptions in the documentation, please click "Edit this document" at the bottom of each page to fix them directly.
+6. [Data Subscription](./data-subscription) covers an advanced feature of TDengine Cloud that works like asynchronous publish/subscribe: messages published to a topic are delivered to all of its subscribers immediately. With TDengine Cloud data subscription you can build event-driven applications without deploying any message pub/sub system such as Kafka, and we provide a convenient, secure way to create topics and share them with others.
-Together, we make a difference!
+7. [Stream Processing](./stream) covers another advanced feature of TDengine Cloud. With it you can create continuous queries or event-driven stream processing without deploying any stream processing system such as Spark or Flink. TDengine Cloud stream processing makes it easy to process incoming data in real time and route the results into destination tables according to the rules you define.
+
+8. [Data Replication](./replication) describes TDengine Cloud's mature data replication capability. You can replicate one instance to another within the same cloud region, or from a region of one cloud provider to a region of another.
+
+9. [Programming](./programming) is required reading for developing IoT and big-data applications on the time-series data in TDengine Cloud. It covers database connections, data modeling, data ingestion, queries, stream processing, caching, data subscription, user-defined functions, and other features in detail, with sample code in various programming languages. In most cases you can simply copy and paste the sample code and make it work in your application with a few minor changes.
+
+10. [TDengine SQL](./taos-sql) documents standard SQL and TDengine's extensions in detail; these SQL statements make time-series data analysis convenient.
+
+11. [Tools](./tools) introduces the Taos CLI, a command-line tool run from a terminal that gives you easy and convenient access to the databases of your TDengine instances in TDengine Cloud for all kinds of queries. It also introduces taosBenchmark, a tool that, with simple configuration, generates large amounts of data and helps you test the performance of TDengine Cloud.
+
+We are delighted that you have chosen TDengine Cloud as part of your time-series data platform. We look forward to your feedback and suggestions for improvement, and to being a small part of your success.
diff --git a/docs/zh/02-intro.md b/docs/zh/02-intro.md
new file mode 100644
index 0000000000000000000000000000000000000000..17f38f861e9b1ef6ad30c7c57a223c8a28a0adf2
--- /dev/null
+++ b/docs/zh/02-intro.md
@@ -0,0 +1,101 @@
+---
+sidebar_label: Introduction
+title: Introduction to TDengine Cloud
+---
+
+TDengine Cloud is a fully managed cloud service platform for time-series data processing, built on the open-source time-series database TDengine. Beyond a high-performance time-series database, it provides built-in caching, data subscription, and stream processing, along with convenient and secure data sharing and many enterprise-grade capabilities. It helps enterprises in IoT, Industrial Internet, finance, and IT operations monitoring dramatically reduce the labor and operating costs of managing time-series data.
+
+This chapter introduces the major features, competitive advantages, and typical use cases of TDengine Cloud, to give you a high-level picture of the product.
+
+## Major Features
+
+The major features of TDengine Cloud are listed below:
+
+1. Data in
+   - Supports [inserting data with SQL](../programming/insert/).
+   - Supports [Telegraf](../data-in/telegraf/).
+   - Supports [Prometheus](../data-in/prometheus/).
+2. Data out
+   - Supports standard [SQL](../programming/query/), including subqueries.
+   - Supports exporting data with the [taosDump](../data-out/taosdump/) tool.
+   - Supports writing data out to [Prometheus](../data-out/prometheus/).
+   - Supports exporting data via [data subscription](../data-subscription/).
+3. Data Explorer: browse databases and tables, and run SQL queries directly once you are logged in.
+4. Visualization
+   - Supports [Grafana](../visual/grafana/).
+   - Supports Google Data Studio.
+   - Supports Grafana Cloud (to be released soon).
+5. [Data Subscription](../data-subscription/): applications can subscribe to a database, a table, or a set of tables. The API is essentially the same as Kafka's, but you must set specific filter conditions to define a topic, and you can then share that topic with other TDengine Cloud users and user groups (see the sketch after this list).
+6. [Stream Processing](../stream/): in addition to continuous queries, TDengine supports event-driven stream processing, so no Flink or Spark is needed to process time-series data.
+7. Enterprise
+   - Supports daily data backups.
+   - Supports replicating a database to another region or another cloud.
+   - Supports VPC peering.
+   - Supports IP allowlists.
+8. Tools
+   - Provides an interactive [command-line interface (CLI)](../tools/cli/) for management and ad-hoc queries.
+   - Provides the benchmarking tool [taosBenchmark](../tools/taosbenchmark/) for testing TDengine's performance.
+9. Programming
+   - Provides [connectors](../programming/connector/) for programming languages such as Java, Python, Go, Rust, and Node.js.
+   - Provides a [REST API](../programming/connector/rest-api/).
+
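+As a quick taste of the subscription API mentioned in item 5 above, here is a minimal
+Python sketch using the `taos.tmq.Consumer` class from the TDengine Python connector;
+the topic name `test_topic`, consumer group, and credentials are placeholders:
+
+```python
+from taos.tmq import Consumer
+
+consumer = Consumer({
+    "group.id": "tg1",                # Kafka-style consumer group
+    "td.connect.user": "root",        # placeholder credentials
+    "td.connect.pass": "taosdata",
+    "enable.auto.commit": "true",
+})
+consumer.subscribe(["test_topic"])    # a topic created with CREATE TOPIC ... AS SELECT ...
+try:
+    while True:
+        res = consumer.poll(1)        # wait up to 1 second for new data
+        if not res:
+            break
+        err = res.error()
+        if err is not None:
+            raise err
+        for block in res.value():     # each message carries one or more data blocks
+            print(block.fetchall())
+finally:
+    consumer.unsubscribe()
+    consumer.close()
+```
+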
+For more details on these and other features, please read through the entire documentation.
+
+## Competitive Advantages
+
+By making full use of the [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/105.html), such as being structured, requiring no transactions, rarely being deleted or updated, and being write-heavy but read-light, together with its cloud-native design, TDengine Cloud differentiates itself from other time-series data cloud services with the following advantages:
+
+- **[Simplified Time-Series Data Platform](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**: as a fully managed cloud service, users no longer need to worry about tedious deployment, tuning, scaling, backup, or cross-region disaster recovery. They can focus entirely on their core business, with fewer demands on DBAs and dramatically lower labor costs.
+
+  Beyond a high-performance, horizontally scalable time-series database, the TDengine cloud service also provides:
+
+  **Caching**: applications can fetch the latest data quickly without deploying Redis.
+
+  **Data subscription**: applications are notified immediately when new data arrives, without deploying Kafka.
+
+  **Stream processing**: applications can create continuous queries or event-driven stream processing without deploying Spark or Flink.
+
+- **[Convenient and Secure Data Sharing](https://www.taosdata.com/tdengine/cloud/data-sharing)**: TDengine Cloud can open up an entire database with read or write permissions, or share a database, a supertable, one or more tables, or aggregated results via data subscription.
+
+  **Convenient**: sharing is as simple as an online document. Just enter the recipient's email address and set the access permissions and duration; once the recipient accepts the emailed invitation, they can access the data immediately.
+
+  **Secure**: access can be scoped to a running instance, a database, or a subscribed topic; for each authorized user, an access token is generated for the shared resource, and an expiration time can be set.
+
+  Convenient yet secure sharing of time-series data lets departments and partners gain insight into business operations quickly.
+
+- **[Secure and Reliable Enterprise-Grade Service](https://tdengine.com/tdengine/high-performance-time-series-database/)**: beyond powerful time-series data management and sharing, TDengine Cloud provides the services that enterprise operations require:
+
+  **Reliable**: scheduled data backup and restore, plus real-time replication from a running instance to a private cloud, another public cloud, or another region.
+
+  **Secure**: role-based access control, IP allowlists, user behavior auditing, and more.
+
+  **Professional**: 24/7 professional technical support with a committed 99.9% Service Level Agreement.
+
+  With secure, professional, and reliable enterprise-grade service, users no longer need to worry about data management and can focus on their own core business.
+
+- **[Powerful Analytics](https://www.taosdata.com/tdengine/easy_data_analytics)**: through supertables, separation of storage and compute, partitioning and sharding, pre-computation, and other techniques, TDengine can browse, format, and access data efficiently.
+
+- **[Open-Source Core](https://www.taosdata.com/tdengine/open_source_time-series_database)**: TDengine's core code, including its clustering capability, is released under an open-source license. It runs in more than 140k instances worldwide, has 20k GitHub stars, and has an active developer community.
+
+Adopting TDengine Cloud dramatically reduces the total cost of ownership of typical IoT, connected-vehicle, and Industrial Internet big-data platforms, in several ways:
+
+1. Its outstanding performance dramatically reduces the compute and storage resources the system needs.
+2. With SQL support, it integrates seamlessly with many third-party tools, dramatically lowering learning and migration costs.
+3. As a simplified time-series data platform, it dramatically reduces system complexity and R&D and operating costs.
+
+## Technology Ecosystem
+
+TDengine plays the following role in the overall time-series big-data platform:
+
+Figure 1. TDengine technology ecosystem
+
+In the figure above, the left side shows various data collection agents and message queues, including OPC-UA, MQTT, Telegraf, and Kafka, whose data is continuously written into TDengine. The right side shows visualization and BI tools, SCADA software, and applications. At the bottom are TDengine's own command-line interface (CLI) and visual management tools.
+
+## Typical Use Cases
+
+As a high-performance, distributed, SQL-capable time-series database, TDengine's typical use cases include, but are not limited to, IoT, Industrial Internet, connected vehicles, IT operations, energy, and financial securities. Note that TDengine is a purpose-built database and big-data processing tool designed for time-series scenarios; because it takes full advantage of the characteristics of time-series data, it cannot be used for general-purpose workloads such as web crawlers, microblogs, instant messaging, e-commerce, ERP, or CRM. The sections below analyze the applicable scenarios in more detail.
diff --git a/docs/zh/04-concept/_category_.yml b/docs/zh/04-concept/_category_.yml
new file mode 100644
index 0000000000000000000000000000000000000000..aad75dce21f63a6510bc0b8da4c93952767adfdf
--- /dev/null
+++ b/docs/zh/04-concept/_category_.yml
@@ -0,0 +1 @@
+label: Concepts
\ No newline at end of file
diff --git a/docs/zh/04-concept/index.md b/docs/zh/04-concept/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..2cba68edcd152f5059845b9e25342b3f335f3b8b
--- /dev/null
+++ b/docs/zh/04-concept/index.md
@@ -0,0 +1,183 @@
+---
+sidebar_label: Concepts
+title: Data Model and Basic Concepts
+description: TDengine's data model and basic concepts
+---
+
+To explain the basic concepts and simplify the sample programs, the TDengine documentation uses smart electricity meters as its typical time-series data scenario. Suppose each smart meter collects three quantities: current, voltage, and phase. There are multiple smart meters, and each has static attributes: a location and a group ID. The collected data resembles the following table:
+
+