Commit 96c9878f authored by Li Ya Qiang

add zh cloud docs to taosdata.com

Parent 06d66551
@@ -23,8 +23,8 @@ The major features are listed below:

3. Data Explorer: browse through databases and even run SQL queries once you log in.
4. Visualization:
   - Supports [Grafana](../visual/grafana/)
   - Supports Google Data Studio
   - Supports Grafana Cloud (to be released soon)
5. [Data Subscription](../data-subscription/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka, but you can specify filter conditions and share the topic with other users and user groups in TDengine Cloud.
6. [Stream Processing](../stream/): Not only are continuous queries supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
7. Enterprise
@@ -55,7 +55,7 @@ Edit section "outputs.http".
{{#include docs/examples/thirdparty/telegraf-conf.toml:null:nrc}}
```

The resulting configuration will collect CPU and memory data and send it to a TDengine database named "telegraf". The database "telegraf" must be created first through the TDengine Cloud explorer.

## Start Telegraf
@@ -4,7 +4,7 @@ title: Dump Data Using taosDump
description: Dump data from TDengine into files using taosDump
---

## Overview

taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed-up data to the same or another running TDengine cluster.
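
Since taosdump is a command-line tool, it is usually driven from a shell or a script. Below is a hedged sketch of a backup-and-restore round trip wrapped in Python; the `-D`, `-o`, and `-i` flags select the database, output directory, and input directory, but verify them against `taosdump --help` on your version, and add your own connection options.

```python
import subprocess

# Back up database "power" into a local directory
# (connection options omitted; see `taosdump --help`).
subprocess.run(["taosdump", "-D", "power", "-o", "/tmp/power-backup"], check=True)

# Restore the dumped files into the same or another running cluster.
subprocess.run(["taosdump", "-i", "/tmp/power-backup"], check=True)
```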
```c
{{#include docs/examples/c/tmq_example.c}}
```
```csharp
{{#include docs/examples/csharp/native-example/SubscribeDemo.cs}}
```
```go
{{#include docs/examples/go/sub/main.go}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```js
{{#include docs/examples/node/nativeexample/subscribe_demo.js}}
```
```py
{{#include docs/examples/python/tmq_example.py}}
```
```rust
{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
@@ -88,7 +88,7 @@ For more details about how to write or query data via REST API, please check [RE

## Jupyter

### Step 1: Install

For users who are familiar with Jupyter and program in Python, both the TDengine Python connector and Jupyter need to be ready in your environment. If they are not ready yet, please use the commands below to install them.

@@ -113,9 +113,9 @@ conda install -c conda-forge taospy

</TabItem>
</Tabs>

### Step 2: Configure

In order for Jupyter to connect to the TDengine cloud service, the environment must be set up before launching Jupyter. We use Linux bash as an example.

```bash
export TDENGINE_CLOUD_TOKEN="<token>"
@@ -123,7 +123,7 @@ export TDENGINE_CLOUD_URL="<url>"
jupyter lab
```

### Step 3: Connect

Once Jupyter Lab is launched, the Jupyter Lab service is automatically connected and shown in your browser. You can create a new notebook, copy the sample code below, and run it.
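
A minimal sketch of such a notebook cell, assuming the `taosrest` module from taospy and the two environment variables exported in Step 2:

```python
import os
import taosrest

# The URL and token were exported before launching Jupyter.
url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]

# Connect over the REST interface; no local client driver is needed.
conn = taosrest.connect(url=url, token=token)

# Print the server version to verify that the connection works.
print(conn.server_info)
```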
@@ -71,7 +71,7 @@ To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.td

<!-- exclude-end -->

## Connect

The code below gets the JDBC URL from environment variables first and then creates a `Connection` object, which is a standard JDBC Connection object.

```java
{{#include docs/examples/java/src/main/java/com/taos/example/ConnectCloudExample.java:connect}}
@@ -31,7 +31,7 @@ anyhow = "1.0.0"

## Config

Run this command in your terminal to save the TDengine Cloud DSN as a variable:

<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
@@ -33,7 +33,9 @@ Add following ItemGroup and Task to your project file.

</ItemGroup>
<Copy SourceFiles="@(DepDLLFiles)" DestinationFolder="$(OutDir)" />
</Target>
```

```bash
dotnet add package TDengine.Connector
```
@@ -3,7 +3,9 @@ title: Data Model
description: Typical Data Model used in TDengine
---

The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the [STable](/concept/#super-table-stable) (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.

Note: before you read this chapter, please make sure you have already read through [Key Concepts](/concept/), since TDengine introduces new concepts like "one table for one [data collection point](/concept/#data-collection-point)" and "[super table](/concept/#super-table-stable)".

## Create Database
@@ -33,7 +33,7 @@ INSERT INTO test.d101 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10

Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d101" and 1 row into table "d102".

```sql
INSERT INTO test.d101 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) test.d102 VALUES (1538548696800, 12.3, 221, 0.31);
```

For more details about `INSERT` please refer to [INSERT](https://docs.tdengine.com/cloud/taos-sql/insert).
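
The same statement can also be issued programmatically. Below is a hedged sketch using the Python REST connector; the environment variables are assumptions, and `execute()` returns the number of affected rows.

```python
import os
import taosrest

conn = taosrest.connect(url=os.environ["TDENGINE_CLOUD_URL"],
                        token=os.environ["TDENGINE_CLOUD_TOKEN"])

# One INSERT statement writing 2 rows into d101 and 1 row into d102.
affected = conn.execute(
    "INSERT INTO test.d101 VALUES (1538548685000, 10.3, 219, 0.31) "
    "(1538548695000, 12.6, 218, 0.33) "
    "test.d102 VALUES (1538548696800, 12.3, 221, 0.31)"
)
print("inserted", affected, "rows")  # expected: 3
```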
@@ -42,7 +42,7 @@ For detailed query syntax please refer to [Select](https://docs.tdengine.com/clo

In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same type of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.

In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated.
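
As a small illustration of running such an aggregation from Python (a sketch over the REST connector; the `test.meters` STable and its `location` tag come from the examples in this chapter):

```python
import os
import taosrest

conn = taosrest.connect(url=os.environ["TDENGINE_CLOUD_URL"],
                        token=os.environ["TDENGINE_CLOUD_TOKEN"])

# Aggregate across all subtables of the STable, filtered by a tag value.
result = conn.query(
    "SELECT AVG(voltage) FROM test.meters WHERE location LIKE 'California%'"
)
for row in result:
    print(row)
```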
### Example 1

@@ -106,6 +106,7 @@ Down sampling can also be used for STable. For example, the below SQL statement

```sql title="SQL"
SELECT _wstart, SUM(current) FROM test.meters where location like "California%" INTERVAL(1s) limit 5;
```

```txt title="output"
         _wstart         |     sum(current)     |
======================================================
@@ -18,12 +18,12 @@ The source code for the Python connector is hosted on [GitHub](https://github.co

### Install via pip

```
pip3 install -U taospy[ws]
```

### Install via conda

```
conda install -c conda-forge taospy taospyws
```

### Installation verification

@@ -75,16 +75,45 @@ The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-ap

For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
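
For instance, a small sketch of the class in use (the host and token placeholders must be replaced with your TDengine Cloud values):

```python
from taosrest import RestClient

# Placeholders; copy the real values from TDengine Cloud.
client = RestClient("https://<cloud_host>", token="<token>")

# sql() posts the statement to the REST API and returns the parsed JSON.
res = client.sql("SELECT SERVER_VERSION()")
print(res)  # e.g. {'code': 0, 'column_meta': [...], 'data': [...], 'rows': 1}
```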
## Other notes
### Exception handling
All errors from database operations are thrown directly as exceptions and the error message from the database is passed up the exception stack. The application is responsible for exception handling. For example:
```python
import taos

try:
    conn = taos.connect()
    conn.execute("CREATE TABLE 123")  # wrong sql
except taos.Error as e:
    print(e)
    print("exception class: ", e.__class__.__name__)
    print("error number:", e.errno)
    print("error message:", e.msg)
except BaseException as other:
    print("exception occur")
    print(other)

# output:
# [0x0216]: syntax error near 'Incomplete SQL statement'
# exception class: ProgrammingError
# error number: -2147483114
# error message: syntax error near 'Incomplete SQL statement'
```
[view source code](https://github.com/taosdata/TDengine/blob/3.0/docs/examples/python/handle_exception.py)
### About nanoseconds
Due to the current imperfection of Python's nanosecond support (see the links below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced by `ms` and `us`, which application developers will need to handle on their own. It is recommended to use pandas' `to_datetime()`. The Python connector may modify the interface in the future if Python officially supports nanoseconds in full.

1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
2. https://www.python.org/dev/peps/pep-0564/
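
For example, a raw nanosecond integer can be converted with pandas (the timestamp value below is made up for illustration):

```python
import pandas as pd

# A nanosecond-precision timestamp returned by the connector as a plain int.
ns_value = 1538548685000123456

# unit="ns" interprets the integer as nanoseconds since the Unix epoch.
ts = pd.to_datetime(ns_value, unit="ns")
print(ts)  # 2018-10-03 06:38:05.000123456
```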
## Important Update

[**Release Notes**](https://github.com/taosdata/taos-connector-python/releases)
@@ -16,19 +16,19 @@ import TabItem from '@theme/TabItem';

TDengine currently supports timestamp, number, character, and Boolean types, and the corresponding type conversion with Java is as follows:

| TDengine DataType | JDBCType           |
| ----------------- | ------------------ |
| TIMESTAMP         | java.sql.Timestamp |
| INT               | java.lang.Integer  |
| BIGINT            | java.lang.Long     |
| FLOAT             | java.lang.Float    |
| DOUBLE            | java.lang.Double   |
| SMALLINT          | java.lang.Short    |
| TINYINT           | java.lang.Byte     |
| BOOL              | java.lang.Boolean  |
| BINARY            | byte array         |
| NCHAR             | java.lang.String   |
| JSON              | java.lang.String   |

**Note**: Only TAG columns support the JSON type.
@@ -53,7 +53,7 @@ Add following dependency in the `pom.xml` file of your Maven project:

<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.0.0</version>
</dependency>
```

@@ -68,7 +68,7 @@ cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```

After compilation, a jar package named taos-jdbcdriver-3.0.*-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.

</TabItem>
</Tabs>

@@ -76,8 +76,7 @@ After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is genera
## Establish Connection using URL

TDengine's JDBC URL specification format is:

`jdbc:TAOS-RS://[host_name]:[port]/[database_name]?batchfetch={true|false}&useSSL={true|false}&token={token}&httpPoolSize={httpPoolSize}&httpKeepAlive={true|false}&httpConnectTimeout={httpTimeout}&httpSocketTimeout={socketTimeout}`

```java
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
@@ -85,7 +84,7 @@ String jdbcUrl = System.getenv("TDENGINE_JDBC_URL");
Connection conn = DriverManager.getConnection(jdbcUrl);
```

:::note

- REST API is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example:

@@ -97,7 +96,7 @@ Note:

```sql
insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
```

:::
### Establish Connection using URL and Properties

@@ -120,7 +119,6 @@ If the configuration parameters are duplicated in the URL, Properties, the `prio

1. JDBC URL parameters, as described above, can be specified in the parameters of the JDBC URL.
2. Properties connProps

## Usage Examples

### Create Database and Tables

@@ -141,8 +139,8 @@ int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now

System.out.println("insert " + affectedRows + " rows.");
```

> `now` is an internal function. The default is the current time of the client's computer.
> `now + 1s` represents the current time of the client plus 1 second, followed by the number representing the unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
### Querying data

@@ -188,6 +186,9 @@ There are three types of error codes that the JDBC connector can report:

- Error code of the native connection method (error code between 0x2351 and 0x2400)
- Error code of other TDengine function modules

For specific error codes, please refer to:

- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
### Closing resources

@@ -197,10 +198,10 @@ stmt.close();
conn.close();
```

:::note
Be sure to close the connection; otherwise, there will be a connection leak.
:::

### Use with Connection Pool

#### HikariCP

@@ -283,12 +284,51 @@ Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develo
## Recent update logs

| taos-jdbcdriver version | major changes |
| :---------------------: | :--------------------------------------------: |
| 3.0.3 | fix timestamp resolution error for REST connection in jdk17+ version |
| 3.0.1 - 3.0.2 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use 3.0.2 in the JDK 8 environment |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | fix wasNull interface return value in WebSocket connection |
| 2.0.41 | fix decode method of username and password in REST connection |
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
| 2.0.38 | JDBC REST connections add bulk pull function |
| 2.0.37 | Support json tags |
| 2.0.36 | Support schemaless writing |
## Frequently Asked Questions
1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform `batch data writing/update`?
**Cause**: In TDengine's JDBC implementation, SQL statements submitted by the `addBatch()` method are executed sequentially in the order they are added, which does not reduce the number of interactions with the server and does not bring performance improvement.

**Solution**: 1. splice multiple values into a single insert statement; 2. use multi-threaded concurrent insertion; 3. use parameter-bound writing.
2. java.lang.UnsatisfiedLinkError: no taos in java.library.path
**Cause**: The program did not find the dependent native library `taos`.
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory; on Linux, creating the soft link `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work; on macOS the soft link is `/usr/local/lib/libtaos.dylib`.
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
**Cause**: Currently, TDengine only supports 64-bit JDK.
**Solution**: Reinstall the 64-bit JDK.
4. java.lang.NoSuchMethodError: setByteArray
**Cause**: taos-jdbcdriver 3.* only supports TDengine 3.0 and later.
**Solution**: Use taos-jdbcdriver 2.* with your TDengine 2.* deployment.
5. java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; ... taos-jdbcdriver-3.0.1.jar
**Cause**: taos-jdbcdriver 3.0.1 is compiled on JDK 11.
**Solution**: Use taos-jdbcdriver 3.0.2.
For additional troubleshooting, see [FAQ](../../../train-faq/faq).
## API Reference

@@ -10,26 +10,40 @@ This article describes how to install `driver-go` and connect to TDengine cluste

The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).

## Version support
Please refer to [version support list](/reference/connector#version-support)
## Installation Steps
### Pre-installation preparation
* Install Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
* If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps.

Configure the environment variables and check the commands:
* ```go env```
* ```gcc -v```
### Use go get to install

`go get -u github.com/taosdata/driver-go/v3@latest`
### Manage with go mod

1. Initialize the project with the `go mod` command.

```text
go mod init taos-demo
```

2. Introduce taosSql

```go
import (
    "database/sql"
    _ "github.com/taosdata/driver-go/v3/taosSql"
)
```
@@ -37,7 +51,7 @@ go get -u github.com/taosdata/driver-go/v2@develop

```text
go mod tidy
```

4. Run the program with `go run taos-demo` or compile the binary with the `go build` command.

@@ -46,7 +60,7 @@ go get -u github.com/taosdata/driver-go/v2@develop
go build
```
## Establishing a connection

### Data source name (DSN)

@@ -73,7 +87,10 @@ Use `taosRestful` as `driverName` and use a correct [DSN](#DSN) as `dataSourceNa

## Sample programs

### More sample programs

* [sample program](https://github.com/taosdata/driver-go/tree/3.0/examples)
* [Video tutorial](https://www.taosdata.com/blog/2020/11/11/1951.html).
## Usage limitations

@@ -92,7 +109,7 @@ import (
    "fmt"
    "time"

    _ "github.com/taosdata/driver-go/v3/taosRestful"
)

func main() {

@@ -187,7 +204,6 @@ This API is created successfully without checking permissions, but only when you

`sql.Open` built-in method to execute query statements.

## API Reference

For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v3).
@@ -3,13 +3,19 @@ toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
description: This document describes the TDengine Rust connector.
---

[![Crates.io](https://img.shields.io/crates/v/taos)](https://crates.io/crates/taos) ![Crates.io](https://img.shields.io/crates/d/taos) [![docs.rs](https://img.shields.io/docsrs/taos)](https://docs.rs/taos)

`taos` is the official Rust connector for TDengine. Rust developers can develop applications to access the TDengine instance data.

The source code for the Rust connectors is located on [GitHub](https://github.com/taosdata/taos-connector-rust).

## Version support

Please refer to [version support list](/reference/connector#version-support)

The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 3.0 or higher to avoid known issues.

## Installation
@@ -17,74 +23,76 @@ The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/

Install the Rust development toolchain.

### Adding taos dependencies

Depending on the connection method, add the [taos][taos] dependency in `Cargo.toml` as follows:

```toml
[dependencies]
# use default feature
taos = "*"
```
## Establishing a connection

[TaosBuilder] creates a connection constructor through the DSN connection description string.

The DSN should be in the form of `<http | https>://<host>[:port]?token=<token>`.
```rust
let builder = TaosBuilder::from_dsn(DSN)?;
```
You can now use this object to create the connection.

```rust
let conn = builder.build()?;
```
More than one connection can be created from the same builder.

```rust
let conn1 = builder.build()?;
let conn2 = builder.build()?;
```
After that, you can perform the following operations on the database.

```rust
async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
    // prepare database
    taos.exec_many([
        format!("DROP DATABASE IF EXISTS `{db}`"),
        format!("CREATE DATABASE `{db}`"),
        format!("USE `{db}`"),
    ])
    .await?;

    let inserted = taos.exec_many([
        // create super table
        "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
         TAGS (`groupid` INT, `location` BINARY(24))",
        // create child table
        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
        // insert into child table
        "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
        // insert with NULL values
        "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
        // insert and automatically create table with tags if not exists
        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
        // insert many records in a single sql
        "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
    ])
    .await?;

    assert_eq!(inserted, 6);

    let mut result = taos.query("select * from `meters`").await?;

    for field in result.fields() {
        println!("got field: {}", field.name());
    }

    Ok(())
}
```
@@ -92,24 +100,26 @@ async fn demo() -> Result<(), Error> {

### Connection pooling

In complex applications, we recommend enabling connection pools. [taos] implements connection pools based on [r2d2].

As follows, a connection pool with default parameters can be generated.
```rust
let pool = TaosBuilder::from_dsn(dsn)?.pool()?;
```
You can set the same connection pool parameters using the connection pool's constructor.
```rust
let dsn = std::env::var("TDENGINE_CLOUD_DSN")?;

let opts = PoolBuilder::new()
    .max_size(5000) // max connections
    .max_lifetime(Some(Duration::from_secs(60 * 60))) // lifetime of each connection
    .min_idle(Some(1000)) // minimal idle connections
    .connection_timeout(Duration::from_secs(2));

let pool = TaosBuilder::from_dsn(dsn)?.with_pool_builder(opts)?;
```
In the application code, use `pool.get()?` to get a connection object [Taos].

@@ -117,56 +127,99 @@ In the application code, use `pool.get()? ` to get a connection object [Taos].

```rust
let taos = pool.get()?;
```
The [Taos][struct.Taos] object provides an API to perform operations on multiple databases.

1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.

```rust
let affected_rows = taos.exec("INSERT INTO tb1 VALUES(now, NULL)").await?;
```
2. `exec_many`: Run multiple SQL statements simultaneously or in order.
```rust
taos.exec_many([
    "CREATE DATABASE test",
    "USE test",
    "CREATE TABLE `tb1` (`ts` TIMESTAMP, `val` INT)",
]).await?;
```
3. `query`: Run a query statement and return a [ResultSet] object.

```rust
let mut q = taos.query("select * from log.logs").await?;
```
The [ResultSet] object stores query result data and the names, types, and lengths of the returned columns.

You can obtain column information by using [.fields()].
```rust
let cols = q.fields();
for col in cols {
    println!("name: {}, type: {:?} , bytes: {}", col.name(), col.ty(), col.bytes());
}
```
It fetches data line by line.
```rust
let mut rows = result.rows();
let mut nrows = 0;
while let Some(row) = rows.try_next().await? {
    for (col, (name, value)) in row.enumerate() {
        println!(
            "[{}] got value in col {} (named `{:>8}`): {}",
            nrows, col, name, value
        );
    }
    nrows += 1;
}
```
Or use the [serde](https://serde.rs) deserialization framework.
```rust
#[derive(Debug, Deserialize)]
struct Record {
    // deserialize timestamp to chrono::DateTime<Local>
    ts: DateTime<Local>,
    // float to f32
    current: Option<f32>,
    // int to i32
    voltage: Option<i32>,
    phase: Option<f32>,
    groupid: i32,
    // binary/varchar to String
    location: String,
}

let records: Vec<Record> = taos
    .query("select * from `meters`")
    .await?
    .deserialize()
    .try_collect()
    .await?;
```
Note that Rust asynchronous functions and an asynchronous runtime are required.

[Taos][struct.Taos] provides Rust methods for some SQL statements to reduce the number of `format!`s.
- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
- `.use_database(database: &str)`: Executes the `USE` statement.
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Line Protocol Interface](#Line Protocol Interface). Please refer to the specific API descriptions for usage.
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).

[taos]: https://github.com/taosdata/rust-connector-taos
[tdengine]: https://github.com/taosdata/TDengine
[r2d2]: https://crates.io/crates/r2d2
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html
[struct.Taos]: https://docs.rs/taos/latest/taos/struct.Taos.html
[Stmt]: https://docs.rs/taos/latest/taos/struct.Stmt.html
@@ -4,9 +4,13 @@ title: TDengine Node.JS Connector
description: Detailed guide for Node.JS Connector
---

`@tdengine/rest` is the official Node.js language connector for TDengine. Node.js developers can develop applications to access TDengine instance data. `@tdengine/rest` is a **REST connector** that connects to TDengine instances via the REST API.

The source code for the Node.js connectors is located on [GitHub](https://github.com/taosdata/taos-connector-node/tree/3.0).
## Version support
Please refer to [version support list](/reference/connector#version-support)
## Installation steps

@@ -16,7 +20,7 @@ Install the Node.js development environment

### Install via npm

```bash
npm install @tdengine/rest
```

## Establishing a connection

@@ -30,13 +34,31 @@ npm i td2.0-rest-connector

{{#include docs/examples/node/reference_example.js:usage}}
```
## Frequently Asked Questions
1. Using REST connections requires starting taosadapter.
```bash
sudo systemctl start taosadapter
```
2. Node.js versions
`@tdengine/client` supports Node.js v10.9.0 to 10.20.0 and 12.8.0 to 12.9.1.
3. "Unable to establish connection", "Unable to resolve FQDN"
Usually, the root cause is an incorrect FQDN configuration. You can refer to this section in the [FAQ](https://docs.tdengine.com/2.4/train-faq/faq/#2-how-to-handle-unable-to-establish-connection) to troubleshoot.
## Important update records

| package name         | version | TDengine version    | Description |
|----------------------|---------|---------------------|--------------------------------------------------------------------------|
| @tdengine/rest       | 3.0.0   | 3.0.0               | Supports TDengine 3.0. Not compatible with TDengine 2.x.                  |
| td2.0-rest-connector | 1.0.7   | 2.4.x; 2.5.x; 2.6.x | Removed default port 6041.                                                |
| td2.0-rest-connector | 1.0.6   | 2.4.x; 2.5.x; 2.6.x | Fixed affectRows bug with create, insert, update, and alter.              |
| td2.0-rest-connector | 1.0.5   | 2.4.x; 2.5.x; 2.6.x | Support cloud token                                                       |
| td2.0-rest-connector | 1.0.3   | 2.4.x; 2.5.x; 2.6.x | Supports connection management, standard queries, system information, error information, and continuous queries |
## API Reference

[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
@@ -6,15 +6,23 @@ description: Detailed guide for C# Connector

`TDengine.Connector` is the official C# connector for TDengine. C# developers can develop applications to access TDengine instance data.

This article describes how to install `TDengine.Connector` in a Linux or Windows environment and connect to TDengine clusters via `TDengine.Connector` to perform basic operations such as data writing and querying.

The source code for `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
## Version support
Please refer to [version support list](/reference/connector#version-support)
## Installation

### Pre-installation

* Install the [.NET SDK](https://dotnet.microsoft.com/download)
* [Nuget Client](https://docs.microsoft.com/en-us/nuget/install-nuget-client-tools) (optional installation)
* Install TDengine client driver, please refer to [Install client driver](/reference/connector/#install-client-driver) for details

### Add `TDengine.Connector` through Nuget

```bash
dotnet add package TDengine.Connector
@@ -26,7 +34,7 @@ dotnet add package TDengine.Connector

{{#include docs/examples/csharp/cloud-example/connect/connect.csproj}}
```

```csharp
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
@@ -55,9 +63,34 @@ dotnet add package TDengine.Connector

## Important Updates

| TDengine.Connector | Description |
|--------------------|--------------------------------|
| 3.0.2 | Support .NET Framework 4.5 and above. Support .Net standard 2.0. Nuget package includes dynamic library for WebSocket. |
| 3.0.1 | Support WebSocket and Cloud, with query, insert, and parameter binding functions |
| 3.0.0 | Supports TDengine 3.0.0.0. TDengine 2.x is not supported. Added `TDengine.Impl.GetData()` interface to deserialize query results. |
| 1.0.7 | Fixed TDengine.Query() memory leak. |
| 1.0.6 | Fix schemaless bug in 1.0.4 and 1.0.5. |
| 1.0.5 | Fix Windows sync query Chinese error bug. |
| 1.0.4 | Add asynchronous query, subscription, and other functions. Fix the binding parameter bug. |
| 1.0.3 | Add parameter binding, schemaless, JSON tag, etc. |
| 1.0.2 | Add connection management, synchronous query, error messages, etc. |
## Other descriptions
### Third-party driver
`Taos` is an ADO.NET connector for TDengine, supporting Linux and Windows platforms. It is contributed by community contributor `@maikebing`. Please refer to:

* Interface download: <https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos>
## Frequently Asked Questions
1. "Unable to establish connection", "Unable to resolve FQDN"
Usually, it's caused by an incorrect FQDN configuration. Please refer to this section in the [FAQ](https://docs.tdengine.com/2.4/train-faq/faq/#2-how-to-handle-unable-to-establish-connection) to troubleshoot.
2. Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
This is usually because the program did not find the dependent client driver. The solution is to copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32\` directory on Windows, and on Linux to create the soft link `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
## API Reference

@@ -4,7 +4,7 @@ title: REST API
description: Detailed guide for REST API
---
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
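
As a hedged illustration of this pattern with Python's `requests` (the host and token placeholders are assumptions; copy the real values from your TDengine Cloud instance):

```python
import requests

# Placeholders; replace with your TDengine Cloud endpoint and token.
url = "https://<cloud_host>/rest/sql"
token = "<token>"

# The SQL command travels in the raw body of the HTTP POST request.
resp = requests.post(url, params={"token": token},
                     data="SELECT SERVER_VERSION()")
resp.raise_for_status()
print(resp.json())  # e.g. {"code": 0, "column_meta": [...], "data": [...], "rows": 1}
```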
:::note
One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used.

@@ -305,4 +305,3 @@ Description:

    "rows": 1
}
```
@@ -15,7 +15,7 @@ TDengine currently supports Grafana versions 7.5 and above. Users can go to the

### Install with GUI

The TDengine data source plugin is already published as a signed Grafana plugin. You can easily install it from the Grafana configuration GUI. On any platform where Grafana is already installed, you can open the URL `http://localhost:3000` and then click the plugin menu in the left panel.

![click plugin menu](./grafana/click-plugin-menu-from-config.webp)
@@ -3,7 +3,7 @@ sidebar_label: Google Data Studio
title: Use Google Data Studio
---

Using its [partner connector](https://datastudio.google.com/data?search=TDengine), Google Data Studio can quickly access TDengine and create interactive reports and dashboards using its web-based reporting features. The whole process does not require any code development. Share your reports and dashboards with individuals, teams, or the world. Collaborate in real time. And embed your report on any web page.

Refer to [GitHub](https://github.com/taosdata/gds-connector/blob/master/README.md) for additional information on utilizing the Data Studio with TDengine.

@@ -19,23 +19,12 @@ The current [connector](https://datastudio.google.com/data?search=TDengine) supp

#### URL

TDengine Cloud URL.

<!-- exclude -->
To obtain the URL, please log in to [TDengine Cloud](https://cloud.tdengine.com) and click "Visualize" and then select "Google Data Studio".
<!-- exclude-end -->
#### TDengine Cloud Token

<!-- exclude -->
To obtain the value of the cloud token, please log in to [TDengine Cloud](https://cloud.tdengine.com) and click "Visualize" and then select "Google Data Studio".
<!-- exclude-end -->
---
sidebar_label: Data Subscription
title: Data Subscription
description: Using topics to subscribe to data and share it with others from TDengine Cloud.
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
@@ -33,7 +33,7 @@ It is common that smart electrical meter systems for businesses generate million

### Create a Database for Raw Data

Create database `power` using the explorer in the TDengine Cloud console.

Then create four subtables as follows:
@@ -6,4 +6,4 @@ description: Replicate data between TDengine cloud services

TDengine provides full support for data replication. You can replicate data from TDengine Cloud to a private TDengine instance, from a private TDengine instance to TDengine Cloud, or from one cloud platform to another, no matter which cloud or region the two services reside in.

TDengine also provides database backup for the enterprise plan.
@@ -7,7 +7,6 @@ description: Instructions and tips for using the TDengine CLI to connect TDengin

<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

The TDengine command-line interface (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.

@@ -61,22 +60,20 @@ To obtain the value of cloud DSN, please log in [TDengine Cloud](https://cloud.t

:::

<!-- exclude-end -->
## Connect

<Tabs defaultValue="linux" groupId="sys">
<TabItem value="linux" label="Connect on Linux">

To access the TDengine Cloud instance, you can execute `taos` if you already set the environment variable.

```bash
taos
```

If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one you set the environment variable for, you can use `taos -E <DSN>` as below.

```bash
taos -E $TDENGINE_CLOUD_DSN
```
...@@ -85,13 +82,13 @@ taos -E $TDENGINE_CLOUD_DSN
To access the TDengine Cloud instance, you can execute `taos.exe` if you have already set the environment variable.
```powershell
taos.exe
```
If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one for which the environment variable is set, you can use `taos -E <DSN>` as below.
```powershell
taos.exe -E $TDENGINE_CLOUD_DSN
```
...@@ -100,13 +97,13 @@ taos.exe -E $TDENGINE_CLOUD_DSN
To access the TDengine Cloud instance, you can execute `taos` if you have already set the environment variable.
```bash
taos
```
If you did not set the environment variable for a TDengine Cloud instance, or you want to access a TDengine Cloud instance other than the one for which the environment variable is set, you can use `taos -E <DSN>` as below.
```bash
taos -E $TDENGINE_CLOUD_DSN
```
...@@ -117,7 +114,7 @@ taos -E $TDENGINE_CLOUD_DSN
TDengine CLI will display a welcome message and version information if it successfully connects to the TDengine service. If it fails, TDengine CLI will print an error message. The TDengine CLI prompt looks as follows:
```text
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
...@@ -127,4 +124,3 @@ taos>
```
After entering the TDengine CLI, you can execute various SQL commands, including inserts, queries, and administrative commands. Please see the [official documentation](https://docs.tdengine.com/reference/taos-shell#execute-sql-script-file) for more details.
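For example, assuming the `TDENGINE_CLOUD_DSN` environment variable is set, you can also run a single statement without entering the interactive shell by passing it with the `-s` option. This is only a minimal sketch; `show databases;` is just a sample statement.

```bash
# Run one SQL statement non-interactively against the TDengine Cloud instance
taos -E $TDENGINE_CLOUD_DSN -s "show databases;"
```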
...@@ -9,20 +9,20 @@ description: "taosBenchmark (once called taosdemo ) is a tool for testing the pe ...@@ -9,20 +9,20 @@ description: "taosBenchmark (once called taosdemo ) is a tool for testing the pe
taosBenchmark (formerly taosdemo ) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user defined databases, supertables, subtables, and the time series data to populate these for performance benchmarking. taosBenchmark is highly configurable and some of the configurations include the time interval for inserting data, the number of working threads and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users. taosBenchmark (formerly taosdemo ) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user defined databases, supertables, subtables, and the time series data to populate these for performance benchmarking. taosBenchmark is highly configurable and some of the configurations include the time interval for inserting data, the number of working threads and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
**Please be noted that in the context of TDengine cloud service, non privileged user can't create database using any tool, including taosBenchmark. The database needs to be firstly created in the data explorer in TDengine cloud service console. For any content about creating database in this document, the user needs to ignore and create the database manually inside TDengine cloud service.** :::note
Please be noted that in the context of TDengine cloud service, non privileged user can't create database using any tool, including taosBenchmark. The database needs to be firstly created in the data explorer in TDengine cloud service console. For any content about creating database in this document, the user needs to ignore and create the database manually inside TDengine cloud service.
:::
## Installation
There are two ways to install taosBenchmark:
- Installing the official TDengine installer will automatically install taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
## Run
### Configuration and running methods
Run this command in your Linux terminal to save the cloud DSN as an environment variable:
...@@ -214,6 +214,10 @@ The parameters listed in this section apply to all function modes.
`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#General Configuration Parameters)
- **keep_trying**: Keep trying if the insert fails; the default is no. Available with v3.0.9+.
- **trying_interval**: Specifies the interval between insert retries. The valid value is a positive number. Only valid when keep_trying is enabled. Available with v3.0.9+.
#### Stream processing related configuration parameters
The parameters for creating streams are configured in `stream` in the JSON configuration file, as shown below.
...
...@@ -17,16 +17,13 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
## Installation
There are two ways to install taosdump:
- Install the official taosTools installer. Please find taosTools on the [Release History](https://docs.taosdata.com/releases/tools/) page, then download and install it.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
Run the following command to set the environment variable:
```bash
export TDENGINE_CLOUD_DSN="<DSN>"
...
...@@ -4,7 +4,7 @@ title: Organization Management
description: 'Organization management'
---
TDengine Cloud provides a list page for users to manage their organizations. On this page, you can see all the organizations that you have permission to view or edit. Each line of the organization list shows the name of the organization, the roles you have in that organization, and the actions you can perform.
![Organization list](./images/orglist.webp)
...
...@@ -19,10 +19,10 @@ The major features are listed below:
1. [Organization Management](./orgs/): Create new organizations, update their names, and transfer ownership to someone else in the organization.
2. [User Mgmt](./users/): Create, update, or delete users or user groups. You can also create/edit/delete customized roles.
- [User](./users/users)
<!-- 3. [Admin](./admin/): Create, update or delete users or user groups. You can also create/edit/delete customized roles.
4. [Database Access Control](./db/): Create, update or delete users or user groups. You can also create/edit/delete customized roles. -->
<!-- ## User Stories -->
```mdx-code-block
import DocCardList from '@theme/DocCardList';
...
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "taos.h"
static int running = 1;
static int32_t msg_process(TAOS_RES* msg) {
char buf[1024];
int32_t rows = 0;
const char* topicName = tmq_get_topic_name(msg);
const char* dbName = tmq_get_db_name(msg);
int32_t vgroupId = tmq_get_vgroup_id(msg);
printf("topic: %s\n", topicName);
printf("db: %s\n", dbName);
printf("vgroup id: %d\n", vgroupId);
while (1) {
TAOS_ROW row = taos_fetch_row(msg);
if (row == NULL) break;
TAOS_FIELD* fields = taos_fetch_fields(msg);
int32_t numOfFields = taos_field_count(msg);
// int32_t* length = taos_fetch_lengths(msg);
int32_t precision = taos_result_precision(msg);
rows++;
taos_print_row(buf, row, fields, numOfFields);
printf("precision: %d, row content: %s\n", precision, buf);
}
return rows;
}
static int32_t init_env() {
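// Prepare a demo environment: (re)create database tmqdb, create super table stb and
// four child tables ctb0-ctb3, then insert a few rows for the consumer to read.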
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
if (pConn == NULL) {
return -1;
}
TAOS_RES* pRes;
// drop the topic and database if they already exist
printf("drop topic and database if exists\n");
pRes = taos_query(pConn, "drop topic topicname");
if (taos_errno(pRes) != 0) {
printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
}
taos_free_result(pRes);
pRes = taos_query(pConn, "drop database if exists tmqdb");
if (taos_errno(pRes) != 0) {
printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
}
taos_free_result(pRes);
// create database
pRes = taos_query(pConn, "create database tmqdb precision 'ns'");
if (taos_errno(pRes) != 0) {
printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
// create super table
printf("create super table\n");
pRes = taos_query(
pConn, "create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))");
if (taos_errno(pRes) != 0) {
printf("failed to create super table stb, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
// create sub tables
printf("create sub tables\n");
pRes = taos_query(pConn, "create table tmqdb.ctb0 using tmqdb.stb tags(0, 'subtable0')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb0, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb1 using tmqdb.stb tags(1, 'subtable1')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb1, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb2 using tmqdb.stb tags(2, 'subtable2')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb2, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb3 using tmqdb.stb tags(3, 'subtable3')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb3, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
// insert data
printf("insert data into sub tables\n");
pRes = taos_query(pConn, "insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
taos_close(pConn);
return 0;
END:
taos_free_result(pRes);
taos_close(pConn);
return -1;
}
int32_t create_topic() {
printf("create topic\n");
TAOS_RES* pRes;
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
if (pConn == NULL) {
return -1;
}
pRes = taos_query(pConn, "use tmqdb");
if (taos_errno(pRes) != 0) {
printf("error in use tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3, tbname from tmqdb.stb where c1 > 1");
if (taos_errno(pRes) != 0) {
printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
taos_close(pConn);
return 0;
}
void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
printf("tmq_commit_cb_print() code: %d, tmq: %p, param: %p\n", code, tmq, param);
}
tmq_t* build_consumer() {
tmq_conf_res_t code;
tmq_conf_t* conf = tmq_conf_new();
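// Consumer configuration: commit offsets automatically every second, join consumer group
// "cgrpName", connect as root/taosdata, and start from the earliest offset when the group
// has no committed offset yet.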
code = tmq_conf_set(conf, "enable.auto.commit", "true");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "group.id", "cgrpName");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "client.id", "user defined name");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "td.connect.user", "root");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
if (TMQ_CONF_OK != code) return NULL;
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
return tmq;
}
tmq_list_t* build_topic_list() {
tmq_list_t* topicList = tmq_list_new();
int32_t code = tmq_list_append(topicList, "topicname");
if (code) {
tmq_list_destroy(topicList);
return NULL;
}
return topicList;
}
void basic_consume_loop(tmq_t* tmq) {
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeout = 5000;
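// Poll for messages: tmq_consumer_poll() waits up to 'timeout' milliseconds for a message
// and returns NULL if none arrives in that window, which ends this demo loop.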
while (running) {
TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeout);
if (tmqmsg) {
msgCnt++;
totalRows += msg_process(tmqmsg);
taos_free_result(tmqmsg);
} else {
break;
}
}
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
}
int main(int argc, char* argv[]) {
int32_t code;
if (init_env() < 0) {
return -1;
}
if (create_topic() < 0) {
return -1;
}
tmq_t* tmq = build_consumer();
if (NULL == tmq) {
fprintf(stderr, "build_consumer() fail!\n");
return -1;
}
tmq_list_t* topic_list = build_topic_list();
if (NULL == topic_list) {
return -1;
}
if ((code = tmq_subscribe(tmq, topic_list))) {
fprintf(stderr, "Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
}
tmq_list_destroy(topic_list);
basic_consume_loop(tmq);
code = tmq_consumer_close(tmq);
if (code) {
fprintf(stderr, "Failed to close consumer: %s\n", tmq_err2str(code));
} else {
fprintf(stderr, "Consumer closed\n");
}
return 0;
}
...@@ -11,11 +11,17 @@ namespace TDengineExample
static void Main()
{
IntPtr conn = GetConnection();
try
{
QueryAsyncCallback queryAsyncCallback = new QueryAsyncCallback(QueryCallback);
TDengine.QueryAsync(conn, "select * from meters", queryAsyncCallback, IntPtr.Zero);
Thread.Sleep(2000);
}
finally
{
TDengine.Close(conn);
}
}
static void QueryCallback(IntPtr param, IntPtr taosRes, int code)
...@@ -27,11 +33,11 @@ namespace TDengineExample
}
else
{
throw new Exception($"async query data failed,code:{code},reason:{TDengine.Error(taosRes)}");
}
}
// Iteratively call this interface until "numOfRows" is no greater than 0.
static void FetchRawBlockCallback(IntPtr param, IntPtr taosRes, int numOfRows)
{
if (numOfRows > 0)
...@@ -43,7 +49,7 @@ namespace TDengineExample
for (int i = 0; i < dataList.Count; i++)
{
if (i != 0 && (i + 1) % metaList.Count == 0)
{
Console.WriteLine("{0}\t|", dataList[i]);
}
...@@ -63,7 +69,7 @@ namespace TDengineExample
}
else
{
throw new Exception($"FetchRawBlockCallback callback error, error code {numOfRows}");
}
TDengine.FreeResult(taosRes);
}
...@@ -79,8 +85,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
throw new Exception("Connect to TDengine failed");
}
else
{
...
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -16,7 +16,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
throw new Exception("Connect to TDengine failed");
}
else
{
...
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30114.105
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "asyncquery", "asyncQuery\asyncquery.csproj", "{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "connect", "connect\connect.csproj", "{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "influxdbline", "influxdbLine\influxdbline.csproj", "{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "optsJSON", "optsJSON\optsJSON.csproj", "{6725A961-0C66-4196-AC98-8D3F3D757D6C}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "optstelnet", "optsTelnet\optstelnet.csproj", "{B3B50D25-688B-44D4-8683-482ABC52FFCA}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "query", "query\query.csproj", "{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "stmtinsert", "stmtInsert\stmtinsert.csproj", "{B40D6BED-BE3C-4B44-9B12-28BE441311BA}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "subscribe", "subscribe\subscribe.csproj", "{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsConnect", "wsConnect\wsConnect.csproj", "{51E19494-845E-49ED-97C7-749AE63111BD}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsInsert", "wsInsert\wsInsert.csproj", "{13E2233B-4AFF-40D9-AF42-AB3F01617540}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsQuery", "wsQuery\wsQuery.csproj", "{0F394169-C456-442C-929D-C2D43A0EEC7B}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "wsStmt", "wsStmt\wsStmt.csproj", "{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "sqlinsert", "sqlInsert\sqlinsert.csproj", "{CD24BD12-8550-4627-A11D-707B446F48C3}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Debug|Any CPU.Build.0 = Debug|Any CPU
{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Release|Any CPU.ActiveCfg = Release|Any CPU
{E2A5F00C-14E7-40E1-A2DE-6AB2975616D3}.Release|Any CPU.Build.0 = Release|Any CPU
{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Debug|Any CPU.Build.0 = Debug|Any CPU
{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Release|Any CPU.ActiveCfg = Release|Any CPU
{CCC5042D-93FC-4AE0-B2F6-7E692FD476B7}.Release|Any CPU.Build.0 = Release|Any CPU
{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Debug|Any CPU.Build.0 = Debug|Any CPU
{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Release|Any CPU.ActiveCfg = Release|Any CPU
{6A24FB80-1E3C-4E2D-A5AB-914FA583874D}.Release|Any CPU.Build.0 = Release|Any CPU
{6725A961-0C66-4196-AC98-8D3F3D757D6C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{6725A961-0C66-4196-AC98-8D3F3D757D6C}.Debug|Any CPU.Build.0 = Debug|Any CPU
{6725A961-0C66-4196-AC98-8D3F3D757D6C}.Release|Any CPU.ActiveCfg = Release|Any CPU
{6725A961-0C66-4196-AC98-8D3F3D757D6C}.Release|Any CPU.Build.0 = Release|Any CPU
{B3B50D25-688B-44D4-8683-482ABC52FFCA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B3B50D25-688B-44D4-8683-482ABC52FFCA}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B3B50D25-688B-44D4-8683-482ABC52FFCA}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B3B50D25-688B-44D4-8683-482ABC52FFCA}.Release|Any CPU.Build.0 = Release|Any CPU
{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Debug|Any CPU.Build.0 = Debug|Any CPU
{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Release|Any CPU.ActiveCfg = Release|Any CPU
{F2B7D13B-FE04-4C5C-BB6D-C12E0A9D9970}.Release|Any CPU.Build.0 = Release|Any CPU
{B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B40D6BED-BE3C-4B44-9B12-28BE441311BA}.Release|Any CPU.Build.0 = Release|Any CPU
{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Debug|Any CPU.Build.0 = Debug|Any CPU
{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Release|Any CPU.ActiveCfg = Release|Any CPU
{C3D45A8E-AFC0-4547-9F3C-467B0B583DED}.Release|Any CPU.Build.0 = Release|Any CPU
{51E19494-845E-49ED-97C7-749AE63111BD}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{51E19494-845E-49ED-97C7-749AE63111BD}.Debug|Any CPU.Build.0 = Debug|Any CPU
{51E19494-845E-49ED-97C7-749AE63111BD}.Release|Any CPU.ActiveCfg = Release|Any CPU
{51E19494-845E-49ED-97C7-749AE63111BD}.Release|Any CPU.Build.0 = Release|Any CPU
{13E2233B-4AFF-40D9-AF42-AB3F01617540}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{13E2233B-4AFF-40D9-AF42-AB3F01617540}.Debug|Any CPU.Build.0 = Debug|Any CPU
{13E2233B-4AFF-40D9-AF42-AB3F01617540}.Release|Any CPU.ActiveCfg = Release|Any CPU
{13E2233B-4AFF-40D9-AF42-AB3F01617540}.Release|Any CPU.Build.0 = Release|Any CPU
{0F394169-C456-442C-929D-C2D43A0EEC7B}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{0F394169-C456-442C-929D-C2D43A0EEC7B}.Debug|Any CPU.Build.0 = Debug|Any CPU
{0F394169-C456-442C-929D-C2D43A0EEC7B}.Release|Any CPU.ActiveCfg = Release|Any CPU
{0F394169-C456-442C-929D-C2D43A0EEC7B}.Release|Any CPU.Build.0 = Release|Any CPU
{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Debug|Any CPU.Build.0 = Debug|Any CPU
{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Release|Any CPU.ActiveCfg = Release|Any CPU
{27B9C9AB-9055-4BF2-8A14-4E59F09D5985}.Release|Any CPU.Build.0 = Release|Any CPU
{CD24BD12-8550-4627-A11D-707B446F48C3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{CD24BD12-8550-4627-A11D-707B446F48C3}.Debug|Any CPU.Build.0 = Debug|Any CPU
{CD24BD12-8550-4627-A11D-707B446F48C3}.Release|Any CPU.ActiveCfg = Release|Any CPU
{CD24BD12-8550-4627-A11D-707B446F48C3}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
EndGlobal
...@@ -17,8 +17,7 @@ namespace TDengineExample ...@@ -17,8 +17,7 @@ namespace TDengineExample
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS); IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("SchemalessInsert failed since " + TDengine.Error(res)); throw new Exception("SchemalessInsert failed since " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
else else
{ {
...@@ -26,7 +25,6 @@ namespace TDengineExample ...@@ -26,7 +25,6 @@ namespace TDengineExample
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows"); Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
} }
TDengine.FreeResult(res); TDengine.FreeResult(res);
ExitProgram(conn, 0);
} }
static IntPtr GetConnection() static IntPtr GetConnection()
...@@ -39,9 +37,7 @@ namespace TDengineExample ...@@ -39,9 +37,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
TDengine.Cleanup();
Environment.Exit(1);
} }
else else
{ {
...@@ -55,23 +51,15 @@ namespace TDengineExample ...@@ -55,23 +51,15 @@ namespace TDengineExample
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test"); IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to create database, reason: " + TDengine.Error(res)); throw new Exception("failed to create database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
res = TDengine.Query(conn, "USE test"); res = TDengine.Query(conn, "USE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to change database, reason: " + TDengine.Error(res)); throw new Exception("failed to change database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
} }
static void ExitProgram(IntPtr conn, int exitCode)
{
TDengine.Close(conn);
TDengine.Cleanup();
Environment.Exit(exitCode);
}
} }
} }
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
using System;
using TDengineTMQ;
using TDengineDriver;
using System.Runtime.InteropServices;
namespace TMQExample
{
internal class SubscribeDemo
{
static void Main(string[] args)
{
IntPtr conn = GetConnection();
string topic = "topic_example";
Console.WriteLine($"create topic if not exist {topic} as select * from meters");
//create topic
IntPtr res = TDengine.Query(conn, $"create topic if not exists {topic} as select * from meters");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception($"create topic failed, reason:{TDengine.Error(res)}");
}
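// Build the consumer configuration: consumer group id, connection credentials, and
// MsgWithTableName so that each message carries the name of the table it came from.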
var cfg = new ConsumerConfig
{
GourpId = "group_1",
TDConnectUser = "root",
TDConnectPasswd = "taosdata",
MsgWithTableName = "true",
TDConnectIp = "127.0.0.1",
};
// create consumer
var consumer = new ConsumerBuilder(cfg)
.Build();
// subscribe
consumer.Subscribe(topic);
// consume
for (int i = 0; i < 5; i++)
{
var consumeRes = consumer.Consume(300);
// print consumeResult
foreach (KeyValuePair<TopicPartition, TaosResult> kv in consumeRes.Message)
{
Console.WriteLine("topic partitions:\n{0}", kv.Key.ToString());
kv.Value.Metas.ForEach(meta =>
{
Console.Write("{0} {1}({2}) \t|", meta.name, meta.TypeName(), meta.size);
});
Console.WriteLine("");
kv.Value.Datas.ForEach(data =>
{
Console.WriteLine(data.ToString());
});
}
consumer.Commit(consumeRes);
Console.WriteLine("\n================ {0} done ", i);
}
// retrieve topic list
List<string> topics = consumer.Subscription();
topics.ForEach(t => Console.WriteLine("topic name:{0}", t));
// unsubscribe
consumer.Unsubscribe();
// Close the consumer after use. Otherwise it will lead to a memory leak.
consumer.Close();
TDengine.Close(conn);
}
static IntPtr GetConnection()
{
string host = "localhost";
short port = 6030;
string username = "root";
string password = "taosdata";
string dbname = "power";
var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero)
{
Console.WriteLine("Connect to TDengine failed");
System.Environment.Exit(0);
}
else
{
Console.WriteLine("Connect to TDengine success");
}
return conn;
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<StartupObject>TMQExample.SubscribeDemo</StartupObject>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.0" />
</ItemGroup>
</Project>
...@@ -7,27 +7,31 @@ namespace TDengineExample ...@@ -7,27 +7,31 @@ namespace TDengineExample
static void Main() static void Main()
{ {
IntPtr conn = GetConnection(); IntPtr conn = GetConnection();
PrepareDatabase(conn); try
string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," + {
PrepareDatabase(conn);
string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " + " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
"{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," + "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]" " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]"
}; };
IntPtr res = TDengine.SchemalessInsert(conn, lines, 1, (int)TDengineSchemalessProtocol.TSDB_SML_JSON_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED); IntPtr res = TDengine.SchemalessInsert(conn, lines, 1, (int)TDengineSchemalessProtocol.TSDB_SML_JSON_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("SchemalessInsert failed since " + TDengine.Error(res)); throw new Exception("SchemalessInsert failed since " + TDengine.Error(res));
ExitProgram(conn, 1); }
else
{
int affectedRows = TDengine.AffectRows(res);
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
}
TDengine.FreeResult(res);
} }
else finally
{ {
int affectedRows = TDengine.AffectRows(res); TDengine.Close(conn);
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
} }
TDengine.FreeResult(res);
ExitProgram(conn, 0);
} }
static IntPtr GetConnection() static IntPtr GetConnection()
{ {
...@@ -39,9 +43,7 @@ namespace TDengineExample ...@@ -39,9 +43,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
TDengine.Cleanup();
Environment.Exit(1);
} }
else else
{ {
...@@ -55,22 +57,13 @@ namespace TDengineExample ...@@ -55,22 +57,13 @@ namespace TDengineExample
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test"); IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to create database, reason: " + TDengine.Error(res)); throw new Exception("failed to create database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
res = TDengine.Query(conn, "USE test"); res = TDengine.Query(conn, "USE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to change database, reason: " + TDengine.Error(res)); throw new Exception("failed to change database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
} }
static void ExitProgram(IntPtr conn, int exitCode)
{
TDengine.Close(conn);
TDengine.Cleanup();
Environment.Exit(exitCode);
}
} }
} }
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -7,8 +7,10 @@ namespace TDengineExample ...@@ -7,8 +7,10 @@ namespace TDengineExample
static void Main() static void Main()
{ {
IntPtr conn = GetConnection(); IntPtr conn = GetConnection();
PrepareDatabase(conn); try
string[] lines = { {
PrepareDatabase(conn);
string[] lines = {
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2", "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2", "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3", "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
...@@ -18,20 +20,22 @@ namespace TDengineExample ...@@ -18,20 +20,22 @@ namespace TDengineExample
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3", "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3", "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
}; };
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_TELNET_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED); IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_TELNET_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("SchemalessInsert failed since " + TDengine.Error(res)); throw new Exception("SchemalessInsert failed since " + TDengine.Error(res));
ExitProgram(conn, 1); }
else
{
int affectedRows = TDengine.AffectRows(res);
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
}
TDengine.FreeResult(res);
} }
else catch
{ {
int affectedRows = TDengine.AffectRows(res); TDengine.Close(conn);
Console.WriteLine($"SchemalessInsert success, affected {affectedRows} rows");
} }
TDengine.FreeResult(res);
ExitProgram(conn, 0);
} }
static IntPtr GetConnection() static IntPtr GetConnection()
{ {
...@@ -43,9 +47,7 @@ namespace TDengineExample ...@@ -43,9 +47,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
TDengine.Cleanup();
Environment.Exit(1);
} }
else else
{ {
...@@ -59,22 +61,13 @@ namespace TDengineExample ...@@ -59,22 +61,13 @@ namespace TDengineExample
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test"); IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to create database, reason: " + TDengine.Error(res)); throw new Exception("failed to create database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
res = TDengine.Query(conn, "USE test"); res = TDengine.Query(conn, "USE test");
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("failed to change database, reason: " + TDengine.Error(res)); throw new Exception("failed to change database, reason: " + TDengine.Error(res));
ExitProgram(conn, 1);
} }
} }
static void ExitProgram(IntPtr conn, int exitCode)
{
TDengine.Close(conn);
TDengine.Cleanup();
Environment.Exit(exitCode);
}
} }
} }
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -9,48 +9,47 @@ namespace TDengineExample ...@@ -9,48 +9,47 @@ namespace TDengineExample
static void Main() static void Main()
{ {
IntPtr conn = GetConnection(); IntPtr conn = GetConnection();
// run query try
IntPtr res = TDengine.Query(conn, "SELECT * FROM meters LIMIT 2");
if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine("Failed to query since: " + TDengine.Error(res)); // run query
TDengine.Close(conn); IntPtr res = TDengine.Query(conn, "SELECT * FROM meters LIMIT 2");
TDengine.Cleanup(); if (TDengine.ErrorNo(res) != 0)
return; {
} throw new Exception("Failed to query since: " + TDengine.Error(res));
}
// get filed count // get filed count
int fieldCount = TDengine.FieldCount(res); int fieldCount = TDengine.FieldCount(res);
Console.WriteLine("fieldCount=" + fieldCount); Console.WriteLine("fieldCount=" + fieldCount);
// print column names // print column names
List<TDengineMeta> metas = LibTaos.GetMeta(res); List<TDengineMeta> metas = LibTaos.GetMeta(res);
for (int i = 0; i < metas.Count; i++) for (int i = 0; i < metas.Count; i++)
{ {
Console.Write(metas[i].name + "\t"); Console.Write(metas[i].name + "\t");
} }
Console.WriteLine(); Console.WriteLine();
// print values // print values
List<Object> resData = LibTaos.GetData(res); List<Object> resData = LibTaos.GetData(res);
for (int i = 0; i < resData.Count; i++) for (int i = 0; i < resData.Count; i++)
{
Console.Write($"|{resData[i].ToString()} \t");
if (((i + 1) % metas.Count == 0))
{ {
Console.WriteLine(""); Console.Write($"|{resData[i].ToString()} \t");
if (((i + 1) % metas.Count == 0))
{
Console.WriteLine("");
}
} }
Console.WriteLine();
// Free result after use
TDengine.FreeResult(res);
} }
Console.WriteLine(); finally
if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine($"Query is not complete, Error {TDengine.ErrorNo(res)} {TDengine.Error(res)}"); TDengine.Close(conn);
} }
// exit
TDengine.FreeResult(res);
TDengine.Close(conn);
TDengine.Cleanup();
} }
static IntPtr GetConnection() static IntPtr GetConnection()
{ {
...@@ -62,8 +61,7 @@ namespace TDengineExample ...@@ -62,8 +61,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
System.Environment.Exit(0);
} }
else else
{ {
......
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -9,21 +9,29 @@ namespace TDengineExample ...@@ -9,21 +9,29 @@ namespace TDengineExample
static void Main() static void Main()
{ {
IntPtr conn = GetConnection(); IntPtr conn = GetConnection();
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power"); try
CheckRes(conn, res, "failed to create database"); {
res = TDengine.Query(conn, "USE power"); IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
CheckRes(conn, res, "failed to change database"); CheckRes(conn, res, "failed to create database");
res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"); res = TDengine.Query(conn, "USE power");
CheckRes(conn, res, "failed to create stable"); CheckRes(conn, res, "failed to change database");
var sql = "INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " + res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
"d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " + CheckRes(conn, res, "failed to create stable");
"d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " + var sql = "INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
"d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)"; "d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
res = TDengine.Query(conn, sql); "d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
CheckRes(conn, res, "failed to insert data"); "d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
int affectedRows = TDengine.AffectRows(res); res = TDengine.Query(conn, sql);
Console.WriteLine("affectedRows " + affectedRows); CheckRes(conn, res, "failed to insert data");
ExitProgram(conn, 0); int affectedRows = TDengine.AffectRows(res);
Console.WriteLine("affectedRows " + affectedRows);
TDengine.FreeResult(res);
}
finally
{
TDengine.Close(conn);
}
} }
static IntPtr GetConnection() static IntPtr GetConnection()
...@@ -36,8 +44,7 @@ namespace TDengineExample ...@@ -36,8 +44,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
Environment.Exit(0);
} }
else else
{ {
...@@ -50,17 +57,10 @@ namespace TDengineExample ...@@ -50,17 +57,10 @@ namespace TDengineExample
{ {
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.Write(errorMsg + " since: " + TDengine.Error(res)); throw new Exception($"{errorMsg} since: {TDengine.Error(res)}");
ExitProgram(conn, 1);
} }
} }
static void ExitProgram(IntPtr conn, int exitCode)
{
TDengine.Close(conn);
TDengine.Cleanup();
Environment.Exit(exitCode);
}
} }
} }
......
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -9,45 +9,50 @@ namespace TDengineExample ...@@ -9,45 +9,50 @@ namespace TDengineExample
static void Main() static void Main()
{ {
conn = GetConnection(); conn = GetConnection();
PrepareSTable(); try
// 1. init and prepare
stmt = TDengine.StmtInit(conn);
if (stmt == IntPtr.Zero)
{ {
Console.WriteLine("failed to init stmt, " + TDengine.Error(stmt)); PrepareSTable();
ExitProgram(); // 1. init and prepare
} stmt = TDengine.StmtInit(conn);
int res = TDengine.StmtPrepare(stmt, "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)"); if (stmt == IntPtr.Zero)
CheckStmtRes(res, "failed to prepare stmt"); {
throw new Exception("failed to init stmt.");
}
int res = TDengine.StmtPrepare(stmt, "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)");
CheckStmtRes(res, "failed to prepare stmt");
// 2. bind table name and tags // 2. bind table name and tags
TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { TaosMultiBind.MultiBindBinary(new string[]{"California.SanFrancisco"}), TaosMultiBind.MultiBindInt(new int?[] {2}) }; TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { TaosMultiBind.MultiBindBinary(new string[] { "California.SanFrancisco" }), TaosMultiBind.MultiBindInt(new int?[] { 2 }) };
res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags); res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags);
CheckStmtRes(res, "failed to bind table name and tags"); CheckStmtRes(res, "failed to bind table name and tags");
// 3. bind values // 3. bind values
TAOS_MULTI_BIND[] values = new TAOS_MULTI_BIND[4] { TAOS_MULTI_BIND[] values = new TAOS_MULTI_BIND[4] {
TaosMultiBind.MultiBindTimestamp(new long[2] { 1648432611249, 1648432611749}), TaosMultiBind.MultiBindTimestamp(new long[2] { 1648432611249, 1648432611749}),
TaosMultiBind.MultiBindFloat(new float?[2] { 10.3f, 12.6f}), TaosMultiBind.MultiBindFloat(new float?[2] { 10.3f, 12.6f}),
TaosMultiBind.MultiBindInt(new int?[2] { 219, 218}), TaosMultiBind.MultiBindInt(new int?[2] { 219, 218}),
TaosMultiBind.MultiBindFloat(new float?[2]{ 0.31f, 0.33f}) TaosMultiBind.MultiBindFloat(new float?[2]{ 0.31f, 0.33f})
}; };
res = TDengine.StmtBindParamBatch(stmt, values); res = TDengine.StmtBindParamBatch(stmt, values);
CheckStmtRes(res, "failed to bind params"); CheckStmtRes(res, "failed to bind params");
// 4. add batch // 4. add batch
res = TDengine.StmtAddBatch(stmt); res = TDengine.StmtAddBatch(stmt);
CheckStmtRes(res, "failed to add batch"); CheckStmtRes(res, "failed to add batch");
// 5. execute // 5. execute
res = TDengine.StmtExecute(stmt); res = TDengine.StmtExecute(stmt);
CheckStmtRes(res, "failed to execute"); CheckStmtRes(res, "failed to execute");
// 6. free
TaosMultiBind.FreeTaosBind(tags);
TaosMultiBind.FreeTaosBind(values);
}
finally
{
TDengine.Close(conn);
}
// 6. free
TaosMultiBind.FreeTaosBind(tags);
TaosMultiBind.FreeTaosBind(values);
TDengine.Close(conn);
TDengine.Cleanup();
} }
static IntPtr GetConnection() static IntPtr GetConnection()
...@@ -60,8 +65,7 @@ namespace TDengineExample ...@@ -60,8 +65,7 @@ namespace TDengineExample
var conn = TDengine.Connect(host, username, password, dbname, port); var conn = TDengine.Connect(host, username, password, dbname, port);
if (conn == IntPtr.Zero) if (conn == IntPtr.Zero)
{ {
Console.WriteLine("Connect to TDengine failed"); throw new Exception("Connect to TDengine failed");
Environment.Exit(0);
} }
else else
{ {
...@@ -70,8 +74,6 @@ namespace TDengineExample ...@@ -70,8 +74,6 @@ namespace TDengineExample
return conn; return conn;
} }
static void PrepareSTable() static void PrepareSTable()
{ {
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power"); IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
...@@ -90,9 +92,8 @@ namespace TDengineExample ...@@ -90,9 +92,8 @@ namespace TDengineExample
int code = TDengine.StmtClose(stmt); int code = TDengine.StmtClose(stmt);
if (code != 0) if (code != 0)
{ {
Console.WriteLine($"failed to close stmt, {code} reason: {TDengine.StmtErrorStr(stmt)} "); throw new Exception($"failed to close stmt, {code} reason: {TDengine.StmtErrorStr(stmt)} ");
} }
ExitProgram();
} }
} }
...@@ -100,16 +101,9 @@ namespace TDengineExample ...@@ -100,16 +101,9 @@ namespace TDengineExample
{ {
if (TDengine.ErrorNo(res) != 0) if (TDengine.ErrorNo(res) != 0)
{ {
Console.WriteLine(errorMsg + " since:" + TDengine.Error(res)); throw new Exception(errorMsg + " since:" + TDengine.Error(res));
ExitProgram();
} }
} }
static void ExitProgram()
{
TDengine.Close(conn);
TDengine.Cleanup();
Environment.Exit(1);
}
} }
} }
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
...@@ -9,7 +9,7 @@
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" />
</ItemGroup>
</Project>
using System;
using TDengineWS.Impl;
namespace Examples
{
public class WSConnExample
{
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
Console.WriteLine("Establish connect success.");
// close connection.
LibTaosWS.WSClose(wsConn);
}
return 0;
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
<Copy SourceFiles="@(DepDLLFiles)" DestinationFolder="$(OutDir)" />
</Target>
</Project>
using System;
using TDengineWS.Impl;
namespace Examples
{
public class WSInsertExample
{
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
// Check whether the connection is valid
if (wsConn == IntPtr.Zero)
{
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
Console.WriteLine("Establish connect success.");
}
string createTable = "CREATE STABLE test.meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);";
string insert = "INSERT INTO test.d1001 USING test.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)" +
"test.d1002 USING test.meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)" +
"test.d1003 USING test.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
"test.d1004 USING test.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
IntPtr wsRes = LibTaosWS.WSQuery(wsConn, createTable);
ValidInsert("create table", wsRes);
LibTaosWS.WSFreeResult(wsRes);
wsRes = LibTaosWS.WSQuery(wsConn, insert);
ValidInsert("insert data", wsRes);
LibTaosWS.WSFreeResult(wsRes);
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
static void ValidInsert(string desc, IntPtr wsRes)
{
int code = LibTaosWS.WSErrorNo(wsRes);
if (code != 0)
{
Console.WriteLine($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
}
else
{
Console.WriteLine("{0} success affect {2} rows, cost {1} nanoseconds", desc, LibTaosWS.WSTakeTiming(wsRes), LibTaosWS.WSAffectRows(wsRes));
}
}
}
}
// Establish connect success.
// create table success affect 0 rows, cost 3717542 nanoseconds
// insert data success affect 8 rows, cost 2613637 nanoseconds
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
<Copy SourceFiles="@(DepDLLFiles)" DestinationFolder="$(OutDir)" />
</Target>
</Project>
using System;
using TDengineWS.Impl;
using System.Collections.Generic;
using TDengineDriver;
namespace Examples
{
public class WSQueryExample
{
static int Main(string[] args)
{
string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
Console.WriteLine("get WS connection failed");
return -1;
}
else
{
Console.WriteLine("Establish connect success.");
}
string select = "select * from test.meters";
// optional:wsRes = LibTaosWS.WSQuery(wsConn, select);
IntPtr wsRes = LibTaosWS.WSQueryTimeout(wsConn, select, 1);
// Check whether the query executed successfully.
int code = LibTaosWS.WSErrorNo(wsRes);
if (code != 0)
{
Console.WriteLine($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(wsRes)}, code:{code}");
LibTaosWS.WSFreeResult(wsRes);
return -1;
}
// get meta data
List<TDengineMeta> metas = LibTaosWS.WSGetFields(wsRes);
// get retrieved data
List<object> dataSet = LibTaosWS.WSGetData(wsRes);
// do something with result.
foreach (var meta in metas)
{
Console.Write("{0} {1}({2}) \t|\t", meta.name, meta.TypeName(), meta.size);
}
Console.WriteLine("");
for (int i = 0; i < dataSet.Count;)
{
for (int j = 0; j < metas.Count; j++)
{
Console.Write("{0}\t|\t", dataSet[i]);
i++;
}
Console.WriteLine("");
}
// Free result after use.
LibTaosWS.WSFreeResult(wsRes);
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
}
}
// Establish connect success.
// ts TIMESTAMP(8) | current FLOAT(4) | voltage INT(4) | phase FLOAT(4) | location BINARY(64) | groupid INT(4) |
// 1538548685000 | 10.8 | 223 | 0.29 | California.LosAngeles | 3 |
// 1538548686500 | 11.5 | 221 | 0.35 | California.LosAngeles | 3 |
// 1538548685500 | 11.8 | 221 | 0.28 | California.LosAngeles | 2 |
// 1538548696600 | 13.4 | 223 | 0.29 | California.LosAngeles | 2 |
// 1538548685000 | 10.3 | 219 | 0.31 | California.SanFrancisco | 2 |
// 1538548695000 | 12.6 | 218 | 0.33 | California.SanFrancisco | 2 |
// 1538548696800 | 12.3 | 221 | 0.31 | California.SanFrancisco | 2 |
// 1538548696650 | 10.3 | 218 | 0.25 | California.SanFrancisco | 3 |
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
<Copy SourceFiles="@(DepDLLFiles)" DestinationFolder="$(OutDir)" />
</Target>
</Project>
using System;
using TDengineWS.Impl;
using TDengineDriver;
using System.Runtime.InteropServices;
namespace Examples
{
public class WSStmtExample
{
static int Main(string[] args)
{
const string DSN = "ws://root:taosdata@127.0.0.1:6041/test";
const string table = "meters";
const string database = "test";
const string childTable = "d1005";
string insert = $"insert into ? using {database}.{table} tags(?,?) values(?,?,?,?)";
const int numOfTags = 2;
const int numOfColumns = 4;
// Establish connection
IntPtr wsConn = LibTaosWS.WSConnectWithDSN(DSN);
if (wsConn == IntPtr.Zero)
{
Console.WriteLine($"get WS connection failed");
return -1;
}
else
{
Console.WriteLine("Establish connect success...");
}
// init stmt
IntPtr wsStmt = LibTaosWS.WSStmtInit(wsConn);
if (wsStmt != IntPtr.Zero)
{
int code = LibTaosWS.WSStmtPrepare(wsStmt, insert);
ValidStmtStep(code, wsStmt, "WSStmtPrepare");
TAOS_MULTI_BIND[] wsTags = new TAOS_MULTI_BIND[] { WSMultiBind.WSBindNchar(new string[] { "California.SanDiego" }), WSMultiBind.WSBindInt(new int?[] { 4 }) };
code = LibTaosWS.WSStmtSetTbnameTags(wsStmt, $"{database}.{childTable}", wsTags, numOfTags);
ValidStmtStep(code, wsStmt, "WSStmtSetTbnameTags");
TAOS_MULTI_BIND[] data = new TAOS_MULTI_BIND[4];
data[0] = WSMultiBind.WSBindTimestamp(new long[] { 1538548687000, 1538548688000, 1538548689000, 1538548690000, 1538548691000 });
data[1] = WSMultiBind.WSBindFloat(new float?[] { 10.30F, 10.40F, 10.50F, 10.60F, 10.70F });
data[2] = WSMultiBind.WSBindInt(new int?[] { 223, 221, 222, 220, 219 });
data[3] = WSMultiBind.WSBindFloat(new float?[] { 0.31F, 0.32F, 0.33F, 0.35F, 0.28F });
code = LibTaosWS.WSStmtBindParamBatch(wsStmt, data, numOfColumns);
ValidStmtStep(code, wsStmt, "WSStmtBindParamBatch");
code = LibTaosWS.WSStmtAddBatch(wsStmt);
ValidStmtStep(code, wsStmt, "WSStmtAddBatch");
IntPtr stmtAffectRowPtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Int32)));
code = LibTaosWS.WSStmtExecute(wsStmt, stmtAffectRowPtr);
ValidStmtStep(code, wsStmt, "WSStmtExecute");
Console.WriteLine("WS STMT insert {0} rows...", Marshal.ReadInt32(stmtAffectRowPtr));
Marshal.FreeHGlobal(stmtAffectRowPtr);
LibTaosWS.WSStmtClose(wsStmt);
// Free unmanaged memory
WSMultiBind.WSFreeTaosBind(wsTags);
WSMultiBind.WSFreeTaosBind(data);
// Check the result with SQL: SELECT * FROM test.d1005;
}
else
{
Console.WriteLine("Init STMT failed...");
}
// close connection.
LibTaosWS.WSClose(wsConn);
return 0;
}
static void ValidStmtStep(int code, IntPtr wsStmt, string desc)
{
if (code != 0)
{
Console.WriteLine($"{desc} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
}
else
{
Console.WriteLine("{0} success...", desc);
}
}
}
}
// WSStmtPrepare success...
// WSStmtSetTbnameTags success...
// WSStmtBindParamBatch success...
// WSStmtAddBatch success...
// WSStmtExecute success...
// WS STMT insert 5 rows...
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
<Copy SourceFiles="@(DepDLLFiles)" DestinationFolder="$(OutDir)" />
</Target>
</Project>
...@@ -16,14 +16,14 @@ public class RestInsertExample { ...@@ -16,14 +16,14 @@ public class RestInsertExample {
private static List<String> getRawData() { private static List<String> getRawData() {
return Arrays.asList( return Arrays.asList(
"d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2", "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,'California.SanFrancisco',2",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2", "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,'California.SanFrancisco',2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2", "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,'California.SanFrancisco',2",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3", "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,'California.SanFrancisco',3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2", "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,'California.LosAngeles',2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2", "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,'California.LosAngeles',2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3", "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,'California.LosAngeles',3",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3" "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,'California.LosAngeles',3"
); );
} }
......
...@@ -38,12 +38,12 @@ public class SubscribeDemo { ...@@ -38,12 +38,12 @@ public class SubscribeDemo {
statement.executeUpdate("create database " + DB_NAME); statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("use " + DB_NAME); statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate( statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(16))"); "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");
statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')"); statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')");
statement.executeUpdate("INSERT INTO `d0` values(now - 10s, 0.32, 116)"); statement.executeUpdate("INSERT INTO `d0` values(now - 10s, 0.32, 116)");
statement.executeUpdate("INSERT INTO `d0` values(now - 8s, NULL, NULL)"); statement.executeUpdate("INSERT INTO `d0` values(now - 8s, NULL, NULL)");
statement.executeUpdate( statement.executeUpdate(
"INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119)"); "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119)");
statement.executeUpdate( statement.executeUpdate(
"INSERT INTO `d1` values (now-8s, 10, 120) (now - 6s, 10, 119) (now - 4s, 11.2, 118)"); "INSERT INTO `d1` values (now-8s, 10, 120) (now - 6s, 10, 119) (now - 4s, 11.2, 118)");
// create topic // create topic
...@@ -57,7 +57,7 @@ public class SubscribeDemo { ...@@ -57,7 +57,7 @@ public class SubscribeDemo {
properties.setProperty(TMQConstants.ENABLE_AUTO_COMMIT, "true"); properties.setProperty(TMQConstants.ENABLE_AUTO_COMMIT, "true");
properties.setProperty(TMQConstants.GROUP_ID, "test"); properties.setProperty(TMQConstants.GROUP_ID, "test");
properties.setProperty(TMQConstants.VALUE_DESERIALIZER, properties.setProperty(TMQConstants.VALUE_DESERIALIZER,
"com.taosdata.jdbc.MetersDeserializer"); "com.taos.example.MetersDeserializer");
// poll data // poll data
try (TaosConsumer<Meters> consumer = new TaosConsumer<>(properties)) { try (TaosConsumer<Meters> consumer = new TaosConsumer<>(properties)) {
...@@ -75,4 +75,4 @@ public class SubscribeDemo { ...@@ -75,4 +75,4 @@ public class SubscribeDemo {
} }
timer.cancel(); timer.cancel();
} }
} }
\ No newline at end of file
package com.taos.example.highvolume;
import java.sql.*;
/**
* Prepare target database.
* Count total records in database periodically so that we can estimate the writing speed.
*/
public class DataBaseMonitor {
private Connection conn;
private Statement stmt;
public DataBaseMonitor init() throws SQLException {
if (conn == null) {
String jdbcURL = System.getenv("TDENGINE_JDBC_URL");
conn = DriverManager.getConnection(jdbcURL);
stmt = conn.createStatement();
}
return this;
}
public void close() {
try {
stmt.close();
} catch (SQLException e) {
}
try {
conn.close();
} catch (SQLException e) {
}
}
public void prepareDatabase() throws SQLException {
stmt.execute("DROP DATABASE IF EXISTS test");
stmt.execute("CREATE DATABASE test");
stmt.execute("CREATE STABLE test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
}
public Long count() throws SQLException {
if (!stmt.isClosed()) {
ResultSet result = stmt.executeQuery("SELECT count(*) from test.meters");
result.next();
return result.getLong(1);
}
return null;
}
/**
* show test.stables;
*
* name | created_time | columns | tags | tables |
* ============================================================================================
* meters | 2022-07-20 08:39:30.902 | 4 | 2 | 620000 |
*/
public Long getTableCount() throws SQLException {
if (!stmt.isClosed()) {
ResultSet result = stmt.executeQuery("show test.stables");
result.next();
return result.getLong(5);
}
return null;
}
}
\ No newline at end of file
package com.taos.example.highvolume;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class FastWriteExample {
final static Logger logger = LoggerFactory.getLogger(FastWriteExample.class);
final static int taskQueueCapacity = 1000000;
final static List<BlockingQueue<String>> taskQueues = new ArrayList<>();
final static List<ReadTask> readTasks = new ArrayList<>();
final static List<WriteTask> writeTasks = new ArrayList<>();
final static DataBaseMonitor databaseMonitor = new DataBaseMonitor();
public static void stopAll() {
logger.info("shutting down");
readTasks.forEach(task -> task.stop());
writeTasks.forEach(task -> task.stop());
databaseMonitor.close();
}
public static void main(String[] args) throws InterruptedException, SQLException {
int readTaskCount = args.length > 0 ? Integer.parseInt(args[0]) : 1;
int writeTaskCount = args.length > 1 ? Integer.parseInt(args[1]) : 3;
int tableCount = args.length > 2 ? Integer.parseInt(args[2]) : 1000;
int maxBatchSize = args.length > 3 ? Integer.parseInt(args[3]) : 3000;
logger.info("readTaskCount={}, writeTaskCount={} tableCount={} maxBatchSize={}",
readTaskCount, writeTaskCount, tableCount, maxBatchSize);
databaseMonitor.init().prepareDatabase();
// Create task queues and writing tasks, then start the writing threads.
for (int i = 0; i < writeTaskCount; ++i) {
BlockingQueue<String> queue = new ArrayBlockingQueue<>(taskQueueCapacity);
taskQueues.add(queue);
WriteTask task = new WriteTask(queue, maxBatchSize);
Thread t = new Thread(task);
t.setName("WriteThread-" + i);
t.start();
}
// create reading tasks and start reading threads
int tableCountPerTask = tableCount / readTaskCount;
for (int i = 0; i < readTaskCount; ++i) {
ReadTask task = new ReadTask(i, taskQueues, tableCountPerTask);
Thread t = new Thread(task);
t.setName("ReadThread-" + i);
t.start();
}
Runtime.getRuntime().addShutdownHook(new Thread(FastWriteExample::stopAll));
long lastCount = 0;
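// Report the table count, total row count and write speed (rows per second) every 10 seconds.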
while (true) {
Thread.sleep(10000);
long numberOfTable = databaseMonitor.getTableCount();
long count = databaseMonitor.count();
logger.info("numberOfTable={} count={} speed={}", numberOfTable, count, (count - lastCount) / 10);
lastCount = count;
}
}
}
\ No newline at end of file
package com.taos.example.highvolume;
import java.util.Iterator;
/**
* Generate test data
*/
class MockDataSource implements Iterator {
private String tbNamePrefix;
private int tableCount;
private long maxRowsPerTable = 1000000000L;
// 100 milliseconds between two neighbouring rows.
long startMs = System.currentTimeMillis() - maxRowsPerTable * 100;
private int currentRow = 0;
private int currentTbId = -1;
// mock values
String[] location = {"California.LosAngeles", "California.SanDiego", "California.SanJose", "California.Campbell", "California.SanFrancisco"};
float[] current = {8.8f, 10.7f, 9.9f, 8.9f, 9.4f};
int[] voltage = {119, 116, 111, 113, 118};
float[] phase = {0.32f, 0.34f, 0.33f, 0.329f, 0.141f};
public MockDataSource(String tbNamePrefix, int tableCount) {
this.tbNamePrefix = tbNamePrefix;
this.tableCount = tableCount;
}
@Override
public boolean hasNext() {
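// Emit one row per table in round-robin order; advance to the next row after the last table.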
currentTbId += 1;
if (currentTbId == tableCount) {
currentTbId = 0;
currentRow += 1;
}
return currentRow < maxRowsPerTable;
}
@Override
public String next() {
long ts = startMs + 100 * currentRow;
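// Derive the groupId from the table id: roughly every 5 consecutive tables share one group.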
int groupId = currentTbId % 5 == 0 ? currentTbId / 5 : currentTbId / 5 + 1;
StringBuilder sb = new StringBuilder(tbNamePrefix + "_" + currentTbId + ","); // tbName
sb.append(ts).append(','); // ts
sb.append(current[currentRow % 5]).append(','); // current
sb.append(voltage[currentRow % 5]).append(','); // voltage
sb.append(phase[currentRow % 5]).append(','); // phase
sb.append(location[currentRow % 5]).append(','); // location
sb.append(groupId); // groupID
return sb.toString();
}
}
package com.taos.example.highvolume;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.BlockingQueue;
class ReadTask implements Runnable {
private final static Logger logger = LoggerFactory.getLogger(ReadTask.class);
private final int taskId;
private final List<BlockingQueue<String>> taskQueues;
private final int queueCount;
private final int tableCount;
private boolean active = true;
public ReadTask(int readTaskId, List<BlockingQueue<String>> queues, int tableCount) {
this.taskId = readTaskId;
this.taskQueues = queues;
this.queueCount = queues.size();
this.tableCount = tableCount;
}
/**
* Assign data received to different queues.
* Here we use the suffix number in the table name.
* You are expected to define your own rule in practice.
*
* @param line record received
* @return which queue to use
*/
public int getQueueId(String line) {
String tbName = line.substring(0, line.indexOf(',')); // For example: tb1_101
String suffixNumber = tbName.split("_")[1];
return Integer.parseInt(suffixNumber) % this.queueCount;
}
@Override
public void run() {
logger.info("started");
Iterator<String> it = new MockDataSource("tb" + this.taskId, tableCount);
try {
while (it.hasNext() && active) {
String line = it.next();
int queueId = getQueueId(line);
taskQueues.get(queueId).put(line);
}
} catch (Exception e) {
logger.error("Read Task Error", e);
}
}
public void stop() {
logger.info("stop");
this.active = false;
}
}
\ No newline at end of file
package com.taos.example.highvolume;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.sql.*;
import java.util.HashMap;
import java.util.Map;
/**
* A helper class that encapsulates the logic of writing using SQL.
* <p>
* The main interfaces are two methods:
* <ol>
* <li>{@link SQLWriter#processLine}, which receive raw lines from WriteTask and group them by table names.</li>
* <li>{@link SQLWriter#flush}, which assemble INSERT statement and execute it.</li>
* </ol>
* <p>
* There is a technique worth mentioning: we create tables on demand when a "table does not exist" error occurs, instead of creating tables automatically with the "INSERT INTO tb USING stb" syntax.
* This ensures that checking table existence is a one-time-only operation.
* </p>
*
* </p>
*/
public class SQLWriter {
final static Logger logger = LoggerFactory.getLogger(SQLWriter.class);
private Connection conn;
private Statement stmt;
/**
* current number of buffered records
*/
private int bufferedCount = 0;
/**
* Maximum number of buffered records.
* A flush is triggered when bufferedCount reaches this value.
*/
private int maxBatchSize;
/**
* Maximum SQL length.
*/
private int maxSQLLength;
/**
* Map from table name to column values. For example:
* "tb001" -> "(1648432611249,2.1,114,0.09) (1648432611250,2.2,135,0.2)"
*/
private Map<String, String> tbValues = new HashMap<>();
/**
* Map from table name to tag values, in the same order as when the STable was created.
* Used for creating tables.
*/
private Map<String, String> tbTags = new HashMap<>();
public SQLWriter(int maxBatchSize) {
this.maxBatchSize = maxBatchSize;
}
/**
* Get Database Connection
*
* @return Connection
* @throws SQLException
*/
private static Connection getConnection() throws SQLException {
String jdbcURL = System.getenv("TDENGINE_JDBC_URL");
return DriverManager.getConnection(jdbcURL);
}
/**
* Create Connection and Statement
*
* @throws SQLException
*/
public void init() throws SQLException {
conn = getConnection();
stmt = conn.createStatement();
stmt.execute("use test");
ResultSet rs = stmt.executeQuery("show variables");
while (rs.next()) {
String configName = rs.getString(1);
if ("maxSQLLength".equals(configName)) {
maxSQLLength = Integer.parseInt(rs.getString(2));
logger.info("maxSQLLength={}", maxSQLLength);
}
}
}
/**
* Convert raw data to SQL fragments, group them by table name and cache them in a HashMap.
* Trigger writing when the number of buffered records reaches maxBatchSize.
*
* @param line raw data taken from the task queue, in the format: tbName,ts,current,voltage,phase,location,groupId
*/
public void processLine(String line) throws SQLException {
bufferedCount += 1;
int firstComma = line.indexOf(',');
String tbName = line.substring(0, firstComma);
int lastComma = line.lastIndexOf(',');
int secondLastComma = line.lastIndexOf(',', lastComma - 1);
String value = "(" + line.substring(firstComma + 1, secondLastComma) + ") ";
if (tbValues.containsKey(tbName)) {
tbValues.put(tbName, tbValues.get(tbName) + value);
} else {
tbValues.put(tbName, value);
}
if (!tbTags.containsKey(tbName)) {
String location = line.substring(secondLastComma + 1, lastComma);
String groupId = line.substring(lastComma + 1);
String tagValues = "('" + location + "'," + groupId + ')';
tbTags.put(tbName, tagValues);
}
if (bufferedCount == maxBatchSize) {
flush();
}
}
/**
* Assemble INSERT statement using buffered SQL fragments in Map {@link SQLWriter#tbValues} and execute it.
* In case of "Table does not exit" exception, create all tables in the sql and retry the sql.
*/
public void flush() throws SQLException {
StringBuilder sb = new StringBuilder("INSERT INTO ");
for (Map.Entry<String, String> entry : tbValues.entrySet()) {
String tableName = entry.getKey();
String values = entry.getValue();
String q = tableName + " values " + values + " ";
if (sb.length() + q.length() > maxSQLLength) {
executeSQL(sb.toString());
logger.warn("increase maxSQLLength or decrease maxBatchSize to gain better performance");
sb = new StringBuilder("INSERT INTO ");
}
sb.append(q);
}
executeSQL(sb.toString());
tbValues.clear();
bufferedCount = 0;
}
private void executeSQL(String sql) throws SQLException {
try {
stmt.executeUpdate(sql);
} catch (SQLException e) {
// convert to error code defined in taoserror.h
int errorCode = e.getErrorCode() & 0xffff;
if (errorCode == 0x362 || errorCode == 0x218) {
// Table does not exist
createTables();
executeSQL(sql);
} else {
logger.error("Execute SQL: {}", sql);
throw e;
}
} catch (Throwable throwable) {
logger.error("Execute SQL: {}", sql);
throw throwable;
}
}
/**
* Create tables in batch using syntax:
* <p>
* CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
* </p>
*/
private void createTables() throws SQLException {
StringBuilder sb = new StringBuilder("CREATE TABLE ");
for (String tbName : tbValues.keySet()) {
String tagValues = tbTags.get(tbName);
sb.append("IF NOT EXISTS ").append(tbName).append(" USING meters TAGS ").append(tagValues).append(" ");
}
String sql = sb.toString();
try {
stmt.executeUpdate(sql);
} catch (Throwable throwable) {
logger.error("Execute SQL: {}", sql);
throw throwable;
}
}
public boolean hasBufferedValues() {
return bufferedCount > 0;
}
public int getBufferedCount() {
return bufferedCount;
}
public void close() {
try {
stmt.close();
} catch (SQLException e) {
}
try {
conn.close();
} catch (SQLException e) {
}
}
}
\ No newline at end of file
package com.taos.example.highvolume;
public class StmtWriter {
}
package com.taos.example.highvolume;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.BlockingQueue;
class WriteTask implements Runnable {
private final static Logger logger = LoggerFactory.getLogger(WriteTask.class);
private final int maxBatchSize;
// the queue from which this writing task gets raw data.
private final BlockingQueue<String> queue;
// A flag indicating whether to continue.
private boolean active = true;
public WriteTask(BlockingQueue<String> taskQueue, int maxBatchSize) {
this.queue = taskQueue;
this.maxBatchSize = maxBatchSize;
}
@Override
public void run() {
logger.info("started");
String line = null; // the record just taken from the queue.
SQLWriter writer = new SQLWriter(maxBatchSize);
try {
writer.init();
while (active) {
line = queue.poll();
if (line != null) {
// parse raw data and buffer the data.
writer.processLine(line);
} else if (writer.hasBufferedValues()) {
// write data immediately if no more data in the queue
writer.flush();
} else {
// sleep a while to avoid high CPU usage when the queue is empty and there are no buffered records.
Thread.sleep(100);
}
}
if (writer.hasBufferedValues()) {
writer.flush();
}
} catch (Exception e) {
String msg = String.format("line=%s, bufferedCount=%s", line, writer.getBufferedCount());
logger.error(msg, e);
} finally {
writer.close();
}
}
public void stop() {
logger.info("stop");
this.active = false;
}
}
\ No newline at end of file
import pandas import pandas
from sqlalchemy import create_engine from sqlalchemy import create_engine, text
engine = create_engine("taos://root:taosdata@localhost:6030/power") engine = create_engine("taos://root:taosdata@localhost:6030/power")
df = pandas.read_sql("SELECT * FROM meters", engine) conn = engine.connect()
df = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
conn.close()
# print index # print index
print(df.index) print(df.index)
......
import pandas import pandas
from sqlalchemy import create_engine from sqlalchemy import create_engine, text
engine = create_engine("taosrest://root:taosdata@localhost:6041") engine = create_engine("taosrest://root:taosdata@localhost:6041")
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", engine) conn = engine.connect()
df: pandas.DataFrame = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
conn.close()
# print index # print index
print(df.index) print(df.index)
......
# ANCHOR: connect # ANCHOR: connect
from taosrest import connect, TaosRestConnection, TaosRestCursor from taosrest import connect, TaosRestConnection, TaosRestCursor
conn: TaosRestConnection = connect(url="http://localhost:6041", conn = connect(url="http://localhost:6041",
user="root", user="root",
password="taosdata", password="taosdata",
timeout=30) timeout=30)
# ANCHOR_END: connect # ANCHOR_END: connect
# ANCHOR: basic # ANCHOR: basic
# create STable # create STable
cursor: TaosRestCursor = conn.cursor() cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS power") cursor.execute("DROP DATABASE IF EXISTS power")
cursor.execute("CREATE DATABASE power") cursor.execute("CREATE DATABASE power")
cursor.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)") cursor.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data # insert data
cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""") power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
print("inserted row count:", cursor.rowcount) print("inserted row count:", cursor.rowcount)
# query data # query data
...@@ -28,7 +29,7 @@ print("queried row count:", cursor.rowcount) ...@@ -28,7 +29,7 @@ print("queried row count:", cursor.rowcount)
# get column names from cursor # get column names from cursor
column_names = [meta[0] for meta in cursor.description] column_names = [meta[0] for meta in cursor.description]
# get rows # get rows
data: list[tuple] = cursor.fetchall() data = cursor.fetchall()
print(column_names) print(column_names)
for row in data: for row in data:
print(row) print(row)
......
...@@ -8,7 +8,7 @@ conn.execute("CREATE DATABASE test") ...@@ -8,7 +8,7 @@ conn.execute("CREATE DATABASE test")
# change database. same as execute "USE db" # change database. same as execute "USE db"
conn.select_db("test") conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)") conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
affected_row: int = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m 24.4)") affected_row = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
print("affected_row", affected_row) print("affected_row", affected_row)
# output: # output:
# affected_row 3 # affected_row 3
...@@ -16,10 +16,10 @@ print("affected_row", affected_row) ...@@ -16,10 +16,10 @@ print("affected_row", affected_row)
# ANCHOR: query # ANCHOR: query
# Execute a sql and get its result set. It's useful for SELECT statement # Execute a sql and get its result set. It's useful for SELECT statement
result: taos.TaosResult = conn.query("SELECT * from weather") result = conn.query("SELECT * from weather")
# Get fields from result # Get fields from result
fields: taos.field.TaosFields = result.fields fields = result.fields
for field in fields: for field in fields:
print(field) # {name: ts, type: 9, bytes: 8} print(field) # {name: ts, type: 9, bytes: 8}
...@@ -42,4 +42,4 @@ print(data) ...@@ -42,4 +42,4 @@ print(data)
# ANCHOR_END: query # ANCHOR_END: query
conn.close() conn.close()
\ No newline at end of file
# install dependencies:
# recommend python >= 3.8
#
import logging
import math
import multiprocessing
import sys
import time
import os
from multiprocessing import Process, Queue
from mockdatasource import MockDataSource
from queue import Empty
from typing import List
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format="%(asctime)s [%(name)s] - %(message)s")
READ_TASK_COUNT = 1
WRITE_TASK_COUNT = 1
TABLE_COUNT = 1000
QUEUE_SIZE = 1000000
MAX_BATCH_SIZE = 3000
_DONE_MESSAGE = '__DONE__'
def get_connection():
"""
If variable TDENGINE_FIRST_EP is provided then it will be used. If not, firstEP in /etc/taos/taos.cfg will be used.
You can also override the default username and password by supplying the variables TDENGINE_USER and TDENGINE_PASSWORD.
"""
import taos
firstEP = os.environ.get("TDENGINE_FIRST_EP")
if firstEP:
host, port = firstEP.split(":")
else:
host, port = None, 0
user = os.environ.get("TDENGINE_USER", "root")
password = os.environ.get("TDENGINE_PASSWORD", "taosdata")
return taos.connect(host=host, port=int(port), user=user, password=password)
# ANCHOR: read
def run_read_task(task_id: int, task_queues: List[Queue], infinity):
table_count_per_task = TABLE_COUNT // READ_TASK_COUNT
data_source = MockDataSource(f"tb{task_id}", table_count_per_task, infinity)
try:
for batch in data_source:
if isinstance(batch, tuple):
batch = [batch]
for table_id, rows in batch:
# hash data to different queue
i = table_id % len(task_queues)
# block putting forever when the queue is full
for row in rows:
task_queues[i].put(row)
if not infinity:
for queue in task_queues:
queue.put(_DONE_MESSAGE)
except KeyboardInterrupt:
pass
finally:
logging.info('read task over')
# ANCHOR_END: read
# ANCHOR: write
def run_write_task(task_id: int, queue: Queue, done_queue: Queue):
from sql_writer import SQLWriter
log = logging.getLogger(f"WriteTask-{task_id}")
writer = SQLWriter(get_connection)
lines = None
try:
while True:
over = False
lines = []
for _ in range(MAX_BATCH_SIZE):
try:
line = queue.get_nowait()
if line == _DONE_MESSAGE:
over = True
break
if line:
lines.append(line)
except Empty:
time.sleep(0.1)
if len(lines) > 0:
writer.process_lines(lines)
if over:
done_queue.put(_DONE_MESSAGE)
break
except KeyboardInterrupt:
pass
except BaseException as e:
log.debug(f"lines={lines}")
raise e
finally:
writer.close()
log.debug('write task over')
# ANCHOR_END: write
def set_global_config():
argc = len(sys.argv)
if argc > 1:
global READ_TASK_COUNT
READ_TASK_COUNT = int(sys.argv[1])
if argc > 2:
global WRITE_TASK_COUNT
WRITE_TASK_COUNT = int(sys.argv[2])
if argc > 3:
global TABLE_COUNT
TABLE_COUNT = int(sys.argv[3])
if argc > 4:
global QUEUE_SIZE
QUEUE_SIZE = int(sys.argv[4])
if argc > 5:
global MAX_BATCH_SIZE
MAX_BATCH_SIZE = int(sys.argv[5])
# ANCHOR: monitor
def run_monitor_process(done_queue: Queue):
log = logging.getLogger("DataBaseMonitor")
conn = None
try:
conn = get_connection()
def get_count():
res = conn.query("SELECT count(*) FROM test.meters")
rows = res.fetch_all()
return rows[0][0] if rows else 0
last_count = 0
while True:
try:
done = done_queue.get_nowait()
if done == _DONE_MESSAGE:
break
except Empty:
pass
time.sleep(10)
count = get_count()
log.info(f"count={count} speed={(count - last_count) / 10}")
last_count = count
finally:
conn.close()
# ANCHOR_END: monitor
# ANCHOR: main
def main(infinity):
set_global_config()
logging.info(f"READ_TASK_COUNT={READ_TASK_COUNT}, WRITE_TASK_COUNT={WRITE_TASK_COUNT}, "
f"TABLE_COUNT={TABLE_COUNT}, QUEUE_SIZE={QUEUE_SIZE}, MAX_BATCH_SIZE={MAX_BATCH_SIZE}")
conn = get_connection()
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE IF NOT EXISTS test")
conn.execute("CREATE STABLE IF NOT EXISTS test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
"TAGS (location BINARY(64), groupId INT)")
conn.close()
done_queue = Queue()
monitor_process = Process(target=run_monitor_process, args=(done_queue,))
monitor_process.start()
logging.debug(f"monitor task started with pid {monitor_process.pid}")
task_queues: List[Queue] = []
write_processes = []
read_processes = []
# create task queues
for i in range(WRITE_TASK_COUNT):
queue = Queue()
task_queues.append(queue)
# create write processes
for i in range(WRITE_TASK_COUNT):
p = Process(target=run_write_task, args=(i, task_queues[i], done_queue))
p.start()
logging.debug(f"WriteTask-{i} started with pid {p.pid}")
write_processes.append(p)
# create read processes
for i in range(READ_TASK_COUNT):
queues = assign_queues(i, task_queues)
p = Process(target=run_read_task, args=(i, queues, infinity))
p.start()
logging.debug(f"ReadTask-{i} started with pid {p.pid}")
read_processes.append(p)
try:
monitor_process.join()
for p in read_processes:
p.join()
for p in write_processes:
p.join()
time.sleep(1)
return
except KeyboardInterrupt:
monitor_process.terminate()
[p.terminate() for p in read_processes]
[p.terminate() for p in write_processes]
[q.close() for q in task_queues]
def assign_queues(read_task_id, task_queues):
"""
Compute target queues for a specific read task.
"""
ratio = WRITE_TASK_COUNT / READ_TASK_COUNT
from_index = math.floor(read_task_id * ratio)
end_index = math.ceil((read_task_id + 1) * ratio)
return task_queues[from_index:end_index]
if __name__ == '__main__':
multiprocessing.set_start_method('spawn')
main(False)
# ANCHOR_END: main
#! encoding = utf-8
import taos
LOCATIONS = ['California.SanFrancisco', 'California.LosAngles', 'California.SanDiego', 'California.SanJose',
'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
'California.SantaClara', 'California.Cupertino']
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1'
USE_DATABASE_SQL = 'use {}'
DROP_TABLE_SQL = 'drop table if exists meters'
DROP_DATABASE_SQL = 'drop database if exists {}'
CREATE_STABLE_SQL = 'create stable meters (ts timestamp, current float, voltage int, phase float) tags ' \
'(location binary(64), groupId int)'
CREATE_TABLE_SQL = 'create table if not exists {} using meters tags (\'{}\', {})'
def create_database_and_tables(host, port, user, password, db, table_count):
tags_tables = _init_tags_table_names(table_count=table_count)
conn = taos.connect(host=host, port=port, user=user, password=password)
conn.execute(DROP_DATABASE_SQL.format(db))
conn.execute(CREATE_DATABASE_SQL.format(db))
conn.execute(USE_DATABASE_SQL.format(db))
conn.execute(DROP_TABLE_SQL)
conn.execute(CREATE_STABLE_SQL)
for tags in tags_tables:
location, group_id = _get_location_and_group(tags)
tables = tags_tables[tags]
for table_name in tables:
conn.execute(CREATE_TABLE_SQL.format(table_name, location, group_id))
conn.close()
def clean(host, port, user, password, db):
conn = taos.connect(host=host, port=port, user=user, password=password)
conn.execute(DROP_DATABASE_SQL.format(db))
conn.close()
def _init_tags_table_names(table_count):
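# Build a mapping from '{location}_{groupId}' to the list of table names using those tags,
# spreading table_count tables over the predefined locations with group ids 1-10.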
tags_table_names = {}
group_id = 0
for i in range(table_count):
table_name = 'd{}'.format(i)
location_idx = i % len(LOCATIONS)
location = LOCATIONS[location_idx]
if location_idx == 0:
group_id += 1
if group_id > 10:
group_id -= 10
key = _tag_table_mapping_key(location=location, group_id=group_id)
if key not in tags_table_names:
tags_table_names[key] = []
tags_table_names[key].append(table_name)
return tags_table_names
def _tag_table_mapping_key(location, group_id):
return '{}_{}'.format(location, group_id)
def _get_location_and_group(key):
fields = key.split('_')
return fields[0], fields[1]
#! encoding = utf-8
import json
import logging
import time
from concurrent.futures import ThreadPoolExecutor, Future
from json import JSONDecodeError
from typing import Callable
import taos
from kafka import KafkaConsumer
from kafka.consumer.fetcher import ConsumerRecord
import kafka_example_common as common
class Consumer(object):
DEFAULT_CONFIGS = {
'kafka_brokers': 'localhost:9092', # kafka broker
'kafka_topic': 'tdengine_kafka_practices',
'kafka_group_id': 'taos',
'taos_host': 'localhost', # TDengine host
'taos_port': 6030, # TDengine port
'taos_user': 'root', # TDengine user name
'taos_password': 'taosdata', # TDengine password
'taos_database': 'power', # TDengine database
'message_type': 'json', # message format, 'json' or 'line'
'clean_after_testing': False, # if drop database after testing
'max_poll': 1000, # poll size for batch mode
'workers': 10, # thread count for multi-threading
'testing': False
}
INSERT_SQL_HEADER = "insert into "
INSERT_PART_SQL = '{} values (\'{}\', {}, {}, {})'
def __init__(self, **configs):
self.config = self.DEFAULT_CONFIGS
self.config.update(configs)
self.consumer = None
if not self.config.get('testing'):
self.consumer = KafkaConsumer(
self.config.get('kafka_topic'),
bootstrap_servers=self.config.get('kafka_brokers'),
group_id=self.config.get('kafka_group_id'),
)
self.conns = taos.connect(
host=self.config.get('taos_host'),
port=self.config.get('taos_port'),
user=self.config.get('taos_user'),
password=self.config.get('taos_password'),
db=self.config.get('taos_database'),
)
if self.config.get('workers') > 1:
self.pool = ThreadPoolExecutor(max_workers=self.config.get('workers'))
self.tasks = []
# tags-to-tables mapping; key: '{location}_{groupId}', value: list of table names (see kafka_example_common)
def consume(self):
"""
Consume data from Kafka and process it. Based on `message_type`, the corresponding
processing function (`_line_to_taos` or `_json_to_taos`) is used.
:return:
"""
self.conns.execute(common.USE_DATABASE_SQL.format(self.config.get('taos_database')))
try:
if self.config.get('message_type') == 'line': # line
self._run(self._line_to_taos)
if self.config.get('message_type') == 'json': # json
self._run(self._json_to_taos)
except KeyboardInterrupt:
logging.warning("## caught keyboard interrupt, stopping")
finally:
self.stop()
def stop(self):
"""
stop consuming
:return:
"""
# close consumer
if self.consumer is not None:
self.consumer.commit()
self.consumer.close()
# multi thread
if self.config.get('workers') > 1:
if self.pool is not None:
self.pool.shutdown()
for task in self.tasks:
while not task.done():
time.sleep(0.01)
# clean data
if self.config.get('clean_after_testing'):
self.conns.execute(common.DROP_TABLE_SQL)
self.conns.execute(common.DROP_DATABASE_SQL.format(self.config.get('taos_database')))
# close taos
if self.conns is not None:
self.conns.close()
def _run(self, f):
"""
run in batch consuming mode
:param f:
:return:
"""
i = 0 # just for test.
while True:
messages = self.consumer.poll(timeout_ms=100, max_records=self.config.get('max_poll'))
if messages:
if self.config.get('workers') > 1:
self.pool.submit(f, messages.values())
else:
f(list(messages.values()))
if not messages:
i += 1 # just for test.
time.sleep(0.1)
if i > 3: # just for test.
logging.warning('## test over.') # just for test.
return # just for test.
def _json_to_taos(self, messages):
"""
convert a batch of json data to sql, and insert into TDengine
:param messages:
:return:
"""
sql = self._build_sql_from_json(messages=messages)
self.conns.execute(sql=sql)
def _line_to_taos(self, messages):
"""
convert a batch of line-format messages to SQL and insert them into TDengine
:param messages:
:return:
"""
lines = []
for partition_messages in messages:
for message in partition_messages:
lines.append(message.value.decode())
sql = self.INSERT_SQL_HEADER + ' '.join(lines)
self.conns.execute(sql=sql)
def _build_single_sql_from_json(self, msg_value):
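# Convert one JSON message into an INSERT fragment of the form: <table_name> values ('<ts>', <current>, <voltage>, <phase>).
# Returns an empty string if the message cannot be decoded.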
try:
data = json.loads(msg_value)
except JSONDecodeError as e:
logging.error('## decode message [%s] error ', msg_value, e)
return ''
# location = data.get('location')
# group_id = data.get('groupId')
ts = data.get('ts')
current = data.get('current')
voltage = data.get('voltage')
phase = data.get('phase')
table_name = data.get('table_name')
return self.INSERT_PART_SQL.format(table_name, ts, current, voltage, phase)
def _build_sql_from_json(self, messages):
sql_list = []
for partition_messages in messages:
for message in partition_messages:
sql_list.append(self._build_single_sql_from_json(message.value))
return self.INSERT_SQL_HEADER + ' '.join(sql_list)
def test_json_to_taos(consumer: Consumer):
records = [
[
ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
value=json.dumps({'table_name': 'd0',
'ts': '2022-12-06 15:13:38.643',
'current': 3.41,
'voltage': 105,
'phase': 0.02027, }),
partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
value=json.dumps({'table_name': 'd1',
'ts': '2022-12-06 15:13:39.643',
'current': 3.41,
'voltage': 102,
'phase': 0.02027, }),
partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
]
]
consumer._json_to_taos(messages=records)
def test_line_to_taos(consumer: Consumer):
records = [
[
ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
value="d0 values('2023-01-01 00:00:00.001', 3.49, 109, 0.02737)".encode('utf-8'),
partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
ConsumerRecord(checksum=None, headers=None, offset=1, key=None,
value="d1 values('2023-01-01 00:00:00.002', 6.19, 112, 0.09171)".encode('utf-8'),
partition=1, topic='test', serialized_key_size=None, serialized_header_size=None,
serialized_value_size=None, timestamp=time.time(), timestamp_type=None),
]
]
consumer._line_to_taos(messages=records)
def consume(kafka_brokers, kafka_topic, kafka_group_id, taos_host, taos_port, taos_user,
taos_password, taos_database, message_type, max_poll, workers):
c = Consumer(kafka_brokers=kafka_brokers, kafka_topic=kafka_topic, kafka_group_id=kafka_group_id,
taos_host=taos_host, taos_port=taos_port, taos_user=taos_user, taos_password=taos_password,
taos_database=taos_database, message_type=message_type, max_poll=max_poll, workers=workers)
c.consume()
if __name__ == '__main__':
consumer = Consumer(testing=True)
common.create_database_and_tables(host='localhost', port=6030, user='root', password='taosdata', db='py_kafka_test',
table_count=10)
consumer.conns.execute(common.USE_DATABASE_SQL.format('py_kafka_test'))
test_json_to_taos(consumer)
test_line_to_taos(consumer)
common.clean(host='localhost', port=6030, user='root', password='taosdata', db='py_kafka_test')
#! encoding=utf-8
import argparse
import logging
import multiprocessing
import time
from multiprocessing import pool
import kafka_example_common as common
import kafka_example_consumer as consumer
import kafka_example_producer as producer
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('-kafka-broker', type=str, default='localhost:9092',
help='kafka broker host. default is `localhost:9092`')
parser.add_argument('-kafka-topic', type=str, default='tdengine-kafka-practices',
help='kafka topic. default is `tdengine-kafka-practices`')
parser.add_argument('-kafka-group', type=str, default='kafka_practices',
help='kafka consumer group. default is `kafka_practices`')
parser.add_argument('-taos-host', type=str, default='localhost',
help='TDengine host. default is `localhost`')
parser.add_argument('-taos-port', type=int, default=6030, help='TDengine port. default is 6030')
parser.add_argument('-taos-user', type=str, default='root', help='TDengine username, default is `root`')
parser.add_argument('-taos-password', type=str, default='taosdata', help='TDengine password, default is `taosdata`')
parser.add_argument('-taos-db', type=str, default='tdengine_kafka_practices',
help='TDengine db name, default is `tdengine_kafka_practices`')
parser.add_argument('-table-count', type=int, default=100, help='TDengine sub-table count, default is 100')
parser.add_argument('-table-items', type=int, default=1000, help='items in per sub-tables, default is 1000')
parser.add_argument('-message-type', type=str, default='line',
help='kafka message type. `line` or `json`. default is `line`')
parser.add_argument('-max-poll', type=int, default=1000, help='max poll for kafka consumer')
parser.add_argument('-threads', type=int, default=10, help='thread count for deal message')
parser.add_argument('-processes', type=int, default=1, help='process count')
args = parser.parse_args()
total = args.table_count * args.table_items
logging.warning("## start to prepare testing data...")
prepare_data_start = time.time()
producer.produce_total(100, args.kafka_broker, args.kafka_topic, args.message_type, total, args.table_count)
prepare_data_end = time.time()
logging.warning("## prepare testing data finished! spend-[%s]", prepare_data_end - prepare_data_start)
logging.warning("## start to create database and tables ...")
create_db_start = time.time()
# create database and table
common.create_database_and_tables(host=args.taos_host, port=args.taos_port, user=args.taos_user,
password=args.taos_password, db=args.taos_db, table_count=args.table_count)
create_db_end = time.time()
logging.warning("## create database and tables finished! spend [%s]", create_db_end - create_db_start)
processes = args.processes
logging.warning("## start to consume data and insert into TDengine...")
consume_start = time.time()
if processes > 1: # multiprocess
multiprocessing.set_start_method("spawn")
pool = pool.Pool(processes)
consume_start = time.time()
for _ in range(processes):
pool.apply_async(func=consumer.consume, args=(
args.kafka_broker, args.kafka_topic, args.kafka_group, args.taos_host, args.taos_port, args.taos_user,
args.taos_password, args.taos_db, args.message_type, args.max_poll, args.threads))
pool.close()
pool.join()
else:
consume_start = time.time()
consumer.consume(kafka_brokers=args.kafka_broker, kafka_topic=args.kafka_topic, kafka_group_id=args.kafka_group,
taos_host=args.taos_host, taos_port=args.taos_port, taos_user=args.taos_user,
taos_password=args.taos_password, taos_database=args.taos_db, message_type=args.message_type,
max_poll=args.max_poll, workers=args.threads)
consume_end = time.time()
logging.warning("## consume data and insert into TDengine over! spend-[%s]", consume_end - consume_start)
# print report
logging.warning(
"\n#######################\n"
" Prepare data \n"
"#######################\n"
"# data_type # %s \n"
"# total # %s \n"
"# spend # %s s\n"
"#######################\n"
" Create database \n"
"#######################\n"
"# stable # 1 \n"
"# sub-table # 100 \n"
"# spend # %s s \n"
"#######################\n"
" Consume \n"
"#######################\n"
"# data_type # %s \n"
"# threads # %s \n"
"# processes # %s \n"
"# total_count # %s \n"
"# spend # %s s\n"
"# per_second # %s \n"
"#######################\n",
args.message_type, total, prepare_data_end - prepare_data_start, create_db_end - create_db_start,
args.message_type, args.threads, processes, total, consume_end - consume_start,
total / (consume_end - consume_start))
#! encoding = utf-8
import json
import random
import threading
from concurrent.futures import ThreadPoolExecutor, Future
from datetime import datetime
from kafka import KafkaProducer
locations = ['California.SanFrancisco', 'California.LosAngles', 'California.SanDiego', 'California.SanJose',
'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
'California.SantaClara', 'California.Cupertino']
producers: list[KafkaProducer] = []
lock = threading.Lock()
start = 1640966400
def produce_total(workers, broker, topic, message_type, total, table_count):
if len(producers) == 0:
lock.acquire()
if len(producers) == 0:
_init_kafka_producers(broker=broker, count=10)
lock.release()
pool = ThreadPoolExecutor(max_workers=workers)
futures = []
for _ in range(0, workers):
futures.append(pool.submit(_produce_total, topic, message_type, int(total / workers), table_count))
pool.shutdown()
for f in futures:
f.result()
_close_kafka_producers()
def _produce_total(topic, message_type, total, table_count):
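# Send `total` randomly generated records to Kafka, using a producer picked at random from the shared pool.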
producer = _get_kafka_producer()
for _ in range(total):
message = _get_fake_date(message_type=message_type, table_count=table_count)
producer.send(topic=topic, value=message.encode(encoding='utf-8'))
def _init_kafka_producers(broker, count):
for _ in range(count):
p = KafkaProducer(bootstrap_servers=broker, batch_size=64 * 1024, linger_ms=300, acks=0)
producers.append(p)
def _close_kafka_producers():
for p in producers:
p.close()
def _get_kafka_producer():
return producers[random.randint(0, len(producers) - 1)]
def _get_fake_date(table_count, message_type='json'):
if message_type == 'json':
return _get_json_message(table_count=table_count)
if message_type == 'line':
return _get_line_message(table_count=table_count)
return ''
def _get_json_message(table_count):
return json.dumps({
'ts': _get_timestamp(),
'current': random.randint(0, 1000) / 100,
'voltage': random.randint(105, 115),
'phase': random.randint(0, 32000) / 100000,
'location': random.choice(locations),
'groupId': random.randint(1, 10),
'table_name': _random_table_name(table_count)
})
def _get_line_message(table_count):
return "{} values('{}', {}, {}, {})".format(
_random_table_name(table_count), # table
_get_timestamp(), # ts
random.randint(0, 1000) / 100, # current
random.randint(105, 115), # voltage
random.randint(0, 32000) / 100000, # phase
)
def _random_table_name(table_count):
return 'd{}'.format(random.randint(0, table_count - 1))
def _get_timestamp():
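# Produce a strictly increasing, millisecond-precision timestamp; the lock protects the shared counter across producer threads.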
global start
lock.acquire(blocking=True)
start += 0.001
lock.release()
return datetime.fromtimestamp(start).strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
import time
class MockDataSource:
samples = [
"8.8,119,0.32,California.LosAngeles,0",
"10.7,116,0.34,California.SanDiego,1",
"9.9,111,0.33,California.SanJose,2",
"8.9,113,0.329,California.Campbell,3",
"9.4,118,0.141,California.SanFrancisco,4"
]
def __init__(self, tb_name_prefix, table_count, infinity=True):
self.table_name_prefix = tb_name_prefix + "_"
self.table_count = table_count
self.max_rows = 10000000
self.current_ts = round(time.time() * 1000) - self.max_rows * 100
# [(tableId, tableName, values),]
self.data = self._init_data()
self.infinity = infinity
def _init_data(self):
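# Assign one sample value string to each table; the table name and timestamp are prepended later in _iter_data().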
lines = self.samples * (self.table_count // 5 + 1)
data = []
for i in range(self.table_count):
table_name = self.table_name_prefix + str(i)
data.append((i, table_name, lines[i])) # tableId, row
return data
def __iter__(self):
self.row = 0
if not self.infinity:
return iter(self._iter_data())
else:
return self
def __next__(self):
"""
Return the next 1000 rows for each table.
return: [(tableId, [row, ...]), ...]
"""
return self._iter_data()
def _iter_data(self):
ts = []
for _ in range(1000):
self.current_ts += 100
ts.append(str(self.current_ts))
# add timestamp to each row
# [(tableId, ["tableName,ts,current,voltage,phase,location,groupId"])]
result = []
for table_id, table_name, values in self.data:
rows = [table_name + ',' + t + ',' + values for t in ts]
result.append((table_id, rows))
return result
if __name__ == '__main__':
datasource = MockDataSource('t', 10, False)
for data in datasource:
print(data)
...@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection): ...@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection):
# The generated SQL is: # The generated SQL is:
# INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) # INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
# d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) # d1002 USING meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
# d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) # d1003 USING meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
# d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000) # d1004 USING meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
def get_sql(): def get_sql():
global lines global lines
......
import logging
import taos
class SQLWriter:
log = logging.getLogger("SQLWriter")
def __init__(self, get_connection_func):
self._tb_values = {}
self._tb_tags = {}
self._conn = get_connection_func()
self._max_sql_length = self.get_max_sql_length()
self._conn.execute("create database if not exists test")
self._conn.execute("USE test")
def get_max_sql_length(self):
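# Read maxSQLLength from the server variables; fall back to 1 MB if the variable is not reported.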
rows = self._conn.query("SHOW variables").fetch_all()
for r in rows:
name = r[0]
if name == "maxSQLLength":
return int(r[1])
return 1024 * 1024
def process_lines(self, lines: [str]):
"""
:param lines: list of strings, each in the format: tbName,ts,current,voltage,phase,location,groupId
"""
for line in lines:
ps = line.split(",")
table_name = ps[0]
value = '(' + ",".join(ps[1:-2]) + ') '
if table_name in self._tb_values:
self._tb_values[table_name] += value
else:
self._tb_values[table_name] = value
if table_name not in self._tb_tags:
location = ps[-2]
group_id = ps[-1]
tag_value = f"('{location}',{group_id})"
self._tb_tags[table_name] = tag_value
self.flush()
def flush(self):
"""
Assemble INSERT statement and execute it.
When the sql length grows close to MAX_SQL_LENGTH, the sql will be executed immediately, and a new INSERT statement will be created.
In case of "Table does not exit" exception, tables in the sql will be created and the sql will be re-executed.
"""
sql = "INSERT INTO "
sql_len = len(sql)
buf = []
for tb_name, values in self._tb_values.items():
q = tb_name + " VALUES " + values
if sql_len + len(q) >= self._max_sql_length:
sql += " ".join(buf)
self.execute_sql(sql)
sql = "INSERT INTO "
sql_len = len(sql)
buf = []
buf.append(q)
sql_len += len(q)
sql += " ".join(buf)
self.create_tables()
self.execute_sql(sql)
self._tb_values.clear()
def execute_sql(self, sql):
try:
self._conn.execute(sql)
except taos.Error as e:
error_code = e.errno & 0xffff
# Table does not exist
if error_code == 9731:
self.create_tables()
else:
self.log.error("Execute SQL: %s", sql)
raise e
except BaseException as baseException:
self.log.error("Execute SQL: %s", sql)
raise baseException
def create_tables(self):
sql = "CREATE TABLE "
for tb in self._tb_values.keys():
tag_values = self._tb_tags[tb]
sql += "IF NOT EXISTS " + tb + " USING meters TAGS " + tag_values + " "
try:
self._conn.execute(sql)
except BaseException as e:
self.log.error("Execute SQL: %s", sql)
raise e
def close(self):
if self._conn:
self._conn.close()
if __name__ == '__main__':
def get_connection_func():
conn = taos.connect()
return conn
writer = SQLWriter(get_connection_func=get_connection_func)
writer.execute_sql(
"create stable if not exists meters (ts timestamp, current float, voltage int, phase float) "
"tags (location binary(64), groupId int)")
writer.execute_sql(
"INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) "
"VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32)")
from taos.tmq import Consumer
import taos


def init_tmq_env(db, topic):
    conn = taos.connect()
    conn.execute("drop topic if exists {}".format(topic))
    conn.execute("drop database if exists {}".format(db))
    conn.execute("create database if not exists {}".format(db))
    conn.select_db(db)
    conn.execute(
        "create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))")
    conn.execute("create table if not exists tb1 using stb1 tags(1, 't1')")
    conn.execute("create table if not exists tb2 using stb1 tags(2, 't2')")
    conn.execute("create table if not exists tb3 using stb1 tags(3, 't3')")
    conn.execute("create topic if not exists {} as select ts, c1, c2, c3 from stb1".format(topic))
    conn.execute("insert into tb1 values (now, 1, 1.0, 'tmq test')")
    conn.execute("insert into tb2 values (now, 2, 2.0, 'tmq test')")
    conn.execute("insert into tb3 values (now, 3, 3.0, 'tmq test')")


def cleanup(db, topic):
    conn = taos.connect()
    conn.execute("drop topic if exists {}".format(topic))
    conn.execute("drop database if exists {}".format(db))


if __name__ == '__main__':
    init_tmq_env("tmq_test", "tmq_test_topic")  # init env
    consumer = Consumer(
        {
            "group.id": "tg2",
            "td.connect.user": "root",
            "td.connect.pass": "taosdata",
            "enable.auto.commit": "true",
        }
    )
    consumer.subscribe(["tmq_test_topic"])

    try:
        while True:
            res = consumer.poll(1)
            if not res:
                break
            err = res.error()
            if err is not None:
                raise err
            val = res.value()

            for block in val:
                print(block.fetchall())
    finally:
        consumer.unsubscribe()
        consumer.close()
        cleanup("tmq_test", "tmq_test_topic")
#!/usr/bin/python3
from taosws import Consumer

conf = {
    "td.connect.websocket.scheme": "ws",
    "group.id": "0",
}
consumer = Consumer(conf)

consumer.subscribe(["test"])

while True:
    message = consumer.poll(timeout=1.0)
    if message:
        id = message.vgroup()
        topic = message.topic()
        database = message.database()

        for block in message:
            nrows = block.nrows()
            ncols = block.ncols()
            for row in block:
                print(row)
            values = block.fetchall()
            print(nrows, ncols)
        # consumer.commit(message)
    else:
        break

consumer.close()
---
title: TDengine Cloud 文档
sidebar_label: 文档首页
slug: /
---

TDengine Cloud 是全托管的时序数据处理云服务平台。它是基于开源的时序数据库 TDengine 而开发的。除高性能的时序数据库之外,它还具有缓存、订阅和流计算等系统功能,而且提供了便利而又安全的数据分享、以及众多的企业级服务功能。它可以让物联网、工业互联网、金融、IT 运维监控等领域企业在时序数据的管理上大幅降低人力成本和运营成本。

同时客户可以放心使用无处不在的第三方工具,比如 Prometheus、Telegraf、Grafana 和 MQTT 消息服务器等。TDengine Cloud 原生支持 Python、Java、Go、Rust 和 Node.js 等连接器,开发者可以选择自己习惯的语言来开发。通过支持 SQL 以及无模式写入方式,TDengine Cloud 能够满足所有开发者的需求。TDengine Cloud 还提供了额外的特殊功能来进行时序数据的分析,使数据的分析和可视化变得极其简单。

下面是 TDengine Cloud 的文档结构:

1. [产品简介](./intro) 概述 TDengine Cloud 的特点、能力和竞争优势。
2. [基本概念](./concept) 主要介绍 TDengine 如何有效利用时间序列数据的特点来提高计算性能,同时提高存储效率。
3. [数据写入](./data-in) 主要介绍 TDengine Cloud 提供的多种往 TDengine 实例写入数据的方式。在数据源部分,您可以方便地从边缘云或者主机上面的 TDengine 把数据写入云上的任何实例。
4. [数据输出](./data-out) 主要介绍 TDengine Cloud 提供的极简的访问您的时序数据的方式,通过这些方式,您可以方便地利用 TDengine 实例的数据来开发您的数据分析和可视化应用。
5. [可视化](./visual) 主要介绍您如何使用 TDengine Cloud 上面存储的时序数据进行可视化开发,比如您可以监控和可视化您的 TDengine Cloud 上面的实例和数据库状态。
6. [数据订阅](./data-subscription) 这个部分是 TDengine Cloud 的高级功能,类似于异步发布/订阅能力,即发布到一个主题的消息会被该主题的所有订阅者立即收到通知。TDengine Cloud 的数据订阅让您无需部署任何的消息发布/订阅系统,比如 Kafka,就可以创建自己的事件驱动应用。而且我们提供了便捷而安全的方式,让创建主题和分享主题给他人都变得极其容易。
7. [流式计算](./stream) 这个部分也是 TDengine Cloud 的另外一个高级功能。通过这个功能,您无需部署任何流式处理系统,比如 Spark/Flink,就能创建连续查询或时间驱动的流计算。TDengine Cloud 的流式计算可以很方便地让您实时处理进入的流式数据,并把它们很轻松地按照您定义的规则导入到目的表里面。
8. [数据复制](./replication) 是 TDengine Cloud 提供的成熟的数据复制功能。您可以从云端同一个区域的一个实例复制到另外一个实例,也可以从一个云服务商的区域复制到另外一个云服务商的区域。
9. [开发指南](./programming) 是使用 TDengine Cloud 上的时序数据开发 IoT 和大数据应用必须阅读的部分。在这一部分中,我们详细介绍了数据库连接、数据建模、数据抽取、数据查询、流式计算、缓存、数据订阅、用户自定义函数和其他功能。我们还提供了各种编程语言的示例代码。在大多数情况下,您只需简单地复制和粘贴这些示例代码,在您的应用程序中再做一些细微修改就能工作。
10. [TDengine SQL](./taos-sql) 提供了标准 SQL 以及 TDengine 扩展部分的详细介绍,通过这些 SQL 语句能方便地进行时序数据分析。
11. [工具](./tools) 主要介绍 Taos CLI 这个通过终端来执行的命令行工具。通过运行这个工具,可以轻松和便捷地访问您在 TDengine Cloud 的 TDengine 实例的数据库数据并进行各种查询。另外还介绍了 taosBenchmark 这个工具,它可以帮助您用简单的配置比较容易地产生大量的数据,并测试 TDengine Cloud 的性能。

我们非常高兴您选择 TDengine Cloud 作为您的时序数据平台的一部分,期待听到您的反馈和改进意见,并成为您成功路上的一部分。
---
sidebar_label: 产品简介
title: TDengine Cloud 的产品简介
---
TDengine Cloud 是全托管的时序数据处理云服务平台。它是基于开源的时序数据库 TDengine 而开发的。除高性能的时序数据库之外,它还具有缓存、订阅和流计算等系统功能,而且提供了便利而又安全的数据分享、以及众多的企业级服务功能。它可以让物联网、工业互联网、金融、IT 运维监控等领域企业在时序数据的管理上大幅降低人力成本和运营成本。
本章节主要介绍 TDengine Cloud 的主要功能,竞争优势和典型使用案例,让大家对 TDengine Cloud 有个整体的了解。
## 主要功能
TDengine Cloud 的主要功能如下:
1. 数据写入
- 支持[使用 SQL 插入数据](../programming/insert/)
- 支持 [Telegraf](../data-in/telegraf/)
- 支持 [Prometheus](../data-in/prometheus/)
2. 数据输出
- 支持标准 [SQL](../programming/query/),包括子查询。
- 支持通过工具 [taosDump](../data-out/taosdump/) 导出数据。
- 支持输出数据到 [Prometheus](../data-out/prometheus/)
- 支持通过[数据订阅](../data-subscription/)的方式导出数据.
3. 数据浏览器: 可以浏览数据库和各种表,如果您已经登录,还可以直接执行 SQL 查询语句。
4. 可视化
- 支持 [Grafana](../visual/grafana/)
- 支持 Google Data Studio。
- 支持 Grafana Cloud (稍后发布)
5. [数据订阅](../data-subscription/): 用户的应用可以订阅一个数据库,一张表或者一组表。使用的 API 跟 Kafka 基本一致,但是您必须设置具体的过滤条件来定义一个主题,然后您可以和 TDengine Cloud 的其他用户或者用户组分享这个主题。
6. [流计算](../stream/):不仅支持连续查询,TDengine还支持基于事件驱动的流计算,无需安装 Flink/Spark 就可以处理时序数据。
7. 企业版
- 支持每天备份数据。
- 支持复制一个数据库到另外一个区域或者另外一个云。
- 支持对等 VPC。
- 支持 IP 白名单。
8. 工具
- 提供一个交互式 [命令行工具 (CLI)](../tools/cli/) 管理和实时查询。
- 提供一个性能检测工具 [taosBenchmark](../tools/taosbenchmark/) 来测试 TDengine 的性能。
9. 编程
- 提供各种[连接器](../programming/connector/),比如 Java,Python,Go,Rust,Node.js 等编程语言。
- 提供了[REST API](../programming/connector/rest-api/)
更多细节功能,请阅读整个文档。
## 竞争优势
由于 TDengine Cloud 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,还有它云原生的设计使 TDengine Cloud 区别于其他时序数据云服务,具有以下特点:
- **[极简时序数据平台](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**:全托管的云服务,用户无需担心繁琐的部署、优化、扩容、备份、异地容灾等事务,可全心关注核心业务,减少对DBA的要求,大幅节省人力成本。
除高性能、具有水平扩展能力的时序数据库外, TDengine 云服务还提供:
**缓存**:无需部署 Redis,应用就能快速的获得最新数据。
**数据订阅**:无需部署 Kafka, 当系统接收到新的数据时,应用将立即收到通知。
**流式计算**:无需部署 Spark/Flink, 应用就能创建连续查询或时间驱动的流计算。
- **[便捷而且安全的数据共享](https://www.taosdata.com/tdengine/cloud/data-sharing)**:TDengine Cloud 既支持将一个库完全开放,设置读或写的权限;也支持通过数据订阅的方式,将库、超级表、一组或一张表、或聚合处理后的数据分享出去。
**便捷**:如同在线文档一样简单,只需输入对方邮件地址,设置访问权限和访问时长即可实现分享。对方收到邮件,接受邀请后,可即刻访问。
**安全**:访问权限可以控制到一个运行实例、库或订阅的 topic;对于每个授权的用户,对分享的资源,会生成一个访问用的 token;访问可以设置到期时间。
便捷而又安全的时序数据共享,让企业各部门或合作伙伴之间快速洞察业务的运营。
- **[安全可靠的企业级服务](https://tdengine.com/tdengine/high-performance-time-series-database/)**:除强大的时序数据管理、共享功能之外,TDengine Cloud 还提供企业运营必需的以下能力:
**可靠**:提供数据定时备份、恢复,数据从运行实例到私有云、其他公有云或 Region 的实时复制。
**安全**:提供基于角色的访问权限控制、IP 白名单、用户行为审计等功能。
**专业**:提供7*24的专业技术服务,承诺 99.9% 的 Service Level Agreement。
安全、专业、高效可靠的企业级服务,用户无需再为数据管理发愁,可以聚焦自身的核心业务。
- **[分析能力](https://www.taosdata.com/tdengine/easy_data_analytics)**:通过超级表、存储计算分离、分区分片、预计算和其它技术,TDengine 能够高效地浏览、格式化和访问数据。
- **[核心开源](https://www.taosdata.com/tdengine/open_source_time-series_database)**:TDengine 的核心代码包括集群功能全部在开源协议下公开。全球超过 140k 个运行实例,GitHub Star 20k,且拥有一个活跃的开发者社区。
采用 TDengine Cloud,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。表现在几个方面:
1. 由于其超强性能,它能将系统所需的计算资源和存储资源大幅降低
2. 因为支持 SQL,能与众多第三方软件无缝集成,学习迁移成本大幅下降
3. 因为是一款极简的时序数据平台,系统复杂度、研发和运营成本大幅降低
## 技术生态
在整个时序大数据平台中,TDengine 扮演的角色如下:
<figure>
![TDengine Database 技术生态图](eco_system.webp)
<center><figcaption>图 1. TDengine 技术生态图</figcaption></center>
</figure>
上图中,左侧是各种数据采集或消息队列,包括 OPC-UA、MQTT、Telegraf、也包括 Kafka,他们的数据将被源源不断的写入到 TDengine。右侧则是可视化、BI 工具、组态软件、应用程序。下侧则是 TDengine 自身提供的命令行程序(CLI)以及可视化管理工具。
## 典型适用场景
作为一个高性能、分布式、支持 SQL 的时序数据库(Database),TDengine 的典型适用场景包括但不限于 IoT、工业互联网、车联网、IT 运维、能源、金融证券等领域。需要指出的是,TDengine 是针对时序数据场景设计的专用数据库和专用大数据处理工具,因其充分利用了时序大数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。下面本文将对适用场景做更多详细的分析。
label: 基本概念
---
sidebar_label: 基本概念
title: 数据模型和基本概念
description: TDengine 的数据模型和基本概念
---
为了便于解释基本概念,便于撰写示例程序,整个 TDengine 文档以智能电表作为典型时序数据场景。假设每个智能电表采集电流、电压、相位三个量,有多个智能电表,每个电表有位置 Location 和分组 Group ID 的静态属性. 其采集的数据类似如下的表格:
<div className="center-table">
<table>
<thead>
<tr>
<th rowSpan="2">Device ID</th>
<th rowSpan="2">Timestamp</th>
<th colSpan="3">Collected Metrics</th>
<th colSpan="2">Tags</th>
</tr>
<tr>
<th>current</th>
<th>voltage</th>
<th>phase</th>
<th>location</th>
<th>groupid</th>
</tr>
</thead>
<tbody>
<tr>
<td>d1001</td>
<td>1538548685000</td>
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548684000</td>
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1003</td>
<td>1538548686500</td>
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>California.LosAngeles</td>
<td>3</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548685500</td>
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548695000</td>
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548696600</td>
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548696650</td>
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548696800</td>
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
</tbody>
</table>
<a name="#model_table1">表 1. 智能电表数据示例</a>
</div>
每一条记录都有设备 ID、时间戳、采集的物理量(如上表中的 `current`、`voltage` 和 `phase`)以及每个设备相关的静态标签(`location` 和 `groupid`)。每个设备是受外界的触发,或按照设定的周期采集数据。采集的数据点是时序的,是一个数据流。
## 采集量(Metric)
采集量是指传感器、设备或其他类型采集点采集的物理量,比如电流、电压、温度、压力、GPS 位置等,是随时间变化的,数据类型可以是整型、浮点型、布尔型,也可是字符串。随着时间的推移,存储的采集量的数据量越来越大。智能电表示例中的电流、电压、相位就是采集量。
## 标签(Label/Tag)
标签是指传感器、设备或其他类型采集点的静态属性,不是随时间变化的,比如设备型号、颜色、设备的所在地等,数据类型可以是任何类型。虽然是静态的,但 TDengine 容许用户修改、删除或增加标签值。与采集量不一样的是,随时间的推移,存储的标签的数据量不会有什么变化。智能电表示例中的 `location` 和 `groupid` 就是标签。
## 数据采集点(Data Collection Point)
数据采集点是指按照预设时间周期或受事件触发采集物理量的硬件或软件。一个数据采集点可以采集一个或多个采集量,**但这些采集量都是同一时刻采集的,具有相同的时间戳**。对于复杂的设备,往往有多个数据采集点,每个数据采集点采集的周期都可能不一样,而且完全独立,不同步。比如对于一台汽车,有数据采集点专门采集 GPS 位置,有数据采集点专门采集发动机状态,有数据采集点专门采集车内的环境,这样一台汽车就有三个数据采集点。智能电表示例中的 d1001、d1002、d1003、d1004 等就是数据采集点。
## 表(Table)
因为采集量一般是结构化数据,同时为降低学习门槛,TDengine 采用传统的关系型数据库模型管理数据。用户需要先创建库,然后创建表,之后才能插入或查询数据。
为充分利用其数据的时序性和其他数据特点,TDengine 采取**一个数据采集点一张表**的策略,要求对每个数据采集点单独建表(比如有一千万个智能电表,就需创建一千万张表,上述表格中的 d1001,d1002,d1003,d1004 都需单独建表),用来存储这个数据采集点所采集的时序数据。这种设计有几大优点:
1. 由于不同数据采集点产生数据的过程完全独立,每个数据采集点的数据源是唯一的,一张表也就只有一个写入者,这样就可采用无锁方式来写,写入速度就能大幅提升。
2. 对于一个数据采集点而言,其产生的数据是按照时间排序的,因此写的操作可用追加的方式实现,进一步大幅提高数据写入速度。
3. 一个数据采集点的数据是以块为单位连续存储的。如果读取一个时间段的数据,它能大幅减少随机读取操作,成数量级的提升读取和查询速度。
4. 一个数据块内部,采用列式存储,对于不同数据类型,采用不同压缩算法,而且由于一个数据采集点的采集量的变化是缓慢的,压缩率更高。
如果采用传统的方式,将多个数据采集点的数据写入一张表,由于网络延时不可控,不同数据采集点的数据到达服务器的时序是无法保证的,写入操作是要有锁保护的,而且一个数据采集点的数据是难以保证连续存储在一起的。**采用一个数据采集点一张表的方式,能最大程度的保证单个数据采集点的插入和查询的性能是最优的。**
TDengine 建议用数据采集点的名字(如上表中的 d1001)来做表名。每个数据采集点可能同时采集多个采集量(如上表中的 `current`、`voltage`、`phase`),每个采集量对应一张表中的一列,数据类型可以是整型、浮点型、字符串等。除此之外,表的第一列必须是时间戳,即数据类型为 Timestamp。对采集量,TDengine 将自动按照时间戳建立索引,但对采集量本身不建任何索引。数据用列式存储方式保存。
对于复杂的设备,比如汽车,它有多个数据采集点,那么就需要为一辆汽车建立多张表。
## 超级表(STable)
由于一个数据采集点一张表,导致表的数量巨增,难以管理,而且应用经常需要做采集点之间的聚合操作,聚合的操作也变得复杂起来。为解决这个问题,TDengine 引入超级表(Super Table,简称为 STable)的概念。
超级表是指某一特定类型的数据采集点的集合。同一类型的数据采集点,其表的结构是完全一样的,但每个表(数据采集点)的静态属性(标签)是不一样的。描述一个超级表(某一特定类型的数据采集点的集合),除需要定义采集量的表结构之外,还需要定义其标签的 Schema,标签的数据类型可以是整数、浮点数、字符串、JSON,标签可以有多个,可以事后增加、删除或修改。如果整个系统有 N 个不同类型的数据采集点,就需要建立 N 个超级表。
在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。智能电表示例中,我们可以创建一个超级表 `meters`.
## 子表(Subtable)
当为某个具体数据采集点创建表时,用户可以使用超级表的定义做模板,同时指定该具体采集点(表)的具体标签值来创建该表。**通过超级表创建的表称之为子表**。正常的表与子表的差异在于:
1. 子表就是表,因此所有正常表的 SQL 操作都可以在子表上执行。
2. 子表在正常表的基础上有扩展,它是带有静态标签的,而且这些标签可以事后增加、删除、修改,而正常的表没有。
3. 子表一定属于一张超级表,但普通表不属于任何超级表
4. 普通表无法转为子表,子表也无法转为普通表。
超级表与基于超级表建立的子表之间的关系表现在:
1. 一张超级表包含有多张子表,这些子表具有相同的采集量 Schema,但带有不同的标签值。
2. 不能通过子表调整数据或标签的模式,对于超级表的数据模式修改立即对所有的子表生效。
3. 超级表只定义一个模板,自身不存储任何数据或标签信息。因此,不能向一个超级表写入数据,只能将数据写入子表中。
查询既可以在表上进行,也可以在超级表上进行。针对超级表的查询,TDengine 将把所有子表中的数据视为一个整体数据集进行处理,会先把满足标签过滤条件的表从超级表中找出来,然后再扫描这些表的时序数据,进行聚合操作,这样需要扫描的数据集会大幅减少,从而显著提高查询的性能。本质上,TDengine 通过对超级表查询的支持,实现了多个同类数据采集点的高效聚合。
TDengine 系统建议给一个数据采集点建表,需要通过超级表建表,而不是建普通表。在智能电表的示例中,我们可以通过超级表 meters 创建子表 d1001、d1002、d1003、d1004 等。
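
下面给出一段示意性的 Python 代码(使用本文档其他示例中也用到的 taos 连接器,并假设可以直接连接到一个 TDengine 服务,数据库名 power 仅为示例),演示如何基于超级表 meters 创建子表,并在超级表上做聚合查询:

```python
import taos

conn = taos.connect()
conn.execute("CREATE DATABASE IF NOT EXISTS power")
conn.select_db("power")

# 每一类数据采集点对应一张超级表(本例为智能电表)
conn.execute(
    "CREATE STABLE IF NOT EXISTS meters "
    "(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
    "TAGS (location BINARY(64), groupid INT)"
)

# 每个数据采集点对应一张子表,以超级表为模板创建,并指定标签值
conn.execute("CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ('California.SanFrancisco', 2)")
conn.execute("INSERT INTO d1001 VALUES ('2018-10-03 14:38:05.000', 10.3, 219, 0.31)")

# 针对超级表的查询会自动聚合所有满足标签过滤条件的子表数据
result = conn.query("SELECT AVG(current), MAX(voltage) FROM meters WHERE groupid = 2")
print(result.fetch_all())

conn.close()
```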
为了更好地理解采集量、标签、超级表与子表的关系,可以参考下面关于智能电表数据模型的示意图。
<figure>
![智能电表数据模型示意图](./supertable.webp)
<center><figcaption>图 1. 智能电表数据模型示意图</figcaption></center>
</figure>
## 库(Database)
库是指一组表的集合。TDengine 容许一个运行实例有多个库,而且每个库可以配置不同的存储策略。不同类型的数据采集点往往具有不同的数据特征,包括数据采集频率的高低,数据保留时间的长短,副本的数目,数据块的大小,是否允许更新数据等等。为了在各种场景下 TDengine 都能最大效率的工作,TDengine 建议将不同数据特征的超级表创建在不同的库里。
一个库里,可以有一到多个超级表,但一个超级表只属于一个库。一个超级表所拥有的子表全部存在一个库里。
## FQDN & Endpoint
FQDN(Fully Qualified Domain Name,完全限定域名)是 Internet 上特定计算机或主机的完整域名。FQDN 由两部分组成:主机名和域名。例如,假设邮件服务器的 FQDN 可能是 mail.tdengine.com。主机名是 mail,主机位于域名 tdengine.com 中。DNS(Domain Name System),负责将 FQDN 翻译成 IP,是互联网应用的寻址方式。对于没有 DNS 的系统,可以通过配置 hosts 文件来解决。
TDengine 集群的每个节点是由 Endpoint 来唯一标识的,Endpoint 是由 FQDN 外加 Port 组成,比如 h1.tdengine.com:6030。这样当 IP 发生变化的时候,我们依然可以使用 FQDN 来动态找到节点,不需要更改集群的任何配置。而且采用 FQDN,便于内网和外网对同一个集群的统一访问。
TDengine 不建议采用直接的 IP 地址访问集群,不利于管理。如果不了解 FQDN 的概念,请参考博文[《一篇文章说清楚 TDengine 的 FQDN》](https://www.taosdata.com/blog/2020/09/11/1824.html)。
---
sidebar_label: Prometheus
title: Prometheus
description: 使用 Prometheus 访问 TDengine
---
Prometheus 是一款流行的开源监控告警系统。Prometheus 于 2016 年加入了 Cloud Native Computing Foundation(云原生计算基金会,简称 CNCF),成为继 Kubernetes 之后的第二个托管项目,该项目拥有非常活跃的开发人员和用户社区。
Prometheus 提供了 `remote_write``remote_read` 接口来利用其它数据库产品作为它的存储引擎。为了让 Prometheus 生态圈的用户能够利用 TDengine 的高效写入和查询,TDengine 也提供了对这两个接口的支持。
通过适当的配置, Prometheus 的数据可以通过 `remote_write` 接口存储到 TDengine 中,也可以通过 `remote_read` 接口来查询存储在 TDengine 中的数据,充分利用 TDengine 对时序数据的高效存储查询性能和集群处理能力。
## 前置条件
登录到 TDengine Cloud,在左边的菜单点击“数据浏览器”,然后再点击“数据库”标签旁边的“+”按钮,添加一个名称是“prometheus_data”、使用默认参数的数据库。然后执行 `show databases` SQL 确认数据库确实被成功创建出来。
## 安装 Prometheus
假设您使用的是 amd64 架构的 Linux 操作系统:
1. 下载
```
wget https://github.com/prometheus/prometheus/releases/download/v2.37.0/prometheus-2.37.0.linux-amd64.tar.gz
```
2. 解压和重命名
```
tar xvfz prometheus-*.tar.gz && mv prometheus-2.37.0.linux-amd64 prometheus
```
3. 改变目录为 prometheus
```
cd prometheus
```
然后 Prometheus 就会被安装到当前目录。想了解更多 Prometheus 安装选项,请参考[官方文档](https://prometheus.io/docs/prometheus/latest/installation/)。
## 配置 Prometheus
可以通过编辑 Prometheus 配置文件 `prometheus.yml` 来设置 Prometheus(如果您完全按照上面的步骤执行,您可以在当前目录找到 prometheus.yml 文件)。
```yaml
remote_write:
- url: "<cloud_url>/prometheus/v1/remote_write/prometheus_data?token=<cloud_token>"
remote_read:
- url: "<cloud_url>/prometheus/v1/remote_read/prometheus_data?token=<cloud_token>"
remote_timeout: 10s
read_recent: true
```
<!-- exclude -->
您可以使用真实的 TDengine Cloud 的 URL 和令牌来替换上面的 `<cloud_url>` 和 `<cloud_token>`。可以通过访问 [TDengine Cloud](https://cloud.taosdata.com) 来获取真实的值。
<!-- exclude-end -->
配置完成后,Prometheus 会从自己的 HTTP 指标端点收集数据并存储到 TDengine Cloud 里面。
## 启动 Prometheus
```
./prometheus --config.file prometheus.yml
```
之后 Prometheus 应该已经启动好。同时也启动了一个 Web 服务器<http://localhost:9090>。如果您想从浏览器访问这个 Web 服务器, 可以根据您的网络环境修改 `localhost` 为正确的主机名,FQDN 或者 IP 地址。
## 验证远程写入
登录 TDengine Cloud,然后点击左边导航栏的“数据浏览器”。您就会看见由 Prometheus 收集的指标数据。
![TDengine prometheus remote_write result](prometheus_data.webp)
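
除了在数据浏览器中查看,也可以用一小段 Python 脚本通过 REST 接口做自动化验证(以下仅为示意代码,假设已把 TDengine Cloud 的 URL 和令牌保存为环境变量 TDENGINE_CLOUD_URL 和 TDENGINE_CLOUD_TOKEN):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]

# 列出 remote_write 在 prometheus_data 数据库中自动创建的超级表
resp = requests.post(
    f"{url}/rest/sql/prometheus_data?token={token}",
    data="SHOW STABLES",
)
resp.raise_for_status()
print(resp.json())
```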
:::note
- TDengine 会根据一定规则自动为写入的数据生成唯一的子表名。
:::
---
sidebar_label: Telegraf
title: Telegraf 写入
description: 使用 Telegraf 向 TDengine 写入数据
---
Telegraf 是一款十分流行的指标采集开源软件。在数据采集和平台监控系统中,Telegraf 可以采集多种组件的运行信息,而不需要自己手写脚本定时采集,降低数据获取的难度。
只需要将 Telegraf 的输出配置增加指向 taosAdapter 对应的 url 并修改若干配置项,即可将 Telegraf 的数据写入到 TDengine 中。将 Telegraf 的数据存储到 TDengine 中,可以充分利用 TDengine 对时序数据的高效存储查询性能和集群处理能力。
## 前置条件
要将 Telegraf 数据写入 TDengine Cloud,需要首先手动创建一个数据库。登录到 TDengine Cloud,在左边的菜单点击“数据浏览器”,然后再点击“数据库”标签旁边的“+”按钮,添加一个名称是“telegraf”、使用默认参数的数据库。
## 安装 Telegraf
假设您使用的是 Ubuntu 操作系统:
```bash
{{#include docs/examples/thirdparty/install-telegraf.sh:null:nrc}}
```
安装结束以后,Telegraf 服务应该已经启动。请先停止它:
```bash
sudo systemctl stop telegraf
```
## 配置环境变量
在您的终端命令行里面执行下面的命令来保存 TDengine Cloud 的令牌和URL为环境变量:
```bash
export TDENGINE_CLOUD_URL="<url>"
export TDENGINE_CLOUD_TOKEN="<token>"
```
<!-- exclude -->
您可以使用真实的 TDengine Cloud 的URL和令牌来替换上面的`<url>``<token>`。可以通过访问[TDengine Cloud](https://cloud.taosdata.com)来获取真实的值。
<!-- exclude-end -->
然后运行下面的命令来生成 telegraf.conf 文件。
```bash
{{#include docs/examples/thirdparty/gen-telegraf-conf.sh:null:nrc}}
```
编辑 “outputs.http” 部分。
```toml
{{#include docs/examples/thirdparty/telegraf-conf.toml:null:nrc}}
```
配置完成后,Telegraf 会开始收集 CPU 和内存的数据,并发送到 TDengine 的数据库“telegraf”。“telegraf”数据库必须先通过 TDengine Cloud 的数据浏览器创建。
## 启动 Telegraf
使用新生成的 telegraf.conf 文件启动 Telegraf。
```bash
telegraf --config telegraf.conf
```
## 验证
- 通过下面命令检查数据库 "telegraf" 是否被创建出来:
```sql
show databases;
```
![TDengine show telegraf databases](./telegraf-show-databases.webp)
检查超级表 cpu 和 mem 是否被创建出来:
```sql
show telegraf.stables;
```
![TDengine Cloud show telegraf stables](./telegraf-show-stables.webp)
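
也可以用一小段 Python 脚本通过 REST 接口确认数据在持续写入(以下仅为示意代码,沿用前面配置的 TDENGINE_CLOUD_URL 和 TDENGINE_CLOUD_TOKEN 环境变量):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]


def row_count(table: str) -> int:
    """查询 Telegraf 已写入指定超级表的记录数。"""
    resp = requests.post(
        f"{url}/rest/sql/telegraf?token={token}",
        data=f"SELECT COUNT(*) FROM {table}",
    )
    resp.raise_for_status()
    return resp.json()["data"][0][0]


print("cpu rows:", row_count("cpu"))
print("mem rows:", row_count("mem"))
```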
:::note
- TDengine 接收 influxdb 格式数据默认生成的子表名是根据规则生成的唯一 ID 值。
用户如需指定生成的表名,可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定。通过控制输入数据格式,即可利用 TDengine 这个功能指定生成的表名。
举例如下:配置 smlChildTableName=tname,插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000,则创建的表名为 cpu1。如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略。详见 [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)。
:::
---
sidebar_label: InfluxDB 行协议
title: Schemaless - InfluxDB 行协议
description: 通过 Schemaless 行协议写入数据
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
这个部分我们会介绍如何通过无模式(Schemaless)InfluxDB 行协议的 REST 接口往 TDengine Cloud 写入数据。
## 配置
在您的终端命令行运行下面的命令来设置 TDengine Cloud 的令牌和URL为环境变量:
<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
```bash
export TDENGINE_CLOUD_TOKEN="<token>"
export TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="cmd" label="CMD">
```bash
set TDENGINE_CLOUD_TOKEN="<token>"
set TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="powershell" label="Powershell">
```powershell
$env:TDENGINE_CLOUD_TOKEN="<token>"
$env:TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
</Tabs>
## 插入
您可以使用任何支持 HTTP 协议的客户端通过访问 RESTful 的接口地址 `<cloud_url>/influxdb/v1/write` 往 TDengine 里面写入兼容 InfluxDB 的数据。访问地址如下:
```text
/influxdb/v1/write?db=<db_name>&token=<cloud_token>
```
支持 InfluxDB 查询参数如下:
- `db` 指定 TDengine 使用的数据库名
- `precision` TDengine 使用的时间精度
- ns - 纳秒
- u - 微秒
- ms - 毫秒
- s - 秒
- m - 分
- h - 小时
## 写入样例
```bash
curl --request POST "$TDENGINE_CLOUD_URL/influxdb/v1/write?db=<db_name>&token=$TDENGINE_CLOUD_TOKEN&precision=ns" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577846800001000001"
```
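
除了 curl,也可以在 Python 里用 requests 库发送同样的写入请求(以下仅为示意代码,`<db_name>` 需替换成事先创建好的数据库名,并假设已设置前面提到的环境变量):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]
db = "<db_name>"  # 替换成事先创建好的数据库名

# 一行 InfluxDB 行协议数据:measurement 对应超级表,host 为标签
line = "measurement,host=host1 field1=2i,field2=2.0 1577846800001000001"

resp = requests.post(
    f"{url}/influxdb/v1/write",
    params={"db": db, "token": token, "precision": "ns"},
    data=line,
)
resp.raise_for_status()
print(resp.status_code)
```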
## 使用 SQL 查询样例
- `measurement` 是超级表名。
- 您可以像这样通过标签过滤数据:`where host="host1"`
```bash
curl -L -d "select * from <db_name>.measurement where host=\"host1\"" $TDENGINE_CLOUD_URL/rest/sql/test?token=$TDENGINE_CLOUD_TOKEN
```
---
sidebar_label: OpenTSDB JSON 协议
title: Schemaless - OpenTSDB JSON 协议
description: 写入使用 OpenTSDB JSON 协议的数据
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
这个部分我们会介绍如何通过无模式(Schemaless)OpenTSDB JSON 协议的 REST 接口往 TDengine Cloud 写入数据。
## 配置
在您的终端命令行运行下面的命令来设置 TDengine Cloud 的令牌和URL为环境变量:
<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
```bash
export TDENGINE_CLOUD_TOKEN="<token>"
export TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="cmd" label="CMD">
```bash
set TDENGINE_CLOUD_TOKEN="<token>"
set TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="powershell" label="Powershell">
```powershell
$env:TDENGINE_CLOUD_TOKEN="<token>"
$env:TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
</Tabs>
## 插入
您可以使用任何支持 HTTP 协议的客户端通过访问 RESTful 的接口地址 `<cloud_url>/opentsdb/v1/put` 往 TDengine 里面写入兼容 OpenTSDB 的数据。访问地址如下:
```text
/opentsdb/v1/put/json/<db>?token=<cloud_token>
```
### 写入样例
```bash
curl --request POST "$TDENGINE_CLOUD_URL/opentsdb/v1/put/json/<db_name>?token=$TDENGINE_CLOUD_TOKEN" --data-binary "{\"metric\":\"meter_current\",\"timestamp\":1646846400,\"value\":10.3,\"tags\":{\"groupid\":2,\"location\":\"Beijing\",\"id\":\"d1001\"}}"
```
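
同样的写入也可以在 Python 里用 requests 库完成(以下仅为示意代码,`<db_name>` 需替换成实际数据库名,并假设已设置前面提到的环境变量):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]
db = "<db_name>"  # 替换成实际数据库名

# 一条 OpenTSDB JSON 格式数据:metric 对应超级表,tags 对应标签
payload = {
    "metric": "meter_current",
    "timestamp": 1646846400,
    "value": 10.3,
    "tags": {"groupid": 2, "location": "Beijing", "id": "d1001"},
}

resp = requests.post(
    f"{url}/opentsdb/v1/put/json/{db}",
    params={"token": token},
    json=payload,
)
resp.raise_for_status()
print(resp.status_code)
```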
## 使用 SQL 查询样例
- `meter_current` 是超级表名。
- 您可以像这样通过标签过滤数据:`where groupid=2`.
```bash
curl -L -d "select * from <db_name>.meter_current where groupid=2" $TDENGINE_CLOUD_URL/rest/sql/test?token=$TDENGINE_CLOUD_TOKEN
```
---
sidebar_label: OpenTSDB Telnet 协议
title: Schemaless - OpenTSDB Telnet 协议
description: 写入使用 OpenTSDB Telnet 协议的数据
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
这个部分我们会介绍如何通过无模式(Schemaless)OpenTSDB Telnet 协议的 REST 接口往 TDengine Cloud 写入数据。
## 配置
在您的终端命令行运行下面的命令来设置 TDengine Cloud 的令牌和URL为环境变量:
<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
```bash
export TDENGINE_CLOUD_TOKEN="<token>"
export TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="cmd" label="CMD">
```bash
set TDENGINE_CLOUD_TOKEN="<token>"
set TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
<TabItem value="powershell" label="Powershell">
```powershell
$env:TDENGINE_CLOUD_TOKEN="<token>"
$env:TDENGINE_CLOUD_URL="<url>"
```
</TabItem>
</Tabs>
## 插入
您可以使用任何支持 HTTP 协议的客户端通过访问 RESTful 的接口地址 `<cloud_url>/opentsdb/v1/put` 往 TDengine 里面写入兼容 OpenTSDB 的数据。访问地址如下:
```text
/opentsdb/v1/put/telnet/<db>?token=<cloud_token>
```
### 写入样例
```bash
curl --request POST "$TDENGINE_CLOUD_URL/opentsdb/v1/put/telnet/<db_name>?token=$TDENGINE_CLOUD_TOKEN" --data-binary "sys 1479496100 1.3E0 host=web01 interface=eth0"
```
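
这个写入请求同样可以用 Python 的 requests 库发送(仅为示意代码,`<db_name>` 需替换成实际数据库名,并假设已设置前面提到的环境变量):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]
db = "<db_name>"  # 替换成实际数据库名

# OpenTSDB Telnet 行格式:<metric> <timestamp> <value> <tagk=tagv> ...
line = "sys 1479496100 1.3E0 host=web01 interface=eth0"

resp = requests.post(
    f"{url}/opentsdb/v1/put/telnet/{db}",
    params={"token": token},
    data=line,
)
resp.raise_for_status()
print(resp.status_code)
```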
## 使用 SQL 查询样例
- `sys` 是超级表名。
- 您可以像这样通过标签过滤数据:`where host="web01"`.
```bash
curl -L -d "select * from <db_name>.sys where host=\"web01\"" $TDENGINE_CLOUD_URL/rest/sql/test?token=$TDENGINE_CLOUD_TOKEN
```
---
sidebar_label: 数据写入
title: 往 TDengine 云服务里面写入数据
description: 有多种方式可以往 TDengine 里面写入数据。
---
这章主要介绍目前有多种方式往 TDengine 里面写入数据,比如用户可以直接使用 TDengine SQL 往 TDengine Cloud 里面写入数据,也可以通过编程的方式使用 TDengine 提供的[连接器(Connector)](../programming/connector)往TDengine里面写入数据。TDengine 还提供压力测试工具 [taosBenchmark](../tools/taosbenchmark)往TDengine里面写入数据,另外 TDengine 企业版还提供工具 taosX 可以从一个 TDengine Cloud 实例同步数据到另外一个。
此外,通过第三方工具 Telegraf 和 Prometheus,也可以往 TDengine 写入数据。
:::note
由于权限的限制,用户必须首先在云服务的数据浏览器里面创建数据库,然后才能往这个数据库里面写入数据。所有的写入方式都需要先完成这一步。
:::
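
在数据浏览器中创建好数据库之后,可以用一小段 Python 代码通过 REST 接口确认数据库已经存在(仅为示意代码,假设已把 TDengine Cloud 的 URL 和令牌保存为环境变量 TDENGINE_CLOUD_URL 和 TDENGINE_CLOUD_TOKEN):

```python
import os
import requests

url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]

# 列出当前实例中已有的数据库,确认要写入的数据库已经创建
resp = requests.post(
    f"{url}/rest/sql?token={token}",
    data="SHOW DATABASES",
)
resp.raise_for_status()
print([row[0] for row in resp.json()["data"]])
```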