Commit 24a7992f authored by sangshuduo

docs: merge with docs-cloud

......@@ -21,10 +21,10 @@ This is the documentation structure for TDengine Cloud.
7. The [TDengine SQL](./taos-sql) section provides comprehensive information about both standard SQL as well as TDengine's extensions for easy time series analysis.
8. In [Connector](./connector), you can choose between Python, Java, Go, Rust and Node.js, to easily connect to TDengine to ingest and query data in your preferred development language.
8. In [Connector](./programming/connector), you can choose between Python, Java, Go, Rust and Node.js, to easily connect to TDengine to ingest and query data in your preferred development language.
9. The [Tools](./tools) section introduces the Taos CLI which gives you shell access to easily perform ad hoc queries on your instances and databases. Additionally, taosBenchmark is introduced. It is a tool that can help you generate large amounts of data very easily with simple configurations and test the performance of TDengine Cloud.
10. Finally, in the [FAQ](./faq) section, we try to preemptively answer questions that we anticipate. Of course, we will continue to add to this section all the time.
<!-- 10. Finally, in the [FAQ](./faq) section, we try to preemptively answer questions that we anticipate. Of course, we will continue to add to this section all the time. -->
We are very excited that you have chosen TDengine Cloud to be part of your time series platform and look forward to hearing your feedback and ways in which we can improve and be a small part of your success.
......@@ -5,40 +5,77 @@ title: Introduction to TDengine Cloud Service
TDengine Cloud is the fast, elastic, serverless, and cost-effective time-series data processing service based on the popular open-source time-series database, TDengine. With TDengine Cloud you get the highly optimized, purpose-built IoT time-series platform for which TDengine is known.
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
This section introduces the major features, competitive advantages and typical use-cases to help you get a high level overview of TDengine cloud service.
## Major Features
The major features are listed below:
1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) among others.
2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), and [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
4. Support for [user defined functions](/develop/udf).
5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support for [stream processing](../taos-sql).
7. Support for [data subscription](../taos-sql) with the capability to specify filter conditions.
8. High availability is supported by replication including multi-cloud replication.
9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provides many ways to [get data in](../data-in) and [get data out](../data-out).
11. Provides a Dashboard to monitor your running instances of TDengine.
12. Provides [connectors](../connector/) for [Java](../connector/java), [Python](../connector/python), [Go](../connector/go), [Rust](../connector/rust), and [Node.js](../connector/node).
13. Provides a [REST API](/reference/rest-api/).
14. Supports seamless integration with [Grafana](../visual/grafana) for visualization.
15. Supports seamless integration with Google Data Studio.
1. Data In
- Supports [using SQL to insert](../data-in/insert-data).
- Supports [Telegraf](../data-in/telegraf/).
- Supports [Prometheus](../data-in/prometheus/).
2. Data Out
- Supports standard [SQL](../data-out/query-data/), including nested query.
- Supports exporting data via the [taosDump](../data-out/taosdump/) tool.
- Supports writing data to [Prometheus](../data-out/prometheus/).
- Supports exporting data via [data subscription](../tmq/).
3. Data Explorer: browse through databases and even run SQL queries once you log in.
4. Visualization:
- Supports [Grafana](../visual/grafana/)
- Supports Google Data Studio (to be released soon)
- Supports Grafana Cloud (to be released soon)
5. [Stream Processing](../stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
6. [Data Subscription](../tmq/): Applications can subscribe to a table or a set of tables. The API is similar to Kafka's, but you can also specify filter conditions.
7. Enterprise
- Supports daily data backup.
- Supports replicating a database to another region or cloud.
- Supports VPC peering.
- Supports IP allowlists for security.
8. Tools
- Provides an interactive [Command-line Interface (CLI)](../tools/cli/) for management and ad-hoc queries.
- Provides a tool [taosBenchmark](../tools/taosbenchmark/) for testing the performance of TDengine.
9. Programming
- Provides [connectors](../programming/connector/) for Java, Python, Go, Rust, Node.js and other programming languages.
- Provides a [REST API](../programming/connector/rest-api/).
For more details on features, please read through the entire documentation.
## Competitive Advantages
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine Cloud differentiates itself from other time series platforms, with the following advantages.
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/) and its cloud-native design, TDengine Cloud differentiates itself from other time-series data cloud services, with the following advantages.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine Cloud is a fast, elastic, serverless purpose built platform for IoT time-series data. It is the only time-series platform to solve the high cardinality issue to support billions of data collection points while outperforming other time-series platforms for data ingestion, querying and data compression.
- **Worry Free**: TDengine Cloud is a fast, elastic, serverless, purpose-built cloud platform for time-series data. It provides worry-free operations with a fully managed cloud service. You pay as you go.
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds. It is Enterprise ready with backup, multi-cloud replication, VPC peering and IP whitelisting.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: It is the only time-series platform to solve the high cardinality issue to support billions of data collection points while outperforming other time-series platforms for data ingestion, querying and data compression.
- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine Cloud provides worry-free operations with a fully managed cloud native solution. For developers, it provides a simple interface, simplified solution and seamless integration with third party tools. For data users, it provides SQL support with powerful time series extensions built for data analytics.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **Enterprise Ready**: It supports backup, multi-cloud/multi-region database replication, VPC peering and IP whitelisting.
With TDengine Cloud, the **total cost of ownership of your time-series data platform can be greatly reduced**.
1. With its built-in caching, stream processing and data subscription, system complexity and operation costs are greatly reduced.
2. With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly.
3. With the elastic, serverless and fully managed service, the operation and maintenance costs are reduced significantly.
## Technical Ecosystem
This is how TDengine is situated in a typical time-series data processing platform:
<figure>
![TDengine Database Technical Ecosystem ](eco_system.webp)
<center><figcaption>Figure 1. TDengine Technical Ecosystem</figcaption></center>
</figure>
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Typical Use Cases
As a high-performance and cloud native time-series database, TDengine's typical use cases include, but are not limited to, IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data.
label: Concepts
\ No newline at end of file
---
title: Concepts
---
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time-series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; 3. Each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
<div className="center-table">
<table>
<thead><tr>
<th>Device ID</th>
<th>Time Stamp</th>
<th colSpan="3">Collected Metrics</th>
<th colSpan="2">Tags</th>
</tr>
<tr>
<th>Device ID</th>
<th>Time Stamp</th>
<th>current</th>
<th>voltage</th>
<th>phase</th>
<th>location</th>
<th>groupId</th>
</tr>
</thead>
<tbody>
<tr>
<td>d1001</td>
<td>1538548685000</td>
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548684000</td>
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1003</td>
<td>1538548686500</td>
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>California.LosAngeles</td>
<td>3</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548685500</td>
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548695000</td>
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548696600</td>
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548696650</td>
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548696800</td>
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
</tbody>
</table>
<a href="#model_table1">Table 1: Smart meter example data</a>
</div>
Each row contains the device ID, time stamp, collected metrics (current, voltage, phase as above), and static tags (location and groupId in Table 1) associated with the devices. Each smart meter generates a row (measurement) in a pre-defined time interval or triggered by an external event. The device produces a sequence of measurements with associated time stamps.
## Metric
Metric refers to a physical quantity, such as current, voltage, temperature, pressure or GPS position, collected by sensors, equipment or other types of data collection devices. Metrics change with time, and their data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics.
## Label/Tag
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. Tags can be of any data type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the smart meters example, `location` and `groupId` are the tags.
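For instance, updating a tag value on an existing table can be done with a statement like the following (a minimal sketch based on the smart meters example; see the TDengine SQL section for the exact syntax of your version):

```sql
-- Change the static location tag of meter d1001 after the fact
ALTER TABLE d1001 SET TAG location = 'California.LosAngeles';
```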
## Data Collection Point
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and share the same timestamp. Complex equipment often has multiple data collection points, each with its own sampling rate, fully independent of the others. For example, a car could have one data collection point for GPS position metrics, one for engine status metrics, and one for the environment inside the car, so the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
## Table
Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process it, with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, 10 million tables should be created. For the table above, 4 tables should be created for devices d1001, d1002, d1003, and d1004 to store their collected data. This design has several benefits:
1. Since the metric data from different DCPs is fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. The values of a single metric usually change little over a time range, which allows for a higher compression rate.
If the metric data of multiple DCPs are written into a single table in the traditional way, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point ensures the best possible performance for inserting into and querying a single data collection point.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
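As a sketch, a per-DCP table for the smart meters example could look like the following (the Subtable section below shows the recommended way to create such tables from a super table template):

```sql
-- One table per data collection point; the first column must be the timestamp
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```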
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
## Super Table (STable)
The design of one table for one data collection point requires a huge number of tables, which is difficult to manage. Furthermore, applications often need to perform aggregation across DCPs, and such aggregation can become complicated. To support aggregation over multiple tables efficiently, TDengine introduces the STable (Super Table) concept.
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
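A sketch of such a super table for the smart meters example, with the three metrics as columns and the two static attributes as tags (the column types are illustrative assumptions):

```sql
-- One template for all smart meters: metric schema plus tag schema
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
TAGS (location BINARY(64), groupId INT);
```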
## Subtable
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP. **The table created using a STable as the template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
1. A subtable is a table: all SQL commands applied to a regular table can be applied to a subtable.
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
4. A regular table cannot be converted into a subtable, and vice versa.
The relationship between a STable and the subtables created based on this STable is as follows:
1. A STable contains multiple subtables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine treats the data in all its subtables as a whole data set. TDengine first finds the subtables that meet the tag filter conditions, then scans the time-series data of only these subtables to perform the aggregation, which reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
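For example, an aggregation across all meters in one group can be expressed directly against the super table (a sketch using the `meters` example above):

```sql
-- TDengine first prunes subtables by the tag filter, then scans only those
SELECT AVG(current), MAX(voltage) FROM meters WHERE groupId = 2;
```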
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under the super table `meters`.
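Creating such a subtable from the `meters` template could look like this (a sketch; only the tag values identifying the specific meter are supplied):

```sql
-- d1001 inherits the meters schema; it stores data and carries its own tag values
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
```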
To better understand the data model using metrics, tags, super tables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. ![Meters Data Model Diagram](./supertable.webp)
## Database
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For example, the number of days to keep data, the number of replicas, the data block size, whether data updates are allowed, and other configurable parameters would be determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
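As an illustrative sketch, a database for this kind of data might be created as follows (the name and parameter values are assumptions; the options available depend on your TDengine version and plan):

```sql
-- Keep data for one year; other options take their defaults
CREATE DATABASE power KEEP 365;
USE power;
```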
In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
## Instance, URL, Token
An instance is a running cluster of TDengine nodes with one or more databases. An instance cannot span multiple regions or clouds, but a single account (organization) can have multiple instances. An account owner may invite multiple users into the organization to share the data, and each user can be configured with different access rights.
TDengine Cloud provides a unique URL for each instance and uses tokens to authenticate access. A token is generated by TDengine Cloud for each user and each instance. Tokens have a limited duration and can be reset by the user for each instance at any time for security purposes.
......@@ -4,4 +4,6 @@ title: Get Started
description: A quick guide for how to access TDengine cloud service
---
It's very convenient to access TDengine cloud service, just open your browser, connect to [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, activate your account, then you will get a free TDengine cloud service. Enjoy!
\ No newline at end of file
It's very convenient to access the TDengine cloud service: just open your browser, connect to the [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, and activate your account. You will then get a free TDengine cloud service.
TDengine Cloud runs on AWS, Azure and Google Cloud. You can choose the free plan, standard plan or enterprise plan. Enjoy!
---
sidebar_label: Data Replication
title: Data Replication
description: Briefly introduce how to replicate data among TDengine cloud services
---
TDengine provides full support for data replication. You can replicate data from a TDengine cloud service to a local TDengine instance, from a local TDengine instance to a TDengine cloud service, or from one cloud service to another, regardless of which cloud or region the two services reside in.
\ No newline at end of file
---
sidebar_label: SQL
title: Insert Data Using SQL
description: This section describes how to insert data using TDengine SQL
description: Insert data using TDengine SQL
---
# Insert Data
......@@ -42,10 +42,15 @@ For more details about `INSERT` please refer to [INSERT](https://docs.tdengine.c
## Connector Examples
:::note
Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
:::
<Tabs>
<TabItem value="python" label="Python">
In this example, we use `execute` method to execute SQL and get affected rows. The variable `conn` is an instance of class `taosrest.TaosRestConnection` we just created at [Connect Tutorial](../../develop/connect/python#connect).
In this example, we use the `execute` method to execute SQL and get the number of affected rows. The variable `conn` is an instance of class `taosrest.TaosRestConnection` that we created in the [Connect Tutorial](../../programming/connect/python#connect).
```python
{{#include docs/examples/python/develop_tutorial.py:insert}}
......
---
sidebar_label: Prometheus
title: Prometheus for TDengine Cloud
description: This topic introduces how to write data into TDengine from Prometheus.
description: Write data into TDengine from Prometheus.
---
Prometheus is a widely used open-source monitoring and alerting system. It joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes and has a very active developer and user community.
Prometheus provides `remote_write` and `remote_read` interfaces to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also provides support for these two interfaces.
Prometheus provides a `remote_write` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing, TDengine also supports this interface: with proper configuration, Prometheus data can be stored in TDengine via `remote_write`, taking full advantage of TDengine's efficient storage performance and clustering capabilities for time-series data.
Prometheus data can be stored in TDengine via the `remote_write` interface with proper configuration. Data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient storage query performance and clustering capabilities for time-series data.
## Prerequisites
In your TDengine cloud instance, click "Explorer" on the left panel, then click the "+" beside "Databases" to create a new database named "prometheus_data". Then execute `show databases` to confirm the database has been created successfully.
## Install Prometheus
......@@ -28,7 +30,7 @@ Supposed that you use Linux system with architecture amd64:
Then Prometheus is installed in current directory. For more installation options, please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/).
## Configure
## Configure Prometheus
Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (if you followed the previous steps, you can find `prometheus.yml` in the current directory).
......@@ -62,15 +64,3 @@ Log in TDengine Cloud, click "Explorer" on the left navigation bar. You will see
![TDengine prometheus remote_write result](prometheus_data.webp)
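You can also inspect the arrived data with SQL in the Explorer. A minimal sketch (the super table names are generated from Prometheus metric names, so yours will differ):

```sql
SHOW prometheus_data.STABLES;
```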
## Verify Remote Read
Let's retrieve some metrics from TDengine Cloud via the Prometheus web server. Browse to <http://localhost:9090/graph> and use the "Graph" tab.
Enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:
```
rate(prometheus_tsdb_head_chunks_created_total[1m])
```
![TDengine prometheus remote_read](prometheus_read.webp)
---
sidebar_label: Telegraf
title: Telegraf for TDengine Cloud
description: This section explains how to write data into TDengine from telegraf.
description: Write data into TDengine from telegraf.
---
Telegraf is an open-source metrics collection agent. Telegraf can collect operational information from various components on a regular schedule without requiring you to write any collection scripts, reducing the difficulty of data acquisition.
Telegraf's data can be written to TDengine by simply pointing Telegraf's output configuration to the URL of taosAdapter and modifying several configuration items. Once in TDengine, Telegraf data can take advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites
Before Telegraf can write data into the TDengine cloud service, you need to first create a database manually. Log in TDengine Cloud, click "Explorer" on the left navigation bar, then click the "+" button beside "Databases" to add a database named "telegraf" using all default parameters.
## Install Telegraf
Assuming you use an Ubuntu system:
......@@ -63,9 +67,7 @@ telegraf --config telegraf.conf
## Verify
Log in TDengine Cloud, click "Explorer" on the left navigation bar.
Check whether the database "telegraf" exists by executing:
- Check whether the database "telegraf" exists by executing:
```sql
show databases;
......
---
sidebar_label: Data In
title: Write Data Into TDengine Cloud Service
description: A number of ways for writing data into TDengine.
---
This chapter introduces a number of ways to write data into TDengine. Users can use TDengine SQL to write data into the TDengine cloud service, or use the [connectors](../programming/connector) provided by TDengine to write data programmatically. TDengine provides [taosBenchmark](../tools/taosbenchmark), a performance testing tool that writes into TDengine, and taosX, a tool provided by the TDengine enterprise edition to sync data from one TDengine cloud service to another. Furthermore, third-party tools like Telegraf and Prometheus can also be used to write data into TDengine.
:::note
Because of privilege limitations on the cloud, you need to first create a database in the data explorer on the cloud console before writing data into the TDengine cloud service. This limitation applies to every way of writing data.
:::
---
sidebar_label: Data In
title: Write Data Into TDengine Cloud Service
description: A number of ways for writing data into TDengine.
---
This chapter introduces a number of ways to write data into TDengine. Users can use third-party tools, like Telegraf and Prometheus, to write data into the TDengine cloud service, or use [taosBenchmark](../tools/taosbenchmark), a tool provided by TDengine, to write data into the TDengine cloud service. Users can also use taosX, another tool provided by TDengine, to sync data from one TDengine cloud service to another.
\ No newline at end of file
---
sidebar_label: SQL
title: Query Data Using SQL
description: This topic introduces how to read data from TDengine using basic SQL.
description: Read data from TDengine using basic SQL.
---
# Query Data
......@@ -123,6 +123,11 @@ For more details please refer to [Aggregate by Window](https://docs.tdengine.com
## Connector Examples
:::note
Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
:::
<Tabs>
<TabItem value="python" label="Python">
......
This diff is collapsed.
---
sidebar_label: taosDump
title: Dump Data Using taosDump
description: Introduces how to dump data from TDengine into files using taosDump
description: Dump data from TDengine into files using taosDump
---
# taosDump
......@@ -18,11 +18,7 @@ Users should not use taosdump to back up raw data, environment settings, hardwar
## Installation
There are two ways to install taosdump:
- Install the taosTools official installer. Please find taosTools from [All download links](https://www.tdengine.com/all-downloads) page and download and install it.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
Please refer to [Install taosTools](https://docs.tdengine.com/cloud/tools/taosdump/#installation).
## Common usage scenarios
......@@ -32,7 +28,7 @@ There are two ways to install taosdump:
2. backup multiple specified databases: use `-D db1,db2,...` parameters;
3. back up some super or normal tables in the specified database: use `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine's own operation, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides `-n` and `-L` parameters for backing up data without using escape characters and "loose" mode, which can reduce the number of backups if table names, column names, tag names do not use escape characters. This can also reduce the backup data time and backup data footprint. If you are unsure about using `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
5. Loose mode backup: taosdump version 1.4.1 onwards provides `-n` and `-L` parameters for backing up data without using escape characters and "loose" mode, which can reduce the number of backups if table names, column names, tag names do not use escape characters. This can also reduce the backup data time and backup data footprint. If you are unsure about using `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escaped characters.
<!-- exclude -->
:::tip
......
---
sidebar_label: Prometheus
title: Prometheus remote read
description: Prometheus remote_read from TDengine cloud server
---
Prometheus is a widely used open-source monitoring and alerting system. It joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes and has a very active developer and user community.
Prometheus provides a `remote_read` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient querying, TDengine also supports this interface, so that data stored in TDengine can be queried via `remote_read`, taking full advantage of TDengine's efficient query performance and clustering capabilities for time-series data.
## Install Prometheus
Please refer to [Install Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus#install-prometheus).
## Configure Prometheus
Please refer to [Configure Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#configure-prometheus).
## Start Prometheus
Please refer to [Start Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#start-prometheus).
## Verify Remote Read
Let's retrieve some metrics from TDengine Cloud via the Prometheus web server. Browse to <http://localhost:9090/graph> and use the "Graph" tab.
Enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:
```
rate(prometheus_tsdb_head_chunks_created_total[1m])
```
![TDengine prometheus remote_read](prometheus_read.webp)
```csharp
{{#include docs/examples/csharp/SubscribeDemo.cs}}
// {{#include docs/examples/csharp/SubscribeDemo.cs}}
```
\ No newline at end of file
```rust
{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
......@@ -4,4 +4,4 @@ title: Get Data Out of TDengine
description: A number of ways getting data out of TDengine.
---
This chapter introduces how to get data out of TDengine cloud service. Besides normal query using SQL, users can use data subscription which is provided by the message queue component inside TDengine to access the data stored in TDengine. `taosdump`, which is a tool provided by TDengine, can be used to dump the data stored in TDengine cloud service into files. `taosX`, which is another tool provided by TDengine, can be used to sync up the data in one TDengine cloud service into another.
\ No newline at end of file
This chapter introduces how to get data out of the TDengine cloud service. Besides normal queries using SQL, users can use [data subscription](../tmq), provided by the message queue component inside TDengine, to access the data stored in TDengine. TDengine provides [connectors](../programming/connector) for application programmers to access the data stored in TDengine. TDengine also provides tools: [taosdump](../tools/taosdump) dumps the data stored in the TDengine cloud service into files, and `taosX` syncs the data in one TDengine cloud service into another. Furthermore, third-party tools like Prometheus can also read data out of TDengine.
---
sidebar_label: Visualization
sidebar_title: Visualization
title: Visualization
description: View TDengine in visual ways.
---
......
---
sidebar_label: Subscription
title: Data Subscription
description: Use data subscription to get data from TDengine.
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
This topic introduces how to read data out of TDengine using data subscription, an advanced feature of TDengine. To access the data in TDengine via data subscription, you need to create a topic, create a consumer, subscribe to the topic, and consume data. In this document we briefly explain these main steps of data subscription.
## Create Topic
A topic can be created on a database, on some selected columns, or on a supertable.
### Topic on Columns
The most common way to create a topic is to create a topic on some specifically selected columns. The syntax is as follows:
```sql
CREATE TOPIC topic_name as subquery;
```
You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
- The schema of topics created in this manner is determined by the subscribed data.
- You cannot modify (`ALTER <table> MODIFY`) or delete (`ALTER <table> DROP`) columns or tags that are used in a subscription or calculation.
- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
For example:
```sql
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
```
### Topic on SuperTable
Syntax:
```sql
CREATE TOPIC topic_name AS STABLE stb_name;
```
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
- The table schema can be modified.
- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
- A different table schema may exist for every data block to be processed.
- The data returned does not include tags.
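For example, using the `meters` super table from the Concepts section (a sketch):

```sql
CREATE TOPIC topic_meters AS STABLE meters;
```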
### Topic on Database
Syntax:
```sql
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
```
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
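For example, to subscribe to everything in the `tmqdb` database used above, including schema changes (a sketch):

```sql
CREATE TOPIC topic_tmqdb WITH META AS DATABASE tmqdb;
```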
## Programming Model
To subscribe to the data from a created topic, the client program needs to follow the programming model described in this section.
1. Create a consumer
To create a consumer, you must use the APIs provided by TDengine connectors. Below is sample code using connectors for different languages.
2. Subscribe to a topic
A single consumer can subscribe to multiple topics.
3. Consume messages
4. Close the consumer
After message consumption is finished, unsubscribe and close the consumer.
## Sample Code
<Tabs defaultValue="Rust" groupId="lang">
<TabItem value="c" label="C">
Will be available soon
</TabItem>
<TabItem value="java" label="Java">
Will be available soon
</TabItem>
<TabItem label="Go" value="Go">
Will be available soon
</TabItem>
<TabItem label="Rust" value="Rust">
```rust
{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
</TabItem>
<TabItem value="Python" label="Python">
Will be available soon
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
Will be available soon
</TabItem>
<TabItem value="C#" label="C#">
Will be available soon
</TabItem>
</Tabs>
## Delete Topic
You can delete topics that are no longer useful. Note that you must unsubscribe all consumers from a topic before deleting it.
```sql
DROP TOPIC topic_name;
```
## Check Status
At any time, you can check the status of existing topics and consumers.
1. Query all existing topics.
```sql
SHOW TOPICS;
```
2. Query the status and subscribed topics of all consumers.
```sql
SHOW CONSUMERS;
```
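3. If your TDengine version supports it, you can also list active subscriptions (an assumption; check the SQL reference for your version).

```sql
SHOW SUBSCRIPTIONS;
```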
......@@ -22,7 +22,6 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subq
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
IGNORE EXPIRED [0 | 1]
}
```
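Based on the syntax above, a continuous aggregation over the `meters` example could be defined like this (a sketch; `avg_vol` is a hypothetical target super table):

```sql
CREATE STREAM avg_vol_s TRIGGER WINDOW_CLOSE INTO avg_vol AS
  SELECT _wstart, AVG(voltage) FROM meters PARTITION BY tbname INTERVAL(1m);
```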
......
---
sidebar_label: Data Replication
title: Data Replication
description: Replicate data between TDengine cloud services
---
TDengine provides full support for data replication. You can replicate data from TDengine Cloud to a private TDengine instance, from a private TDengine instance to TDengine Cloud, or from one cloud platform to another, regardless of which cloud or region the two services reside in.
TDengine also provides database backup for the enterprise plan.
---
sidebar_label: Python
title: Connect with Python Connector
description: Connect to TDengine cloud service using Python connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Install Connector
First, you need to install the `taospy` module version >= `2.3.3`. Run the command below in your terminal.
First, you need to install the `taospy` module version >= `2.6.2`. Run the command below in your terminal.
<Tabs defaultValue="pip">
<TabItem value="pip" label="pip">
......@@ -78,3 +81,7 @@ Copy code bellow to your editor and run it.
```python
{{#include docs/examples/python/develop_tutorial.py:connect}}
```
For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
---
sidebar_label: Java
title: Connect with Java Connector
description: Connect to TDengine cloud service using Java connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Add Dependency
......@@ -13,11 +16,7 @@ import TabItem from '@theme/TabItem';
<TabItem value="maven" label="Maven">
```xml title="pom.xml"
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.39</version>
</dependency>
{{#include docs/examples/java/pom.xml:dep}}
```
</TabItem>
......@@ -25,7 +24,7 @@ import TabItem from '@theme/TabItem';
```groovy title="build.gradle"
dependencies {
implementation 'com.taosdata.jdbc:taos-jdbcdriver:2.0.39'
implementation 'com.taosdata.jdbc:taos-jdbcdriver:3.0.0.0'
}
```
......@@ -67,7 +66,7 @@ Alternatively, you can set environment variable in your IDE's run configurations
:::note
Replace <jdbcURL\> with the real JDBC URL; it will look like: `jdbc:TAOS-RS://example.com?usessl=true&token=xxxx`.
To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Connector" and then select "Java".
To obtain the value of JDBC URL, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Data Insert" on the left menu.
:::
<!-- exclude-end -->
## Connect
......@@ -78,3 +77,6 @@ Code bellow get JDBC URL from environment variables first and then create a `Con
{{#include docs/examples/java/src/main/java/com/taos/example/ConnectCloudExample.java:connect}}
```
The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Go
title: Connect with Go Connector
description: Connect to TDengine cloud service using Go connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Initialize Module
```
......@@ -52,7 +55,7 @@ $env:TDENGINE_GO_DSN="<goDSN>"
<!-- exclude -->
:::note
Replace <goDSN\> with the real value, the format should be `https(<cloud_host>)/?token=<token>`.
To obtain the value of `goDSN`, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Connector" and then select "Go".
To obtain the value of `goDSN`, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Data In" on the left menu.
:::
<!-- exclude-end -->
......@@ -76,3 +79,7 @@ Finally, test the connection:
```
go run main.go
```
The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
---
sidebar_label: Rust
title: Connect with Rust Connector
description: Connect to TDengine cloud service using Rust connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Create Project
```
......@@ -15,7 +18,15 @@ cargo new --bin cloud-example
Add dependency to `Cargo.toml`.
```toml title="Cargo.toml"
{{#include docs/examples/rust/cloud-example/Cargo.toml}}
[package]
name = "cloud-example"
version = "0.1.0"
edition = "2021"
[dependencies]
taos = { version = "*", default-features = false, features = ["ws"] }
tokio = { version = "1", features = ["full"]}
anyhow = "1.0.0"
```
## Config
......@@ -61,5 +72,7 @@ Copy following code to `main.rs`.
{{#include docs/examples/rust/cloud-example/src/main.rs}}
```
Then you can execute `cargo run` to test the connection.
Then you can execute `cargo run` to test the connection. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Node.js
title: Connect with Node.js Connector
description: Connect to TDengine cloud service using Node.JS connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Install Connector
```bash
npm i td2.0-rest-connector
npm install @tdengine/rest
```
## Config
......@@ -55,3 +58,7 @@ To obtain the value of cloud token and URL, please log in [TDengine Cloud](https
```javascript
{{#include docs/examples/node/connect.js}}
```
For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
---
sidebar_label: C#
title: Connect with C# Connector
description: Connect to TDengine cloud service using C# connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Create Project
```bash
dotnet new console -o example
```
## Add C# TDengine Driver class lib
```bash
cd example
dotnet add package TDengine.Connector
```
## Config
Run this command in your terminal to save TDengine cloud token as variables:
<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
```bash
export TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
<TabItem value="cmd" label="CMD">
```bash
set TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
<TabItem value="powershell" label="Powershell">
```powershell
$env:TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
</Tabs>
<!-- exclude -->
:::note
Replace <DSN\> with real TDengine cloud DSN. To obtain the real value, please log in [TDengine Cloud](https://cloud.tdengine.com) and click "Connector" and then select "C#".
:::
<!-- exclude-end -->
## Connect
```C#
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
\ No newline at end of file
---
sidebar_label: REST API
title: REST API
description: Connect to TDengine Cloud Service through RESTful API
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Config
Run this command in your terminal to save the TDengine cloud token and URL as variables:
......
---
sidebar_label: Quick Start
title: Connect to TDengine Cloud Service
description: Quick start of using TDengine connectors to connect to TDengine cloud service
---
This section briefly describes how to connect to TDengine cloud service using the connectors provided by TDengine so that programmers can get started quickly.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
---
title: Data Model
description: Typical data model used in TDengine
---
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
......
---
sidebar_label: Insert
title: Insert Data Into TDengine
description: Programming Guide for Inserting Data into TDengine
---
To quickly start your programming about writing data into TDengine, please refer to [Insert Data](../../data-in/insert-data).
\ No newline at end of file
---
sidebar_label: Query
title: Query Data From TDengine
description: Programming Guide for Querying Data
---
To quickly start your programming about querying data from TDengine, please refer to [Query Data](../../data-out/query-data).
\ No newline at end of file
---
sidebar_label: Python
title: TDengine Python Connector
description: Detailed guide for Python Connector
---
`taospy` is the official Python connector for TDengine. `taospy` wraps the [REST interface](/reference/rest-api) of TDengine. Additionally `taospy` provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
......@@ -78,6 +79,10 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
| Connector version | Important Update | Release date |
| ----------------- | ----------------------------------------- | ------------ |
| 2.6.2 | fix ci script | 2022-08-18 |
| 2.5.2 | fix taos-ws-py python version dependency | 2022-08-12 |
| 2.5.1 | (rest): add timezone option | 2022-08-11 |
| 2.5.0 | add taosws module | 2022-08-10 |
| 2.4.0 | add execute method to TaosRestConnection | 2022-07-18 |
| 2.3.3 | support connect to TDengine Cloud Service | 2022-06-06 |
......
......@@ -3,6 +3,7 @@ toc_max_heading_level: 4
sidebar_position: 2
sidebar_label: Java
title: TDengine Java Connector
description: Detailed guide for Java Connector
---
import Tabs from '@theme/Tabs';
......
---
sidebar_label: Go
title: TDengine Go Connector
description: Detailed guide for Go Connector
---
`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.
......
......@@ -3,6 +3,7 @@ toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
description: Detailed guide for Rust Connector
---
......
---
sidebar_label: Node.js
title: TDengine Node.js Connector
sidebar_label: Node.JS
title: TDengine Node.JS Connector
description: Detailed guide for Node.JS Connector
---
`td2.0-rest-connector` is the official Node.js connector for TDengine. Node.js developers can use it to develop applications that access TDengine instance data. `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST API.
......
---
sidebar_label: C#
title: TDengine C# Connector
description: Detailed guide for C# Connector
---
`TDengine.Connector` is the official C# connector for TDengine. C# developers can develop applications to access TDengine instance data.
The source code for `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
## Installation
### Pre-installation
Install the .NET deployment SDK.
### Add TDengine.Connector through Nuget
```bash
dotnet add package TDengine.Connector
```
## Establishing a connection
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
``` C#
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
## Usage examples
### Basic Insert and Query
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
```C#
{{#include docs/examples/csharp/cloud-example/usage/Program.cs}}
```
### STMT Insert
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
```C#
{{#include docs/examples/csharp/cloud-example/stmt/Program.cs}}
```
## Important Updates
| TDengine.Connector | Description |
| ------------------------- | ---------------------------------------------------------------- |
| 3.0.1 | Supports connecting to TDengine cloud service |
## API Reference
[API Reference](https://docs.taosdata.com/api/connector-csharp/html/860d2ac1-dd52-39c9-e460-0829c4e5a40b.htm)
---
sidebar_label: REST API
title: REST API
description: Detailed guide for REST API
---
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, and unlike the REST APIs of other database engines, TDengine allows SQL statements to be placed in the body of an HTTP POST request to operate the database.
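As a minimal sketch of this style (the endpoint and token below are placeholders and must be replaced with your own instance values), a query is just an HTTP POST whose body is the SQL statement:
```bash
# POST the SQL statement itself as the request body; placeholders must be replaced.
curl -L -d "SELECT SERVER_VERSION()" \
  "https://<cloud_endpoint>/rest/sql?token=<cloud_token>"
```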
......
# Connector
---
sidebar_label: Connector
title: Connector Reference
description: 'Reference guide for connectors'
---
This section is a detailed reference guide of the connectors provided by TDengine.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
......
......@@ -15,7 +15,7 @@ To develop an application to process time-series data using TDengine, we recomme
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](./connector/). For more ways to write data into TDengine, please refer to [Data In](../data-in); for more ways to read data out of TDengine, please refer to [Data Out](../data-out).
If you encounter any problems during the development process, please click ["Submit an issue"](https://github.com/taosdata/TDengine/issues/new/choose) at the bottom of each page and submit it on GitHub right away.
......
---
sidebar_label: Supertable
title: Supertable
description: Operations about Super Tables.
---
## Create a Supertable
......
---
sidebar_label: Insert
title: Insert
description: Insert data into TDengine
---
## Syntax
......
---
sidebar_label: Select
title: Select
description: Query Data from TDengine.
---
## Syntax
......
......@@ -2,6 +2,7 @@
sidebar_label: Functions
title: Functions
toc_max_heading_level: 4
description: TDengine Built-in Functions.
---
## Single Row Functions
......
---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
description: Time-Series Data Specific Queries.
---
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
......
---
sidebar_label: Data Subscription
title: Data Subscription
description: Subscribe Data from TDengine.
---
The information in this document is related to the TDengine data subscription feature.
......
---
sidebar_label: Stream Processing
title: Stream Processing
description: Built-in Stream Processing.
---
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.
......
---
sidebar_label: Operators
title: Operators
description: TDengine Supported Operators
---
## Arithmetic Operators
......
---
sidebar_label: JSON Type
title: JSON Type
description: JSON Data Type
---
......
---
title: Escape Characters
description: How to Use Escape Characters
---
## Escape Characters
......
---
sidebar_label: Limits
title: Limits
description: Naming Limits
---
## Naming Rules
......
---
sidebar_label: Keywords
title: Reserved Keywords
description: Reserved Keywords in TDengine SQL
---
## Keyword List
......
---
sidebar_label: UDF
title: User-Defined Functions (UDF)
description: User Defined Functions
---
You can create user-defined functions and import them into TDengine.
......
---
sidebar_label: Index
title: Using Indices
description: Use Index to Accelerate Query.
---
TDengine supports SMA and FULLTEXT indexing.
......
......@@ -4,14 +4,17 @@ sidebar_label: TDengine CLI
description: Instructions and tips for using the TDengine CLI to connect TDengine Cloud
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
The TDengine command-line interface (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
## Installation
To run TDengine CLI to access TDengine cloud, please install the [TDengine client installation package](https://tdengine.com/assets-download/cloud/TDengine-client-3.0.0.1202209031045-Linux-x64.tar.gz) first.
## Config
......@@ -97,10 +100,10 @@ taos -E $TDENGINE_CLOUD_DSN
## Using TDengine CLI
TDengine CLI will display a welcome message and version information if it successfully connects to the TDengine service. If it fails, TDengine CLI will print an error message. The TDengine CLI prompt is as follows:
```
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Successfully connect to cloud.tdengine.com:8085 in restful mode
......
---
title: taosdump
description: "taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster."
---
## Introduction
taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster.
taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up data records in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for the backup. If you do not specify a directory, taosdump backs up the data to the current directory by default.
If the specified location already contains data files, taosdump prompts the user and exits immediately to avoid overwriting data. This means that the same path can only be used for one backup.
If you see such a prompt, proceed carefully and ensure that you follow best practices and relevant SOPs for data integrity, backup, and data security.
Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
## Installation
To use taosdump, you need to download and install [taosTools](https://tdengine.com/assets-download/cloud/taosTools-2.1.3-Linux-x64.tar.gz). Before installing taosTools, please first download and install the [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).
Decompress the package and install.
```
tar -xzf taosTools-2.1.3-Linux-x64.tar.gz
cd taosTools-2.1.3
sudo ./install-taostools.sh
```
Set environment variable.
```bash
export TDENGINE_CLOUD_DSN="<DSN>"
```
## Common usage scenarios
### taosdump backup data
1. Back up all databases: specify the `-A` or `--all-databases` parameter, as shown in the sketch after this list.
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up some super or normal tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine itself, and taosdump does not back up the log database by default. If you need to back it up, use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without escape characters, in "loose" mode. If table names, column names, and tag names do not use escape characters, this can reduce backup time and backup data footprint. If you are unsure whether the `-n` and `-L` conditions apply, please use the default parameters for "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escape characters.
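A minimal sketch of scenarios 1-4 (the database, table, and path names here are hypothetical; taosdump connects using the `TDENGINE_CLOUD_DSN` environment variable set above):
```bash
taosdump -A -o /data/backup/all               # 1. all databases
taosdump -D db1,db2 -o /data/backup/selected  # 2. listed databases
taosdump db1 stb1 tb1 -o /data/backup/db1     # 3. tables within one database
taosdump -a -A -o /data/backup/with-log       # 4. include the system `log` database
```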
<!-- exclude -->
:::tip
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will only parse the schema.
- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", then try changing the `-B` parameter to a smaller value.
- The export of taosdump does not support resuming from an interruption. Therefore, if the taosdump process terminates unexpectedly, delete all related files that have been exported or generated.
- The import of taosdump supports resuming from an interruption, but when the process resumes, you will receive some "table already exists" messages, which can be ignored.
:::
<!-- exclude-end -->
### taosdump recover data
Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As noted earlier, you should not use the same directory to back up different data sets, nor back up the same data set multiple times in the same path; otherwise, the backup data will be overwritten or duplicated.
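A minimal sketch (the backup path is hypothetical; `-B` is optional and only needed if you hit the error described in the tip below):
```bash
taosdump -i /data/backup/db1           # restore from the given backup directory
taosdump -i /data/backup/db1 -B 4096   # retry with a smaller batch size if needed
```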
<!-- exclude -->
:::tip
taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
:::
<!-- exclude-end -->
## Detailed command-line parameter list
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
or: taosdump [OPTION...] --databases db1,db2,...
or: taosdump [OPTION...] --all-databases
or: taosdump [OPTION...] -i inpath
or: taosdump [OPTION...] -o outpath
-h, --host=HOST Server host from which to dump data. Default is
localhost.
-p, --password User password to connect to server. Default is
taosdata.
-P, --port=PORT Port to connect
-u, --user=USER User name used to connect to server. Default is
root.
-c, --config-dir=CONFIG_DIR Configure directory. Default is /etc/taos
-i, --inpath=INPATH Input file path.
-o, --outpath=OUTPATH Output file path.
-r, --resultFile=RESULTFILE DumpOut/In Result file path and name.
-a, --allow-sys Allow to dump system database
-A, --all-databases Dump all databases.
-D, --databases=DATABASES Dump listed databases. Use comma to separate
database names.
-N, --without-property Dump database without its properties.
-s, --schemaonly Only dump table schemas.
-y, --answer-yes Input yes for prompt. It will skip data file
checking!
-d, --avro-codec=snappy Choose an avro codec among null, deflate, snappy,
and lzma.
-S, --start-time=START_TIME Start time to dump. Either epoch or
ISO8601/RFC3339 format is acceptable. ISO8601
format example: 2017-10-01T00:00:00.000+0800 or
2017-10-0100:00:00:000+0800 or '2017-10-01
00:00:00.000+0800'
-E, --end-time=END_TIME End time to dump. Either epoch or ISO8601/RFC3339
format is acceptable. ISO8601 format example:
2017-10-01T00:00:00.000+0800 or
2017-10-0100:00:00.000+0800 or '2017-10-01
00:00:00.000+0800'
-B, --data-batch=DATA_BATCH Number of data per query/insert statement when
backup/restore. Default value is 16384. If you see
'error actual dump .. batch ..' when backup or if
you see 'WAL size exceeds limit' error when
restore, please adjust the value to a smaller one
and try. The workable value is related to the
length of the row and type of table schema.
-I, --inspect inspect avro file content and print on screen
-L, --loose-mode Use loose mode if the table name and column name
use letter and number only. Default is NOT.
-n, --no-escape No escape char '`'. Default is using it.
-T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
8.
-C, --cloud=CLOUD_DSN specify a DSN to access TDengine cloud service
-R, --restful Use RESTful interface to connect TDengine
-t, --timeout=SECONDS The timeout seconds for websocket to interact.
-g, --debug Print debug info.
-?, --help Give this help list
--usage Give a short usage message
-V, --version Print program version
Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
Report bugs to <support@taosdata.com>.
```
# FAQ
bin
obj
cloud-example/connect/bin
cloud-example/connect/obj
cloud-example/usage/bin
cloud-example/usage/obj
cloud-example/stmt/bin
cloud-example/stmt/obj
.vs
*.sln
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30114.105
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "connect", "connect\connect.csproj", "{4006CF0C-17BE-4508-9682-A85298F8C92D}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "usage", "usage\usage.csproj", "{243C420F-FC47-4F21-B81E-83CDE91F2D47}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "stmt", "stmt\stmt.csproj", "{B6907CB6-41CB-4644-AEE1-551456EADE12}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.Build.0 = Debug|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.ActiveCfg = Release|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.Build.0 = Release|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.Build.0 = Debug|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.ActiveCfg = Release|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.Build.0 = Release|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
EndGlobal
using System;
using TDengineWS.Impl;
namespace Cloud.Examples
{
public class ConnectExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
Connect(dsn);
}
public static void Connect(string dsn)
{
// get connect
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
else
{
Console.WriteLine("Establish connect success.");
}
// do something ...
// close connect
LibTaosWS.WSClose(conn);
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
using System;
using TDengineWS.Impl;
using TDengineDriver;
using System.Runtime.InteropServices;
namespace Cloud.Examples
{
public class STMTExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
IntPtr conn = Connect(dsn);
// assume table has been created.
// CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)
string insert = "insert into ? using test.meters tags(?,?) values(?,?,?,?)";
// Init STMT
IntPtr stmt = LibTaosWS.WSStmtInit(conn);
if (stmt != IntPtr.Zero)
{
// Prepare SQL
int code = LibTaosWS.WSStmtPrepare(stmt, insert);
ValidSTMTStep(code, stmt, "WSInit()");
// Bind child table name and tags
TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { WSMultiBind.WSBindBinary(new string[] { "California.LosAngeles" }), WSMultiBind.WSBindInt(new int?[] { 6 }) };
code = LibTaosWS.WSStmtSetTbnameTags(stmt, "test.d1005", tags, 2);
ValidSTMTStep(code, stmt, "WSStmtSetTbnameTags()");
// bind column value
TAOS_MULTI_BIND[] data = new TAOS_MULTI_BIND[4];
data[0] = WSMultiBind.WSBindTimestamp(new long[] { 1538551000000, 1538552000000, 1538553000000, 1538554000000, 1538555000000 });
data[1] = WSMultiBind.WSBindFloat(new float?[] { 10.30000F, 10.30000F, 11.30000F, 10.30000F, 10.80000F });
data[2] = WSMultiBind.WSBindInt(new int?[] { 218, 219, 221, 222, 223 });
data[3] = WSMultiBind.WSBindFloat(new float?[] { 0.28000F, 0.29000F, 0.30000F, 0.31000F, 0.32000F });
code = LibTaosWS.WSStmtBindParamBatch(stmt, data, 4);
ValidSTMTStep(code, stmt, "WSStmtBindParamBatch");
code = LibTaosWS.WSStmtAddBatch(stmt);
ValidSTMTStep(code, stmt, "WSStmtAddBatch");
IntPtr affectRowPtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Int32)));
code = LibTaosWS.WSStmtExecute(stmt, affectRowPtr);
ValidSTMTStep(code, stmt, "WSStmtExecute");
Console.WriteLine("STMT affect rows:{0}", Marshal.ReadInt32(affectRowPtr));
LibTaosWS.WSStmtClose(stmt);
// Free allocated memory
Marshal.FreeHGlobal(affectRowPtr);
WSMultiBind.WSFreeTaosBind(tags);
WSMultiBind.WSFreeTaosBind(data);
}
// close connect
LibTaosWS.WSClose(conn);
}
public static IntPtr Connect(string dsn)
{
// get connect
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
return conn;
}
public static void ValidSTMTStep(int code, IntPtr wsStmt, string method)
{
if (code != 0)
{
throw new Exception($"{method} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
}
else
{
Console.WriteLine("{0} success", method);
}
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
using System;
using TDengineDriver;
using TDengineWS.Impl;
using System.Collections.Generic;
namespace Cloud.Examples
{
public class UsageExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
IntPtr conn = Connect(dsn);
InsertData(conn);
SelectData(conn);
// close connect
LibTaosWS.WSClose(conn);
}
public static IntPtr Connect(string dsn)
{
// get connect
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
return conn;
}
public static void InsertData(IntPtr conn)
{
string createTable = "CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)";
string insertData = "INSERT INTO " +
"test.d1001 USING test.meters TAGS('California.SanFrancisco', 1) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) " +
"test.d1002 USING test.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
"test.d1003 USING test.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) " +
"test.d1004 USING test.meters TAGS('California.LosAngeles', 4) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ";
// create the super table 'meters' under the database named 'test'
IntPtr res = LibTaosWS.WSQuery(conn, createTable);
ValidQueryExecution(res);
// Free the query result every time when used up it.
LibTaosWS.WSFreeResult(res);
// insert data into the table created in previous step.
res = LibTaosWS.WSQuery(conn, insertData);
ValidQueryExecution(res);
// Free the query result every time when used up it.
LibTaosWS.WSFreeResult(res);
}
public static void SelectData(IntPtr conn)
{
string selectTable = "select * from test.meters";
IntPtr res = LibTaosWS.WSQueryTimeout(conn, selectTable, 5000);
ValidQueryExecution(res);
// print meta
List<TDengineMeta> metas = LibTaosWS.WSGetFields(res);
foreach (var meta in metas)
{
Console.Write("{0} {1}({2})\t|", meta.name, meta.TypeName(), meta.size);
}
Console.WriteLine("");
List<object> dataSet = LibTaosWS.WSGetData(res);
for (int i = 0; i < dataSet.Count;)
{
for (int j = 0; j < metas.Count; j++)
{
Console.Write("{0}\t|\t", dataSet[i]);
i++;
}
Console.WriteLine("");
}
Console.WriteLine("");
// Free the query result every time when used up it.
LibTaosWS.WSFreeResult(res);
}
// Check if LibTaosWS.Query() execute correctly.
public static void ValidQueryExecution(IntPtr res)
{
int code = LibTaosWS.WSErrorNo(res);
if (code != 0)
{
throw new Exception($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(res)}, code:{code}");
}
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>