Commit 24a7992f authored by sangshuduo

docs: merge with docs-cloud

@@ -21,10 +21,10 @@ This is the documentation structure for TDengine Cloud.
7. The [TDengine SQL](./taos-sql) section provides comprehensive information about both standard SQL as well as TDengine's extensions for easy time series analysis.
8. In [Connector](./programming/connector), you can choose between Python, Java, Go, Rust and Node.js, to easily connect to TDengine to ingest and query data in your preferred development language.
9. The [Tools](./tools) section introduces the Taos CLI which gives you shell access to easily perform ad hoc queries on your instances and databases. Additionally, taosBenchmark is introduced. It is a tool that can help you generate large amounts of data very easily with simple configurations and test the performance of TDengine Cloud.
<!-- 10. Finally, in the [FAQ](./faq) section, we try to preemptively answer questions that we anticipate. Of course, we will continue to add to this section all the time. -->
We are very excited that you have chosen TDengine Cloud to be part of your time series platform and look forward to hearing your feedback and ways in which we can improve and be a small part of your success.
@@ -5,40 +5,77 @@ title: Introduction to TDengine Cloud Service
TDengine Cloud is a fast, elastic, serverless and cost-effective time-series data processing service based on the popular open-source time-series database, TDengine. With TDengine Cloud you get the highly optimized, purpose-built time-series platform for IoT for which TDengine is known.
This section introduces the major features, competitive advantages and typical use cases to help you get a high-level overview of the TDengine cloud service.
## Major Features
The major features are listed below:
1. Data In
   - Supports [using SQL to insert](../data-in/insert-data).
   - Supports [Telegraf](../data-in/telegraf/).
   - Supports [Prometheus](../data-in/prometheus/).
2. Data Out
   - Supports standard [SQL](../data-out/query-data/), including nested query.
   - Supports exporting data via the tool [taosDump](../data-out/taosdump/).
   - Supports writing data to [Prometheus](../data-out/prometheus/).
   - Supports exporting data via [data subscription](../tmq/).
3. Data Explorer: browse through databases and even run SQL queries once you log in.
4. Visualization:
   - Supports [Grafana](../visual/grafana/).
   - Supports Google Data Studio (to be released soon).
   - Supports Grafana Cloud (to be released soon).
5. [Stream Processing](../stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
6. [Data Subscription](../tmq/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka's, but you can specify filter conditions.
7. Enterprise
   - Supports backing up data every day.
   - Supports replicating a database to another region or cloud.
   - Supports VPC peering.
   - Supports an allowed IP list for security.
8. Tools
   - Provides an interactive [Command-line Interface (CLI)](../tools/cli/) for management and ad-hoc queries.
   - Provides a tool [taosBenchmark](../tools/taosbenchmark/) for testing the performance of TDengine.
9. Programming
   - Provides [connectors](../programming/connector/) for Java, Python, Go, Rust, Node.js and other programming languages.
   - Provides a [REST API](../programming/connector/rest-api/).
For more details on features, please read through the entire documentation.
## Competitive Advantages
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/) and its cloud-native design, TDengine Cloud differentiates itself from other time-series data cloud services, with the following advantages.
- **Worry Free**: TDengine Cloud is a fast, elastic, serverless, purpose-built cloud platform for time-series data. It provides worry-free operations with a fully managed cloud service. You pay as you go.
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: It is the only time-series platform to solve the high cardinality issue to support billions of data collection points while outperforming other time-series platforms for data ingestion, querying and data compression.
- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine Cloud provides worry-free operations with a fully managed cloud native solution. For developers, it provides a simple interface, simplified solution and seamless integration with third party tools. For data users, it provides SQL support with powerful time series extensions built for data analytics.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **Enterprise Ready**: It supports backup, multi-cloud/multi-region database replication, VPC peering and IP whitelisting.
With TDengine Cloud, the **total cost of ownership of your time-series data platform can be greatly reduced**.
1. With its built-in caching, stream processing and data subscription, system complexity and operation costs are greatly reduced.
2. With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly.
3. With the elastic, serverless and fully managed service, the operation and maintenance costs are reduced significantly.
## Technical Ecosystem
This is how TDengine would be situated in a typical time-series data processing platform:
<figure>
![TDengine Database Technical Ecosystem](eco_system.webp)
<center><figcaption>Figure 1. TDengine Technical Ecosystem</figcaption></center>
</figure>
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Typical Use Cases
As a high-performance and cloud-native time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data.
label: Concepts
---
title: Concepts
---
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; 3. Each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
<div className="center-table">
<table>
<thead><tr>
<th>Device ID</th>
<th>Time Stamp</th>
<th colSpan="3">Collected Metrics</th>
<th colSpan="2">Tags</th>
</tr>
<tr>
<th>Device ID</th>
<th>Time Stamp</th>
<th>current</th>
<th>voltage</th>
<th>phase</th>
<th>location</th>
<th>groupId</th>
</tr>
</thead>
<tbody>
<tr>
<td>d1001</td>
<td>1538548685000</td>
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548684000</td>
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1003</td>
<td>1538548686500</td>
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>California.LosAngeles</td>
<td>3</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548685500</td>
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548695000</td>
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548696600</td>
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548696650</td>
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548696800</td>
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
</tbody>
</table>
<a href="#model_table1">Table 1: Smart meter example data</a>
</div>
Each row contains the device ID, time stamp, collected metrics (current, voltage, phase as above), and static tags (location and groupId in Table 1) associated with the devices. Each smart meter generates a row (measurement) in a pre-defined time interval or triggered by an external event. The device produces a sequence of measurements with associated time stamps.
## Metric
Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics.
## Label/Tag
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags.
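Tag values can be changed with ordinary SQL. As a minimal sketch, assuming a subtable `d1001` created from the `meters` super table introduced below:

```sql
-- Update a static tag value on an existing subtable.
-- Only the value changes; the tag schema itself is defined on the super table.
ALTER TABLE d1001 SET TAG location = 'California.SanDiego';
```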
## Data Collection Point
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
## Table
Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process them with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices D1001, D1002, D1003, and D1004 to store the data collected. This design has several benefits:
1. Since the metric data from different DCPs are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate.
If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
## Super Table (STable)
The design of one table for one data collection point will require a huge number of tables, which is difficult to manage. Furthermore, applications often need to take aggregation operations among DCPs, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the STable (Super Table) concept is introduced by TDengine.
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
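As a sketch of what this template looks like in TDengine SQL (the column types and tag sizes are illustrative assumptions based on Table 1):

```sql
-- Super table for the smart meters example: three collected metrics
-- plus two static tags. The first column must be the timestamp.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
TAGS (location BINARY(64), groupId INT);
```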
## Subtable
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that can be applied to a regular table can be applied to a subtable.
2. A subtable is a table with extensions; it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
4. A regular table can not be converted into a subtable, and vice versa.
The relationship between a STable and the subtables created based on this STable is as follows:
1. A STable contains multiple subtables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
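For example, adding a column is done on the super table and becomes visible in every subtable at once. A sketch using the `meters` example, with hypothetical column and tag names:

```sql
-- Add a metric column via the super table; all subtables gain it immediately.
ALTER STABLE meters ADD COLUMN temperature FLOAT;

-- The tag schema is adjusted the same way.
ALTER STABLE meters ADD TAG model BINARY(32);
```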
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation. This reduces the number of data sets to be scanned, which in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under super table meters.
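A sketch of both steps, with tag values taken from Table 1: create a subtable from the template, write a measurement, then aggregate across all subtables of `meters` using a tag filter:

```sql
-- Create subtable d1001 from the meters template, supplying only its tag values.
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);

-- Write one measurement into the subtable.
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);

-- Query the super table: TDengine first filters subtables by tag,
-- then aggregates their time-series data.
SELECT AVG(voltage) FROM meters WHERE groupId = 2;
```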
To better understand the data model using metrics, tags, super tables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. ![Meters Data Model Diagram](./supertable.webp)
## Database
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For example, days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
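A hedged sketch of such per-database configuration; the parameter values are illustrative, not recommendations:

```sql
-- Keep data for 365 days, in 10-day data files, with 3 replicas.
CREATE DATABASE power KEEP 365 DURATION 10 REPLICA 3;
USE power;
```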
In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
## Instance, URL, Token
An instance is a running cluster of TDengine nodes with one or more databases. An instance cannot span multiple regions or multiple clouds, but a single account (organization) can have multiple instances. An account owner may invite multiple users into his/her organization to share the data, and each user can be configured with different access rights.
TDengine Cloud provides a unique URL for each instance and uses tokens to authenticate access. A token is generated by TDengine Cloud for each user and for each instance. The token has a duration and can be reset by the user for each instance at any time for security purposes.
@@ -4,4 +4,6 @@ title: Get Started
description: A quick guide for how to access TDengine cloud service
---
It's very convenient to access the TDengine cloud service: just open your browser, connect to [TDengine Cloud Service Portal](https://cloud.tdengine.com), create an account with a valid email address, and activate your account. Then you will get a free TDengine cloud service.
TDengine Cloud runs on AWS, Azure and Google Cloud. You can choose the free plan, standard plan or enterprise plan. Enjoy!
---
sidebar_label: Data Replication
title: Data Replication
description: Briefly introduce how to replicate data among TDengine cloud services
---
TDengine provides full support for data replication. You can replicate data from a TDengine cloud service to a local TDengine, from a local TDengine to a TDengine cloud service, or from one cloud service to another, regardless of which cloud or region the two services reside in.
---
sidebar_label: SQL
title: Insert Data Using SQL
description: Insert data using TDengine SQL
---
# Insert Data
@@ -42,10 +42,15 @@ For more details about `INSERT` please refer to [INSERT](https://docs.tdengine.c
## Connector Examples
:::note
Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
:::
<Tabs>
<TabItem value="python" label="Python">
In this example, we use the `execute` method to execute SQL and get the number of affected rows. The variable `conn` is an instance of class `taosrest.TaosRestConnection` we just created in the [Connect Tutorial](../../programming/connect/python#connect).
```python
{{#include docs/examples/python/develop_tutorial.py:insert}}
```
---
sidebar_label: Prometheus
title: Prometheus for TDengine Cloud
description: Write data into TDengine from Prometheus.
---
Prometheus is a widespread open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, which has a very active developer and user community.
Prometheus provides the `remote_write` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing, TDengine also supports this interface: with proper configuration, Prometheus data can be stored in TDengine via `remote_write`, taking full advantage of TDengine's efficient storage performance and clustering capabilities for time-series data.
## Prerequisites
In your TDengine cloud instance, click "Explorer" on the left panel, then click "+" beside Databases to create a new database named "prometheus_data". Then execute `show databases` to confirm the database has been created successfully.
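Since the Explorer also runs SQL, an equivalent sketch, assuming your cloud account is allowed to create databases there:

```sql
-- Create the target database for Prometheus remote_write, then confirm it exists.
CREATE DATABASE prometheus_data;
SHOW DATABASES;
```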
## Install Prometheus
@@ -28,7 +30,7 @@ Suppose that you use a Linux system with architecture amd64:
Then Prometheus is installed in the current directory. For more installation options, please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/).
## Configure Prometheus
Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (if you followed the previous steps, you can find `prometheus.yml` in the current directory).
@@ -62,15 +64,3 @@ Log in TDengine Cloud, click "Explorer" on the left navigation bar. You will see
![TDengine prometheus remote_write result](prometheus_data.webp)
---
sidebar_label: Telegraf
title: Telegraf for TDengine Cloud
description: Write data into TDengine from telegraf.
---
Telegraf is an open-source, metrics collection software. Telegraf can collect the operation information of various components without having to write any scripts to collect regularly, reducing the difficulty of data acquisition.
Telegraf's data can be written to TDengine by simply adding the output configuration of Telegraf to the URL corresponding to taosAdapter and modifying several configuration items. The presence of Telegraf data in TDengine can take advantage of TDengine's efficient storage query performance and clustering capabilities for time-series data.
## Prerequisites
Before Telegraf can write data into the TDengine cloud service, you need to first manually create a database. Log in TDengine Cloud, click "Explorer" on the left navigation bar, then click the "+" button beside "Databases" to add a database named "telegraf" using all default parameters.
## Install Telegraf
Suppose that you use an Ubuntu system:
@@ -63,9 +67,7 @@ telegraf --config telegraf.conf
## Verify
Log in TDengine Cloud, click "Explorer" on the left navigation bar.
- Check whether database "telegraf" exists by executing:
```sql
show databases;
```
---
sidebar_label: Data In
title: Write Data Into TDengine Cloud Service
description: A number of ways for writing data into TDengine.
---
This chapter introduces a number of ways to write data into TDengine. Users can use TDengine SQL to write data into the TDengine cloud service, or use the [connectors](../programming/connector) provided by TDengine to write data programmatically. TDengine provides [taosBenchmark](../tools/taosbenchmark), a performance testing tool that writes into TDengine, and taosX, a tool provided by the TDengine enterprise edition, to sync data from one TDengine cloud service to another. Furthermore, 3rd party tools like Telegraf and Prometheus can also be used to write data into TDengine.
:::note
Because of privilege limitations on the cloud, you need to first create a database in the data explorer on the cloud console before writing data into the TDengine cloud service. This limitation applies to every way of writing data.
:::
---
sidebar_label: SQL
title: Query Data Using SQL
description: Read data from TDengine using basic SQL.
---
# Query Data
@@ -123,6 +123,11 @@ For more details please refer to [Aggregate by Window](https://docs.tdengine.com
## Connector Examples
:::note
Before executing the sample code in this section, you need to first establish a connection to the TDengine cloud service; please refer to [Connect to TDengine Cloud Service](../../programming/connect/).
:::
<Tabs>
<TabItem value="python" label="Python">
---
sidebar_label: Subscription
title: Data Subscription
description: Use data subscription to get data from TDengine.
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Java from "./_sub_java.mdx";
import Python from "./_sub_python.mdx";
import Go from "./_sub_go.mdx";
import Rust from "./_sub_rust.mdx";
import Node from "./_sub_node.mdx";
import CSharp from "./_sub_cs.mdx";
import CDemo from "./_sub_c.mdx";
This topic introduces how to read data from TDengine using data subscription, which is an advanced feature in TDengine. To access the data in TDengine via data subscription, you need to create a topic, create a consumer, subscribe to a topic, and consume data. In this document we will briefly explain these main steps of data subscription.
## Create Topic
A topic can be created on a database, on some selected columns, or on a supertable.
### Topic on Columns
The most common way to create a topic is to create a topic on some specifically selected columns. The syntax is as follows:
```sql
CREATE TOPIC topic_name as subquery;
```
You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
- The schema of topics created in this manner is determined by the subscribed data.
- You cannot modify (`ALTER <table> MODIFY`) or delete (`ALTER <table> DROP`) columns or tags that are used in a subscription or calculation.
- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
For example:
```sql
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
```
### Topic on SuperTable
Syntax:
```sql
CREATE TOPIC topic_name AS STABLE stb_name;
```
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
- The table schema can be modified.
- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
- A different table schema may exist for every data block to be processed.
- The data returned does not include tags.
### Topic on Database
Syntax:
```sql
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
```
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
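For instance, a sketch of a database-level topic that also carries schema-change (meta) messages; the topic and database names are hypothetical:

```sql
-- Subscribe to all tables in tmqdb, including meta events such as
-- supertable creation and column changes.
CREATE TOPIC tmqdb_topic WITH META AS DATABASE tmqdb;
```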
## Create Consumer
To create a consumer, you must use the APIs provided by TDengine connectors. Below is sample code using the connectors for different languages.
You configure the following parameters when creating a consumer:
| Parameter | Type | Description | Remarks |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | |
| `enable.heartbeat.background` | boolean | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSDB | |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | |
The method of specifying these parameters depends on the language used:
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
/* Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass) */
tmq_conf_t* conf = tmq_conf_new();
tmq_conf_set(conf, "enable.auto.commit", "true");
tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
tmq_conf_set(conf, "group.id", "cgrpName");
tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
```
</TabItem>
<TabItem value="java" label="Java">
Java programs use the following parameters:
| Parameter | Type | Description | Remarks |
| ----------------------------- | ------ | ----------- | ------- |
| `bootstrap.servers` | string | Connection address, such as `localhost:6030` | |
| `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type | |
| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
Note: The `bootstrap.servers` parameter is used instead of `td.connect.ip` and `td.connect.port` to provide an interface that is consistent with Kafka.
```java
Properties properties = new Properties();
properties.setProperty("enable.auto.commit", "true");
properties.setProperty("auto.commit.interval.ms", "1000");
properties.setProperty("group.id", "cgrpName");
properties.setProperty("bootstrap.servers", "127.0.0.1:6030");
properties.setProperty("td.connect.user", "root");
properties.setProperty("td.connect.pass", "taosdata");
properties.setProperty("auto.offset.reset", "earliest");
properties.setProperty("msg.with.table.name", "true");
properties.setProperty("value.deserializer", "com.taos.example.MetersDeserializer");
TaosConsumer<Meters> consumer = new TaosConsumer<>(properties);
/* value deserializer definition. */
import com.taosdata.jdbc.tmq.ReferenceDeserializer;
public class MetersDeserializer extends ReferenceDeserializer<Meters> {
}
```
</TabItem>
<TabItem label="Go" value="Go">
```go
config := tmq.NewConfig()
defer config.Destroy()
err = config.SetGroupID("test")
if err != nil {
panic(err)
}
err = config.SetAutoOffsetReset("earliest")
if err != nil {
panic(err)
}
err = config.SetConnectIP("127.0.0.1")
if err != nil {
panic(err)
}
err = config.SetConnectUser("root")
if err != nil {
panic(err)
}
err = config.SetConnectPass("taosdata")
if err != nil {
panic(err)
}
err = config.SetConnectPort("6030")
if err != nil {
panic(err)
}
err = config.SetMsgWithTableName(true)
if err != nil {
panic(err)
}
err = config.EnableHeartBeat()
if err != nil {
panic(err)
}
err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
if result.ErrCode != 0 {
errStr := wrapper.TMQErr2Str(result.ErrCode)
err := errors.NewError(int(result.ErrCode), errStr)
panic(err)
}
})
if err != nil {
panic(err)
}
```
</TabItem>
<TabItem label="Rust" value="Rust">
```rust
let mut dsn: Dsn = "taos://".parse()?;
dsn.set("group.id", "group1");
dsn.set("client.id", "test");
dsn.set("auto.offset.reset", "earliest");
let tmq = TmqBuilder::from_dsn(dsn)?;
let mut consumer = tmq.build()?;
```
</TabItem>
<TabItem value="Python" label="Python">
Python programs use the following parameters:
| Parameter | Type | Description | Remarks |
| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
| `td_connect_ip` | string | Used in establishing a connection; same as `taos_connect` | |
| `td_connect_user` | string | Used in establishing a connection; same as `taos_connect` | |
| `td_connect_pass` | string | Used in establishing a connection; same as `taos_connect` | |
| `td_connect_port` | string | Used in establishing a connection; same as `taos_connect` | |
| `group_id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
| `client_id` | string | Client ID | Maximum length: 192. |
| `auto_offset_reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable_auto_commit` | string | Commit automatically | Specify `true` or `false`. |
| `auto_commit_interval_ms` | string | Interval for automatic commits, in milliseconds | |
| `enable_heartbeat_background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false`. |
| `experimental_snapshot_enable` | string | Specify whether to consume messages from the WAL or from TSDB | Specify `true` or `false`. |
| `msg_with_table_name` | string | Specify whether to deserialize table names from messages | Specify `true` or `false`. |
| `timeout` | int | Consumer pull timeout | |
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
// Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
// an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass)
let consumer = taos.consumer({
'enable.auto.commit': 'true',
'auto.commit.interval.ms': '1000',
'group.id': 'tg2',
'td.connect.user': 'root',
'td.connect.pass': 'taosdata',
'auto.offset.reset': 'earliest',
'msg.with.table.name': 'true',
'td.connect.ip': '127.0.0.1',
'td.connect.port': '6030'
});
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
using TDengineTMQ;
// Create consumer groups on demand (GourpID) and enable automatic commits (EnableAutoCommit),
// an automatic commit interval (AutoCommitIntervalMs), and a username (TDConnectUser) and password (TDConnectPasswd)
var cfg = new ConsumerConfig
{
EnableAutoCommit = "true",
AutoCommitIntervalMs = "1000",
GourpId = "TDengine-TMQ-C#",
TDConnectUser = "root",
TDConnectPasswd = "taosdata",
AutoOffsetReset = "earliest",
MsgWithTableName = "true",
TDConnectIp = "127.0.0.1",
TDConnectPort = "6030"
};
var consumer = new ConsumerBuilder(cfg).Build();
```
</TabItem>
</Tabs>
A consumer group is automatically created when multiple consumers are configured with the same consumer group ID.
## Subscribe to a Topic
A single consumer can subscribe to multiple topics.
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
// Create a list of subscribed topics
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, "topicName");
// Enable subscription
tmq_subscribe(tmq, topicList);
tmq_list_destroy(topicList);
```
</TabItem>
<TabItem value="java" label="Java">
```java
List<String> topics = new ArrayList<>();
topics.add("tmq_topic");
consumer.subscribe(topics);
```
</TabItem>
<TabItem value="Go" label="Go">
```go
consumer, err := tmq.NewConsumer(config)
if err != nil {
panic(err)
}
err = consumer.Subscribe([]string{"example_tmq_topic"})
if err != nil {
panic(err)
}
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
consumer.subscribe(["tmq_meters"]).await?;
```
</TabItem>
<TabItem value="Python" label="Python">
```python
consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
// Create a list of subscribed topics
let topics = ['topic_test']
// Enable subscription
consumer.subscribe(topics);
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// Create a list of subscribed topics
List<String> topics = new List<string>();
topics.add("tmq_topic");
// Enable subscription
consumer.Subscribe(topics);
```
</TabItem>
</Tabs>
## Consume Data
The following code demonstrates how to consume the messages in a queue.
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
/* Consume data */
while (running) {
TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
msg_process(msg);
}
```
The `while` loop obtains a message each time it calls `tmq_consumer_poll()`. This message is exactly the same as the result returned by a query, and the same deserialization API can be used on it.
</TabItem>
<TabItem value="java" label="Java">
```java
while(running){
ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
for (Meters meter : meters) {
processMsg(meter);
}
}
```
</TabItem>
<TabItem value="Go" label="Go">
```go
for {
result, err := consumer.Poll(time.Second)
if err != nil {
panic(err)
}
fmt.Println(result)
consumer.Commit(context.Background(), result.Message)
consumer.FreeMessage(result.Message)
}
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
{
let mut stream = consumer.stream();
while let Some((offset, message)) = stream.try_next().await? {
// get information from offset
// the topic
let topic = offset.topic();
// the vgroup id, like partition id in kafka.
let vgroup_id = offset.vgroup_id();
println!("* in vgroup id {vgroup_id} of topic {topic}\n");
if let Some(data) = message.into_data() {
while let Some(block) = data.fetch_raw_block().await? {
// one block for one table, get table name if needed
let name = block.table_name();
let records: Vec<Record> = block.deserialize().try_collect()?;
println!(
"** table: {}, got {} records: {:#?}\n",
name.unwrap(),
records.len(),
records
);
}
}
consumer.commit(offset).await?;
}
}
```
</TabItem>
<TabItem value="Python" label="Python">
```python
for msg in consumer:
for row in msg:
print(row)
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
while(true){
msg = consumer.consume(200);
// process message(consumeResult)
console.log(msg.topicPartition);
console.log(msg.block);
console.log(msg.fields)
}
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// Consume data
while (true)
{
var consumerRes = consumer.Consume(100);
// process ConsumeResult
ProcessMsg(consumerRes);
consumer.Commit(consumerRes);
}
```
</TabItem>
</Tabs>
## Close Consumer
After message consumption is finished, unsubscribe from the topic and close the consumer.
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
/* Unsubscribe */
tmq_unsubscribe(tmq);
/* Close consumer object */
tmq_consumer_close(tmq);
```
</TabItem>
<TabItem value="java" label="Java">
```java
/* Unsubscribe */
consumer.unsubscribe();
/* Close consumer */
consumer.close();
```
</TabItem>
<TabItem value="Go" label="Go">
```go
consumer.Close()
```
</TabItem>
<TabItem value="Rust" label="Rust">
```rust
consumer.unsubscribe().await;
```
</TabItem>
<TabItem value="Python" label="Python">
```py
# Unsubscribe
consumer.unsubscribe()
# Close consumer
consumer.close()
```
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
```js
consumer.unsubscribe();
consumer.close();
```
</TabItem>
<TabItem value="C#" label="C#">
```csharp
// Unsubscribe
consumer.Unsubscribe();
// Close consumer
consumer.Close();
```
</TabItem>
</Tabs>
## Delete Topic
You can delete topics that are no longer needed. Note that you must unsubscribe all consumers from a topic before deleting it.
```sql
/* Delete topic */
DROP TOPIC topic_name;
```
## Check Status
At any time, you can check the status of existing topics and consumers.
1. Query all existing topics.
```sql
SHOW TOPICS;
```
2. Query the status and subscribed topics of all consumers.
```sql
SHOW CONSUMERS;
```
---
sidebar_label: taosDump
title: Dump Data Using taosDump
description: Dump data from TDengine into files using taosDump
---

# taosDump

## Installation

Please refer to [Install taosTools](https://docs.tdengine.com/cloud/tools/taosdump/#installation).

## Common usage scenarios

2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up some super or normal tables in the specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine itself while running, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without using escape characters, in "loose" mode, which is applicable if table names, column names, and tag names do not use escape characters. This can reduce the backup time and the backup data footprint. If you are unsure about using the `-n` and `-L` conditions, please use the default parameters for "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escaped characters.

<!-- exclude -->
:::tip
---
sidebar_label: Prometheus
title: Prometheus remote read
description: Prometheus remote_read from TDengine cloud server
---
Prometheus is a widely used open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, and it has a very active developer and user community.

Prometheus provides a `remote_read` interface to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient querying, TDengine also provides support for this interface so that data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient query performance and clustering capabilities for time-series data.
## Install Prometheus
Please refer to [Install Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus#install-prometheus).
## Configure Prometheus
Please refer to [Configure Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#configure-prometheus).
## Start Prometheus
Please refer to [Start Prometheus](https://docs.tdengine.com/cloud/data-in/prometheus/#start-prometheus).
## Verify Remote Read
Let's retrieve some metrics from TDengine Cloud via the Prometheus web server. Browse to <http://localhost:9090/graph> and use the "Graph" tab.
Enter the following expression to graph the per-second rate of chunks being created in the self-scraped Prometheus:
```
rate(prometheus_tsdb_head_chunks_created_total[1m])
```
![TDengine prometheus remote_read](prometheus_read.webp)
```csharp
// {{#include docs/examples/csharp/SubscribeDemo.cs}}
```

```rust
{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
title: Get Data Out of TDengine
description: A number of ways to get data out of TDengine.
---

This chapter introduces how to get data out of the TDengine cloud service. Besides normal SQL queries, users can use [data subscription](../tmq), which is provided by the message queue component inside TDengine, to access the data stored in TDengine. TDengine provides [connectors](../programming/connector) for application programmers to access the data stored in TDengine. TDengine also provides tools: [taosdump](../tools/taosdump) dumps the data stored in a TDengine cloud service into files, and `taosX` syncs up the data in one TDengine cloud service into another. Furthermore, third-party tools, like Prometheus, can also be used to read data out of TDengine.
---
sidebar_label: Visualization
title: Visualization
description: View TDengine in visual ways.
---
---
sidebar_label: Subscription
title: Data Subscription
description: Use data subscription to get data from TDengine.
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
This topic introduces how to read data out of TDengine using data subscription, which is an advanced feature in TDengine. To access the data in TDengine via data subscription, you need to create a topic, create a consumer, subscribe to the topic, and consume data. In this document we will briefly explain these main steps of data subscription.
## Create Topic
A topic can be created on a database, on some selected columns, or on a supertable.
### Topic on Columns
The most common way to create a topic is to create a topic on some specifically selected columns. The syntax is as follows:
```sql
CREATE TOPIC topic_name AS subquery;
```
You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
- The schema of topics created in this manner is determined by the subscribed data.
- You cannot modify (`ALTER <table> MODIFY`) or delete (`ALTER <table> DROP`) columns or tags that are used in a subscription or calculation.
- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
For example:
```sql
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
```
### Topic on SuperTable
Syntax:
```sql
CREATE TOPIC topic_name AS STABLE stb_name;
```
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
- The table schema can be modified.
- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
- A different table schema may exist for every data block to be processed.
- The data returned does not include tags.
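For example, assuming the sample database and supertable `tmqdb.stb` used in the earlier example, a supertable topic could be created as follows; the topic name here is illustrative, not prescribed by this document:

```sql
-- Subscribe to all data (but not tags) written to supertable stb,
-- following any future schema changes to the supertable
CREATE TOPIC topic_stb AS STABLE tmqdb.stb;
```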
### Topic on Database
Syntax:
```sql
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
```
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
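For example, the following sketch subscribes to every table in the sample database `tmqdb` and includes schema-change events; the topic name is illustrative:

```sql
-- Subscribe to all tables in database tmqdb, including meta events
-- such as CREATE/DROP of supertables and column or tag changes
CREATE TOPIC topic_db WITH META AS DATABASE tmqdb;
```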
## Programming Model
To subscribe to the data from a created topic, the client program needs to follow the programming model described in this section.
1. Create Consumer
To create a consumer, you must use the APIs provided by TDengine connectors. Below is the sample code of using connectors of different languages.
2. Subscribe to a Topic
A single consumer can subscribe to multiple topics.
3. Consume Messages
4. Close the Consumer
After message consumption is finished, unsubscribe and close the consumer.
## Sample Code
<Tabs defaultValue="Rust" groupId="lang">
<TabItem value="c" label="C">
Will be available soon
</TabItem>
<TabItem value="java" label="Java">
Will be available soon
</TabItem>
<TabItem label="Go" value="Go">
Will be available soon
</TabItem>
<TabItem label="Rust" value="Rust">
```rust
{{#include docs/examples/rust/cloud-example/examples/subscribe_demo.rs}}
```
</TabItem>
<TabItem value="Python" label="Python">
Will be available soon
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
Will be available soon
</TabItem>
<TabItem value="C#" label="C#">
Will be available soon
</TabItem>
</Tabs>
## Delete Topic
You can delete topics that are no longer needed. Note that you must unsubscribe all consumers from a topic before deleting it.
```sql
DROP TOPIC topic_name;
```
## Check Status
At any time, you can check the status of existing topics and consumers.
1. Query all existing topics.
```sql
SHOW TOPICS;
```
2. Query the status and subscribed topics of all consumers.
```sql
SHOW CONSUMERS;
```
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery

stream_options: {
 TRIGGER    [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
 WATERMARK   time
}
```
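As an illustration of this syntax, the sketch below defines a stream over the `meters` supertable commonly used in TDengine's sample schemas; the stream name, target supertable, and column names are illustrative and not taken from this document:

```sql
-- Compute the per-5-second maximum current and write the results into
-- the supertable max_current_output; each window is emitted when it
-- closes (TRIGGER WINDOW_CLOSE)
CREATE STREAM IF NOT EXISTS max_current_stream
  TRIGGER WINDOW_CLOSE
  INTO max_current_output
  AS SELECT _wstart AS wstart, MAX(current) AS max_current
     FROM meters
     INTERVAL(5s);
```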
---
sidebar_label: Data Replication
title: Data Replication
description: Replicate data between TDengine cloud services
---
TDengine provides full support for data replication. You can replicate data from TDengine Cloud to a private TDengine instance, from a private TDengine instance to TDengine Cloud, or from one cloud platform to another, regardless of which cloud or region the two services reside in.

TDengine also provides database backup for the enterprise plan.
---
sidebar_label: Python
title: Connect with Python Connector
description: Connect to TDengine cloud service using Python connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Install Connector

First, you need to install the `taospy` module, version >= `2.6.2`. Run the command below in your terminal.

<Tabs defaultValue="pip">
<TabItem value="pip" label="pip">

Copy the code below to your editor and run it.

```python
{{#include docs/examples/python/develop_tutorial.py:connect}}
```

For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.

For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Java
title: Connect with Java Connector
description: Connect to TDengine cloud service using Java connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Add Dependency

<TabItem value="maven" label="Maven">

```xml title="pom.xml"
{{#include docs/examples/java/pom.xml:dep}}
```

</TabItem>

```groovy title="build.gradle"
dependencies {
  implementation 'com.taosdata.jdbc:taos-jdbcdriver:3.0.0.0'
}
```

Alternatively, you can set the environment variable in your IDE's run configurations.

<!-- exclude -->
:::note
Replace <jdbcURL\> with the real JDBC URL; it will look like `jdbc:TAOS-RS://example.com?usessl=true&token=xxxx`.

To obtain the value of the JDBC URL, please log in to [TDengine Cloud](https://cloud.tdengine.com) and click "Data Insert" on the left menu.
:::
<!-- exclude-end -->

## Connect

The code below gets the JDBC URL from an environment variable first and then creates a `Connection` object.

```java
{{#include docs/examples/java/src/main/java/com/taos/example/ConnectCloudExample.java:connect}}
```

The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.

For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Go
title: Connect with Go Connector
description: Connect to TDengine cloud service using Go connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Initialize Module

<!-- exclude -->
:::note
Replace <goDSN\> with the real value; the format should be `https(<cloud_host>)/?token=<token>`.

To obtain the value of `goDSN`, please log in to [TDengine Cloud](https://cloud.tdengine.com) and click "Data In" on the left menu.
:::
<!-- exclude-end -->

Finally, test the connection:

```
go run main.go
```

The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.

For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Rust
title: Connect with Rust Connector
description: Connect to TDengine cloud service using Rust connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Create Project

```
cargo new --bin cloud-example
```

Add the dependencies to `Cargo.toml`.

```toml title="Cargo.toml"
[package]
name = "cloud-example"
version = "0.1.0"
edition = "2021"

[dependencies]
taos = { version = "*", default-features = false, features = ["ws"] }
tokio = { version = "1", features = ["full"] }
anyhow = "1.0.0"
```

## Config

Copy the following code to `main.rs`.

```rust
{{#include docs/examples/rust/cloud-example/src/main.rs}}
```

Then you can execute `cargo run` to test the connection. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.

For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: Node.js
title: Connect with Node.js Connector
description: Connect to TDengine cloud service using Node.JS connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Install Connector

```bash
npm install @tdengine/rest
```

## Config

```javascript
{{#include docs/examples/node/connect.js}}
```

For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.

For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: C#
title: Connect with C# Connector
description: Connect to TDengine cloud service using C# connector
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->
## Create Project
```bash
dotnet new console -o example
```
## Add C# TDengine Driver class lib
```bash
cd example
dotnet add package TDengine.Connector
```
## Config
Run this command in your terminal to save the TDengine cloud DSN as a variable:
<Tabs defaultValue="bash">
<TabItem value="bash" label="Bash">
```bash
export TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
<TabItem value="cmd" label="CMD">
```bash
set TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
<TabItem value="powershell" label="Powershell">
```powershell
$env:TDENGINE_CLOUD_DSN="<DSN>"
```
</TabItem>
</Tabs>
<!-- exclude -->
:::note
Replace <DSN\> with the real TDengine cloud DSN. To obtain the real value, please log in to [TDengine Cloud](https://cloud.tdengine.com), click "Connector", and then select "C#".
:::
<!-- exclude-end -->
## Connect
```C#
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
The client connection is then established. For how to write data and query data, please refer to <https://docs.tdengine.com/cloud/data-in/insert-data/> and <https://docs.tdengine.com/cloud/data-out/query-data/>.
For more details about how to write or query data via REST API, please check [REST API](https://docs.tdengine.com/cloud/programming/connector/rest-api/).
---
sidebar_label: REST API
title: REST API
description: Connect to TDengine Cloud Service through RESTful API
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

## Config

Run this command in your terminal to save the TDengine cloud token and URL as variables:
---
sidebar_label: Quick Start
title: Connect to TDengine Cloud Service
description: Quick start of using TDengine connectors to connect to TDengine cloud service
---
This section briefly describes how to connect to TDengine cloud service using the connectors provided by TDengine so that programmers can get started quickly.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
---
title: Data Model
description: Typical data model used in TDengine
---

The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
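For orientation only, the sketch below shows the classic smart-meter schema that TDengine's documentation commonly uses: one supertable whose columns hold the collected metrics and whose tags describe each data collection point. The names are illustrative, and the exact syntax is covered in the TDengine SQL chapter.

```sql
-- One supertable for all smart meters: metrics as columns, static
-- attributes of each meter as tags
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupid INT);

-- Each physical meter gets its own subtable created from the supertable
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
```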
---
sidebar_label: Insert
title: Insert Data Into TDengine
description: Programming Guide for Inserting Data into TDengine
---
To quickly get started with writing data into TDengine, please refer to [Insert Data](../../data-in/insert-data).
---
sidebar_label: Query
title: Query Data From TDengine
description: Programming Guide for Querying Data
---
To quickly get started with querying data from TDengine, please refer to [Query Data](../../data-out/query-data).
---
sidebar_label: Python
title: TDengine Python Connector
description: Detailed guide for Python Connector
---

`taospy` is the official Python connector for TDengine. `taospy` wraps the [REST interface](/reference/rest-api) of TDengine. Additionally, `taospy` provides a set of programming interfaces that conform to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).

| Connector version | Important Update                          | Release date |
| ----------------- | ----------------------------------------- | ------------ |
| 2.6.2             | fix ci script                             | 2022-08-18   |
| 2.5.2             | fix taos-ws-py python version dependency  | 2022-08-12   |
| 2.5.1             | (rest): add timezone option               | 2022-08-11   |
| 2.5.0             | add taosws module                         | 2022-08-10   |
| 2.4.0             | add execute method to TaosRestConnection  | 2022-07-18   |
| 2.3.3             | support connect to TDengine Cloud Service | 2022-06-06   |
---
toc_max_heading_level: 4
sidebar_position: 2
sidebar_label: Java
title: TDengine Java Connector
description: Detailed guide for Java Connector
---

import Tabs from '@theme/Tabs';
---
sidebar_label: Go
title: TDengine Go Connector
description: Detailed guide for Go Connector
---

`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.
---
toc_max_heading_level: 4
sidebar_position: 5
sidebar_label: Rust
title: TDengine Rust Connector
description: Detailed guide for Rust Connector
---
---
sidebar_label: Node.JS
title: TDengine Node.JS Connector
description: Detailed guide for Node.JS Connector
---

`td2.0-rest-connector` is the official Node.js language connector for TDengine. Node.js developers can use it to develop applications that access TDengine instance data. `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST API.
---
sidebar_label: C#
title: TDengine C# Connector
description: Detailed guide for C# Connector
---
`TDengine.Connector` is the official C# connector for TDengine. C# developers can develop applications to access TDengine instance data.
The source code for `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
## Installation
### Pre-installation
Install the .NET SDK.
### Add TDengine.Connector through Nuget
```bash
dotnet add package TDengine.Connector
```
## Establishing a connection
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
``` C#
{{#include docs/examples/csharp/cloud-example/connect/Program.cs}}
```
## Usage examples
### Basic Insert and Query
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
```C#
{{#include docs/examples/csharp/cloud-example/usage/Program.cs}}
```
### STMT Insert
``` XML
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
```
```C#
{{#include docs/examples/csharp/cloud-example/stmt/Program.cs}}
```
## Important Updates

| TDengine.Connector | Description                                |
| ------------------ | ------------------------------------------ |
| 3.0.1              | Support connect to TDengine cloud service  |
## API Reference
[API Reference](https://docs.taosdata.com/api/connector-csharp/html/860d2ac1-dd52-39c9-e460-0829c4e5a40b.htm)
---
sidebar_label: REST API
title: REST API
description: Detailed guide for REST API
---

To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, and unlike the REST APIs of other database engines, TDengine allows you to operate the database by inserting SQL statements into the body of an HTTP POST request.
---
sidebar_label: Connector
title: Connector Reference
description: 'Reference guide for connectors'
---

This section is a detailed reference guide to the connectors provided by TDengine.

```mdx-code-block
import DocCardList from '@theme/DocCardList';
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.

This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](../taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](./connector/). For more ways to write data into TDengine, please refer to [Data In](../data-in); for more ways to read data out of TDengine, please refer to [Data Out](../data-out).

If you encounter any problems during the development process, please click ["Submit an issue"](https://github.com/taosdata/TDengine/issues/new/choose) at the bottom of each page and submit it on GitHub right away.
---
sidebar_label: Supertable
title: Supertable
description: Operations about Super Tables.
---

## Create a Supertable

---
sidebar_label: Insert
title: Insert
description: Insert data into TDengine
---

## Syntax

---
sidebar_label: Select
title: Select
description: Query Data from TDengine.
---

## Syntax

---
sidebar_label: Functions
title: Functions
toc_max_heading_level: 4
description: TDengine Built-in Functions.
---

## Single Row Functions

---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
description: Time-series data specific queries.
---

As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.

---
sidebar_label: Data Subscription
title: Data Subscription
description: Subscribe Data from TDengine.
---

The information in this document is related to the TDengine data subscription feature.

---
sidebar_label: Stream Processing
title: Stream Processing
description: Built-in Stream Processing.
---

Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.

---
sidebar_label: Operators
title: Operators
description: TDengine Supported Operators
---

## Arithmetic Operators

---
sidebar_label: JSON Type
title: JSON Type
description: JSON Data Type
---

---
title: Escape Characters
description: How to use escape characters
---

## Escape Characters

---
sidebar_label: Limits
title: Limits
description: Naming Limits
---

## Naming Rules

---
sidebar_label: Keywords
title: Reserved Keywords
description: Reserved Keywords in TDengine SQL
---

## Keyword List

---
sidebar_label: UDF
title: User-Defined Functions (UDF)
description: User Defined Functions
---

You can create user-defined functions and import them into TDengine.

---
sidebar_label: Index
title: Using Indices
description: Use Index to Accelerate Query.
---

TDengine supports SMA and FULLTEXT indexing.
sidebar_label: TDengine CLI
description: Instructions and tips for using the TDengine CLI to connect TDengine Cloud
---
<!-- exclude -->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<!-- exclude-end -->

The TDengine command-line interface (hereafter referred to as the `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.

## Installation

To run the TDengine CLI to access TDengine cloud, please install the [TDengine client installation package](https://tdengine.com/assets-download/cloud/TDengine-client-3.0.0.1202209031045-Linux-x64.tar.gz) first.

## Config

## Using TDengine CLI

The TDengine CLI will display a welcome message and version information if it successfully connects to the TDengine service. If it fails, the TDengine CLI will print an error message. The TDengine CLI prompts as follows:

```
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.

Successfully connect to cloud.tdengine.com:8085 in restful mode
```
---
title: taosBenchmark
sidebar_label: taosBenchmark
toc_max_heading_level: 4
description: "taosBenchmark (once called taosdemo) is a tool for testing the performance of TDengine."
---

## Introduction

taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user-defined databases, supertables, subtables, and the time-series data to populate these for performance benchmarking. taosBenchmark is highly configurable, and some of the configurations include the time interval for inserting data, the number of working threads, and the capability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.

**Please note that in the context of the TDengine cloud service, a non-privileged user cannot create a database with any tool, including taosBenchmark. The database needs to be created first in the Data Explorer of the TDengine cloud service console. Wherever this document talks about creating a database, please ignore that step and create the database manually inside the TDengine cloud service.**

## Installation

To use taosBenchmark, you need to download and install [taosTools](https://tdengine.com/assets-download/cloud/taosTools-2.1.3-Linux-x64.tar.gz). Before installing taosTools, please first download and install the [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).

Decompress the package and install it.

```
tar -xzf taosTools-2.1.3-Linux-x64.tar.gz
cd taosTools-2.1.3-Linux-x64
sudo ./install-taostools.sh
```

## Run

### Configuration and running methods

taosBenchmark needs to be executed in the terminal of the operating system. It supports two configuration methods: [command-line arguments](#command-line-arguments-in-detail) and a [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f <json file>` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration, but not the `-f` parameter. In addition, taosBenchmark offers a special way of running without parameters.

taosBenchmark supports complete performance testing of TDengine by providing functionality to write, query, and subscribe. These three functions are mutually exclusive, and users can only select one of them each time taosBenchmark runs. The query and subscribe functionalities are only configurable using a JSON configuration file, by specifying the parameter `filetype`, while writing can be performed through both the command-line and a configuration file. If you want to test the performance of queries or data subscription, configure taosBenchmark with the configuration file and modify the value of the `filetype` parameter to specify the function that you want to test.

**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
### Run with the configuration file

A sample configuration file is provided in the taosBenchmark installation package under `<install_directory>/examples/taosbenchmark-json`.

Use the following command-line to run taosBenchmark and control its behavior via a configuration file.

```bash
taosBenchmark -f <json-file>
```
**Sample configuration files**

#### Configuration file examples

```json
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "connection_pool_size": 8,
    "thread_count": 4,
    "create_table_thread_count": 7,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 100,
    "prepared_rand": 10000,
    "chinese": "no",
    "databases": [
        {
            "dbinfo": {
                "name": "test",
                "drop": "no",
                "replica": 1,
                "precision": "ms",
                "keep": 3650,
                "minRows": 100,
                "maxRows": 4096,
                "comp": 2
            },
            "super_tables": [
                {
                    "name": "meters",
                    "child_table_exists": "no",
                    "childtable_count": 10000,
                    "childtable_prefix": "d",
                    "escape_character": "yes",
                    "auto_create_table": "no",
                    "batch_create_tbl_num": 5,
                    "data_source": "rand",
                    "insert_mode": "taosc",
                    "non_stop_mode": "no",
                    "line_protocol": "line",
                    "insert_rows": 10000,
                    "childtable_limit": 10,
                    "childtable_offset": 100,
                    "interlace_rows": 0,
                    "insert_interval": 0,
                    "partial_col_num": 0,
                    "disorder_ratio": 0,
                    "disorder_range": 1000,
                    "timestamp_step": 10,
                    "start_timestamp": "2020-10-01 00:00:00.000",
                    "sample_format": "csv",
                    "sample_file": "./sample.csv",
                    "use_sample_ts": "no",
                    "tags_file": "",
                    "columns": [
                        {
                            "type": "FLOAT",
                            "name": "current",
                            "count": 1,
                            "max": 12,
                            "min": 8
                        },
                        { "type": "INT", "name": "voltage", "max": 225, "min": 215 },
                        { "type": "FLOAT", "name": "phase", "max": 1, "min": 0 }
                    ],
                    "tags": [
                        {
                            "type": "TINYINT",
                            "name": "groupid",
                            "max": 10,
                            "min": 1
                        },
                        {
                            "name": "location",
                            "type": "BINARY",
                            "len": 16,
                            "values": ["San Francisco", "Los Angles", "San Diego",
                                "San Jose", "Palo Alto", "Campbell", "Mountain View",
                                "Sunnyvale", "Santa Clara", "Cupertino"]
                        }
                    ]
                }
            ]
        }
    ]
}
```

#### Query Scenario JSON Profile Example

```json
{
    "filetype": "query",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "confirm_parameter_prompt": "no",
    "databases": "test",
    "query_times": 2,
    "query_mode": "taosc",
    "specified_table_query": {
        "query_interval": 1,
        "concurrent": 3,
        "sqls": [
            {
                "sql": "select last_row(*) from meters",
                "result": "./query_res0.txt"
            },
            {
                "sql": "select count(*) from d0",
                "result": "./query_res1.txt"
            }
        ]
    },
    "super_table_query": {
        "stblname": "meters",
        "query_interval": 1,
        "threads": 3,
        "sqls": [
            {
                "sql": "select last_row(ts) from xxxx",
                "result": "./query_res2.txt"
            }
        ]
    }
}
```

#### Subscription JSON configuration example

```json
{
    "filetype": "subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "test",
    "specified_table_query": {
        "concurrent": 1,
        "mode": "sync",
        "interval": 1000,
        "restart": "yes",
        "keepProgress": "yes",
        "resubAfterConsume": 10,
        "sqls": [
            {
                "sql": "select avg(current) from meters where location = 'beijing';",
                "result": "./subscribe_res0.txt"
            }
        ]
    },
    "super_table_query": {
        "stblname": "meters",
        "threads": 1,
        "mode": "sync",
        "interval": 1000,
        "restart": "yes",
        "keepProgress": "yes",
        "sqls": [
            {
                "sql": "select phase from xxxx where groupid > 3;",
                "result": "./subscribe_res1.txt"
            }
        ]
    }
}
```
## Command-line arguments in detail ## Command-line argument in detailed
- **-f/--file <json file\>** : - **-f/--file <json file\>** :
specify the configuration file to use. This file includes All parameters. Users should not use this parameter with other parameters on the command-line. There is no default value. specify the configuration file to use. This file includes All parameters. Users should not use this parameter with other parameters on the command-line. There is no default value.
- **-W/--cloud_dsn=<DSN\>** : The dsn to connect TDengine cloud service.
- **-c/--config-dir <dir\>** : - **-c/--config-dir <dir\>** :
specify the directory of the TDengine cluster configuration file. The default path is `/etc/taos`.
- **-h/--host <host\>** :
  specify the FQDN of the TDengine server to connect to. The default value is localhost.
- **-P/--port <port\>** :
  specify the port number of the TDengine server to connect to. The default value is 6030.
- **-I/--interface <insertMode\>** :
  specify the insert mode. Options are taosc, rest, stmt, sml and sml-rest, corresponding to normal writing, RESTful interface writing, parameter binding interface writing, schemaless interface writing, and RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
- **-u/--user <user\>** :
  specify the user name to connect to the TDengine server. The default value is root.
- **-p/--password <passwd\>** :
  specify the password to connect to the TDengine server. The default value is `taosdata`.
- **-o/--output <file\>** :
  specify the path of the result output file. The default value is `./output.txt`.
- **-T/--thread <threadNum\>** :
  specify the number of threads to insert data. The default value is 8.
- **-B/--interlace-rows <rowNum\>** :
  enable interleaved insertion mode and specify the number of rows to insert into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table completely before moving on to the next.
- **-i/--insert-interval <timeInterval\>** :
  specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion threads will wait for the interval specified by this value before proceeding to the next round of writes.
- **-r/--rec-per-req <rowNum\>** :
  specify the number of rows to write per request. The default value is 30000.
- **-t/--tables <tableNum\>** :
  specify the number of sub-tables to create. The default value is 10000.
- **-S/--timestampstep <stepLength\>** :
  specify the timestamp step in ms between consecutive records when inserting data into each child table. The default value is 1.
- **-n/--records <recordNum\>** :
  specify the number of records to insert into each sub-table. The default value is 10000.
- **-d/--database <dbName\>** :
specify the name of the database to use. The default value is `test`.
- **-b/--data-type <colType\>** :
  specify the data column types of the super table. The default is three columns of type FLOAT, INT, and FLOAT.
- **-l/--columns <colNum\>** :
  specify the number of data columns in the super table. If both this parameter and `-b/--data-type` are set, the resulting number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT; for example, `-l 5 -b float,double` results in the column types `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number specified by `-b/--data-type`, the columns specified by `-b/--data-type` are used; for example, `-l 3 -b float,double,float,bigint` results in the column types `FLOAT,DOUBLE,FLOAT,BIGINT`.
- **-A/--tag-type <tagType\>** :
  specify the tag column types of the super table. nchar and binary types can both set the length, for example:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
```
If the user does not set the tag type, the default is two tags, whose types are INT and BINARY(16).
Note: In some shells, such as bash, "()" needs to be escaped, so the above command should be
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
```
- **-w/--binwidth <length\>**:
  specify the default length for nchar and binary types. The default value is 64.
- **-m/--table-prefix <tablePrefix\>** :
  specify the prefix of the sub-table names. The default value is "d".
- **-E/--escape-character** :
  a switch parameter specifying whether to use escape characters in the super table and sub-table names. Escape characters are not used by default.
- **-C/--chinese** :
  a switch parameter specifying whether to use Unicode Chinese characters in nchar and binary data. The default is no.
- **-N/--normal-table** :
  a switch parameter specifying that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used only when the insert mode is taosc, stmt, or rest.
- **-M/--random** :
  a switch parameter specifying that taosBenchmark will generate random values. The default is false. When set, for tag/data columns of numeric types, the value is a random value within the range of that type; for NCHAR and BINARY tag/data columns, the value is a random string within the specified length range.
- **-x/--aggr-func** :
  a switch parameter specifying whether to query aggregation functions after insertion. The default value is false.
- **-y/--answer-yes** :
  a switch parameter that automatically confirms the interactive prompt so that execution continues without user input. The default value is false.
- **-O/--disorder <Percentage\>** :
  specify the percentage probability of disordered data, in the range [0,50]. The default value is 0, i.e., there is no disordered data.
- **-R/--disorder-range <timeRange\>** :
  specify the timestamp range for disordered data. A disordered timestamp is generated as the ordered timestamp minus a random value in this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
- **-F/--prepare_rand <Num\>** :
  specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
- **-a/--replica <replicaNum\>** :
specify the number of replicas when creating the database. The default value is 1.
- **-V/--version** :
  show version information only. This parameter should not be combined with other parameters.
- **-?/--help** :
  show help information and exit. This parameter should not be combined with other parameters.
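Putting several of these options together, a typical invocation looks like the following sketch (the option values are illustrative, not recommendations): it creates 1,000 sub-tables with 10,000 rows each, writing with 8 threads over the default taosc interface.

```
taosBenchmark -t 1000 -n 10000 -T 8 -b FLOAT,INT,FLOAT -A INT,BINARY\(16\) -y
```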
## Configuration file parameters in detail

### General configuration parameters

The parameters listed in this section apply to all function modes.

- **filetype**: the function to be tested, with possible values `insert`, `query` and `subscribe`, corresponding to the insert, query, and subscribe functions respectively. Users can specify only one of these in each configuration file.
- **cfgdir**: specify the directory of the TDengine cluster configuration file. The default path is /etc/taos.
- **host**: specify the FQDN of the TDengine server to connect to. The default value is `localhost`.
- **port**: specify the port number of the TDengine server to connect to. The default value is `6030`.
- **user**: specify the user name to connect to the TDengine server. The default is `root`.
- **password**: specify the password to connect to the TDengine server. The default value is `taosdata`.
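For illustration, the general section of a configuration file might look like the following sketch (the values shown are simply the documented defaults):

```
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "localhost",
  "port": 6030,
  "user": "root",
  "password": "taosdata"
}
```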
### Insert scenario configuration parameters

`filetype` must be set to `insert` in the insertion scenario. See [General Configuration Parameters](#general-configuration-parameters) for details.
#### Stream processing related configuration parameters

The parameters for creating streams are configured in `stream` in the JSON configuration file, as shown below.

- **stream_name**: the name of the stream. Mandatory.
- **stream_stb**: the name of the supertable for the stream. Mandatory.
- **stream_sql**: the SQL statement for the stream to process. Mandatory.
- **trigger_mode**: the trigger mode for stream processing. Optional.
- **watermark**: the watermark for stream processing. Optional.
- **drop**: whether to create the stream. Specify "yes" to create the stream or "no" to not create it.
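As an illustrative sketch only (the stream name, SQL, and window values below are invented for the example), a `stream` entry might look like:

```
"stream": [
  {
    "stream_name": "stream1",
    "stream_stb": "stream1_output_stb",
    "stream_sql": "select _wstart, count(*) from test.meters interval(10s)",
    "trigger_mode": "at_once",
    "watermark": "5s",
    "drop": "yes"
  }
]
```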
#### Super table related configuration parameters

The parameters for creating super tables are configured in `super_tables` in the JSON configuration file, as shown below.

- **name**: the super table name. Mandatory, no default value.

- **child_table_exists**: whether the child tables already exist. The default value is "no"; possible values are "yes" or "no".

- **child_table_count**: the number of child tables. The default value is 10.

- **child_table_prefix**: the prefix of the child table names. Mandatory, no default value.

- **escape_character**: specify whether the super table and child table names contain escape characters. The value can be "yes" or "no"; the default is "no".

- **auto_create_table**: takes effect only when insert_mode is taosc, rest, or stmt and child_table_exists is "no". "yes" means taosBenchmark will create non-existent tables automatically when inserting data; "no" means taosBenchmark will create all tables before inserting.
- **insert_mode**: the insertion mode, with options taosc, rest, stmt, sml and sml-rest, corresponding to normal writing, RESTful interface writing, parameter binding interface writing, schemaless interface writing, and RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.

- **non_stop_mode**: specify whether to keep writing. If "yes", insert_rows is disabled, and writing does not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark stops writing after the specified number of rows is written. Note: insert_rows must still be configured as a non-zero positive integer even though it is disabled in continuous write mode.

- **line_protocol**: insert data using the specified line protocol. Only works when insert_mode is sml or sml-rest. The value can be `line`, `telnet`, or `json`.
- **tags_file**: only works when insert_mode is taosc or rest. The final tag values depend on child_table_count. If the number of tag data rows in the CSV file is smaller than the given number of child tables, taosBenchmark reads the CSV data cyclically until the number of child tables specified by child_table_count is generated; otherwise, it reads only child_table_count rows of tag data. The final number of child tables generated is the smaller of the two.
#### TSMA configuration parameters
The configuration parameters for specifying TSMAs are in `tsmas` in `super_tables`.
- **name**: Specifies TSMA name. Mandatory.
- **function**: Specifies TSMA function. Mandatory.
- **interval**: Specifies TSMA interval. Mandatory.
- **sliding**: Specifies time offset for TSMA window. Mandatory.
- **custom**: Specifies custom configurations to attach to the end of the TSMA creation statement. Optional.
- **start_when_inserted**: Specifies the number of inserted rows after which TSMA is started. Optional. The default value is 0.
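For example, a `tsmas` entry might be sketched as follows; the name, function, and window values are invented purely for illustration:

```
"tsmas": [
  {
    "name": "tsma1",
    "function": "avg(current)",
    "interval": "10s",
    "sliding": "10s",
    "custom": "watermark 1m",
    "start_when_inserted": 100
  }
]
```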
#### Tag and Data Column Configuration Parameters

The configuration parameters for specifying super table tag columns and data columns are in `columns` and `tags` in `super_tables`, respectively.
- **values**: the value pool for an nchar/binary column/tag; the actual values are chosen randomly from this list.
- **sma**: Insert the column into the BSMA. Enter `yes` or `no`. The default is `no`.
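As a sketch, column and tag definitions might look like the following. The `type`, `name`, `count`, and `len` attributes come from the full attribute list (partially elided above), and the concrete values are illustrative:

```
"columns": [
  { "type": "FLOAT", "name": "current", "count": 1 },
  { "type": "INT", "name": "voltage" },
  { "type": "BINARY", "name": "remark", "len": 16, "values": ["normal", "warning"], "sma": "no" }
],
"tags": [
  { "type": "INT", "name": "groupid" },
  { "type": "BINARY", "name": "location", "len": 24, "values": ["California.SanFrancisco", "California.LosAngeles"] }
]
```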
#### Insertion behavior configuration parameters

- **thread_count**: specify the number of threads to insert data. The default value is 8.
- **confirm_parameter_prompt**: a switch parameter that requires the user to confirm after the prompt before continuing. The default value is false.

- **interlace_rows**: enables interleaved insertion mode and specifies the number of rows to insert into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table completely before moving on to the next.
  This parameter can also be configured in `super_tables`; if so, the configuration in `super_tables` takes precedence and overrides the global setting.

- **insert_interval** :
  specifies the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `interlace_rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion threads will wait for the interval specified by this value before proceeding to the next round of writes.
  This parameter can also be configured in `super_tables`; if so, the configuration in `super_tables` takes precedence and overrides the global setting.

- **num_of_records_per_req** :
  the number of rows written per request to TDengine. The default value is 30000. If it is set too large, the TDengine client driver returns an error message, and you need to lower this parameter to meet the writing requirements.

- **prepare_rand**: the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
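Taken together, the insertion behavior portion of a configuration might be sketched as follows (the values are illustrative):

```
"thread_count": 8,
"confirm_parameter_prompt": "no",
"interlace_rows": 100,
"insert_interval": 10,
"num_of_records_per_req": 30000,
"prepare_rand": 10000
```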
### Query scenario configuration parameters

`filetype` must be set to `query` in the query scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this parameter and other general parameters.

#### Configuration parameters for executing the specified query statement

#### Configuration parameters for querying super tables

The configuration parameters of the super table query are set in `super_table_query` in the configuration file.

- **threads**: the number of threads executing the query SQL. The default value is 1.

- **sqls**:
  - **sql**: the SQL command to be executed. For the query SQL of a super table, keep "xxxx" in the SQL command; the program automatically replaces it with all the sub-table names of the super table.
  - **result**: the file to save the query result. If not specified, taosBenchmark does not save the result.
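For example, a super table query section might be sketched as below; `stblname` (the target super table) is assumed from the full parameter list, and the SQL is illustrative:

```
"super_table_query": {
  "stblname": "meters",
  "threads": 1,
  "sqls": [
    {
      "sql": "select last_row(ts) from xxxx",
      "result": "./query_result.txt"
    }
  ]
}
```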
### Subscription scenario configuration parameters

`filetype` must be set to `subscribe` in the subscription scenario. See [General Configuration Parameters](#general-configuration-parameters) for details of this and other general parameters.

#### Configuration parameters for executing the specified subscription statement

The configuration parameters for subscribing to a sub-table or a generic table are set in `specified_table_query` in the configuration file.

- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again; "no" means continue the previous subscription. The default value is "no".

- **sqls**:
  - **sql**: the SQL command to be executed. Required.
  - **result**: the file to save the query result. If not specified, taosBenchmark does not save the result.

#### Configuration parameters for subscribing to supertables

The configuration parameters for subscribing to a super table are set in `super_table_query` in the configuration file.

- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again; "no" means continue the previous subscription. The default value is "no".

- **sqls**:
  - **sql**: the SQL command to be executed. Required. For the query SQL of a super table, keep "xxxx" in the SQL command; the program automatically replaces it with all the sub-table names of the super table.
  - **result**: the file to save the query result. If not specified, taosBenchmark does not save the result.
---
title: taosdump
description: "taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster."
---
## Introduction
taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster.
taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up the data records contained in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for the backup. If you do not specify a directory, taosdump backs up the data to the current directory by default.
If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means that the same path can only be used for one backup.
If you see such a prompt, please proceed carefully and follow best practices and relevant SOPs for data integrity, backup, and data security.
Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
## Installation
To use taosdump, download and install [taosTools](https://tdengine.com/assets-download/cloud/taosTools-2.1.3-Linux-x64.tar.gz). Before installing taosTools, please first download and install the [TDengine CLI](https://docs.tdengine.com/cloud/tools/cli/#installation).
Decompress the package and install.
```
tar -xzf taosTools-2.1.3-Linux-x64.tar.gz
cd taosTools-2.1.3   # the name of the extracted directory may vary by version
sudo ./install-taostools.sh
```
Set the `TDENGINE_CLOUD_DSN` environment variable.
```bash
export TDENGINE_CLOUD_DSN="<DSN>"
```
## Common usage scenarios
### taosdump backup data
1. Back up all databases: specify the `-A` or `--all-databases` parameter.
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up certain super or normal tables in the specified database: use `dbname stbname1 stbname2 tbname1 tbname2 ...`, where the first argument is the database name (only one database is supported) and the second and subsequent arguments are the names of super or normal tables in that database, separated by spaces. See the example commands after this list.
4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is data that TDengine generates for its own operation, and taosdump does not back up the log database by default. If you need to back it up, use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without escape characters ("loose" mode), which can reduce backup time and backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions apply, use the default parameters for a "strict" mode backup. See the [official documentation](https://docs.tdengine.com/taos-sql/escape/) for a description of escape characters.
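For example, the following commands illustrate scenarios 2 and 3; the paths, database names, and table names are placeholders:

```
# back up two databases into an empty directory
taosdump -o /var/backup/taos -D db1,db2

# back up the super table meters and the normal table d0 from database power
taosdump -o /var/backup/power power meters d0
```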
<!-- exclude -->
:::tip
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump parses the schema only.
- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", then try changing the `-B` parameter to a smaller value.
- The export of taosdump does not support resuming from an interruption. Therefore, if the taosdump process terminates unexpectedly, delete all related files that have been exported or generated.
- The import of taosdump supports resuming from an interruption, but when the process resumes, you will receive some "table already exists" messages, which could be ignored.
:::
<!-- exclude-end -->
### taosdump recover data
Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As noted above, you should not use the same directory to back up different data sets, and you should not back up the same data set multiple times in the same path; otherwise, the backup data will be overwritten or duplicated.
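For example, to restore from the backup directory used in the sketch above (an illustrative path):

```
taosdump -i /var/backup/taos
```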
<!-- exclude -->
:::tip
taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
:::
<!-- exclude-end -->
## Detailed command-line parameter list
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
or: taosdump [OPTION...] --databases db1,db2,...
or: taosdump [OPTION...] --all-databases
or: taosdump [OPTION...] -i inpath
or: taosdump [OPTION...] -o outpath
-h, --host=HOST Server host from which to dump data. Default is
localhost.
-p, --password User password to connect to server. Default is
taosdata.
-P, --port=PORT Port to connect
-u, --user=USER User name used to connect to server. Default is
root.
-c, --config-dir=CONFIG_DIR Configure directory. Default is /etc/taos
-i, --inpath=INPATH Input file path.
-o, --outpath=OUTPATH Output file path.
-r, --resultFile=RESULTFILE DumpOut/In Result file path and name.
-a, --allow-sys Allow to dump system database
-A, --all-databases Dump all databases.
-D, --databases=DATABASES Dump listed databases. Use comma to separate
database names.
-N, --without-property Dump database without its properties.
-s, --schemaonly Only dump table schemas.
-y, --answer-yes Input yes for prompt. It will skip data file
checking!
-d, --avro-codec=snappy Choose an avro codec among null, deflate, snappy,
and lzma.
-S, --start-time=START_TIME Start time to dump. Either epoch or
ISO8601/RFC3339 format is acceptable. ISO8601
format example: 2017-10-01T00:00:00.000+0800 or
2017-10-0100:00:00:000+0800 or '2017-10-01
00:00:00.000+0800'
-E, --end-time=END_TIME End time to dump. Either epoch or ISO8601/RFC3339
format is acceptable. ISO8601 format example:
2017-10-01T00:00:00.000+0800 or
2017-10-0100:00:00.000+0800 or '2017-10-01
00:00:00.000+0800'
-B, --data-batch=DATA_BATCH Number of data per query/insert statement when
backup/restore. Default value is 16384. If you see
'error actual dump .. batch ..' when backup or if
you see 'WAL size exceeds limit' error when
restore, please adjust the value to a smaller one
and try. The workable value is related to the
length of the row and type of table schema.
-I, --inspect inspect avro file content and print on screen
-L, --loose-mode Use loose mode if the table name and column name
use letter and number only. Default is NOT.
-n, --no-escape No escape char '`'. Default is using it.
-T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
8.
-C, --cloud=CLOUD_DSN specify a DSN to access TDengine cloud service
-R, --restful Use RESTful interface to connect TDengine
-t, --timeout=SECONDS The timeout seconds for websocket to interact.
-g, --debug Print debug info.
-?, --help Give this help list
--usage Give a short usage message
-V, --version Print program version
Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
Report bugs to <support@taosdata.com>.
```
# FAQ
cloud-example/connect/bin
cloud-example/connect/obj
cloud-example/usage/bin
cloud-example/usage/obj
cloud-example/stmt/bin
cloud-example/stmt/obj
.vs
*.sln
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30114.105
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "connect", "connect\connect.csproj", "{4006CF0C-17BE-4508-9682-A85298F8C92D}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "usage", "usage\usage.csproj", "{243C420F-FC47-4F21-B81E-83CDE91F2D47}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "stmt", "stmt\stmt.csproj", "{B6907CB6-41CB-4644-AEE1-551456EADE12}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Debug|Any CPU.Build.0 = Debug|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.ActiveCfg = Release|Any CPU
{4006CF0C-17BE-4508-9682-A85298F8C92D}.Release|Any CPU.Build.0 = Release|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Debug|Any CPU.Build.0 = Debug|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.ActiveCfg = Release|Any CPU
{243C420F-FC47-4F21-B81E-83CDE91F2D47}.Release|Any CPU.Build.0 = Release|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B6907CB6-41CB-4644-AEE1-551456EADE12}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
EndGlobal
using System;
using TDengineWS.Impl;
namespace Cloud.Examples
{
public class ConnectExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
Connect(dsn);
}
public static void Connect(string dsn)
{
// get a connection
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
else
{
Console.WriteLine("Establish connect success.");
}
// do something ...
// close the connection
LibTaosWS.WSClose(conn);
}
}
}
\ No newline at end of file
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
using System;
using TDengineWS.Impl;
using TDengineDriver;
using System.Runtime.InteropServices;
namespace Cloud.Examples
{
public class STMTExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
IntPtr conn = Connect(dsn);
// assume table has been created.
// CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)
string insert = "insert into ? using test.meters tags(?,?) values(?,?,?,?)";
// Init STMT
IntPtr stmt = LibTaosWS.WSStmtInit(conn);
if (stmt != IntPtr.Zero)
{
// Prepare SQL
int code = LibTaosWS.WSStmtPrepare(stmt, insert);
ValidSTMTStep(code, stmt, "WSStmtPrepare()");
// Bind child table name and tags
TAOS_MULTI_BIND[] tags = new TAOS_MULTI_BIND[2] { WSMultiBind.WSBindBinary(new string[] { "California.LosAngeles" }), WSMultiBind.WSBindInt(new int?[] { 6 }) };
code = LibTaosWS.WSStmtSetTbnameTags(stmt, "test.d1005",tags, 2);
ValidSTMTStep(code, stmt, "WSStmtSetTbnameTags()");
// bind column value
TAOS_MULTI_BIND[] data = new TAOS_MULTI_BIND[4];
data[0] = WSMultiBind.WSBindTimestamp(new long[] { 1538551000000, 1538552000000, 1538553000000, 1538554000000, 1538555000000 });
data[1] = WSMultiBind.WSBindFloat(new float?[] { 10.30000F, 10.30000F, 11.30000F, 10.30000F, 10.80000F });
data[2] = WSMultiBind.WSBindInt(new int?[] { 218, 219, 221, 222, 223 });
data[3] = WSMultiBind.WSBindFloat(new float?[] { 0.28000F, 0.29000F, 0.30000F, 0.31000F, 0.32000F });
code = LibTaosWS.WSStmtBindParamBatch(stmt, data, 4);
ValidSTMTStep(code, stmt, "WSStmtBindParamBatch");
code = LibTaosWS.WSStmtAddBatch(stmt);
ValidSTMTStep(code, stmt, "WSStmtAddBatch");
IntPtr affectRowPtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Int32)));
code = LibTaosWS.WSStmtExecute(stmt, affectRowPtr);
ValidSTMTStep(code, stmt, "WSStmtExecute");
Console.WriteLine("STMT affect rows:{0}", Marshal.ReadInt32(affectRowPtr));
LibTaosWS.WSStmtClose(stmt);
// Free allocated memory
Marshal.FreeHGlobal(affectRowPtr);
WSMultiBind.WSFreeTaosBind(tags);
WSMultiBind.WSFreeTaosBind(data);
}
// close the connection
LibTaosWS.WSClose(conn);
}
public static IntPtr Connect(string dsn)
{
// get a connection
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
return conn;
}
public static void ValidSTMTStep(int code, IntPtr wsStmt, string method)
{
if (code != 0)
{
throw new Exception($"{method} failed,reason: {LibTaosWS.WSErrorStr(wsStmt)}, code: {code}");
}
else
{
Console.WriteLine("{0} success", method);
}
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
using System;
using TDengineDriver;
using TDengineWS.Impl;
using System.Collections.Generic;
namespace Cloud.Examples
{
public class UsageExample
{
static void Main(string[] args)
{
string dsn = Environment.GetEnvironmentVariable("TDENGINE_CLOUD_DSN");
IntPtr conn = Connect(dsn);
InsertData(conn);
SelectData(conn);
// close the connection
LibTaosWS.WSClose(conn);
}
public static IntPtr Connect(string dsn)
{
// get a connection
IntPtr conn = LibTaosWS.WSConnectWithDSN(dsn);
if (conn == IntPtr.Zero)
{
throw new Exception($"get connection failed,reason:{LibTaosWS.WSErrorStr(conn)},code:{LibTaosWS.WSErrorNo(conn)}");
}
return conn;
}
public static void InsertData(IntPtr conn)
{
string createTable = "CREATE STABLE if not exists test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)";
string insertData = "INSERT INTO " +
"test.d1001 USING test.meters TAGS('California.SanFrancisco', 1) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)" +
"test.d1002 USING test.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)" +
"test.d1003 USING test.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)" +
"test.d1004 USING test.meters TAGS('California.LosAngeles', 4) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ";
// create the super table meters under the database named 'test'
IntPtr res = LibTaosWS.WSQuery(conn, createTable);
ValidQueryExecution(res);
// Free the query result when you are done with it.
LibTaosWS.WSFreeResult(res);
// insert data into the table created in previous step.
res = LibTaosWS.WSQuery(conn, insertData);
ValidQueryExecution(res);
// Free the query result when you are done with it.
LibTaosWS.WSFreeResult(res);
}
public static void SelectData(IntPtr conn)
{
string selectTable = "select * from test.meters";
IntPtr res = LibTaosWS.WSQueryTimeout(conn, selectTable,5000);
ValidQueryExecution(res);
// print meta
List<TDengineMeta> metas = LibTaosWS.WSGetFields(res);
foreach (var meta in metas)
{
Console.Write("{0} {1}({2})\t|", meta.name, meta.TypeName(), meta.size);
}
Console.WriteLine("");
List<object> dataSet = LibTaosWS.WSGetData(res);
for (int i = 0; i < dataSet.Count;)
{
for (int j = 0; j < metas.Count; j++)
{
Console.Write("{0}\t|\t", dataSet[i]);
i++;
}
Console.WriteLine("");
}
Console.WriteLine("");
// Free the query result when you are done with it.
LibTaosWS.WSFreeResult(res);
}
// Check whether the WSQuery()/WSQueryTimeout() call executed correctly.
public static void ValidQueryExecution(IntPtr res)
{
int code = LibTaosWS.WSErrorNo(res);
if (code != 0)
{
throw new Exception($"execute SQL failed: reason: {LibTaosWS.WSErrorStr(res)}, code:{code}");
}
}
}
}
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.1" />
</ItemGroup>
</Project>
module tdengine.com/example

go 1.17

require github.com/taosdata/driver-go/v3 latest
import (
	"fmt"
	"os"

	_ "github.com/taosdata/driver-go/v3/taosRestful"
)

func main() {
...
package com.taos.test;

import com.taos.example.CloudTutorial;
import com.taos.example.ConnectCloudExample;
import org.junit.Test;

import java.sql.SQLException;

public class TestAll {
    @Test
    public void testConnectCloudExample() throws SQLException {
        ConnectCloudExample.main(new String[]{});
    }

    // @Test
    // public void testCloudTutorial() throws SQLException {
    //     CloudTutorial.main(new String[]{});
    // }
}
const { options, connect } = require("@tdengine/rest");

async function test() {
  options.url = process.env.TDENGINE_CLOUD_URL;
  let conn = connect(options);
  let cursor = conn.cursor();
  try {
    let res = await cursor.query("show databases");
    res.toString();
  } catch (err) {
    console.log(err);
  }
}
...
url = os.environ["TDENGINE_CLOUD_URL"]
token = os.environ["TDENGINE_CLOUD_TOKEN"]
conn = taosrest.connect(url=url, token=token)
# ANCHOR_END: connect

# ANCHOR: insert
# create super table
conn.execute("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert multiple rows into multiple tables at once. subtables will be created automatically.
affected_row = conn.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
""")
print("affected_row", affected_row)  # 4
# ANCHOR_END: insert
...
...@@ -3,10 +3,11 @@ name = "cloud-example" ...@@ -3,10 +3,11 @@ name = "cloud-example"
version = "0.1.0" version = "0.1.0"
edition = "2021" edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies] [dependencies]
libtaos = { version="0.4.5-alpha.0", features=["rest"]} taos = { version = "*", default-features = false, features = ["ws"] }
tokio = { version = "1", features = ["full"]} tokio = { version = "1", features = ["full"]}
anyhow = "1.0.0" anyhow = "1.0.0"
serde = { version = "1", features = ["derive"]}
chrono = "*"
pretty_env_logger = "0.4"
log = "*"
use std::time::Duration;
use chrono::{DateTime, Local};
use taos::*;
// Query options 2, use deserialization with serde.
#[derive(Debug, serde::Deserialize)]
#[allow(dead_code)]
struct Record {
// deserialize timestamp to chrono::DateTime<Local>
ts: DateTime<Local>,
// float to f32
current: Option<f32>,
// int to i32
voltage: Option<i32>,
phase: Option<f32>,
}
async fn prepare(taos: Taos) -> anyhow::Result<()> {
let inserted = taos.exec_many([
"use tmq",
// create child table
"CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')",
// insert into child table
"INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
// insert with NULL values
"INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
// insert and automatically create table with tags if not exists
"INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119, 0.33)",
// insert many records in a single sql
"INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
]).await?;
assert_eq!(inserted, 6);
Ok(())
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
std::env::set_var("RUST_LOG", "debug");
pretty_env_logger::init();
let dsn = std::env::var("TDENGINE_CLOUD_DSN")?;
let builder = TaosBuilder::from_dsn(&dsn)?;
let taos = builder.build()?;
// prepare database
taos.exec_many([
"DROP TOPIC IF EXISTS tmq_meters",
"USE tmq",
"CREATE STABLE IF NOT EXISTS `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(16))",
"CREATE TOPIC tmq_meters with META AS DATABASE tmq"
])
.await?;
let task = tokio::spawn(prepare(taos));
tokio::time::sleep(Duration::from_secs(1)).await;
// subscribe
let dsn2 = format!("{dsn}&group.id=test");
dbg!(&dsn2);
let tmq = TmqBuilder::from_dsn(dsn2)?;
let mut consumer = tmq.build()?;
consumer.subscribe(["tmq_meters"]).await?;
println!("start subscription");
{
let mut stream = consumer.stream();
while let Some((offset, message)) = stream.try_next().await? {
// get information from offset
// the topic
let topic = offset.topic();
// the vgroup id, like partition id in kafka.
let vgroup_id = offset.vgroup_id();
println!("* in vgroup id {vgroup_id} of topic {topic}\n");
if let Some(data) = message.into_data() {
while let Some(block) = data.fetch_raw_block().await? {
// one block for one table, get table name if needed
let name = block.table_name();
let records: Vec<Record> = block.deserialize().try_collect()?;
println!(
"** table: {}, got {} records: {:#?}\n",
name.unwrap(),
records.len(),
records
);
}
}
consumer.commit(offset).await?;
}
}
consumer.unsubscribe().await;
task.await??;
Ok(())
}
use anyhow::Result;
use taos::*;

#[tokio::main]
async fn main() -> Result<()> {
    let dsn = std::env::var("TDENGINE_CLOUD_DSN")?;
    let builder = TaosBuilder::from_dsn(dsn)?;
    let conn = builder.build()?;
    //ANCHOR: insert
    conn.exec("DROP DATABASE IF EXISTS power").await?;
    conn.exec("CREATE DATABASE power").await?;
    // ... (the INSERT statements that populate power.meters are elided here)
    let mut result = conn.query("SELECT ts, current FROM power.meters LIMIT 2").await?;
    // ANCHOR_END: query
    // ANCHOR: meta
    let fields = result.fields();
    for column in fields {
        println!("name:{} bytes: {}", column.name(), column.bytes());
    }
    // name:ts bytes: 8
    // name:current bytes: 4
    // ANCHOR_END: meta
    // ANCHOR: iter
    let mut rows = result.rows();
    while let Some(row) = rows.try_next().await? {
        for (name, value) in row {
            println!("got value of {}: {}", name, value);
        }
    }
    // 2018-10-03 14:38:05.000 10.3
    // 2018-10-03 14:38:15.000 12.6
    // ANCHOR_END: iter
...
use anyhow::Result;
use taos::*;

#[tokio::main]
async fn main() -> Result<()> {
    let dsn = std::env::var("TDENGINE_CLOUD_DSN")?;
    let taos = TaosBuilder::from_dsn(dsn)?.build()?;
    let _ = taos.query("show databases").await?;
    println!("Connected");
    Ok(())
}