We provide a few useful tools, such as taosBenchmark (formerly named taosdemo) and taosdump. They used to be part of TDengine. Starting with TDengine 2.4.0.0, taosBenchmark and taosdump are no longer released together with TDengine.
By default, compiling TDengine does not include taos-tools. You can use 'cmake .. -DBUILD_TOOLS=true' to have them compiled together with TDengine.
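For example, a typical out-of-tree build with the tools enabled might look like this (a minimal sketch; the build directory name is illustrative):
```bash
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true
make
```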
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
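As a reference, a sketch of the dependency installation, based on the package list in the taos-tools README at the time of writing (check the repository for the current list):
```bash
sudo apt install libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```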
Note: TDengine 2.3.x.0 and later use a component named taosAdapter to play the HTTP daemon role by default, instead of the HTTP daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you pull the latest TDengine source code into an existing codebase, please execute 'git submodule update --init --recursive' to pull the taosAdapter source code as well. Please install Go 1.14 or above to compile taosAdapter. If you meet difficulties regarding 'go mod', especially if you are in China, you can use a proxy to solve the problem.
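For example, one commonly used public Go module proxy can be configured as follows (goproxy.cn is one option; any compliant GOPROXY mirror works):
```bash
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```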
The following is the content of a typical query JSON example file.
```
...
}
```
The following parameters are specific to the query in the JSON file.
```
"query_times": the number of queries per query type
"query_mode": query data interface, "tosc": call TDengine's c interface; "resetful": use restfule interface. Options are available. Default is "taosc".
"specified_table_query": { query for the specified table
...
...
"threads": the number of threads to execute sqls concurrently, optional, default is 1. Each thread is responsible for a part of sub-tables and executes all sqls.
"sql": "select count(*) from xxxx". Query statement for all sub-tables in the super table, where the table name must be written as "xxxx" and the instance will be replaced with the sub-table name automatically.
"result": the name of the file to which the query result is written. Optional, the default is null, which means the query results are not written to a file.
```
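To show how these parameters fit together, here is a minimal sketch of a query JSON file; host, database, and SQL values are illustrative, and exact field support may vary between taosBenchmark versions:
```json
{
  "filetype": "query",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "databases": "db",
  "query_times": 2,
  "query_mode": "taosc",
  "specified_table_query": {
    "concurrent": 3,
    "sqls": [
      { "sql": "select last_row(*) from stb0", "result": "./query_res0.txt" }
    ]
  },
  "super_table_query": {
    "stblname": "stb0",
    "threads": 4,
    "sqls": [
      { "sql": "select count(*) from xxxx", "result": "./query_res1.txt" }
    ]
  }
}
```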
The following is a typical subscription JSON example file content.
```
...
...
}
```
The following are the meanings of the parameters specific to the subscription function.
```
"interval": interval for executing subscriptions, in seconds. Optional, default is 0.
"restart": subscription restart." yes": restart the subscription if it already exists, "no": continue the previous subscription. (Please note that the executing user needs to have read/write access to the dataDir directory)
"keepProgress": keep the progress of the subscription information. yes means keep the subscription information, no means don't keep it. The value is yes and restart is no to continue the previous subscriptions.
"resubAfterConsume": Used in conjunction with keepProgress to call unsubscribe after the subscription has been consumed the appropriate number of times and to subscribe again.
"result": the name of the file to which the query result is written. Optional, default is null, means the query result will not be written to the file. Note: The file to save the result after each sql statement cannot be renamed, and the file name will be appended with the thread number when generating the result file.
```
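As an illustration only, a minimal subscription JSON sketch assembled from the fields described above might look like this (connection values and SQL are illustrative; verify field names against your taosBenchmark version):
```json
{
  "filetype": "subscribe",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "databases": "db",
  "specified_table_query": {
    "concurrent": 1,
    "interval": 0,
    "restart": "yes",
    "keepProgress": "yes",
    "resubAfterConsume": 10,
    "sqls": [
      { "sql": "select * from stb0;", "result": "./subscribe_res0.txt" }
    ]
  }
}
```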
Conclusion
--
TDengine is a big data platform designed and optimized for IoT, Telematics, Industrial Internet, DevOps, etc. TDengine shows performance that far exceeds similar products, due to the innovative data storage and query engine design in the database kernel. With SQL syntax support and connectors for multiple programming languages (currently Java, Python, Go, C#, NodeJS, Rust, etc.), it is extremely easy to use and has zero learning cost. To facilitate operation and maintenance needs, we also provide data migration and monitoring functions and other related ecological tools and software.
TDengine supports multiple ways to write data, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQ MQTT Broker, HiveMQ Broker, CSV file, etc. Kafka, OPC, and other interfaces will be provided in the future. Data can be inserted as a single record or in batches, and data from one or multiple data collection points can be inserted at the same time. TDengine supports multi-thread insertion, out-of-order data insertion, and historical data insertion.
## <a class="anchor" id="sql"></a> Data Writing via SQL
...
...
```mysql
use prometheus;
select * from apiserver_request_latencies_bucket;
```
## <a class="anchor" id="telegraf"></a> Data Writing via Telegraf and taosAdapter
Please refer to the [official document](https://portal.influxdata.com/downloads/) for Telegraf installation.
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data insertion from Telegraf.
Configuration:
Please add the following lines to /etc/telegraf/telegraf.conf. Fill 'database name' with the name of the database where you want to store Telegraf data in TDengine, and fill in the TDengine server/cluster host, username, and password fields.
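A sketch of the lines to add, assuming taosAdapter's InfluxDB-compatible write endpoint on its default REST port 6041 (verify the URL, port, and parameters against your taosAdapter version):
```
[[outputs.http]]
  url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
  method = "POST"
  timeout = "5s"
  username = "<TDengine's username>"
  password = "<TDengine's password>"
  data_format = "influx"
  influx_max_line_bytes = 250
```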
Then restart telegraf:
```
sudo systemctl start telegraf
```
Now you can query the metrics data of Telegraf from TDengine.
Please find taosAdapter configuration and usage in the `taosadapter --help` output.
## <a class="anchor" id="statsd"></a> Data Writing via StatsD and taosAdapter
Please refer to the [official document](https://github.com/statsd/statsd) for StatsD installation.
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data insertion from StatsD.
Please add the following to the config.js file, filling 'host' and 'port' with the values used by TDengine and taosAdapter:
```
add "./backends/repeater" to backends section.
add { host:'<TDengine server/cluster host>', port: <port for StatsD>} to repeater section.
```
Example file:
```
{
port: 8125
, backends: ["./backends/repeater"]
, repeater: [{ host: '127.0.0.1', port: 6044}]
}
```
Please find taosAdapter configuration and usage in the `taosadapter --help` output.
## <a class="anchor" id="taosadapter2-telegraf"></a> Insert data via Bailongma 2.0 and Telegraf
**Notice:**
TDengine 2.3.0.0+ provides taosAdapter to support Telegraf data writing. Bailongma v2 will be abandoned and no longer maintained.
[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a popular open source tool for IT operation data collection. TDengine provides a simple tool, [Bailongma](https://github.com/taosdata/Bailongma), which only needs to be configured in Telegraf, without any coding, to write the data collected by Telegraf directly into TDengine and automatically create databases and related table entries in TDengine according to rules. The blog post [Use Docker Container to Quickly Build a Devops Monitoring Demo](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine.
### Compile blm_telegraf From Source Code
Users need to download the source code of [Bailongma](https://github.com/taosdata/Bailongma) from GitHub, then compile it and generate an executable using the Golang compiler. Before you start compiling, you need to complete the following preparations:
- A server running Linux OS
- Golang version 1.10 or higher installed
- An appropriate TDengine version. Because the TDengine client dynamic link library is used, it is necessary to install the same version of TDengine as on the server side; for example, if the server version is TDengine 2.0.0, install the same version on the Linux server where Bailongma is located (which can be the same server as TDengine, or a different one).
The Bailongma project has a folder, blm_telegraf, which holds the Telegraf writing API. The compiling process is as follows:
```bash
cd blm_telegraf
go build
```
If everything goes well, an executable of blm_telegraf will be generated in the corresponding directory.
### Install Telegraf
At the moment, TDengine supports Telegraf version 1.7.4 and above. Users can download the installation package from Telegraf's website according to their current operating system. The download address is: https://portal.influxdata.com/downloads
### Configure Telegraf
Modify the TDengine-related configurations in the Telegraf configuration file /etc/telegraf/telegraf.conf.
In the output plugins section, add the [[outputs.http]] configuration:
- url: the URL provided by the bailongma API service; please refer to the example section below
- data_format: "json"
- json_timestamp_units: "1ms"
In the agent section:
- hostname: the machine name that distinguishes different collection devices; it is necessary to ensure its uniqueness
- metric_batch_size: 100, the maximum number of records per batch that Telegraf is allowed to write. Increasing this number can reduce how frequently Telegraf sends requests.
For information on how to use Telegraf to collect data and more about using Telegraf, please refer to the official Telegraf [document](https://docs.influxdata.com/telegraf/v1.11/).
### Launch blm_telegraf
blm_telegraf has the following options, which can be set when launching to tune its configuration.
```sh
--host
The IP address of the TDengine server; the default is null.
--batch-size
blm_telegraf assembles the received telegraf data into a TDengine writing request. This parameter controls the number of data pieces carried in a single writing request sent to TDengine.
--dbname
Set a name for the database created in TDengine. blm_telegraf will automatically create a database named dbname in TDengine; the default value is prometheus.
--dbuser
Set the user name used to access TDengine; the default value is 'root'.
```
### Example
Launch an API service for blm_telegraf with the following command:
```bash
./blm_telegraf -host 127.0.0.1 -port 8089
```
Assuming that the IP address of the server where blm_telegraf is located is "10.1.2.3", the URL shall be added to the configuration file of telegraf as:
```yaml
url = "http://10.1.2.3:8089/telegraf"
```
### Query written data of telegraf
The format of the data generated by telegraf is as follows:
```json
{
  "fields": {
    "usage_guest": 0,
    "usage_guest_nice": 0,
    "usage_idle": 89.7897897897898,
    "usage_iowait": 0,
    "usage_irq": 0,
    "usage_nice": 0,
    "usage_softirq": 0,
    "usage_steal": 0,
    "usage_system": 5.405405405405405,
    "usage_user": 4.804804804804805
  },
  "name": "cpu",
  "tags": {
    "cpu": "cpu2",
    "host": "bogon"
  },
  "timestamp": 1576464360
}
```
Where the name field is the name of the time-series data collected by telegraf, and the tags field holds the tags of the time-series data, blm_telegraf automatically creates an STable in TDengine named after the time-series data, converts the tags into TDengine tag values, uses timestamp as the timestamp, and uses the fields values as the values of the time-series data. Therefore, in the TDengine client, you can check whether this data was successfully written with the following statements.
```mysql
use telegraf;
select * from cpu;
```
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
MQTT is a popular data transmission protocol in the IoT. TDengine can easily access the data received by an MQTT broker and write it to TDengine.
[EMQ](https://github.com/emqx/emqx) is an open source MQTT broker. With no coding needed, you only have to use "rules" in the EMQ Dashboard for simple configuration, and MQTT data can be written directly into TDengine. EMQ X supports storing data to TDengine by sending it to a web service, and also provides a native TDengine driver in the Enterprise Edition for direct data storage. Please refer to the [EMQ official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
## <a class="anchor" id="hivemq"></a> Data Writing via HiveMQ Broker
[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides Free Personal and Enterprise Edition versions. It is mainly used for enterprises, emerging machine-to-machine (M2M) communication, and internal transmission, meeting requirements for scalability, easy management, and security. HiveMQ provides an open source plug-in development kit. You can store data to TDengine via the HiveMQ extension-TDengine. Refer to the [HiveMQ extension-TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README.md) for more details.
case1<pxiao>: [TS-854] normal table batch insert with binding same table, different number of columns and timestamp in ascending order
case2<pxiao>: [TS-854] normal table batch insert with binding same table, different number of columns and timestamp in descending order
case3<pxiao>: [TS-854] normal table batch insert with binding same table, different number of columns and timestamp out of order
case4<pxiao>: [TS-854] normal table batch insert with binding same table, different number of columns and same timestamp
case5<pxiao>: [TS-854] normal table batch insert with binding different tables, different number of columns and timestamp in ascending order
case6<pxiao>: [TS-854] normal table batch insert with binding different tables, different number of columns and timestamp in descending order
case7<pxiao>: [TS-854] normal table batch insert with binding different tables, different number of columns and timestamp out of order
case8<pxiao>: [TS-854] normal table batch insert with binding different tables, different number of columns and same timestamp
case9<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns and timestamp in ascending order
case10<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns and timestamp in descending order
case11<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns and timestamp out of order
case12<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns and same timestamp
case13<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns and timestamp in ascending order
case14<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns and timestamp in descending order
case15<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns and timestamp out of order
case16<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns and same timestamp
case17<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns, different number of tags and timestamp in ascending order
case18<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns, different number of tags and timestamp in descending order
case19<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns, different number of tags and timestamp out of order
case20<pxiao>: [TS-854] sub table batch insert with binding same table, different number of columns, different number of tags and same timestamp
case21<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns, different number of tags and timestamp in ascending order
case22<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns, different number of tags and timestamp in descending order
case23<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns, different number of tags and timestamp out of order
case24<pxiao>: [TS-854] sub table batch insert with binding different tables, different number of columns, different number of tags and same timestamp
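For readers unfamiliar with the feature these cases exercise, the following plain-SQL sketch shows the shape of a batch insert binding different tables with different column sets and out-of-order timestamps (the cases above drive the equivalent through the parameter-binding interface; table and column names here are illustrative):
```mysql
-- one statement, two tables, different column sets, timestamps out of order
INSERT INTO t1 (ts, c1, c2) VALUES ('2021-07-13 10:00:02.000', 1, 2) ('2021-07-13 10:00:00.000', 3, 4)
            t2 (ts, c1) VALUES ('2021-07-13 10:00:01.000', 5);
```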