# taosDump
taosdump is a tool that backs up data from a running TDengine cluster and restores the backed-up data to the same, or another, running TDengine cluster.
taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up the data records in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for the backup; if you do not specify a directory, taosdump backs up the data to the current directory by default.
If the specified location already contains data files, taosdump will prompt the user and exit immediately to avoid overwriting data. This means that the same path can only be used for one backup.
Please be careful if you see such a prompt, and make sure you follow best practices and relevant SOPs for data integrity, backup, and data security.
Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
## Installation
There are two ways to install taosdump:
- Install the official taosTools installer. You can find taosTools on the [All download links](https://www.tdengine.com/all-downloads) page; download and install it.
- Compile and install taos-tools separately. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
## Common usage scenarios
### taosdump backup data
1. Back up all databases: specify the `-A` or `--all-databases` parameter (example invocations follow this list).
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up certain super or normal tables in a specified database: use `dbname stbname1 stbname2 tbname1 tbname2 ...`. The first argument in this sequence is the database name, and only one database is supported; the second and subsequent arguments are the names of super or normal tables in that database, separated by spaces.
4. Back up the system log database: TDengine clusters usually contain a system database named `log`, which holds data generated by TDengine's own operation, and taosdump does not back it up by default. If you need to back up the log database, use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without escape characters, in "loose" mode, which can reduce backup time and backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions are met, use the default parameters for a "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escape characters.
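The scenarios above correspond to invocations like the following; the output directories and database/table names are illustrative only:

```
# 1. back up all databases
taosdump -o /data/backup/all -A

# 2. back up two specified databases
taosdump -o /data/backup/dbs -D db1,db2

# 3. back up super table stb1 and normal table tb1 from database db1
taosdump -o /data/backup/tables db1 stb1 tb1

# 4. back up all databases, including the system log database
taosdump -o /data/backup/with-log -A -a
```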
<!-- exclude -->
:::tip
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will parse the schema only.
- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter; the default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", try changing the `-B` parameter to a smaller value.
:::
<!-- exclude-end -->
### taosdump recover data
Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As noted above, you should not use the same directory to back up different data sets, nor back up the same data set multiple times in the same path; otherwise, the backup data will be overwritten or backed up multiple times.
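A minimal restore invocation might look like the following, assuming the backup was previously written to `/data/backup/dbs` and the target cluster runs locally:

```
taosdump -i /data/backup/dbs -h localhost
```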
<!-- exclude -->
:::tip
taosdump internally uses the TDengine stmt binding API to write the restored data, with a default batch size of 16384 for better data recovery performance. If the backup data contains many columns, a "WAL size exceeds limit" error may occur; in that case, try adjusting the batch size to a smaller value with the `-B` parameter.
:::
<!-- exclude-end -->
## Detailed command-line parameter list
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
  or:  taosdump [OPTION...] --databases db1,db2,...
  or:  taosdump [OPTION...] --all-databases
  or:  taosdump [OPTION...] -i inpath
  or:  taosdump [OPTION...] -o outpath

  -h, --host=HOST            Server host from which to dump data. Default is
                             localhost.
  -p, --password             User password to connect to server. Default is
                             taosdata.
  -P, --port=PORT            Port to connect
  -u, --user=USER            User name used to connect to server. Default is
                             root.
  -c, --config-dir=CONFIG_DIR   Configure directory. Default is /etc/taos
  -i, --inpath=INPATH        Input file path.
  -o, --outpath=OUTPATH      Output file path.
  -r, --resultFile=RESULTFILE   DumpOut/In Result file path and name.
  -a, --allow-sys            Allow to dump system database
  -A, --all-databases        Dump all databases.
  -D, --databases=DATABASES  Dump listed databases. Use comma to separate
                             database names.
  -N, --without-property     Dump database without its properties.
  -s, --schemaonly           Only dump table schemas.
  -y, --answer-yes           Input yes for prompt. It will skip data file
                             checking!
  -d, --avro-codec=snappy    Choose an avro codec among null, deflate, snappy,
                             and lzma.
  -S, --start-time=START_TIME   Start time to dump. Either epoch or
                             ISO8601/RFC3339 format is acceptable. ISO8601
                             format example: 2017-10-01T00:00:00.000+0800 or
                             2017-10-0100:00:00.000+0800 or '2017-10-01
                             00:00:00.000+0800'
  -E, --end-time=END_TIME    End time to dump. Either epoch or ISO8601/RFC3339
                             format is acceptable. ISO8601 format example:
                             2017-10-01T00:00:00.000+0800 or
                             2017-10-0100:00:00.000+0800 or '2017-10-01
                             00:00:00.000+0800'
  -B, --data-batch=DATA_BATCH   Number of data per query/insert statement when
                             backup/restore. Default value is 16384. If you see
                             'error actual dump .. batch ..' when backup or if
                             you see 'WAL size exceeds limit' error when
                             restore, please adjust the value to a smaller one
                             and try. The workable value is related to the
                             length of the row and type of table schema.
  -I, --inspect              inspect avro file content and print on screen
  -L, --loose-mode           Use loose mode if the table name and column name
                             use letter and number only. Default is NOT.
  -n, --no-escape            No escape char '`'. Default is using it.
  -T, --thread-num=THREAD_NUM   Number of thread for dump in file. Default is
                             5.
  -g, --debug                Print debug info.
  -?, --help                 Give this help list
      --usage                Give a short usage message
  -V, --version              Print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

Report bugs to <support@taosdata.com>.
```
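As an illustration of the `-S`/`-E` options above, a time-bounded dump of a single database might look like this; the times and names are hypothetical:

```
taosdump -o /data/backup/jan -D db1 \
  -S 2022-01-01T00:00:00.000+0800 \
  -E 2022-02-01T00:00:00.000+0800
```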
# Prometheus
Prometheus is a widely used open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, and it has a very active developer and user community.
Prometheus provides `remote_write` and `remote_read` interfaces to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also provides support for these two interfaces.
With proper configuration, Prometheus data can be stored in TDengine via the `remote_write` interface, and data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient storage and query performance and its clustering capabilities for time-series data.
## Prerequisites
Writing Prometheus data to TDengine requires the following preparations.
- The TDengine cluster is deployed and functioning properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- Prometheus has been installed. Please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/) for installing Prometheus
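Before editing the Prometheus configuration, you can optionally confirm that the taosAdapter REST service is reachable. This quick check assumes the default port and credentials:

```
curl -u root:taosdata -d "show databases;" http://localhost:6041/rest/sql
```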
## Configuration steps
Configuring Prometheus is done by editing the Prometheus configuration file prometheus.yml (default location `/etc/prometheus/prometheus.yml`).
### Configuring third-party database addresses
Point the `remote_read url` and `remote_write url` to the domain name or IP address of the server running the taosAdapter service and the REST service port (taosAdapter uses 6041 by default), followed by the name of the database to write to in TDengine, so that the URLs take the following form:
- remote_read url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
- remote_write url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`
### Configure Basic authentication
- username: `<TDengine's username>`
- password: `<TDengine's password>`
### Example configuration of remote_write and remote_read related sections in prometheus.yml file
```yaml
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```
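After saving prometheus.yml, restart Prometheus so that the new remote endpoints take effect (this assumes Prometheus runs as a systemd service):

```
sudo systemctl restart prometheus
```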
## Verification method
After restarting Prometheus, you can refer to the following example to verify that data is written from Prometheus to TDengine and can be read out correctly.
### Use the TDengine CLI to query the written data
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
test | 2022-04-12 08:07:58.756 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
log | 2022-04-20 07:19:50.260 | 2 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
prometheus_data | 2022-04-20 07:21:09.202 | 158 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
db | 2022-04-15 06:37:08.512 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
Query OK, 4 row(s) in set (0.000585s)
taos> use prometheus_data;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
metrics | 2022-04-20 07:21:09.209 | 2 | 1 | 1389 |
Query OK, 1 row(s) in set (0.000487s)
taos> select * from metrics limit 10;
ts | value | labels |
=============================================================================================
2022-04-20 07:21:09.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:14.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:19.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:24.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:29.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:09.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:14.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:19.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:24.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
2022-04-20 07:21:29.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
Query OK, 10 row(s) in set (0.011146s)
```
### Use promql-cli to read data from TDengine via remote_read
Install promql-cli:
```
go install github.com/nalbury/promql-cli@latest
```
With the TDengine and taosAdapter services running, query the Prometheus data:
```
ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
JOB VALUE TIMESTAMP
prometheus 1 2022-04-20T08:05:26Z
node 1 2022-04-20T08:05:26Z
```
Stop the taosAdapter service and query the Prometheus data again to verify:
```
ubuntu@shuduo-1804 ~ $ sudo systemctl stop taosadapter.service
ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
VALUE TIMESTAMP
```
# Telegraf
Telegraf is a popular open-source metrics collection software. Telegraf can collect operational metrics from various components without requiring custom collection scripts, reducing the difficulty of data acquisition.
Telegraf data can be written to TDengine by simply pointing Telegraf's output configuration to the corresponding taosAdapter URL and modifying several configuration items. Once Telegraf data is in TDengine, you can take advantage of TDengine's efficient storage and query performance and its clustering capabilities for time-series data.
## Prerequisites
Writing Telegraf data to TDengine requires the following preparations.
- The TDengine cluster is deployed and functioning properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- Telegraf has been installed. Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) for Telegraf installation.
## Configuration steps
In the Telegraf configuration file (default location `/etc/telegraf/telegraf.conf`), add an `outputs.http` section.
```
[[outputs.http]]
url = "http://<taosAdapter's host>:<REST service port>/influxdb/v1/write?db=<database name>"
...
username = "<TDengine's username>"
password = "<TDengine's password>"
...
```
Replace <taosAdapter's host\> with the domain name or IP address of the server running the taosAdapter service, <REST service port\> with the REST service port (default 6041), <TDengine's username\> and <TDengine's password\> with the actual credentials of the running TDengine, and <database name\> with the name of the database in TDengine where you want to store the Telegraf data.
An example is as follows.
```
[[outputs.http]]
url = "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf"
method = "POST"
timeout = "5s"
username = "root"
password = "taosdata"
data_format = "influx"
influx_max_line_bytes = 250
```
## Verification method
Restart the Telegraf service:
```
sudo systemctl restart telegraf
```
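Optionally, you can run Telegraf in the foreground for a single collect-and-write cycle to confirm that the `outputs.http` plugin can reach taosAdapter; Telegraf's `--once` flag performs one round and exits:

```
telegraf --config /etc/telegraf/telegraf.conf --once
```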
Use the TDengine CLI to verify that Telegraf writes data to TDengine correctly and that it can be read out:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
telegraf | 2022-04-20 08:47:53.488 | 22 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
log | 2022-04-20 07:19:50.260 | 9 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
Query OK, 2 row(s) in set (0.002401s)
taos> use telegraf;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
swap | 2022-04-20 08:47:53.532 | 7 | 1 | 1 |
cpu | 2022-04-20 08:48:03.488 | 11 | 2 | 5 |
system | 2022-04-20 08:47:53.512 | 8 | 1 | 1 |
diskio | 2022-04-20 08:47:53.550 | 12 | 2 | 15 |
kernel | 2022-04-20 08:47:53.503 | 6 | 1 | 1 |
mem | 2022-04-20 08:47:53.521 | 35 | 1 | 1 |
processes | 2022-04-20 08:47:53.555 | 12 | 1 | 1 |
disk | 2022-04-20 08:47:53.541 | 8 | 5 | 2 |
Query OK, 8 row(s) in set (0.000521s)
taos> select * from telegraf.system limit 10;
ts | load1 | load5 | load15 | n_cpus | n_users | uptime | uptime_format | host |
=============================================================================================================================================================================================================================================
2022-04-20 08:47:50.000000000 | 0.000000000 | 0.050000000 | 0.070000000 | 4 | 1 | 5533 | 1:32 | shuduo-1804 |
2022-04-20 08:48:00.000000000 | 0.000000000 | 0.050000000 | 0.070000000 | 4 | 1 | 5543 | 1:32 | shuduo-1804 |
2022-04-20 08:48:10.000000000 | 0.000000000 | 0.040000000 | 0.070000000 | 4 | 1 | 5553 | 1:32 | shuduo-1804 |
Query OK, 3 row(s) in set (0.013269s)
```
# EMQX Broker
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker. You can write MQTT data directly to TDengine without any code; you only need to set up "rules" in the EMQX Dashboard to create a simple configuration. EMQX supports saving data to TDengine by sending data to a web service, and in the Enterprise Edition it also provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
## Prerequisites
The following preparations are required for EMQX to add TDengine data sources correctly.
- The TDengine cluster is deployed and working properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- If you use the simulated writer described later, you need to install the appropriate version of Node.js; v12 is recommended.
## Install and start EMQX
Depending on the current operating system, users can download the installation package from the [EMQX official website](https://www.emqx.io/downloads) and execute the installation. After installation, use `sudo emqx start` or `sudo systemctl start emqx` to start the EMQX service.
## Create Database and Table
In this step, we create the appropriate database and table schema in TDengine to receive the MQTT data. Open the TDengine CLI and execute the SQL below:
```sql
CREATE DATABASE test;
USE test;
CREATE TABLE sensor_data (ts TIMESTAMP, temperature FLOAT, humidity FLOAT, volume FLOAT, pm10 FLOAT, pm25 FLOAT, so2 FLOAT, no2 FLOAT, co FLOAT, sensor_id NCHAR(255), area TINYINT, coll_time TIMESTAMP);
```
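You can confirm the schema from the shell before continuing; the TDengine CLI's `-s` option executes a statement and exits:

```
taos -s "DESCRIBE test.sensor_data;"
```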
Note: The table schema is based on the blog post [(in Chinese) Data Transfer, Storage, Presentation: EMQX + TDengine Builds an MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html). Subsequent operations follow the scenario in that post as well; please modify them according to your actual application scenario.
## Configuring EMQX Rules
Since the configuration interface of EMQX differs from version to version, here is v4.4.3 as an example. For other versions, please refer to the corresponding official documentation.
### Login EMQX Dashboard
Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
![TDengine Database EMQX login dashboard](./emqx/login-dashboard.webp)
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
![TDengine Database EMQX rule engine](./emqx/rule-engine.webp)
### Edit SQL fields
Copy the SQL below and paste it into the SQL edit area:
```sql
SELECT
payload
FROM
"sensor/data"
```
![TDengine Database EMQX create rule](./emqx/create-rule.webp)
### Add "action handler"
![TDengine Database EMQX add action handler](./emqx/add-action-handler.webp)
### Add "Resource"
![TDengine Database EMQX create resource](./emqx/create-resource.webp)
Select "Data to Web Service" and click the "New Resource" button.
### Edit "Resource"
Select "WebHook" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
![TDengine Database EMQX edit resource](./emqx/edit-resource.webp)
### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization. If you use the default TDengine username and password, the value of the Authorization key is:
```
Basic cm9vdDp0YW9zZGF0YQ==
```
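This token is the Base64 encoding of `<username>:<password>` for HTTP Basic authentication, so you can generate one for your own credentials:

```
echo -n "root:taosdata" | base64
# cm9vdDp0YW9zZGF0YQ==
```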
Please refer to the [TDengine REST API documentation](/reference/rest-api/) for details on authorization.
Enter the rule engine replacement template in the message body:
```sql
INSERT INTO test.sensor_data VALUES(
now,
${payload.temperature},
${payload.humidity},
${payload.volume},
${payload.PM10},
${payload.pm25},
${payload.SO2},
${payload.NO2},
${payload.CO},
'${payload.id}',
${payload.area},
${payload.ts}
)
```
![TDengine Database EMQX edit action](./emqx/edit-action.webp)
Finally, click the "Create" button at the bottom left corner to save the rule.
## Compose program to mock data
```javascript
{{#include docs/examples/other/mock.js}}
```
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware cannot handle a larger number of concurrent clients.
![TDengine Database EMQX client num](./emqx/client-num.webp)
## Execute tests to simulate sending MQTT data
```
npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```
![TDengine Database EMQX run mock](./emqx/run-mock.webp)
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
![TDengine Database EMQX rule matched](./emqx/check-rule-matched.webp)
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
---
sidebar_label: Go
title: TDengine Go Connector
---
---
sidebar_label: Node.js
title: TDengine Node.js Connector
---
// mock.js
const mqtt = require('mqtt')
const Mock = require('mockjs')

const EMQX_SERVER = 'mqtt://localhost:1883'
const CLIENT_NUM = 10
const STEP = 5000 // data interval in ms
const AWAIT = 5000 // sleep time after each round of writes, to avoid writing too fast
const CLIENT_POOL = []

startMock()

function sleep(timer = 100) {
  return new Promise(resolve => {
    setTimeout(resolve, timer)
  })
}

async function startMock() {
  const now = Date.now()
  for (let i = 0; i < CLIENT_NUM; i++) {
    const client = await createClient(`mock_client_${i}`)
    CLIENT_POOL.push(client)
  }
  // last 24h, one data point every 5s
  const last = 24 * 3600 * 1000
  for (let ts = now - last; ts <= now; ts += STEP) {
    for (const client of CLIENT_POOL) {
      const mockData = generateMockData()
      const data = {
        ...mockData,
        id: client.options.clientId, // the clientId passed to mqtt.connect()
        area: 0,
        ts,
      }
      client.publish('sensor/data', JSON.stringify(data))
    }
    const dateStr = new Date(ts).toLocaleTimeString()
    console.log(`${dateStr} send success.`)
    await sleep(AWAIT)
  }
  console.log(`Done, use ${(Date.now() - now) / 1000}s`)
}

/**
 * Init a virtual mqtt client
 * @param {string} clientId ClientID
 */
function createClient(clientId) {
  return new Promise((resolve, reject) => {
    const client = mqtt.connect(EMQX_SERVER, {
      clientId,
    })
    client.on('connect', () => {
      console.log(`client ${clientId} connected`)
      resolve(client)
    })
    client.on('reconnect', () => {
      console.log('reconnect')
    })
    client.on('error', (e) => {
      console.error(e)
      reject(e)
    })
  })
}

/**
 * Generate mock data
 */
function generateMockData() {
  return {
    "temperature": parseFloat(Mock.Random.float(22, 100).toFixed(2)),
    "humidity": parseFloat(Mock.Random.float(12, 86).toFixed(2)),
    "volume": parseFloat(Mock.Random.float(20, 200).toFixed(2)),
    "PM10": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
    "pm25": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
    "SO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
    "NO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
    "CO": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
    "area": Mock.Random.integer(0, 20), // overridden by the caller above
    "ts": 1596157444170, // overridden by the caller above
  }
}