Unverified commit 65defda9, authored by sangshuduo, committed via GitHub

docs: 3rd parties in english (#12499)

* docs: 3rd parties doc in English

[TD-15558]

* fix link to broder

* docs: third parties doc in English

[TD-15558]
Parent: 1ee89e78
---
sidebar_label: Prometheus
title: Prometheus writing and reading
---
import Prometheus from "../14-reference/_prometheus.mdx"
Prometheus is a widely used open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as its second hosted project after Kubernetes, and it has a very active developer and user community.
Prometheus provides the `remote_write` and `remote_read` interfaces to use other database products as its storage engine. To enable users in the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also supports these two interfaces.
With proper configuration, Prometheus data can be stored in TDengine via the `remote_write` interface and queried back via the `remote_read` interface, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites

To write Prometheus data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- Prometheus is installed. Please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/) to install Prometheus.
## Configuration steps
<Prometheus />
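The configuration above boils down to pointing Prometheus's `remote_write` and `remote_read` sections at taosAdapter. A minimal sketch of the relevant `prometheus.yml` fragment follows — the endpoint paths, port 6041, database name `prometheus_data`, and the `root`/`taosdata` credentials are assumed taosAdapter defaults; verify them against your installation:

```yaml
# prometheus.yml (fragment) -- paths, port, and credentials are assumed taosAdapter defaults
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```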
## Verification method
After restarting Prometheus, refer to the following example to verify that data is written from Prometheus to TDengine and can be read back correctly.
### Use the TDengine CLI to query the written data
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
...

taos> select * from metrics limit 10;
...
Query OK, 10 row(s) in set (0.011146s)
```
### Use promql-cli to read data from TDengine via remote_read

Install promql-cli:
```
go install github.com/nalbury/promql-cli@latest
```
Query Prometheus data while the TDengine and taosAdapter services are running:
```
ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
...
prometheus 1 2022-04-20T08:05:26Z
node 1 2022-04-20T08:05:26Z
```
Stop the taosAdapter service and query the Prometheus data again to verify:
```
ubuntu@shuduo-1804 ~ $ sudo systemctl stop taosadapter.service
...
```
---
sidebar_label: Telegraf
title: Telegraf writing
---
import Telegraf from "../14-reference/_telegraf.mdx"
Telegraf is a very popular open-source metrics collection tool. In data collection and platform monitoring systems, Telegraf can collect the operational metrics of many components without requiring hand-written periodic collection scripts, lowering the difficulty of data acquisition.
Telegraf's data can be written to TDengine by simply pointing Telegraf's output configuration to the URL corresponding to taosAdapter and modifying several configuration items. Storing Telegraf data in TDengine takes full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites

To write Telegraf data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- Telegraf is installed. Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) to install Telegraf.
## Configuration steps
<Telegraf />
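In essence, the configuration step adds an HTTP output to `telegraf.conf` pointing at taosAdapter's InfluxDB-compatible write endpoint. A hedged sketch follows — the URL path, port 6041, database name `telegraf`, and credentials are assumed taosAdapter defaults:

```toml
# telegraf.conf (fragment) -- URL, database name, and credentials are assumptions
[[outputs.http]]
  url = "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf"
  method = "POST"
  timeout = "5s"
  username = "root"
  password = "taosdata"
  data_format = "influx"
```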
## Verification method
Restart the Telegraf service:
```
sudo systemctl restart telegraf
```
Use the TDengine CLI to verify that data is written from Telegraf to TDengine and can be read back correctly:
```
taos> show databases;
...
```
---
sidebar_label: collectd
title: collectd writing
---
import CollectD from "../14-reference/_collectd.mdx"
collectd is a daemon that collects system performance metrics. It provides various storage mechanisms for storing different values and periodically gathers statistics about the system while it runs. This information can help identify current system performance bottlenecks and predict future system load.

You can write the data collected by collectd to TDengine by simply pointing the collectd configuration to the domain name (or IP address) and corresponding port of the server running taosAdapter, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.

## Prerequisites

To write collectd data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- collectd is installed. Please refer to the [official documentation](https://collectd.org/download.shtml) to install collectd.

## Configuration steps
<CollectD />
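The configuration step amounts to pointing collectd's `network` plugin at the server running taosAdapter. A minimal sketch of the `collectd.conf` fragment — port 6045 is an assumed taosAdapter default for collectd data; check your taosAdapter configuration:

```
# collectd.conf (fragment) -- the port is an assumed taosAdapter default
LoadPlugin network
<Plugin network>
  Server "127.0.0.1" "6045"
</Plugin>
```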
## Verification method
Restart collectd:
```
sudo systemctl restart collectd
```
Use the TDengine CLI to verify that data is written from collectd to TDengine and can be read back correctly:
```
taos> show databases;
...
```
---
sidebar_label: StatsD
title: StatsD writing
---
import StatsD from "../14-reference/_statsd.mdx"
StatsD is a simple daemon for aggregating and summarizing application metrics. It has evolved rapidly in recent years into a de facto unified protocol for collecting application performance metrics.
You can write StatsD data to TDengine by simply filling in the domain name (or IP address) and corresponding port of the server running taosAdapter in the StatsD configuration file, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites

To write StatsD data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- StatsD is installed. Please refer to the [official documentation](https://github.com/statsd/statsd) to install StatsD.
## Configuration steps
<StatsD />
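Conceptually, the configuration step enables StatsD's repeater backend so received metrics are forwarded to taosAdapter. A hedged sketch of `config.js` — the taosAdapter port 6044 is an assumed default:

```javascript
// config.js (sketch) -- the taosAdapter port 6044 is an assumption
{
  port: 8125,                                    // port StatsD listens on for metrics
  backends: ["./backends/repeater"],             // forward received metrics unchanged
  repeater: [{ host: "127.0.0.1", port: 6044 }], // taosAdapter host and StatsD port
  repeaterProtocol: "udp4"
}
```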
## Verification method
Run StatsD:
```
$ node stats.js config.js &
...
20 Apr 09:54:41 - [8546] reading config file: exampleConfig.js
20 Apr 09:54:41 - server is up INFO
```
Use `nc` to write test data:
```
$ echo "foo:1|c" | nc -u -w0 127.0.0.1 8125
```
Use the TDengine CLI to verify that data is written from StatsD to TDengine and can be read back correctly:
```
Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
...
```
---
sidebar_label: icinga2
title: icinga2 writing
---
import Icinga2 from "../14-reference/_icinga2.mdx"
icinga2 is open-source host and network monitoring software, originally developed from the Nagios network monitoring application. Currently, icinga2 is distributed under the GNU GPL v2 license.
You can write the data collected by icinga2 to TDengine by simply modifying the icinga2 configuration to point to the server running taosAdapter and the corresponding port, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites

To write icinga2 data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- icinga2 is installed. Please refer to the [official documentation](https://icinga.com/docs/icinga-2/latest/doc/02-installation/) to install icinga2.
## Configuration steps
<Icinga2 />
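icinga2 ships an OpenTSDB writer feature, and the configuration step essentially enables it and points it at taosAdapter's OpenTSDB-compatible port. A sketch follows — the file path and port 6048 are assumptions; confirm them against your icinga2 and taosAdapter configuration:

```
# /etc/icinga2/features-enabled/opentsdb.conf (sketch) -- path and port are assumptions
object OpenTsdbWriter "opentsdb" {
  host = "127.0.0.1"
  port = 6048
}
```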
## Verification method
Restart taosAdapter:
```
sudo systemctl restart taosadapter
```
Restart icinga2:
```
sudo systemctl restart icinga2
```
Wait about 10 seconds, then use the TDengine CLI to query TDengine and verify that the corresponding database has been created and data has been written:
```
taos> show databases;
...
```

---
title: TCollector writing
---
import Tcollector from "../14-reference/_tcollector.mdx"
TCollector is part of OpenTSDB; it collects client logs and sends them to the database.

You can write the data collected by TCollector to TDengine by simply changing the TCollector configuration to point to the domain name (or IP address) and corresponding port of the server running taosAdapter, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Prerequisites

To write TCollector data to TDengine, the following preparations are required:
- The TDengine cluster is deployed and running properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- TCollector is installed. Please refer to the [official documentation](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector) to install TCollector.
## Configuration steps
<Tcollector />
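TCollector normally targets an OpenTSDB TSD; the configuration step redirects it to taosAdapter instead. As a sketch, this can be done either by editing the defaults in `collectors/etc/config.py` or with command-line flags — the flag names and port 6049 below are assumptions:

```shell
# point TCollector at taosAdapter instead of OpenTSDB (flags and port are assumptions)
sudo ./tcollector.py --host 127.0.0.1 --port 6049
```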
## Verification method
Restart taosAdapter:
```
sudo systemctl restart taosadapter
```
Run `sudo ./tcollector.py` manually:
Wait a few seconds, then use the TDengine CLI to check whether the corresponding database has been created in TDengine and data has been written:
```
taos> show databases;
...
```

---
title: EMQ Broker writing
---
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is open-source MQTT broker software. Without writing any code, you only need to do simple configuration with the "rules" in the EMQX Dashboard to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
## Prerequisites
The following preparations are required for EMQX to add TDengine data sources correctly.
- The TDengine cluster is deployed and working properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- If you use the simulated writer described later, you also need to install the appropriate version of Node.js; v12 is recommended.
## Install and start EMQX
Depending on the current operating system, users can download the installation package from the EMQX official website and execute the installation. The download address is as follows: <https://www.emqx.io/zh/downloads>. After installation, use `sudo emqx start` or `sudo systemctl start emqx` to start the EMQX service.
## Create the appropriate database and table structures in TDengine for receiving MQTT data
### Take the Docker installation of TDengine as an example
```bash
docker exec -it tdengine bash
taos
```
### Create Database and Table
```sql
create database test;
use test;
-- create the table:
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
Note: The table structure is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQ X + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are carried out with this blog scenario as an example. Please modify it according to the actual application scenario.
## Configuring EMQX Rules
Since the configuration interface of EMQX differs from version to version, here is only v4.4.3 as an example. For other versions, please refer to the corresponding official documentation.
### Log in to the EMQX Dashboard

Use your browser to open the URL http://IP:18083 and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
![img](./emqx/login-dashboard.png)
### Creating Rules
Select "Rule" under "Rule Engine" on the left and click the "Create" button:

![img](./emqx/rule-engine.png)
### Edit SQL fields
![img](./emqx/create-rule.png)
### Add "action handler"
![img](./emqx/add-action-handler.png)
### Add "Resource"
![img](./emqx/create-resource.png)
Select "Send Data to Web Service" and click the "New Resource" button.
### Edit "Resource"
Select "Send Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
![img](./emqx/edit-resource.png)
### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization authentication. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for the token format. Enter the rule-engine replacement template in the message body.
![img](./emqx/edit-action.png)
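One detail of the Authorization step can be made concrete: taosAdapter's REST interface uses HTTP Basic authentication, so the header value is just the Base64-encoded `user:password` pair. A small sketch, assuming the default `root`/`taosdata` credentials (this computes the header value; it is not part of the EMQX configuration itself):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value for taosAdapter's REST API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# With the default TDengine credentials:
print(basic_auth_header("root", "taosdata"))  # Basic cm9vdDp0YW9zZGF0YQ==
```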
## Write a program to mock data
```javascript
// mock.js
const mqtt = require('mqtt')
const Mock = require('mockjs')
const EMQX_SERVER = 'mqtt://localhost:1883'
const CLIENT_NUM = 10
const STEP = 5000 // Data interval in ms
const AWAIT = 5000 // Sleep time after data be written once to avoid data writing too fast
const CLIENT_POOL = []
startMock()
function sleep(timer = 100) {
return new Promise(resolve => {
setTimeout(resolve, timer)
})
}
async function startMock() {
const now = Date.now()
for (let i = 0; i < CLIENT_NUM; i++) {
const client = await createClient(`mock_client_${i}`)
CLIENT_POOL.push(client)
}
// last 24h every 5s
const last = 24 * 3600 * 1000
for (let ts = now - last; ts <= now; ts += STEP) {
for (const client of CLIENT_POOL) {
const mockData = generateMockData()
const data = {
...mockData,
id: client.clientId,
area: 0,
ts,
}
client.publish('sensor/data', JSON.stringify(data))
}
const dateStr = new Date(ts).toLocaleTimeString()
console.log(`${dateStr} send success.`)
await sleep(AWAIT)
}
console.log(`Done, use ${(Date.now() - now) / 1000}s`)
}
/**
* Init a virtual mqtt client
* @param {string} clientId ClientID
*/
function createClient(clientId) {
return new Promise((resolve, reject) => {
const client = mqtt.connect(EMQX_SERVER, {
clientId,
})
client.on('connect', () => {
console.log(`client ${clientId} connected`)
resolve(client)
})
client.on('reconnect', () => {
console.log('reconnect')
})
client.on('error', (e) => {
console.error(e)
reject(e)
})
})
}
/**
* Generate mock data
*/
function generateMockData() {
return {
"temperature": parseFloat(Mock.Random.float(22, 100).toFixed(2)),
"humidity": parseFloat(Mock.Random.float(12, 86).toFixed(2)),
"volume": parseFloat(Mock.Random.float(20, 200).toFixed(2)),
"PM10": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
"pm25": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
"SO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
"NO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
"CO": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
"area": Mock.Random.integer(0, 20),
"ts": 1596157444170,
}
}
```
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware cannot handle a larger number of concurrent clients.
![img](./emqx/client-num.png)
## Execute tests to simulate sending MQTT data
```
npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```
![img](./emqx/run-mock.png)
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
![img](./emqx/check-rule-matched.png)
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
![img](./emqx/check-result-in-taos.png)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for more details on how to use EMQX.