Commit 37824d3e authored by zhaoyanggh

doc: modify cloud doc for promethus and telegraf

Parent 7142daca
---
sidebar_label: Prometheus
title: Prometheus for TDengine Cloud
---
Prometheus is a widely used open-source monitoring and alerting system. It joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes and has a very active developer and user community.
Prometheus provides `remote_write` and `remote_read` interfaces to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also provides support for these two interfaces.
With proper configuration, Prometheus data can be stored in TDengine via the `remote_write` interface, and data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Install Prometheus

Please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/) for installing Prometheus.
## Configuration steps
Prometheus is configured by editing its configuration file `prometheus.yml` (default location `/etc/prometheus/prometheus.yml`).
### Configuring third-party database addresses

Point the `remote_read url` and `remote_write url` to the domain name or IP address of the server running the taosAdapter service, the REST service port (taosAdapter uses 6041 by default), and the name of the database to write to in TDengine, so that the URLs take the following form:

- remote_read url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
- remote_write url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`
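As a quick sanity check, the two endpoint URLs can be assembled programmatically. A minimal Python sketch, assuming a local taosAdapter on the default port and the example database name `prometheus_data`:

```python
# Illustrative values only; substitute your own taosAdapter host,
# REST service port, and target database name.
host = "localhost"
port = 6041
db = "prometheus_data"

# Endpoint patterns taken from the list above.
remote_write_url = f"http://{host}:{port}/prometheus/v1/remote_write/{db}"
remote_read_url = f"http://{host}:{port}/prometheus/v1/remote_read/{db}"

print(remote_write_url)
print(remote_read_url)
```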
### Configure Basic authentication

- username: `<TDengine's username>`
- password: `<TDengine's password>`
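For reference, Prometheus transmits these credentials as a standard HTTP `Authorization: Basic` header. A small Python sketch, using the example credentials `root`/`taosdata` from this guide, shows the header value taosAdapter will receive:

```python
import base64

# Example credentials from this guide; replace with your TDengine username/password.
username, password = "root", "taosdata"

# HTTP Basic auth: base64 of "username:password".
encoded = base64.b64encode(f"{username}:{password}".encode()).decode()
auth_header = f"Basic {encoded}"
print(auth_header)  # Basic cm9vdDp0YW9zZGF0YQ==
```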
### Example configuration of the remote_write and remote_read sections in the prometheus.yml file
```yaml
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data?token=<token>"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data?token=<token>"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```
## Verification method
After restarting Prometheus, you can refer to the following example to verify that data is written from Prometheus to TDengine and can be read out correctly.
### Query and write data using TDengine CLI
```
taos> show databases;
              name              |      created_time       | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
...
taos> select * from metrics limit 10;
...
Query OK, 10 row(s) in set (0.011146s)
```
### Use promql-cli to read data from TDengine via remote_read
Install promql-cli:
```bash
go install github.com/nalbury/promql-cli@latest
```
With the TDengine and taosAdapter services running, query the Prometheus data:
```bash
promql-cli --host "<url>" "sum(up) by (job)"
JOB           VALUE    TIMESTAMP
prometheus    1        2022-04-20T08:05:26Z
node          1        2022-04-20T08:05:26Z
```
---
sidebar_label: Telegraf
title: Telegraf for TDengine Cloud
---
Telegraf is a popular open-source metrics collection tool. Telegraf can collect operational metrics from a wide range of components without requiring custom collection scripts, reducing the difficulty of data acquisition.
Telegraf's data can be written to TDengine by simply adding an output configuration pointing to the corresponding taosAdapter URL and modifying several configuration items. Storing Telegraf data in TDengine takes advantage of TDengine's efficient storage and query performance and clustering capabilities for time-series data.
## Install Telegraf

Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) for Telegraf installation.
## Configuration steps
In the Telegraf configuration file (default location `/etc/telegraf/telegraf.conf`), add an `outputs.http` section.
```
[[outputs.http]]
  url = "<url>/influxdb/v1/write?db=<database name>&token=<token>"
  method = "POST"
  timeout = "5s"
  data_format = "influx"
  influx_max_line_bytes = 250
```

Fill in `<url>` with the URL of your TDengine Cloud instance, `<token>` with your token, and `<database name>` with the name of the database in TDengine where you want to store Telegraf data.
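With `data_format = "influx"`, Telegraf POSTs metrics in the InfluxDB line protocol. A minimal Python sketch of that wire format (the measurement, tag, and field names here are purely illustrative):

```python
# Build one InfluxDB line-protocol record: measurement,tags fields timestamp(ns).
def to_influx_line(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_influx_line("cpu", {"host": "host1"}, {"usage_idle": 99.1}, 1650441600000000000)
print(line)  # cpu,host=host1 usage_idle=99.1 1650441600000000000
```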
## Verification method
Restart the Telegraf service:
......