Commit 2161a23e authored by sangshuduo, committed by GitHub

Test/sangshuduo/td 13408 move example back (#10480)

* [TD-13408]<test>: move examples back

* [TD-13408]<test>: move examples back

* add examples/c to CMakeLists.txt

* update tests

* [TD-13408]<test>: move rust example back

* fix examples path for Windows

* fix tests/examples

* fix typo and spell mistakes

* fix typo and format

* fix format and typo
Parent ce0a4ee5
......@@ -315,7 +315,7 @@ For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the…
## <a class="anchor" id="emq"></a>EMQ Broker 直接写入
MQTT is a popular IoT data transfer protocol. [EMQ](https://github.com/emqx/emqx) is an open-source MQTT Broker: without writing any code, you only need to configure a "rule" in the EMQ Dashboard to write MQTT data directly into TDengine. EMQ X supports saving data to TDengine by sending it to a web service, and its enterprise edition also provides a native TDengine driver for direct writes. For details, see the [EMQ official documentation](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine).
## <a class="anchor" id="hivemq"></a>HiveMQ Broker 直接写入
......
......@@ -2,7 +2,7 @@
## <a class="anchor" id="grafana"></a>Grafana
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process requires no code development, and the contents of TDengine data tables can be visualized on dashboards. You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
### Install Grafana
......@@ -10,7 +10,7 @@ TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/
### Configure Grafana
The TDengine Grafana plugin is hosted on GitHub and can be downloaded from <https://github.com/taosdata/grafanaplugin/releases/latest>; the latest version is currently 3.1.3.
We recommend installing the plugin with the [`grafana-cli` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/).
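For example, the following command installs the 3.1.3 release linked above (a sketch; `--pluginUrl` is grafana-cli's flag for installing a plugin from a custom URL):
```bash
sudo -u grafana grafana-cli \
  --pluginUrl https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip \
  plugins install tdengine-datasource
```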
......@@ -40,7 +40,7 @@ Grafana 7.3+ / 8.x checks plugin signatures, so you also need to add the following in gra…
allow_loading_unsigned_plugins = tdengine-datasource
```
In a Docker environment, you can use the following environment variable to install and set up the TDengine plugin automatically:
```bash
GF_INSTALL_PLUGINS=https://github.com/taosdata/grafanaplugin/releases/download/v3.1.3/tdengine-datasource-3.1.3.zip;tdengine-datasource
......@@ -77,9 +77,9 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
![img](../images/connections/create_dashboard1.jpg)
As shown above, select the `TDengine` data source in Query and enter the SQL to run in the query box below. Details:
* INPUT SQL: enter the statement to query (the result set of this SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where from, to, and interval are built-in variables of the TDengine plugin, representing the query range and time interval taken from the Grafana panel. In addition to the built-in variables, custom template variables are also supported.
* ALIAS BY: sets an alias for the current query.
* GENERATE SQL: clicking this button automatically substitutes the variables and generates the statement that is finally executed.
......@@ -87,7 +87,7 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
![img](../images/connections/create_dashboard2.jpg)
> For how to create monitoring dashboards with Grafana, and for more information on using Grafana, see the official Grafana [documentation](https://grafana.com/docs/).
#### Import a Dashboard
......
# Rapidly build an IT DevOps visualization system with TDengine + Telegraf + Grafana
## Background
TDengine is a big data platform designed and optimized by TAOS Data for IoT, Connected Vehicles, Industrial IoT, and IT operations. Since it was open-sourced in July 2019, it has won over a large number of time-series data developers with its innovative data modeling design, fast installation, easy-to-use programming interface, and powerful data write and query performance.
IT operations monitoring data is usually time-sensitive, for example:
- System resource metrics: CPU, memory, IO, bandwidth, etc.
- Software system metrics: liveness, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other business-related metrics.
......@@ -13,23 +15,26 @@ IT operations monitoring data is usually time-sensitive, for example
![IT-DevOps-Solutions-Telegraf.png](../../images/IT-DevOps-Solutions-Telegraf.png)
## Installation steps
### Install Telegraf, Grafana, and TDengine
To install Telegraf, Grafana, and TDengine, please refer to the corresponding official documentation.
### Telegraf
Please refer to the [official documentation](https://portal.influxdata.com/downloads/).
### Grafana
Please refer to the [official documentation](https://grafana.com/grafana/download).
### TDengine
Download and install TDengine-server 2.3.0.0 or a later version from the TAOS Data official [download](http://taosdata.com/cn/all-downloads/) page.
## Set up the data chain
### Download the TDengine plugin into the Grafana plugin directory
```bash
......@@ -40,8 +45,10 @@ IT operations monitoring data is usually time-sensitive, for example
5. sudo systemctl restart grafana-server.service
```
### Modify /etc/telegraf/telegraf.conf
Add the following to /etc/telegraf/telegraf.conf, where database name is the database in TDengine in which you want to save Telegraf data, and TDengine server/cluster host, username, and password are the actual TDengine values:
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -54,20 +61,19 @@ IT operations monitoring data is usually time-sensitive, for example
```
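The diff view elides the middle of this block. For reference, a complete output section might look like the sketch below (all bracketed values are placeholders; `data_format = "influx"` makes Telegraf emit InfluxDB line protocol, which taosAdapter parses schemalessly):
```
[[outputs.http]]
  url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
  method = "POST"
  timeout = "5s"
  username = "<TDengine's username>"
  password = "<TDengine's password>"
  data_format = "influx"
  influx_max_line_bytes = 250
```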
Then restart telegraf:
```bash
sudo systemctl start telegraf
```
### Import a dashboard
Use a web browser to visit IP:3000 and log in to Grafana; the initial username/password is admin/admin.
Click the gear icon on the left and select Plugins; you should find the TDengine data source plugin icon.
Click the plus icon on the left and select Import. Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` and import it. You should then see a dashboard like this:
![IT-DevOps-Solutions-telegraf-dashboard.png](../../images/IT-DevOps-Solutions-telegraf-dashboard.png)
## Summary
The above demonstrates how to rapidly build a complete IT operations visualization system. Thanks to the schemaless protocol parsing added in TDengine 2.3.0.0 and TDengine's strong ecosystem integration capability, users can build an efficient, easy-to-use IT operations system in just a few minutes. For TDengine's powerful data write/query performance and other rich features, see the official documentation and production case studies.
# Rapidly build an IT DevOps monitoring system with TDengine + collectd/StatsD + Grafana
## Background
TDengine is a big data platform designed and optimized by TAOS Data for IoT, Connected Vehicles, Industrial IoT, and IT operations. Since it was open-sourced in July 2019, it has won over a large number of time-series data developers with its innovative data modeling design, fast installation, easy-to-use programming interface, and powerful data write and query performance.
IT operations monitoring data is usually time-sensitive, for example:
- System resource metrics: CPU, memory, IO, bandwidth, etc.
- Software system metrics: liveness, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other business-related metrics.
......@@ -14,21 +16,27 @@ IT operations monitoring data is usually time-sensitive, for example
![IT-DevOps-Solutions-Collectd-StatsD.png](../../images/IT-DevOps-Solutions-Collectd-StatsD.png)
## Installation steps
To install collectd, StatsD, Grafana, and TDengine, please refer to the corresponding official documentation.
### Install collectd
Please refer to the [official documentation](https://collectd.org/documentation.shtml).
### Install StatsD
Please refer to the [official documentation](https://github.com/statsd/statsd).
### Install Grafana
Please refer to the [official documentation](https://grafana.com/grafana/download).
### Install TDengine
Download and install TDengine-server 2.3.0.0 or a later version from the TAOS Data official [download](http://taosdata.com/cn/all-downloads/) page.
## Set up the data chain
### Copy the TDengine plugin into the Grafana plugin directory
```bash
......@@ -40,7 +48,9 @@ IT operations monitoring data is usually time-sensitive, for example
```
### Configure collectd
Add the following to /etc/collectd/collectd.conf, where host and port are the actual values configured for TDengine and taosAdapter:
```
LoadPlugin network
<Plugin network>
......@@ -51,7 +61,9 @@ sudo systemctl start collectd
```
### Configure StatsD
Add the following to config.js and then start StatsD, where host and port are the actual values configured for TDengine and taosAdapter:
```
add "./backends/repeater" to the backends section
add { host:'<TDengine server/cluster host>', port: <port for StatsD>} to the repeater section
......@@ -74,6 +86,7 @@ add to the repeater section: { host:'<TDengine server/cluster host>', port: <port for S…
![IT-DevOps-Solutions-statsd-dashboard.png](../../images/IT-DevOps-Solutions-statsd-dashboard.png)
## Summary
As an emerging time-series big data platform, TDengine has strong advantages in performance, reliability, and ease of management and maintenance. Thanks to the schemaless protocol parsing added in TDengine 2.3.0.0 and TDengine's strong ecosystem integration capability, users can build an efficient, easy-to-use IT operations system, or adapt an existing system, in just a few minutes.
For TDengine's powerful data write/query performance and other rich features, see the official documentation and production success stories.
# Rapidly build an IT DevOps system with TDengine + Telegraf + Grafana
## Background
TDengine is an open-source big data platform designed and optimized for Internet of Things (IoT), Connected Vehicles, and Industrial IoT. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and costs of development and operations.
There are a lot of time-series data in the IT DevOps scenario, for example:
- System resource metrics: CPU, memory, IO and network status, etc.
- Software system metrics: service status, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other metrics related to the specific business.
......@@ -13,23 +15,26 @@ Here we introduce a way to build an IT DevOps system with TDengine, Telegraf, an
![IT-DevOps-Solutions-Telegraf.png](../../images/IT-DevOps-Solutions-Telegraf.png)
## Installation steps
### Install Telegraf, Grafana, and TDengine
Please refer to each component's official document for Telegraf, Grafana, and TDengine installation.
### Telegraf
Please refer to the [official document](https://portal.influxdata.com/downloads/).
### Grafana
Please refer to the [official document](https://grafana.com/grafana/download).
### TDengine
Please download TDengine 2.3.0.0 or the above version from TAOS Data's [official website](http://taosdata.com/en/all-downloads/).
## Set up the data chain
### Download the TDengine plugin into the Grafana plugin directory
```bash
......@@ -40,8 +45,10 @@ Please download TDengine 2.3.0.0 or the above version from TAOS Data's [official
5. sudo systemctl restart grafana-server.service
```
### Modify /etc/telegraf/telegraf.conf
Add the following lines to /etc/telegraf/telegraf.conf. Fill in the name of the database in which you want to save Telegraf's data in TDengine, and specify the correct hostname of the TDengine server/cluster, username, and password:
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -54,22 +61,21 @@ Please add few lines in /etc/telegraf/telegraf.conf as below. Please fill databa
```
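Optionally, you can smoke-test the write path by hand before starting Telegraf. The sketch below posts one record in InfluxDB line protocol to the same taosAdapter endpoint Telegraf will use (bracketed values are placeholders; passing the credentials as `u`/`p` query parameters follows the InfluxDB v1 API that taosAdapter implements):
```bash
curl -i -X POST \
  "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>&u=<username>&p=<password>" \
  --data-binary "smoke_test,host=demo value=1"
```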
Then restart telegraf:
```bash
sudo systemctl start telegraf
```
### Import dashboard
Use your web browser to access IP:3000 to log in to the Grafana management interface. The default username and password are admin/admin.
Click the gear icon in the left bar and select 'Plugins'. You should find the icon of the TDengine data source plugin.
Click the plus icon in the left bar and select 'Import'. You can download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` and then import it into Grafana. After that, you should see an interface like:
![IT-DevOps-Solutions-telegraf-dashboard.png](../../images/IT-DevOps-Solutions-telegraf-dashboard.png)
## Summary
We demonstrated how to build a full-featured IT DevOps system with TDengine, Telegraf, and Grafana. TDengine has supported schemaless protocol data insertion since 2.3.0.0. Based on TDengine's powerful ecosystem integration capability, users can build a highly efficient and easy-to-maintain IT DevOps system in a few minutes. Please find more detailed documentation about TDengine's high-performance data insertion/query functions and more use cases on TAOS Data's official website.
# Rapidly build an IT DevOps system with TDengine + collectd/StatsD + Grafana
## Background
TDengine is an open-source big data platform designed and optimized for Internet of Things (IoT), Connected Vehicles, and Industrial IoT. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and costs of development and operations.
There are a lot of time-series data in the IT DevOps scenario, for example:
- System resource metrics: CPU, memory, IO and network status, etc.
- Software system metrics: service status, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other metrics related to the specific business.
......@@ -14,21 +16,27 @@ Here we introduce a way to build an IT DevOps system with TDengine, collectd/sta
![IT-DevOps-Solutions-Collectd-StatsD.png](../../images/IT-DevOps-Solutions-Collectd-StatsD.png)
## Installation steps
Please refer to each component's official document for collectd, StatsD, Grafana, and TDengine installation.
### collectd
Please refer to the [official document](https://collectd.org/documentation.shtml).
### StatsD
Please refer to the [official document](https://github.com/statsd/statsd).
### Grafana
Please refer to the [official document](https://grafana.com/grafana/download).
### TDengine
Please download TDengine 2.3.0.0 or the above version from TAOS Data's [official website](http://taosdata.com/cn/all-downloads/).
## Set up the data chain
### Download the TDengine plugin into the Grafana plugin directory
```bash
......@@ -40,7 +48,9 @@ Please download TDengine 2.3.0.0 or the above version from TAOS Data's [official
```
### Configure collectd
Add the following lines to /etc/collectd/collectd.conf, specifying the correct hostname and port number:
```
LoadPlugin network
<Plugin network>
......@@ -51,11 +61,11 @@ sudo systemctl start collectd
```
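The diff view elides the middle of that stanza. A complete version might look like the sketch below (host and port are placeholders for the taosAdapter endpoint that receives collectd data), after which collectd is started with `sudo systemctl start collectd` as shown above:
```
LoadPlugin network
<Plugin network>
  Server "<TDengine cluster/server host>" "<port for collectd>"
</Plugin>
```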
### Configure StatsD
Add a few lines to the config.js file and then restart StatsD, using the correct hostname and port number of TDengine and taosAdapter (a combined sketch follows the list below):
- fill backends section with "./backends/repeater"
- fill repeater section with { host:'<TDengine server/cluster host>', port: <port for StatsD>}
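Putting those two edits together, the relevant part of config.js might look like this sketch (host and port are placeholders; the rest of the file is unchanged):
```
{
  backends: ["./backends/repeater"],
  repeater: [{ host: '<TDengine server/cluster host>', port: <port for StatsD> }]
}
```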
### Import dashboard
......@@ -65,7 +75,7 @@ Click the gear icon from the left bar to select 'Plugins'. You could find the ic
#### Import collectd dashboard
Please download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`.
Click the 'plus' icon from the left bar to select 'Import'. Then you should see the interface like:
......@@ -73,7 +83,7 @@ Click the 'plus' icon from the left bar to select 'Import'. Then you should see
#### Import StatsD dashboard
Please download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`.
Click the 'plus' icon from the left bar to select 'Import'. Then you should see the interface like:
......
# Best practices for migrating from OpenTSDB to TDengine
As a distributed, scalable, HBase-based time-series database, OpenTSDB was introduced early and has been widely used for operations monitoring by DevOps teams thanks to its first-mover advantage. However, in recent years, with the rapid development of new technologies such as cloud computing, microservices, and containerization, enterprise-level services have become increasingly diverse, architectures increasingly complex, and application deployment environments increasingly varied, putting ever more pressure on system and operations monitoring. Against this backdrop, OpenTSDB as the monitoring backend for DevOps is increasingly plagued by performance issues and slow feature upgrades, along with the resulting rise in application deployment costs and decline in operational efficiency, problems that grow more serious as systems scale up.
In this context, to meet the fast-growing IoT big data market and its technical demands, TAOS Data independently developed the innovative big data processing product TDengine after absorbing the strengths of many traditional relational databases, NoSQL databases, stream computing engines, and message queues. TDengine has unique advantages in time-series big data processing and can effectively solve the problems currently encountered by OpenTSDB.
Compared with OpenTSDB, TDengine has the following distinctive features:
......@@ -69,15 +69,13 @@ First copy the entire dist directory under the grafanaplugin directory to Grafan
sudo cp -r . /var/lib/grafana/plugins/tdengine
sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
echo -e "[plugins]\nallow_loading_unsigned_plugins = taosdata-tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
# start grafana service
sudo service grafana-server restart
# or with systemd
sudo systemctl start grafana-server
```
In addition, TDengine provides two default Dashboard templates for users to quickly view the information saved to the TDengine repository. You can simply import the templates from the Grafana directory into Grafana to activate their use.
![](../../images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg)
......@@ -133,8 +131,6 @@ Now let's assume a DevOps scenario where we use collectd to collect base metrics
| 2 | swap | value | double | host | swap_type | swap_type_instance | source | |
| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
TDengine requires stored data to have a schema, i.e., you must create a supertable and specify its schema before writing data. There are two ways to create the data schema: 1) take full advantage of TDengine's native support for writing OpenTSDB data by calling the API provided by TDengine to write the data (in text line or JSON format) into the supertable, with the single-value models created automatically. This approach requires no major adjustments to the data-writing application and no conversion of the written data format.
At the C level, TDengine provides taos_insert_lines to write data in OpenTSDB format directly (in version 2.3.x this function corresponds to taos_schemaless_insert). For a reference example, see the sample code schemaless.c in the installation package directory.
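A minimal sketch of calling the 2.3.x variant of this API is shown below (this is not the shipped schemaless.c; the connection parameters and the sample telnet-format line are placeholders):
```c
#include <stdio.h>
#include <taos.h>

int main() {
  // Connect to TDengine; host, user, password, and database are placeholders.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) {
    printf("failed to connect to TDengine\n");
    return 1;
  }
  // One record in OpenTSDB telnet format: metric timestamp value tagk=tagv ...
  char *lines[] = {
      "memory 1632979445 3.0656 host=vm130 memory_type=buffered source=collectd"};
  // For the telnet protocol, the timestamp precision is inferred from the data,
  // so the precision argument is left unconfigured.
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_TELNET_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
  if (taos_errno(res) != 0) {
    printf("schemaless insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```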
......@@ -153,8 +149,6 @@ create stable swap(ts timestamp, val double) tags(host binary(12), swap_type bin
create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
```
For sub-tables use dynamic table creation as shown below:
```sql
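-- The concrete statement is elided by the diff view; an illustrative
-- auto-create write against the disk supertable above (table name and
-- tag values are hypothetical) would be:
INSERT INTO disk_vm130_sda_disk_ops_collectd
  USING disk TAGS ('vm130', 'sda', 'disk_ops', 'read', 'collectd')
  VALUES (1632979445000, 3.0656);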
......@@ -167,8 +161,6 @@ Eventually about 340 sub-tables and 3 super-tables will be created in the system
If you want to take advantage of TDengine's multi-value modeling capability, the different collected quantities must have the same collection frequency and must reach the **data writing side simultaneously via a message queue**, so that multiple metrics can be written at once with a single SQL statement. The name of the metric is used as the name of the supertable, creating a multi-column model for data that is collected at the same frequency and arrives together. The names of the sub-tables follow a fixed naming rule. Since each metric above contains only one measurement value, it cannot be transformed into a multi-value model.
## Data triage and application adaptation
Data is subscribed from the message queue and an adapted writer is started to write the data.
......@@ -203,9 +195,9 @@ The manual migration of data requires attention to two issues.
(2) Under the full-load operation of the system, if there are enough remaining computing and IO resources, a multi-threaded import mechanism can be established to maximize the efficiency of data migration. Considering the huge load on the CPU brought by data parsing, the maximum number of parallel tasks needs to be controlled to avoid the overall system overload triggered by importing historical data.
Due to the ease of operation of TDengine itself, there is no need to perform index maintenance, data format change processing, etc. throughout the process, and the whole process only needs to be executed sequentially.
Once the historical data is fully imported into TDengine, the two systems run simultaneously, after which query requests can be switched to TDengine, achieving a seamless application switch-over.
## Appendix 1: Correspondence table of OpenTSDB query functions
......@@ -220,11 +212,13 @@ SELECT avg(val) FROM (SELECT first(val) FROM super_table WHERE ts >= startTime a
Notes:
1. The value within the Interval needs to be the same as the interval value of the outer query.
2. As OpenTSDB uses linear interpolation for values, use fill(linear) to declare the interpolation type in the interpolation clause. The functions below with the same interpolation requirement are handled the same way.
3. The 20s parameter in Interval means that the inner query generates results in a 20-second window. In a real query, it needs to be adjusted to the time interval between different records, so that the interpolation results are equivalent to the original data.
Due to OpenTSDB's special interpolation strategy and mechanism, interpolation before computation in Aggregate queries means the computation result cannot be identical to TDengine's. In the case of downsampling, however, TDengine and OpenTSDB can obtain the same result, since OpenTSDB uses a completely different interpolation strategy for aggregated and downsampled queries.
**Count**
......@@ -234,8 +228,6 @@ Example.
select count(*) from super_table_name;
**Dev**
Equivalent function: stddev
......@@ -244,8 +236,6 @@ Example.
Select stddev(val) from table_name
**Estimated percentiles**
Equivalent function: apercentile
......@@ -256,9 +246,7 @@ Select apercentile(col1, 50, “t-digest”) from table_name
Remark:
1. The t-digest algorithm is used by default in OpenTSDB during approximate query processing, so to get the same calculation result you need to specify the algorithm used in the apercentile function. TDengine supports two different approximation algorithms, declared by "default" and "t-digest" respectively.
**First**
......@@ -268,8 +256,6 @@ Example.
Select first(col1) from table_name
**Last**
Equivalent function: last
......@@ -278,8 +264,6 @@ Example.
Select last(col1) from table_name
**Max**
Equivalent function: max
......@@ -290,8 +274,6 @@ Select max(value) from (select first(val) value from table_name interval(10s) fi
Note: The Max function requires interpolation, for the reasons given above.
**Min**
Equivalent function: min
......@@ -300,8 +282,6 @@ Example.
Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
**MinMax**
Equivalent function: max
......@@ -310,8 +290,6 @@ Select max(val) from table_name
Note: This function does not require interpolation, so it can be calculated directly.
**MimMin**
Equivalent function: min
......@@ -319,16 +297,12 @@ Equivalent function: min
Select min(val) from table_name
Note: This function does not require interpolation, so it can be calculated directly.
**Percentile**
Equivalent function: percentile
Note:
**Sum**
......@@ -338,8 +312,6 @@ Select max(value) from (select first(val) value from table_name interval(10s) fi
Note: This function does not require interpolation, so it can be calculated directly.
**Zimsum**
Equivalent function: sum
......@@ -348,12 +320,9 @@ Select sum(val) from table_name
Note: This function does not require interpolation, so it can be calculated directly.
OpenTSDB query JSON example:
```json
//OpenTSDB query JSON
query = {
"start":1510560000,
"end": 1515000009,
......@@ -362,15 +331,13 @@ query = {
"metric":"cpu.usage_user",
}]
}
// Equivalent SQL:
SELECT count(*)
FROM `cpu.usage_user`
WHERE ts>=1510560000 AND ts<=1515000009
```
## Appendix 2: Resource Estimation Methodology
### Data generation environment
......@@ -419,11 +386,11 @@ Be careful not to start the taosd service immediately after the installation is
To ensure that the system can obtain the necessary information to run properly, please set the following key parameters correctly on the server side:
FQDN, firstEp, secondEP, dataDir, logDir, tmpDir, serverPort. For the specific meaning of each parameter and the requirements for setting them, see the documentation [TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster).
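For illustration, the server-side section of /etc/taos/taos.cfg might look like the sketch below (all hostnames and paths are placeholders for your own environment):
```
# /etc/taos/taos.cfg -- illustrative values only
fqdn        h1.taosdata.com
firstEp     h1.taosdata.com:6030
secondEp    h2.taosdata.com:6030
serverPort  6030
dataDir     /var/lib/taos
logDir      /var/log/taos
```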
Follow the same steps to set the parameters on the node that needs to run and start the taosd service, then add the Dnode to the cluster.
Finally, start taos and execute the command show dnodes; if you can see all the nodes that have joined the cluster, the cluster has been built successfully. For the specific procedure and notes, please refer to the document [TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster).
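In the taos shell, adding a node and verifying the cluster might look like this sketch (the endpoint is a placeholder for your second node):
```sql
CREATE DNODE "h2.taosdata.com:6030";  -- register the new dnode with the cluster
SHOW DNODES;                          -- all joined nodes should be listed as ready
```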
## Appendix 4: Super table names
......