Unverified commit c0529691 authored by sangshuduo, committed by GitHub

docs: refine reference docs (#12582)

* docs: reference doc in English

[TD-15603]

* docs: reference docs in English

[TD-15603]

* docs: refine reference English docs

[TD-15603]

* docs: refine reference docs
Parent d0bd15dc
@@ -8,7 +8,7 @@ import Prometheus from "./_prometheus.mdx"
import CollectD from "./_collectd.mdx"
import StatsD from "./_statsd.mdx"
import Icinga2 from "./_icinga2.mdx"
import Tcollector from "./_tcollector.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter is a companion tool for TDengine that serves as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agents such as Telegraf, StatsD, and collectd. It also provides InfluxDB/OpenTSDB-compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
@@ -225,7 +225,7 @@ AllowWebSockets
### TCollector
<Tcollector />
<TCollector />
### node_exporter
@@ -3,7 +3,7 @@ sidebar_label: TCollector
title: Writing with TCollector
---
import Tcollector from "../14-reference/_tcollector.mdx"
import TCollector from "../14-reference/_tcollector.mdx"
TCollector is part of openTSDB. It is used to collect client logs and send them to the database.
@@ -17,7 +17,7 @@ TCollector is part of openTSDB. It is used to collect client logs and send them to
- TCollector has been installed. To install TCollector, please refer to the [official documentation](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector)
## Configuration Steps
<Tcollector />
<TCollector />
## Verification Method
@@ -35,7 +35,7 @@ MQTT is a popular IoT data transfer protocol, and [EMQX](https://github.com/emqx/em
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
Note: The table schema follows the example in the blog post [Data Transfer, Storage and Visualization: Building an MQTT IoT Data Visualization Platform with EMQ X + TDengine](https://www.taosdata.com/blog/2020/08/04/1722.html). All subsequent operations use the scenario in this blog post as an example; please adjust them according to your actual application scenario.
Note: The table schema follows the example in the blog post [Data Transfer, Storage and Visualization: Building an MQTT IoT Data Visualization Platform with EMQX + TDengine](https://www.taosdata.com/blog/2020/08/04/1722.html). All subsequent operations use the scenario in this blog post as an example; please adjust them according to your actual application scenario.
## Configure EMQX Rules
@@ -188,5 +188,5 @@ node mock.js
![img](./emqx/check-result-in-taos.png)
For detailed usage of TDengine, please refer to the [TDengine official documentation](https://docs.taosdata.com/)
For detailed usage of EMQX, please refer to the [EMQ official documentation](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
For detailed usage of EMQX, please refer to the [EMQX official documentation](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
@@ -12,14 +12,14 @@ This section introduces the major features, competitive advantages, suited scena
The major features are listed below:
1. Besides [using SQL to insert](/develop/insert-data/sql-writing), it supports [schemaless writing](/reference/schemaless/), and supports [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) and other protocols.
2. Support seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [Tcollector](/third-party/tcollector), [EMQ](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a single line of code, those agents can write data points into TDengine just by configuration.
2. Support seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a single line of code, those agents can write data points into TDengine just by configuration.
3. Support [all kinds of queries](/query-data), including aggregation, nested query, downsampling, interpolation, etc.
4. Support [user-defined functions](/develop/udf).
5. Support [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support [continuous query](/develop/continuous-query).
7. Support [data subscription](/develop/subscribe), and the filter condition can be specified.
8. Support [cluster](/cluster/), so it can gain more processing power by adding more nodes. High availability is supported by replication.
9. Provide interactive [command line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
9. Provide interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provide many ways to [import](/operation/import) and [export](/operation/export) data.
11. Provide [monitoring](/operation/monitor) on TDengine running instances.
12. Provide [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
@@ -58,7 +58,7 @@ In the time-series data processing platform, TDengine stands in a role like this
<center>Figure 1. TDengine Technical Ecosystem</center>
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command line interface and a web interface for management and maintenance.
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Suited Scenarios for TDengine
@@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems; Windows, macOS and other systems will be supported in the future. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in HTTP service of taosd.
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems; Windows, macOS and other systems will be supported in the future. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
@@ -71,7 +71,7 @@ Check if taosd is running:
systemctl status taosd
```
If everything is fine, you can run the TDengine command line interface `taos` to access TDengine and play around with it.
If everything is fine, you can run the TDengine command-line interface `taos` to access TDengine and play around with it.
:::info
@@ -132,7 +132,7 @@ This command will create a super table "meters" under database "test". Unde "met
This command will insert 100 million rows into the database quickly. Depending on the hardware configuration, it takes only a dozen seconds on a regular PC server.
taosBenchmark provides command line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check the [reference for taosBenchmark](/reference/taosbenchmark)
taosBenchmark provides command-line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check the [reference for taosBenchmark](/reference/taosbenchmark)
## Experience query speed
@@ -52,7 +52,7 @@ If choosing to use native connection and the application is not on the same host
### Verify
After the above installation and configuration are done, and you have made sure that the TDengine service is started and in service, the TDengine command line interface `taos` can be launched to access TDengine.
After the above installation and configuration are done, and you have made sure that the TDengine service is started and in service, the TDengine command-line interface `taos` can be launched to access TDengine.
<Tabs defaultValue="linux" groupId="os">
<TabItem value="linux" label="Linux">
@@ -2,9 +2,9 @@
title: REST API
---
To support the development of various types of platforms, TDengine provides an API that conforms to the REST design standard, namely the REST API. To minimize the learning cost, and unlike the design of other database REST APIs, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request, and requires only a URL.
To support the development of various types of platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, and unlike other database REST APIs, TDengine operates the database directly through the SQL command contained in the BODY of an HTTP POST request, and requires only a URL.
Note: One difference from the native connector is that the RESTful interface is stateless, so the `USE db_name` command has no effect. All references to table names and supertable names need to specify the database name prefix. (Since version 2.2.0.0, specifying db_name in the RESTful URL is supported. If the database name prefix is not specified in the SQL statement, the db_name specified in the URL will be used. Since version 2.4.0.0, RESTful is provided by taosAdapter by default, and it requires that db_name be specified in the URL.)
Note: One difference from the native connector is that the REST interface is stateless, so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name prefix. (Since version 2.2.0.0, specifying db_name in the RESTful URL is supported. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, the REST service is provided by taosAdapter by default, and it requires that the `db_name` be specified in the URL.)
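The stateless model described above can be seen by assembling a REST call by hand: the SQL statement is the POST body, authentication is HTTP Basic, and the database can be appended to the URL. The following is a minimal Python sketch assuming the documented defaults (port 6041, user `root`, password `taosdata`); the `rest_request_parts` helper is hypothetical, not part of any TDengine client library.

```python
import base64

def rest_request_parts(sql, host="localhost", port=6041,
                       user="root", password="taosdata", db=None):
    """Build the pieces of a TDengine REST call: URL, headers, and body.

    The SQL statement itself is the POST body; db (if given) is appended
    to the URL so unprefixed table names resolve against that database.
    """
    url = f"http://{host}:{port}/rest/sql" + (f"/{db}" if db else "")
    # HTTP Basic authentication: base64 of "user:password"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}"}
    return url, headers, sql

url, headers, body = rest_request_parts("select server_version()", db="test")
print(url)      # http://localhost:6041/rest/sql/test
print(headers)  # the Basic auth header built from user:password
```

The three parts can then be sent with any HTTP client, e.g. `curl -H "$auth" -d "$sql" "$url"` or Python's `urllib`.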
## Installation
@@ -303,5 +303,4 @@ Only some configuration parameters related to the RESTful interface are listed b
:::note
If you are using the REST API provided by taosd, you should write the above configuration in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, you need to refer to taosAdapter [corresponding configuration method](/reference/taosadapter/).
:::
@@ -11,7 +11,7 @@ import PkgList from "/components/PkgList";
The default installation path is C:\TDengine, including the following files (directories).
- _taos.exe_ : TDengine CLI command line program
- _taos.exe_ : TDengine CLI command-line program
- _cfg_ : configuration file directory
- _driver_: client driver dynamic link library
- _examples_: sample programs bash/C/C#/go/JDBC/Python/Node.js
@@ -621,9 +621,9 @@ public void setString(int columnIndex, ArrayList<String> list, int size) throws
public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```
### Writing without mode
### Schemaless Writing
Starting with version 2.2.0.0, TDengine has added the ability to write without a schema. It is compatible with InfluxDB's line protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
Starting with version 2.2.0.0, TDengine has added support for schemaless writing. It is compatible with InfluxDB's line protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
**Note**.
@@ -76,7 +76,7 @@ Manually install the following tools.
- Install Visual Studio related tools: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
- Install [Python](https://www.python.org/downloads/) 2.7 (`v3.x.x` is not supported) and execute `npm config set python python2.7`.
- Go to the `cmd` command line interface, `npm config set msvs_version 2017`
- Go to the `cmd` command-line interface, `npm config set msvs_version 2017`
Refer to [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules).
@@ -112,7 +112,7 @@ Validation method.
- Create a new installation verification directory, e.g. `~/tdengine-test`, and download the [nodejsChecker.js source code](https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js) from GitHub to your local machine.
- Execute the following command from the command line.
- Execute the following command from the command-line.
```bash
npm init -y
@@ -120,7 +120,7 @@ npm install td2.0-connector
node nodejsChecker.js host=localhost
```
- After executing the above steps, the command line will output the result of nodejsChecker.js connecting to the TDengine instance and performing a simple insert and query.
- After executing the above steps, the command-line will output the result of nodejsChecker.js connecting to the TDengine instance and performing a simple insert and query.
## Establishing a connection
@@ -307,7 +307,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
| ------------------------------------------------------------------------------------------------------------- | ----------------------- |
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at a time |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB row protocol write |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
| [subscribe-async.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-async.py) | Asynchronous subscription |
| [subscribe-sync.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-sync.py) | Synchronous subscription |
@@ -275,7 +275,7 @@ Note that Rust asynchronous functions and an asynchronous runtime are required.
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
- `.use_database(database: &str)`: Executes the `USE` statement.
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Row Protocol Interface](#Row Protocol Interface). Please refer to the specific API descriptions for usage.
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Line Protocol Interface](#Line Protocol Interface). Please refer to the specific API descriptions for usage.
### Parameter Binding Interface
@@ -325,7 +325,7 @@ stmt.execute()? ;
//stmt.execute()? ;
```
### Row protocol interface
### Line protocol interface
The line protocol interface supports multiple modes and different precisions, and requires importing constants from the schemaless module for configuration.
@@ -334,7 +334,7 @@ use libtaos::*;
use libtaos::schemaless::*;
```
- InfluxDB row protocol
- InfluxDB line protocol
```rust
let lines = [
@@ -8,7 +8,7 @@ import Prometheus from "./_prometheus.mdx"
import CollectD from "./_collectd.mdx"
import StatsD from "./_statsd.mdx"
import Icinga2 from "./_icinga2.mdx"
import Tcollector from "./_tcollector.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface that allows InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
@@ -30,26 +30,26 @@ taosAdapter provides the following features.
### Install taosAdapter
taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and above) from the [Taos Data official website](https://taosdata.com/cn/all-downloads/). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD-CN.md) documentation.
taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later versions) from the [TAOSData official website](https://taosdata.com/en/all-downloads/). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
### start/stop taosAdapter
On Linux systems, the taosAdapter service is managed by systemd by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
### Remove taosAdapter
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
Use the `rmtaos` command if you installed from the tar.gz package, or a package management command like rpm or apt if you installed from a package, to remove the TDengine server software, including taosAdapter.
### Upgrade taosAdapter
taosAdapter and TDengine server need to use the same version. Please upgrade the taosAdapter by upgrading the TDengine server.
You need to upgrade the taosAdapter deployed separately from taosd by upgrading the TDengine server of the deployed server.
You need to upgrade the taosAdapter deployed separately from TDengine server by upgrading the TDengine server on the deployed server.
## taosAdapter parameter list
taosAdapter supports configuration via command-line arguments, environment variables and configuration files. The default configuration file is /etc/taos/taosadapter.toml.
taosAdapter is configurable via command-line arguments, environment variables and configuration files. The default configuration file is /etc/taos/taosadapter.toml on Linux.
Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command line usage is `arg=val`, e.g., `taosadapter -p=30000 --debug=true`. The detailed list is as follows:
Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command-line usage is `arg=val`, e.g., `taosadapter -p=30000 --debug=true`. The detailed list is as follows:
```shell
Usage of taosAdapter:
@@ -156,7 +156,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl
- Compatible with RESTful interfaces
[https://www.taosdata.com/cn/documentation/connector#restful](https://www.taosdata.com/cn/documentation/connector#restful)
- Compatible with InfluxDB v1 write interface
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/ api/influxdb-1x/write/)
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writes
- <http://opentsdb.net/docs/build/html/api_http/put.html>
- <http://opentsdb.net/docs/build/html/api_telnet/put.html>
@@ -165,19 +165,19 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl
- Seamless connection with StatsD
StatsD is a simple yet powerful daemon for aggregating statistical information. Please visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- Seamless connection with icinga2
icinga2 is a software that collects inspection result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14- features/#opentsdb-writer) for more information.
- Seamless connection to tcollector
icinga2 is a software that collects inspection result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- Seamless connection to TCollector
TCollector is a client process that collects data from a local collector and pushes the data to OpenTSDB. Please visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- Seamless connection to node_exporter
node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Support for Prometheus remote_read and remote_write
remote_read and remote_write are clustering solutions for Prometheus data read and write separation. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote- read-meets-streaming/#remote-apis) for more information.
remote_read and remote_write are interfaces for Prometheus data read and write from/to other data storage solution. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
## Interfaces
### TDengine RESTful interface
You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>`. See the [official documentation](/reference/connector#restful) for details. The following EndPoint is supported.
You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the REST interface address `http://<fqdn>:6041/<APIEndPoint>`. See the [official documentation](/reference/connector#restful) for details. The following EndPoint is supported.
```text
/rest/sql
@@ -200,7 +200,7 @@ Support InfluxDB query parameters as follows.
- `u` TDengine user name
- `p` TDengine password
Note: InfluxDB token verification is not supported at present. Only Basic verification and query parameter validation are supported.
Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
### OpenTSDB
@@ -225,17 +225,17 @@ You can use any client that supports the http protocol to access the Restful inte
### TCollector
<Tcollector />
<TCollector />
### node_exporter
Exporter of hardware and OS metrics exposed by the \*NIX kernel used by Prometheus
node_exporter is an exporter of hardware and OS metrics exposed by the \*NIX kernel used by Prometheus
- Enable the taosAdapter configuration node_exporter.enable
- Enable the taosAdapter configuration `node_exporter.enable`
- Set the configuration of the node_exporter
- Restart taosAdapter
### prometheus
### Prometheus
<Prometheus />
@@ -246,16 +246,16 @@ taosAdapter will monitor its memory usage during operation and adjust it with tw
- pauseQueryMemoryThreshold
- pauseAllMemoryThreshold
Stops processing query requests when the pauseQueryMemoryThreshold threshold is exceeded.
Stops processing query requests when the `pauseQueryMemoryThreshold` threshold is exceeded.
http response content.
HTTP response content.
- code 503
- body "query memory exceeds threshold"
Stops processing all write and query requests when the pauseAllMemoryThreshold threshold is exceeded.
Stops processing all write and query requests when the `pauseAllMemoryThreshold` threshold is exceeded.
http response: code 503
HTTP response: code 503
- code 503
- body "memory exceeds threshold"
@@ -266,24 +266,24 @@ Status check interface `http://<fqdn>:6041/-/ping`
- Returns `code 200` when normal
- Without parameters: returns `code 503` if memory exceeds pauseAllMemoryThreshold
- Request parameter `action=query` returns `code 503` if memory exceeds pauseQueryMemoryThreshold or pauseAllMemoryThreshold
- Request parameter `action=query` returns `code 503` if memory exceeds `pauseQueryMemoryThreshold` or `pauseAllMemoryThreshold`
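The rules above amount to a simple decision function on the current memory percentage. The following Python sketch only illustrates that logic; the function is hypothetical, not taosAdapter code, and the defaults mirror the documented threshold defaults (70 and 80).

```python
def ping_status(mem_percent, action=None,
                pause_query_threshold=70, pause_all_threshold=80):
    """Mirror the /-/ping decision rule: 503 when the relevant
    memory threshold is exceeded, 200 otherwise."""
    if mem_percent > pause_all_threshold:
        return 503  # all writes and queries are paused
    if action == "query" and mem_percent > pause_query_threshold:
        return 503  # queries are paused, writes are still accepted
    return 200
```

For example, at 75% memory usage a plain ping would report 200 while `action=query` would report 503, which is exactly the asymmetry a load balancer can use to drain only query traffic.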
Corresponding configuration parameter
```text
monitor.collectDuration monitoring interval environment variable "TAOS_MONITOR_COLLECT_DURATION" (default value 3s)
monitor.incgroup whether to run in cgroup (set to true for running in container) environment variable "TAOS_MONITOR_INCGROUP"
monitor.pauseAllMemoryThreshold memory threshold for no more inserts and queries environment variable "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
monitor.pauseQueryMemoryThreshold memory threshold for no more queries Environment variable "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
monitor.collectDuration monitoring interval environment variable `TAOS_MONITOR_COLLECT_DURATION` (default value 3s)
monitor.incgroup whether to run in cgroup (set to true for running in container) environment variable `TAOS_MONITOR_INCGROUP`
monitor.pauseAllMemoryThreshold memory threshold for no more inserts and queries environment variable `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD` (default 80)
monitor.pauseQueryMemoryThreshold memory threshold for no more queries Environment variable `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD` (default 70)
```
You can adjust it according to the specific project application scenario and operation strategy, and it is recommended to use operation monitoring software for timely system memory status monitoring. The load balancer can also check the taosAdapter running status through this interface.
You can adjust it according to the specific application scenario and operation strategy, and it is recommended to use operation monitoring software to monitor system memory status timely. The load balancer can also check the taosAdapter running status through this interface.
## taosAdapter Monitoring Metrics
taosAdapter collects http-related metrics, CPU percentage, and memory percentage.
taosAdapter collects HTTP-related metrics, CPU percentage, and memory percentage.
### http interface
### HTTP interface
Provides an interface conforming to [OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md).
@@ -293,14 +293,14 @@ http://<fqdn>:6041/metrics
### Write to TDengine
taosAdapter supports writing http monitoring, CPU percentage, and memory percentage to TDengine.
taosAdapter supports writing the metrics of HTTP monitoring, CPU percentage, and memory percentage to TDengine.
The related configuration parameters are as follows.
| **Configuration items** | **Description** | **Default values** |
| ----------------------- | --------------------------------------------------------- | ---------- |
| monitor.collectDuration | CPU and memory collection interval | 3s |
| monitor.identity | Identifier of the current taosAdapter; 'hostname:port' is used if not set | |
| monitor.identity | Identifier of the current taosAdapter; `hostname:port` is used if not set | |
| monitor.incgroup | whether it is running in a cgroup (set to true for running in a container) | false |
| monitor.writeToTD | Whether to write to TDengine | true |
| monitor.user | TDengine connection username | root |
@@ -320,13 +320,13 @@ This parameter controls the number of results returned by the following interfac
## Troubleshooting
You can check the taosAdapter running status with the` systemctl status taosadapter` command.
You can check the taosAdapter running status with the `systemctl status taosadapter` command.
You can also adjust the level of the taosAdapter log output by setting the --logLevel parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values are: panic, fatal, error, warn, warning, info, debug and trace.
You can also adjust the level of the taosAdapter log output by setting the `--logLevel` parameter or the environment variable `TAOS_ADAPTER_LOG_LEVEL`. Valid values are: panic, fatal, error, warn, warning, info, debug and trace.
## How to migrate from older TDengine versions to taosAdapter
In TDengine server 2.2.x.x or earlier, the taosd process contains an embedded http service. As mentioned earlier, taosAdapter is a standalone piece of software managed using systemd and has its own process. And there are some configuration parameters and behaviors that are different between the two. See the following table.
In TDengine server 2.2.x.x or earlier, the TDengine server process (taosd) contains an embedded HTTP service. As mentioned earlier, taosAdapter is a standalone piece of software managed using `systemd` and has its own process. And there are some configuration parameters and behaviors that are different between the two. See the following table for details.
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
| ----- | ------------------- | ------------------------------------ | ----------- |
@@ -21,9 +21,9 @@ There are two ways to install taosBenchmark:
### Configuration and running methods
taosBenchmark supports two configuration methods: [command line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-arguments-in-detail). These two methods are mutually exclusive; when using a configuration file, only one command line parameter, `-f <json file>`, can be used to specify it. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration rather than the `-f` parameter. In addition, taosBenchmark offers a special way of running without any parameters.
taosBenchmark supports two configuration methods: [command-line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-arguments-in-detail). These two methods are mutually exclusive; when using a configuration file, only one command-line parameter, `-f <json file>`, can be used to specify it. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration rather than the `-f` parameter. In addition, taosBenchmark offers a special way of running without any parameters.
taosBenchmark supports complete performance testing of TDengine. taosBenchmark supports the TDengine functions in three categories: write, query, and subscribe. These three functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. It is important to note that the type of functionality to be tested is not configurable when using the command-line configuration method, which can only test writing performance. To test the query and subscription performance of the TDengine, you must use the configuration file method and specify the function type to test via the parameter `filetype` in the configuration file.
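For instance, a minimal query-test configuration file might look like the sketch below. All field names other than `filetype` are assumptions based on common taosBenchmark examples; consult the configuration file parameter reference below for the authoritative list.

```json
{
  "filetype": "query",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "databases": "test",
  "specified_table_query": {
    "query_times": 10,
    "threads": 1,
    "sqls": [
      { "sql": "select count(*) from test.meters", "result": "./query_res.txt" }
    ]
  }
}
```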
**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
......@@ -37,9 +37,9 @@ taosBenchmark
When run without parameters, taosBenchmark connects to the TDengine cluster specified in `/etc/taos` by default and creates a database named `test` in TDengine, a super table named `meters` under the test database, and 10,000 tables under the super table, with 10,000 records written to each table. Note that if a test database already exists, this command will delete it first and create a new test database.
### Run with command-line configuration parameters
The `-f <json file>` argument cannot be used when running taosBenchmark with command-line parameters to control its behavior. Users must specify all configuration parameters on the command line. The following is an example of testing taosBenchmark write performance using the command-line approach.
```bash
taosBenchmark -I stmt -n 200 -t 100
......@@ -51,7 +51,7 @@ The above command, `taosBenchmark` will create a database named `test`, create a
A sample configuration file is provided in the taosBenchmark installation package under `<install_directory>/examples/taosbenchmark-json`.
Use the following command line to run taosBenchmark and control its behavior via a configuration file.
```bash
taosBenchmark -f <json file>
......@@ -81,7 +81,7 @@ taosBenchmark -f <json file>
</details>
#### Subscription JSON configuration example
<details>
<summary>subscribe.json</summary>
......@@ -92,10 +92,10 @@ taosBenchmark -f <json file>
</details>
## Command-line arguments in detail
- **-f/--file <json file\>** :
  specify the configuration file to use. This file includes all parameters, and users should not combine this parameter with any other command-line parameter. There is no default value.
- **-c/--config-dir <dir\>** :
specify the directory where the TDengine cluster configuration file. the default path is `/etc/taos`.
......@@ -204,7 +204,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **-? /--help** :
Show help information and exit. Users should not use it with other parameters.
## Configuration file parameters in detail
### General configuration parameters
......
......@@ -30,7 +30,7 @@ There are two ways to install taosdump:
1. backing up all databases: specify the `-A` or `--all-databases` parameter.
2. backup multiple specified databases: use the `-D db1,db2,...` parameter;
3. back up some super tables or normal tables in the specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` arguments. Note that the first argument of this input sequence is the database name, and only one database is supported. The second and subsequent arguments are the names of super tables or normal tables in that database, separated by spaces.
4. back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine itself during operation, and taosdump does not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without escape characters ("loose" mode). This can reduce backup time and backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions apply, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escape characters.
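Put together, the five modes above can be sketched as the following invocations (the `-o` output directory and the database/table names are illustrative assumptions; check `taosdump --help` for your version's exact flags):

```
taosdump -o /data/backup -A                 # 1. all databases
taosdump -o /data/backup -D db1,db2         # 2. selected databases
taosdump -o /data/backup dbname stb1 tb1    # 3. selected tables in one database
taosdump -o /data/backup -A -a              # 4. also back up the system `log` database
taosdump -o /data/backup -n -L -D db1       # 5. loose mode, no escape characters
```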
:::tip
......@@ -48,9 +48,9 @@ taosdump internally uses TDengine stmt binding API for writing recovery data and
:::
## Detailed command-line parameter list
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
......
......@@ -3,7 +3,7 @@ title: Deploying TDengine with Docker
Description: "This chapter focuses on starting the TDengine service in a container and accessing it."
---
This chapter describes how to start the TDengine service in a container and access it. Users can control the behavior of the service in the container by using environment variables on the docker run command line or in the docker-compose file.
## Starting TDengine
......
......@@ -3,7 +3,7 @@ title: File directory structure
description: "TDengine installation directory description"
---
After TDengine is installed, the following directories or files will be created in the system by default.
| directory/file | description |
| ------------------------- | -------------------------------------------------------------------- |
......@@ -17,14 +17,14 @@ When TDengine is installed, the following directories or files will be created i
## Executable files
All executable files of TDengine are in the _/usr/local/taos/bin_ directory by default. These include:
- _taosd_: TDengine server-side executable files
- _taos_: TDengine CLI executable
- _taosdump_: data import and export tool
- _taosBenchmark_: TDengine testing tool
- _remove.sh_: script to uninstall TDengine. Execute it carefully; it is linked to the **rmtaos** command in the /usr/bin directory. It removes the TDengine installation directory `/usr/local/taos` but keeps `/etc/taos`, `/var/lib/taos`, and `/var/log/taos`
- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
- _tarbitrator_: provides arbitration for two-node cluster deployments
- _run_taosd_and_taosadapter.sh_: script to start both taosd and taosAdapter
- _TDinsight.sh_: script to download TDinsight and install it
......@@ -32,11 +32,9 @@ All executable files of TDengine are stored in the _/usr/local/taos/bin_ directo
- _taosd-dump-cfg.gdb_: gdb script to facilitate debugging of taosd.
:::note
taosdump after version 2.4.0.0 requires taosTools as a standalone installation. taosBenchmark is also included in taosTools in some versions.
:::
:::tip
You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
:::
......@@ -3,19 +3,17 @@ title: Schemaless Writing
description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as it is written to the interface."
---
In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to version upgrades of the application logic, or hardware adjustments of the devices themselves, the data collection items may change frequently. To facilitate data logging in such cases, TDengine, starting from version 2.2.0.0, provides a series of interfaces for the schemaless writing method, which eliminates the need to create super tables/sub-tables in advance and automatically creates the storage structure corresponding to the data as it is written to the interface. When necessary, schemaless writing will also automatically add the required columns to ensure that the data written by the user is stored correctly.
The super tables and their corresponding sub-tables created by schemaless writing are completely indistinguishable from those created directly via SQL. You can write data to them directly via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and lack readability.
## Schemaless Writing Line Protocol
TDengine's schemaless writing line protocol is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API which parsing standard to apply to the input content.
For the standard writing protocols of InfluxDB and OpenTSDB, please refer to the documentation of each protocol. The following describes TDengine's extended protocol, based first on InfluxDB's line protocol. The extensions allow users to control the (super table) schema with finer granularity.
With the following formatting conventions, schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
```json
measurement,tag_set field_set timestamp
......@@ -24,17 +22,17 @@ measurement,tag_set field_set timestamp
where:
- measurement will be used as the data table name. It will be separated from tag_set by a comma.
- tag_set will be used as tag data in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by a space.
- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by a space.
- The timestamp is the primary key timestamp corresponding to the data in this row.
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
In the schemaless writing data line protocol, each data item in the field_set needs to be described with its data type. Let's explain in detail:
- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\) in front. (All refer to ASCII characters.)
- Numeric types will be distinguished from data types by the suffix.
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
......@@ -46,10 +44,9 @@ In the schemaless write data row protocol, each data item in the field_set needs
| 5 | i32 | Int | 4 |
| 6 | i64 or i | Bigint | 8 |
- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
For example, the following data row writes to a super table named `st` with tag t1 as "3" (NCHAR), tag t2 as "4" (NCHAR), and tag t3 as "t3" (NCHAR); it writes column c1 as 3 (BIGINT), column c2 as false (BOOL), column c3 as "passit" (BINARY), column c4 as 4 (DOUBLE), and the primary key timestamp as 1626006833639000000.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
......@@ -57,9 +54,9 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
Note that if the wrong case is used when describing the data type suffix, or if the wrong data type is specified for the data, it may cause an error message and cause the data to fail to be written.
## Main processing logic for schemaless writing
Schemaless writing processes row data according to the following principles.
1. You can use the following rules to generate the sub-table name: first, combine the measurement name and the keys and values of the tags into the following string:
......@@ -70,8 +67,8 @@ Schema-free writes process row data according to the following principles.
   Note that tag_key1, tag_key2 are not in the original order the user entered the tags, but sorted in ascending string order of the tag names. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
   After sorting, the MD5 hash value "md5_val" of the string is calculated. The result is then combined with the fixed prefix to generate the table name "t_md5_val", where "t_" is a fixed prefix that every table generated by this mapping relationship has.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
3. If the sub-table obtained by parsing the line protocol does not exist, schemaless writing creates the sub-table according to the sub-table name determined in steps 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
......@@ -79,14 +76,13 @@ If the sub-table obtained by the parse row protocol does not exist, Schemaless c
8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
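The naming rule in step 1 can be sketched in a few lines of Python. This is an illustration of the described scheme, not TDengine's actual implementation; the exact separator and string encoding are assumptions.

```python
import hashlib

def subtable_name(measurement: str, tags: dict) -> str:
    # Tags are sorted by tag name in ascending string order, then joined
    # with the measurement into one string before hashing.
    parts = [measurement] + [f"{k}={v}" for k, v in sorted(tags.items())]
    md5_val = hashlib.md5(",".join(parts).encode("utf-8")).hexdigest()
    return "t_" + md5_val  # "t_" is the fixed prefix

name = subtable_name("st", {"t1": "3", "t2": "4", "t3": "t3"})
```

Because the tags are sorted before hashing, the same tag set entered in any order maps to the same sub-table name.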
:::tip
All processing logic of schemaless writing still follows TDengine's underlying restrictions on data structures; for example, the total length of each row of data cannot exceed 16 KB. See [TAOS SQL Boundary Limits](/taos-sql/limit) for the specific constraints in this area.
:::
## Time resolution recognition
Three specified modes are supported in the schemaless writing process, as follows:
| **Serial** | **Value** | **Description** |
| -------- | ------------------- | ------------------------------- |
......@@ -108,55 +104,54 @@ In the SML_LINE_PROTOCOL parsing mode, the user is required to specify the time
In SML_TELNET_PROTOCOL and SML_JSON_PROTOCOL modes, the time precision is determined based on the length of the timestamp (in the same way as the OpenTSDB standard operation), and the user-specified time resolution is ignored at this point.
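For the telnet and JSON modes, the length-based inference can be sketched as follows. The digit-count thresholds are assumptions following OpenTSDB's convention for second/millisecond/microsecond/nanosecond epoch timestamps.

```python
def precision_from_timestamp(ts: str) -> str:
    # Infer time precision from the number of digits, OpenTSDB-style:
    # 10 -> seconds, 13 -> milliseconds, 16 -> microseconds, 19 -> nanoseconds.
    mapping = {10: "seconds", 13: "milliseconds",
               16: "microseconds", 19: "nanoseconds"}
    return mapping.get(len(ts), "unknown")
```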
## Data schema mapping rules
This section describes how line protocol data is mapped to data with a schema. The measurement in each line protocol row is mapped to the super table name. The tag names in tag_set are the tag names in the data schema, and the names in field_set are the column names. The following data is used as an example to illustrate the mapping rules.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
The row data mapping generates a super table: `st`, which contains three labels of type NCHAR: t1, t2, t3. Five data columns are ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). The mapping becomes the following SQL statement.
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
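To make the mapping concrete, here is a deliberately simplified parser for the example row. It is a sketch only: no escape handling, no quoted commas or spaces, and only a subset of the numeric suffixes; TDengine's real parser is more complete.

```python
def parse_line(line: str):
    # Split "measurement,tag_set field_set timestamp" (simplified: assumes
    # no escaped or quoted spaces/commas inside tag or field values).
    head, fields_str, ts = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(p.split("=", 1) for p in tag_pairs)

    def infer(value):
        # Map a field value to (SQL type, Python value) by its suffix/format.
        if value.startswith('"') and value.endswith('"'):
            return ("BINARY", value.strip('"'))
        if value.lower() in ("t", "true", "f", "false"):
            return ("BOOL", value.lower() in ("t", "true"))
        for suffix, sql_type in (("i64", "BIGINT"), ("i32", "INT"),
                                 ("f64", "DOUBLE"), ("f32", "FLOAT")):
            if value.endswith(suffix):
                number = value[: -len(suffix)]
                return (sql_type,
                        float(number) if suffix[0] == "f" else int(number))
        return ("DOUBLE", float(value))  # bare numbers default to DOUBLE

    fields = {k: infer(v) for k, v in
              (p.split("=", 1) for p in fields_str.split(","))}
    return measurement, tags, fields, int(ts)

row = 'st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000'
measurement, tags, fields, ts = parse_line(row)
```

Running this against the example row yields exactly the schema described above: tags as strings, c1 as BIGINT, c2 as BOOL, c3 as BINARY, and c4 as DOUBLE.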
## Data schema change handling
This section describes the impact on the data schema for different line protocol data writing cases.
When writing to an explicitly identified field type using the line protocol, subsequent changes to the field's type definition will result in an explicit data schema error, i.e., will trigger a write API error report. As shown below:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
The data type mapping in the first row defines column c4 as DOUBLE, but the data in the second row is declared as BIGINT by the numeric suffix, which triggers a parsing error with schemaless writing.
If a line protocol statement first declares a data column as BINARY and a subsequent statement requires a longer BINARY length, this triggers a super table schema change.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
Parsing the first line declares column c5 as a BINARY(4) field. The second line also writes to c5 as a BINARY column, but its width is 6, so the width of the BINARY field is automatically increased to accommodate the new string.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
The second line of data has an additional column c6 of type BINARY(6) compared to the first row. Then a column c6 of type BINARY(6) is automatically added at this point.
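The width-increase and column-addition behavior from the two examples can be sketched as follows. This is an illustration of the rules, not TDengine's internals.

```python
def apply_row_schema(schema: dict, row_fields: dict) -> dict:
    # schema / row_fields map column name -> (type, width).
    # BINARY widths only grow, and unknown columns are added (never removed).
    merged = dict(schema)
    for col, (col_type, width) in row_fields.items():
        if col not in merged:
            merged[col] = (col_type, width)
        elif col_type == "BINARY" and width > merged[col][1]:
            merged[col] = (col_type, width)
    return merged

schema = {"c1": ("BIGINT", 8), "c5": ("BINARY", 4)}
schema = apply_row_schema(schema, {"c5": ("BINARY", 6), "c6": ("BINARY", 6)})
```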
## Write integrity
TDengine provides idempotency guarantees for data writing, i.e., you can repeatedly call the API to write data with errors. However, it does not give atomicity guarantees for writing multiple rows of data. During the process of writing numerous rows of data in one batch, some data will be written successfully, and some data will fail.
## Error code
If an error occurs in the data itself during the schemaless writing process, the application will get the `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error message, which indicates that the error occurred during writing. The other error codes are consistent with those of TDengine, and the specific cause of an error can be obtained via `taos_errstr()`.
......@@ -19,7 +19,7 @@ password = "taosdata"
The default database name written by taosAdapter is `collectd`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password are the values configured by the actual TDengine. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable the taosAdapter to receive collectd data by using the taosAdapter command-line parameters or by setting environment variables.
### Configure collectd
#collectd
......
......@@ -21,7 +21,7 @@ password = "taosdata"
The default database name written by the taosAdapter is `icinga2`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password are the values configured by the actual TDengine. You need to restart the taosAdapter after modification.
- You can also enable taosAdapter to receive icinga2 data by using the taosAdapter command-line parameters or setting environment variables.
### Configure icinga2
......
Configuring Prometheus is done by editing the Prometheus configuration file prometheus.yml (default location `/etc/prometheus/prometheus.yml`).
### Configuring third-party database addresses
......
......@@ -27,7 +27,7 @@ deleteTimings = true
The default database name written by taosAdapter is `statsd`. To specify a different name, you can also modify the taosAdapter configuration file db entry. user and password fill in the actual TDengine configuration values. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable taosAdapter to receive StatsD data by using the taosAdapter command-line parameters or setting environment variables.
### Configuring StatsD
......
......@@ -19,7 +19,7 @@ password = "taosdata"
The taosAdapter writes to the database with the default name `tcollector`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password fill in the actual TDengine configuration values. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable taosAdapter to receive tcollector data by using the taosAdapter command-line parameters or setting environment variables.
### Configuring TCollector
......
In the Telegraf configuration file (default location `/etc/telegraf/telegraf.conf`) add an `outputs.http` section.
```
[[outputs.http]]
......@@ -24,4 +24,3 @@ An example is as follows.
data_format = "influx"
influx_max_line_bytes = 250
```
......@@ -2,7 +2,7 @@
title: Reference
---
The reference guide is a detailed introduction to TDengine itself, TDengine's connectors for various programming languages, and the tools that come with it.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
......
......@@ -3,7 +3,7 @@ sidebar_label: TCollector
title: TCollector writing
---
import TCollector from "../14-reference/_tcollector.mdx"
TCollector is part of openTSDB and collects client computer's logs to send to the database.
......@@ -17,7 +17,7 @@ To write data to the TDengine via TCollector requires the following preparations
- TCollector has been installed. Please refer to [official documentation](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector) for TCollector installation
## Configuration steps
<Tcollector />
<TCollector />
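As a sketch, TCollector can be pointed at taosAdapter either by editing its startup configuration or by passing the host and port on the command line. The port 6049 below is an assumption based on a common taosAdapter default; use the value from your own taosAdapter configuration:

```
# Hypothetical invocation; adjust host and port to your taosAdapter deployment
./tcollector.py --host 127.0.0.1 --port 6049
```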
## Verification method
......
---
sidebar_label: EMQ Broker
title: EMQ Broker writing
sidebar_label: EMQX Broker
title: EMQX Broker writing
---
MQTT is a popular IoT data transfer protocol, [EMQ](https://github.com/emqx/emqx) is an open-source MQTT Broker software, without any code, only need to use "rules" in EMQ Dashboard to do simple configuration. You can write MQTT data directly to TDengine. EMQ X supports saving data to TDengine by sending it to web services and provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQ official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it. tdengine).
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is open-source MQTT Broker software; without writing any code, you only need to do simple configuration with "rules" in the EMQX Dashboard to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by sending it to web services and, in the Enterprise Edition, provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
## Prerequisites
......@@ -34,7 +34,7 @@ Depending on the current operating system, users can download the installation p
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQ X + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are carried out with this blog scenario too. Please modify it according to your actual application scenario.
Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQX + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations also use this blog's scenario; please modify them according to your actual application scenario.
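To sanity-check the schema above, a hand-written insert of one sample row could look like this; all the values are made up purely for illustration:

```sql
INSERT INTO sensor_data VALUES
  (NOW, 23.5, 55.0, 0.8, 35.0, 12.0, 0.005, 0.018, 0.4, 'sensor-01', 1, NOW);
```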
## Configuring EMQX Rules
......@@ -187,4 +187,4 @@ Use the TDengine CLI program to log in and query the appropriate databases and t
![img](./emqx/check-result-in-taos.png)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
EMQX Please refer to the [EMQ official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
......@@ -34,7 +34,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/downloa
### TDengine
Download the latest TDengine-server 2.4.0.x or above from the [Downloads](http://taosdata.com/cn/all-downloads/) page on the Taos Data website and install it.
Download the latest TDengine-server 2.4.0.x or above from the [Downloads](http://taosdata.com/cn/all-downloads/) page on the TAOSData website and install it.
## Data Connection Setup
......
......@@ -99,7 +99,7 @@ This chapter describes the differences between OpenTSDB and TDengine at the syst
TDengine currently only supports Grafana for visual dashboard rendering, so if your application uses front-end dashboards other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash), [Status Wolf](https://github.com/box/StatusWolf), etc.), you cannot directly migrate those front-ends to TDengine; they need to be ported to Grafana to work correctly.
TDengine version 2.3.0.x only supports collectd and StatsD as data collection and aggregation software, but more data collection and aggregation software will be supported in the future. If you use other data aggregators on the collection side, your application needs to be ported to these two data aggregation systems to write data correctly.
In addition to the two data aggregator software protocols mentioned above, TDengine also supports writing data directly via InfluxDB's row protocol and OpenTSDB's data writing protocol, JSON format. You can rewrite the logic on the data push side to write data using the row protocols supported by TDengine.
In addition to the two data aggregation software protocols mentioned above, TDengine also supports writing data directly via InfluxDB's line protocol and OpenTSDB's data writing protocol (JSON format). You can rewrite the logic on the data push side to write data using the line protocols supported by TDengine.
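As a hedged illustration of what InfluxDB's line protocol looks like, independent of any TDengine API, a tiny helper that assembles one line-protocol record could be sketched like this; the measurement, tag, and field names are invented for the example:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol record:
    measurement,tag1=v1,... field1=v1,... timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = to_line_protocol(
    "meters",
    {"location": "California.SanFrancisco", "groupid": 2},
    {"current": 10.3, "voltage": 219, "phase": 0.31},
    1648432611249000000,
)
print(line)
# meters,groupid=2,location=California.SanFrancisco current=10.3,phase=0.31,voltage=219 1648432611249000000
```

A string built this way can be pushed to taosAdapter's InfluxDB-compatible write endpoint in place of an OpenTSDB write.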
In addition, if your application uses the following features of OpenTSDB, you need to understand the following considerations before migrating your application to TDengine.
......
# Efficient Data Writing
TDengine supports writing data through multiple interfaces, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQ MQTT Broker, HiveMQ Broker, CSV files, etc.; interfaces such as Kafka and OPC will be provided in the future. Data can be inserted one record at a time or in batches, and for a single data collection point or multiple data collection points at the same time. Multi-threaded insertion, out-of-order data insertion, and historical data insertion are all supported.
TDengine supports writing data through multiple interfaces, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQX MQTT Broker, HiveMQ Broker, CSV files, etc.; interfaces such as Kafka and OPC will be provided in the future. Data can be inserted one record at a time or in batches, and for a single data collection point or multiple data collection points at the same time. Multi-threaded insertion, out-of-order data insertion, and historical data insertion are all supported.
## <a class="anchor" id="sql"></a>Data Writing via SQL
......@@ -312,9 +312,9 @@ TCollector is a client-side process that collects data from local collectors and sends it to OpenTSDB
For taosAdapter configuration parameters, please refer to the output of the `taosadapter --help` command and the related documentation.
## <a class="anchor" id="emq"></a>Direct Writing via EMQ Broker
## <a class="anchor" id="emq"></a>Direct Writing via EMQX Broker
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is open-source MQTT Broker software; without writing any code, you only need to do simple configuration with "rules" in the EMQ Dashboard to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct saving. For detailed usage, please refer to the [EMQ official documentation](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is open-source MQTT Broker software; without writing any code, you only need to do simple configuration with "rules" in the EMQX Dashboard to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct saving. For detailed usage, please refer to the [EMQX official documentation](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)
## <a class="anchor" id="hivemq"></a>Direct Writing via HiveMQ Broker
......
# Efficient Data Writing
TDengine supports multiple ways to write data, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQ MQTT Broker, HiveMQ Broker, CSV file, etc. Kafka, OPC and other interfaces will be provided in the future. Data can be inserted in one single record or in batches, data from one or multiple data collection points can be inserted at the same time. TDengine supports multi-thread insertion, out-of-order data insertion, and also historical data insertion.
TDengine supports multiple ways to write data, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQX MQTT Broker, HiveMQ Broker, CSV file, etc.; Kafka, OPC and other interfaces will be provided in the future. Data can be inserted in one single record or in batches, and data from one or multiple data collection points can be inserted at the same time. TDengine supports multi-thread insertion, out-of-order data insertion, and also historical data insertion.
## <a class="anchor" id="sql"></a> Data Writing via SQL
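As a quick illustration of the SQL writing interface, single-record, batch, and multi-table inserts can be sketched as follows; the table names `d1001`/`d1002` and the column values are hypothetical examples, not from this document:

```sql
-- Insert one record into one table
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
-- Insert multiple records into one table in a single statement
INSERT INTO d1001 VALUES ('2022-05-10 10:00:00.000', 10.3, 218, 0.25)
                         ('2022-05-10 10:00:01.000', 10.1, 220, 0.28);
-- Insert into multiple tables at the same time
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32) d1002 VALUES (NOW, 11.5, 221, 0.30);
```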
......@@ -303,9 +303,9 @@ TCollector is a client-side process that gathers data from local collectors and
Please refer to the `taosadapter --help` output for taosAdapter configuration and usage.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
## <a class="anchor" id="emq"></a> Data Writing via EMQX Broker
[EMQ](https://github.com/emqx/emqx) is an open source MQTT Broker software, with no need of coding, only to use "rules" in EMQ Dashboard for simple configuration, and MQTT data can be directly written into TDengine. EMQX supports storing data to the TDengine by sending it to a Web service, and also provides a native TDengine driver on Enterprise Edition for direct data store. Please refer to [EMQ official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
[EMQX](https://github.com/emqx/emqx) is open-source MQTT Broker software; with no coding required, you only need to use "rules" in the EMQX Dashboard for simple configuration, and MQTT data can be written directly into TDengine. EMQX supports storing data to TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct storage. Please refer to the [EMQX official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
## <a class="anchor" id="hivemq"></a> Data Writing via HiveMQ Broker
......