Unverified commit fd34ae9d, authored by sangshuduo, committed by GitHub

Docs/sangshuduo/td 12520 docker doc (#10799)

* [TD-12520]<docs>: refine docker documents.

* update taosadapter part

* remove manually launch taosadapter section

* update English version

* [TD-12520]<docs>: refine doc for docker usage

* [TD-12520]<docs>: fix docker doc

* fix typo
Parent 424ef567
@@ -121,7 +121,7 @@ For details of the TDengine RESTful interface, please refer to the [official documentation](https://www.taosdata.com/cn
### Running TDengine server and taosAdapter in a Docker container
-Starting with TDegnine 2.4.0.0, the Docker container provides taosAdapter as an independently running component, replacing the http server that was built into the taosd process in earlier versions of TDengine. taosAdapter supports writing to and querying the TDengine server through the RESTful interface, and provides InfluxDB/OpenTSDB-compatible ingestion interfaces that allow InfluxDB/OpenTSDB applications to be ported to TDengine seamlessly. In the new Docker images taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command, and taosAdapter can also be run alone in the docker run command, without running taosd.
+Starting with TDengine 2.4.0.0, the Docker container provides taosAdapter as an independently running component, replacing the http server that was built into the taosd process in earlier versions of TDengine. taosAdapter supports writing to and querying the TDengine server through the RESTful interface, and provides InfluxDB/OpenTSDB-compatible ingestion interfaces that allow InfluxDB/OpenTSDB applications to be ported to TDengine seamlessly. In the new Docker images taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command, and taosAdapter can also be run alone in the docker run command, without running taosd.
Note: if taosAdapter runs in the container, additional ports need to be mapped as required; for the default port configuration and how to change it, please refer to the [taosAdapter documentation](https://github.com/taosdata/taosadapter/blob/develop/README-CN.md)
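A minimal sketch of the two modes, assuming the default image and abbreviated port mappings (map the full port list your deployment needs; 6041 is taosAdapter's default RESTful port):
```bash
# Run taosd together with taosAdapter (the default since 2.4.0.0)
docker run -d --name tdengine -p 6030:6030 -p 6041:6041 tdengine/tdengine

# Run taosd only, disabling taosAdapter with the variable described above
docker run -d --name tdengine -e TAOS_DISABLE_ADAPTER=true -p 6030:6030 tdengine/tdengine
```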
@@ -227,7 +227,7 @@ taos>
- **View the database.**
```bash
-taos> show databases;
+taos> SHOW DATABASES;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
@@ -240,7 +240,7 @@ $ taos> show databases;
taos> use test;
Database changed.
-taos> show stables;
+taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
@@ -251,10 +251,7 @@ Query OK, 1 row(s) in set (0.003259s)
- **View the table and limit the output to 10 entries.**
```bash
-taos> select * from test.t0 limit 10;
-DB error: Table does not exist (0.002857s)
-taos> select * from test.d0 limit 10;
+taos> SELECT * FROM test.d0 LIMIT 10;
ts | current | voltage | phase |
======================================================================================
2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
@@ -274,7 +271,7 @@ Query OK, 10 row(s) in set (0.016791s)
- **View the tag values for the d0 table.**
```bash
-taos> select groupid, location from test.d0;
+taos> SELECT groupid, location FROM test.d0;
groupid | location |
=================================
0 | shanghai |
@@ -292,7 +289,7 @@ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
Then you can use the taos shell to query the database statsd and the super table foo that taosAdapter created automatically:
```
-taos> show databases;
+taos> SHOW DATABASES;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
@@ -302,13 +299,13 @@ Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
-taos> show stables;
+taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
-taos> select * from foo;
+taos> SELECT * FROM foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
......
@@ -13,6 +13,10 @@ TDengine supports the X64/ARM64/MIPS64/Alpha64 hardware platforms, and will subsequently support ARM32,
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
For detailed instructions, please refer to [Experience TDengine quickly via Docker](https://www.taosdata.com/cn/documentation/getting-started/docker)
Note: for now, Docker is not recommended for deploying the TDengine client or server in production, but it is very convenient in development environments or for a first trial. In particular, Docker makes it easy to try TDengine under macOS and Windows.
......
@@ -118,8 +118,8 @@ TDengine currently only supports dashboard visualization via Grafana, so if your
| No. | metric | value name | type | tag1 | tag2 | tag3 | tag4 | tag5 |
| ---- | -------------- | ------ | ------ | ---- | ----------- | -------------------- | --------- | ------ |
-| 1 | memory | value | double | host | memory_type | memory_type_instance | source | n/a |
-| 2 | swap | value | double | host | swap_type | swap_type_instance | source | n/a |
+| 1 | memory | value | double | host | memory_type | memory_type_instance | source | n/a |
+| 2 | swap | value | double | host | swap_type | swap_type_instance | source | n/a |
| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
TDengine requires the stored data to have a schema, i.e. a super table must be created and its schema specified before data can be written. You have two ways to establish the data schema: 1) take full advantage of TDengine's native support for writing OpenTSDB data, call the API provided by TDengine to write the data (in line-text or JSON format), and let the single-value model be established automatically. This approach requires no major changes to the data-writing application and no conversion of the written data format.
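As a sketch of the first approach, assuming taosAdapter's OpenTSDB-compatible telnet-style write endpoint on its default port 6041 with the default credentials (the endpoint path may differ between versions):
```bash
# Write one OpenTSDB telnet-format line into database test; the super table
# and sub-table are created automatically under the single-value model
curl -u root:taosdata -X POST \
  -d 'memory 1632979445 3.0656 host=vm130 memory_type=memory memory_type_instance=buffer source=collectd' \
  http://127.0.0.1:6041/opentsdb/v1/put/telnet/test
```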
@@ -198,7 +198,7 @@ For the specific usage of DataX and how to use DataX to write data into TDengine, please refer
2) If there are enough spare computing and IO resources while the system runs under full load, a multi-threaded import mechanism can be established to maximize the efficiency of the data migration. Given the heavy CPU load that data parsing brings, the maximum number of parallel tasks needs to be controlled to avoid overloading the whole system while importing historical data.
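A sketch of bounding the import parallelism with standard shell tools; `import_file.sh` is a hypothetical per-file loader, and 4 is an illustrative worker count to be tuned against the spare CPU and IO headroom:
```bash
# Replay exported history files with at most 4 concurrent import workers
ls /data/opentsdb_export/*.json | xargs -n 1 -P 4 ./import_file.sh
```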
-Because TDegnine itself is simple to operate, there is no need for index maintenance or data-format conversion during the whole process; it only needs to be executed sequentially.
+Because TDengine itself is simple to operate, there is no need for index maintenance or data-format conversion during the whole process; it only needs to be executed sequentially.
Once the historical data has been fully imported into TDengine, the two systems run at the same time, and query requests can then be switched over to TDengine, achieving a seamless application switchover.
......
@@ -197,7 +197,7 @@ column[0]:FLOAT column[1]:INT column[2]:FLOAT
Press enter key to continue or Ctrl-C to stop
```
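The prompt above comes from the benchmark tool's default interactive run inside the container; a minimal sketch, assuming the 2.x tool name taosdemo and a container named tdengine:
```bash
docker exec -it tdengine taosdemo
```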
-After pressing Enter, this command automatically creates a super table `meters` under the database test. There are 10,000 tables under this super table, named "d0" to "d9999"; each table has 10,000 records, each record has four fields (ts, current, voltage, phase), the timestamps run from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999", and each table has tags location and groupId, with groupId set from 1 to 10 and location set to "beijing" or "shanghai".
+After pressing Enter, this command automatically creates a super table `meters` under the database test. There are 10,000 tables under this super table, named "d0" to "d9999"; each table has 10,000 records, each record has four fields (ts, current, voltage, phase), the timestamps run from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999", and each table has tags location and groupid, with groupid set from 1 to 10 and location set to "beijing" or "shanghai".
This command takes a few minutes to execute and ends up inserting a total of 100 million records.
@@ -217,7 +217,7 @@ taos>
- **View the database.**
```bash
-taos> show databases;
+taos> SHOW DATABASES;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
@@ -227,10 +227,10 @@ $ taos> show databases;
- **View Super Tables.**
```bash
-taos> use test;
+taos> USE test;
Database changed.
-taos> show stables;
+taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
@@ -241,10 +241,7 @@ Query OK, 1 row(s) in set (0.003259s)
- **View the table and limit the output to 10 entries.**
```bash
-taos> select * from test.t0 limit 10;
-DB error: Table does not exist (0.002857s)
-taos> select * from test.d0 limit 10;
+taos> SELECT * FROM test.d0 LIMIT 10;
ts | current | voltage | phase |
======================================================================================
2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
@@ -264,7 +261,7 @@ Query OK, 10 row(s) in set (0.016791s)
- **View the tag values for the d0 table.**
```bash
-taos> select groupid, location from test.d0;
+taos> SELECT groupid, location FROM test.d0;
groupid | location |
=================================
0 | shanghai |
@@ -283,23 +280,23 @@ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
Then you can use the taos shell to query the database statsd and the contents of the super table foo that taosAdapter created automatically.
```
-taos> show databases;
+taos> SHOW DATABASES;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
-taos> use statsd;
+taos> USE statsd;
Database changed.
-taos> show stables;
+taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
-taos> select * from foo;
+taos> SELECT * FROM foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
......
@@ -63,7 +63,7 @@ This allows collectd to push the data to taosAdapter using the push to OpenTSDB
After the data has been written to TDengine properly, you can use Grafana to visualize the data written to TDengine. There is a Grafana plugin in the TDengine installation directory connector/grafanaplugin, and the plugin is simple to use.
-First copy the entire dist directory under the grafanaplugin directory to Grafana's plugins directory (the default location is /var/lib/grafana/plugins/), and then restart Grafana to see the TDengine data source under the Add Data Source menu.
+First copy the entire `dist` directory under the grafanaplugin directory to Grafana's plugins directory (the default location is /var/lib/grafana/plugins/), and then restart Grafana to see the TDengine data source under the Add Data Source menu.
```shell
sudo cp -r . /var/lib/grafana/plugins/tdengine
@@ -144,15 +144,15 @@ The steps are as follows: the name of the metrics is used as the name of the TDe
Create 3 super tables in TDengine.
```sql
-create stable memory(ts timestamp, val double) tags(host binary(12), memory_type binary(20), memory_type_instance binary(20), source binary(20));
-create stable swap(ts timestamp, val double) tags(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
-create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
+CREATE STABLE memory(ts timestamp, val double) TAGS(host binary(12), memory_type binary(20), memory_type_instance binary(20), source binary(20));
+CREATE STABLE swap(ts timestamp, val double) TAGS(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
+CREATE STABLE disk(ts timestamp, val double) TAGS(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
```
For sub-tables, use dynamic table creation as shown below:
```sql
-insert into memory_vm130_memory_bufferred_collectd using memory tags('vm130', 'memory', 'buffer', 'collectd') values(1632979445, 3.0656);
+INSERT INTO memory_vm130_memory_buffered_collectd USING memory TAGS('vm130', 'memory', 'buffer', 'collectd') VALUES(1632979445, 3.0656);
```
Eventually about 340 sub-tables and 3 super tables will be created in the system. Note that if concatenating tag values produces sub-table names that exceed the system limit (191 bytes), some encoding (e.g. MD5) needs to be used to shorten them to an acceptable length.
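A sketch of that encoding idea; the `t_` prefix and the use of md5sum are illustrative, not a fixed convention:
```bash
# Derive a fixed-length sub-table name from concatenated tag values that
# would otherwise exceed the 191-byte table-name limit
echo -n "memory_vm130_memory_buffered_collectd" | md5sum | awk '{print "t_" $1}'
```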
@@ -168,7 +168,7 @@ Data is subscribed from the message queue and an adapted writer is started to wr
After data has been written for a sustained period, SQL statements can be used to check whether the amount of data written meets the expected write requirements. The following SQL statement counts the amount of data.
```sql
-select count(*) from memory
+SELECT COUNT(*) FROM memory
```
After completing the query, if the amount of data written does not differ from what is expected and the writing program itself reports no abnormal errors, you can confirm that the data writing is complete and valid.
@@ -213,7 +213,7 @@ Notes.
1. The value within INTERVAL needs to be the same as the interval value of the outer query.
-As the interpolation of values in OpenTSDB uses linear interpolation, use fill(linear) to declare the interpolation type in the interpolation clause. The following functions with the same interpolation requirements are all handled by this method.
+As the interpolation of values in OpenTSDB uses linear interpolation, use FILL(linear) to declare the interpolation type in the interpolation clause. The following functions with the same interpolation requirements are all handled by this method.
2. The 20s parameter in INTERVAL means that the inner query generates results in a 20-second window. In a real query, it needs to be adjusted to the time interval between records so that the interpolation results are equivalent to the original data.
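A sketch combining both notes, with illustrative table and column names; the inner and outer queries share the same 20s window, and the inner query interpolates linearly as OpenTSDB does:
```sql
SELECT MAX(value) FROM
  (SELECT FIRST(val) value FROM memory INTERVAL(20s) FILL(linear))
INTERVAL(20s);
```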
@@ -226,7 +226,7 @@ Equivalent function: count
Example.
-select count(*) from super_table_name;
+SELECT COUNT(*) FROM super_table_name;
**Dev**
@@ -234,7 +234,7 @@ Equivalent function: stddev
Example.
-Select stddev(val) from table_name
+SELECT STDDEV(val) FROM table_name
**Estimated percentiles**
@@ -242,7 +242,7 @@ Equivalent function: apercentile
Example.
-Select apercentile(col1, 50, "t-digest") from table_name
+SELECT APERCENTILE(col1, 50, "t-digest") FROM table_name
Remark.
@@ -254,7 +254,7 @@ Equivalent function: first
Example.
-Select first(col1) from table_name
+SELECT FIRST(col1) FROM table_name
**Last**
@@ -262,7 +262,7 @@ Equivalent function: last
Example.
-Select last(col1) from table_name
+SELECT LAST(col1) FROM table_name
**Max**
@@ -270,7 +270,7 @@ Equivalent function: max
Example.
-Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+SELECT MAX(value) FROM (SELECT FIRST(val) value FROM table_name INTERVAL(10s) FILL(linear)) INTERVAL(10s)
Note: The Max function requires interpolation, for the reasons given above.
@@ -280,13 +280,13 @@ Equivalent function: min
Example.
-Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
+SELECT MIN(value) FROM (SELECT FIRST(val) value FROM table_name INTERVAL(10s) FILL(linear)) INTERVAL(10s);
**MinMax**
Equivalent function: max
-Select max(val) from table_name
+SELECT MAX(val) FROM table_name
Note: This function does not require interpolation, so it can be calculated directly.
@@ -294,7 +294,7 @@ Note: This function does not require interpolation, so it can be calculated dire
Equivalent function: min
-Select min(val) from table_name
+SELECT MIN(val) FROM table_name
Note: This function does not require interpolation, so it can be calculated directly.
@@ -308,7 +308,7 @@ Note:
Equivalent function: sum
-Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+SELECT SUM(value) FROM (SELECT FIRST(val) value FROM table_name INTERVAL(10s) FILL(linear)) INTERVAL(10s)
Note: This function does not require interpolation, so it can be calculated directly.
@@ -316,7 +316,7 @@ Note: This function does not require interpolation, so it can be calculated dire
Equivalent function: sum
-Select sum(val) from table_name
+SELECT SUM(val) FROM table_name
Note: This function does not require interpolation, so it can be calculated directly.
@@ -356,7 +356,7 @@ Combining the above formula and bringing the parameters into the calculation for
### Storage device selection considerations
-The hard disk should be a device with good random read performance; if SSDs are available, consider using SSDs as much as possible. Good random read performance is extremely helpful in improving the system's query performance and can improve the overall query response of the system. To obtain better query performance, the single-threaded random read IOPS of the hard disk device should not be lower than 1000, and it is better to reach 5000 IOPS or more. To evaluate the random read IO performance of the current device, it is recommended that fio software be used (see Appendix 1 for details on how to use it) to confirm whether it can meet the large-file random read performance requirements.
+The hard disk should be a device with good random read performance; if SSDs are available, consider using SSDs as much as possible. Good random read performance is extremely helpful in improving the system's query performance and can improve the overall query response of the system. To obtain better query performance, the single-threaded random read IOPS of the hard disk device should not be lower than 1000, and it is better to reach 5000 IOPS or more. To evaluate the random read IO performance of the current device, it is recommended that `fio` be used (see Appendix 1 for details on how to use it) to confirm whether it can meet the large-file random read performance requirements.
Hard disk write performance has little impact on TDengine; TDengine writes in append mode, so as long as the device has good sequential write performance, both SAS hard disks and SSDs can, in general, meet TDengine's requirements for disk write performance well.
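A sketch of a single-threaded random read measurement with `fio`; the file path, size, and runtime are illustrative, and Appendix 1 remains the authoritative configuration:
```bash
fio --name=randread --filename=/data/fio.test --size=10G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting
```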
@@ -390,7 +390,7 @@ FQDN, firstEp, secondEP, dataDir, logDir, tmpDir, serverPort. The specific meani
Follow the same steps on each node that needs to run taosd: set the parameters and start the taosd service, then add the dnode to the cluster, as sketched below.
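A sketch of the per-node entries in taos.cfg, with illustrative host names and paths:
```
# /etc/taos/taos.cfg on every node; firstEp points at the first node started
fqdn        node1.example.com
serverPort  6030
firstEp     node1.example.com:6030
dataDir     /var/lib/taos
logDir      /var/log/taos
```
Each additional node is then registered from the taos shell on the first node, e.g. `CREATE DNODE "node2.example.com:6030";`.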
-Finally, start taos and execute the command show dnodes, if you can see all the nodes that have joined the cluster, then the cluster is successfully built. For the specific operation procedure and notes, please refer to the document [TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster).
+Finally, start taos and execute the command `SHOW DNODES`; if you can see all the nodes that have joined the cluster, the cluster has been built successfully. For the specific operation procedure and notes, please refer to the document [TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster).
## Appendix 4: Super table names
......