
# Connect with Other Tools
## Grafana
TDengine can be quickly integrated with the open-source data visualization system [Grafana](https://www.grafana.com/) to build a data monitoring and alerting system. The whole process requires no code development, and the contents of TDengine data tables can be visualized on a dashboard.
### Install Grafana
Currently, TDengine supports Grafana 5.2.4 and above. Users can download the installation package for their operating system from the Grafana website and install it: https://grafana.com/grafana/download.
### Configure Grafana
The Grafana plugin for TDengine is in the /usr/local/taos/connector/grafana directory of the installation package.
Taking CentOS 7.2 as an example, copy the tdengine directory to /var/lib/grafana/plugins and restart Grafana.
### Use Grafana
#### Configure the Data Source
Users can log in to the Grafana server at localhost:3000 (username/password: admin/admin) and add a data source through `Configuration -> Data Sources` on the left panel, as shown below:
![img](../assets/add_datasource1.jpg)
Click `Add data source` to enter the new data source page, type TDengine in the search box, and select it, as shown below:
![img](../assets/add_datasource2.jpg)
On the data source configuration page, modify the settings as prompted:
![img](../assets/add_datasource3.jpg)
* Host: the IP address of any server in the TDengine cluster plus the port number of the TDengine RESTful interface (6020); the default is http://localhost:6020.
* User: the TDengine username.
* Password: the TDengine user password.
Click `Save & Test` to test; on success the following message appears:
![img](../assets/add_datasource4.jpg)
#### Create a Dashboard
Back on the home page, create a dashboard and click Add Query to enter the panel query page:
![img](../assets/create_dashboard1.jpg)
As shown above, select the `TDengine` data source in Query, and enter the SQL to run in the query box below. Details:
* INPUT SQL: the statement to query (its result set should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where from, to, and interval are built-in variables of the TDengine plugin, representing the query range and time interval taken from the Grafana panel. Besides the built-in variables, custom template variables are also supported.
* ALIAS BY: an alias for the current query.
* GENERATE SQL: clicking this button substitutes the variables and generates the final statement to be executed.
Following the default prompts, a query of the average system memory usage at the specified interval on the server where TDengine is deployed looks like this:
![img](../assets/create_dashboard2.jpg)
> For how to build monitoring panels with Grafana and more information about using Grafana, please refer to the official Grafana [documentation](https://grafana.com/docs/).
#### Import a Dashboard
A ready-to-import dashboard, `tdengine-grafana.json`, is provided in the Grafana plugin directory /usr/local/taos/connector/grafana/tdengine/dashboard/.
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
![img](../assets/import_dashboard1.jpg)
After the import, the dashboard looks like this:
![img](../assets/import_dashboard2.jpg)
## Matlab
Matlab can connect directly to TDengine through the JDBC driver provided in the installation package and fetch data into its local workspace.
### Matlab JDBC Interface Adaptation
Adapting Matlab takes the following steps, illustrated here with Matlab2017a on Windows 10:
- Copy the driver JDBCDriver-1.0.0-dist.jar from the TDengine package to ${matlab_root}\MATLAB\R2017a\java\jar\toolbox
- Copy the taos.lib file from the TDengine package to ${matlab_root}\MATLAB\R2017a\lib\win64
- Add the newly added driver jar to the Matlab classpath by appending the following line to the file ${matlab_root}\MATLAB\R2017a\toolbox\local\classpath.txt
`$matlabroot/java/jar/toolbox/JDBCDriver-1.0.0-dist.jar`
- Create a file javalibrarypath.txt under ${user_home}\AppData\Roaming\MathWorks\MATLAB\R2017a\ and add the path of taos.dll to it. For example, if taos.dll was copied to C:\Windows\System32 during installation, add the following line to javalibrarypath.txt:
`C:\Windows\System32`
### Connecting to TDengine from Matlab to Fetch Data
After the configuration above succeeds, open Matlab.
- Create a connection:
`conn = database('db', 'root', 'taosdata', 'com.taosdata.jdbc.TSDBDriver', 'jdbc:TSDB://127.0.0.1:0/')`
- Run a query:
`sql0 = ['select * from tb']`
`data = select(conn, sql0);`
- Insert a record:
`sql1 = ['insert into tb values (now, 1)']`
`exec(conn, sql1)`
For more examples and details, please refer to the file examples\Matlab\TDengineDemo.m in the installation package.
## R
R can connect to TDengine through the JDBC interface. First, install R's JDBC package: start the R environment and run the following command to install the RJDBC support library:
```R
install.packages('RJDBC', repos='http://cran.us.r-project.org')
```
Once the installation completes, load the _RJDBC_ package by running `library('RJDBC')`.
Then load the TDengine JDBC driver:
```R
drv<-JDBC("com.taosdata.jdbc.TSDBDriver","JDBCDriver-2.0.0-dist.jar", identifier.quote="\"")
```
If this succeeds, no error message appears. Then try connecting to the database with the following command:
```R
conn<-dbConnect(drv,"jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata","root","taosdata")
```
Note: replace the IP address in the command above with the correct one. If no error message appears, the connection is established; otherwise, adjust the command according to the error message. TDengine supports the following functions of the _RJDBC_ package:
- dbWriteTable(conn, "test", iris, overwrite=FALSE, append=TRUE): write the data frame iris to the table test. overwrite must be set to FALSE and append must be TRUE, and the schema of the data frame iris must match that of the table test.
- dbGetQuery(conn, "select count(*) from test"): run a query
- dbSendUpdate(conn, "use db"): execute any non-query SQL statement, e.g. dbSendUpdate(conn, "use db") to switch databases, or dbSendUpdate(conn, "insert into t1 values(now, 99)") to write data.
- dbReadTable(conn, "test"): read all data from table test
- dbDisconnect(conn): close the connection
- dbRemoveTable(conn, "test"): drop the table test
The TDengine client does not yet support the following functions:
- dbExistsTable(conn, "test"): check whether the table test exists
- dbListTables(conn): list all tables of the connection
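A minimal end-to-end sketch combining the supported calls, assuming the driver jar above is on the path, the server is reachable, and a database db with a table t1 (ts timestamp, val int) already exists:
```R
library('RJDBC')

# Load the TDengine JDBC driver and connect (IP and credentials are placeholders)
drv <- JDBC("com.taosdata.jdbc.TSDBDriver", "JDBCDriver-2.0.0-dist.jar", identifier.quote="\"")
conn <- dbConnect(drv, "jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata", "root", "taosdata")

# Switch to the existing database, write one record, and read it back
dbSendUpdate(conn, "use db")
dbSendUpdate(conn, "insert into t1 values(now, 99)")
print(dbGetQuery(conn, "select count(*) from t1"))

dbDisconnect(conn)
```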
# Connect with other tools
## Telegraf
TDengine is easy to integrate with [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/), an open-source server agent for collecting and sending metrics and events, without any extra development.
### Install Telegraf
At present, TDengine supports Telegraf versions newer than 1.7.4. Users can go to the [download link] and choose the proper package to install on their system.
### Configure Telegraf
Telegraf is configured by changing items in the configuration file */etc/telegraf/telegraf.conf*.
In the **output plugins** section, add an _[[outputs.http]]_ item:
- _url_: http://ip:6020/telegraf/udb, in which _ip_ is the IP address of any node in the TDengine cluster. Port 6020 is the RESTful API port used by TDengine. _udb_ is the name of the database to save the data, which needs to be created beforehand.
- _method_: "POST"
- _username_: username to login TDengine
- _password_: password to login TDengine
- _data_format_: "json"
- _json_timestamp_units_: "1ms"
In the **agent** section:
- hostname: used to distinguish different machines; it needs to be unique.
- metric_batch_size: 30, the maximum number of records per request sent by Telegraf. The larger the value, the less frequently requests are sent; for TDengine, the value should be less than 50.
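A minimal sketch of the corresponding part of */etc/telegraf/telegraf.conf*, assuming a TDengine node at 192.168.1.1 and a pre-created database udb (both placeholders):
```
[agent]
  hostname = "gateway-01"      # must be unique among machines
  metric_batch_size = 30       # keep below 50 for TDengine

[[outputs.http]]
  url = "http://192.168.1.1:6020/telegraf/udb"
  method = "POST"
  username = "root"
  password = "taosdata"
  data_format = "json"
  json_timestamp_units = "1ms"
```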
Please refer to the [Telegraf docs](https://docs.influxdata.com/telegraf/v1.11/) for more information.
## Grafana
[Grafana] is an open-source system for time-series data display. It is easy to integrate TDengine and Grafana to build a monitor system. Data saved in TDengine can be fetched and shown on the Grafana dashboard.
### Install Grafana
For now, TDengine only supports Grafana newer than version 5.2.4. Users can go to the [Grafana download page] for the proper package to download.
### Configure Grafana
TDengine Grafana plugin is in the _/usr/local/taos/connector/grafana_ directory.
Taking CentOS 7.2 as an example, just copy the TDengine directory to the _/var/lib/grafana/plugins_ directory and restart Grafana.
### Use Grafana
Users can log in to the Grafana server (username/password: admin/admin) through localhost:3000 to configure TDengine as the data source. As shown in the picture below, TDengine appears as a data source option in the box:
![img](../assets/clip_image001.png)
When choosing TDengine as the data source, the Host in the HTTP configuration should be set to the IP address of any node of the TDengine cluster, and the port should be set to 6020. For example, when TDengine and Grafana are on the same machine, it should be configured as _http://localhost:6020_.
Besides, users should also set the username and password used to log in to TDengine, and then click the _Save & Test_ button to save.
![img](../assets/clip_image001-2474914.png)
Then, TDengine as a data source should show in the Grafana data source list.
![img](../assets/clip_image001-2474939.png)
Then, users can create Dashboards in Grafana using TDengine as the data source:
![img](../assets/clip_image001-2474961.png)
Click the _Add Query_ button to add a query and input the SQL command you want to run in the _INPUT SQL_ text box. The SQL command should return a two-column, multi-row result, such as _SELECT count(*) FROM sys.cpu WHERE ts>=from and ts<to interval(interval)_, in which _from_, _to_ and _interval_ are TDengine built-in variables representing the query time range and time interval.
The _ALIAS BY_ field sets the query alias. Click _GENERATE SQL_ to send the command to TDengine:
![img](../assets/clip_image001-2474987.png)
Please refer to the [Grafana official document] for more information about Grafana.
## Matlab
Matlab can connect to and retrieve data from TDengine by TDengine JDBC Driver.
### MatLab and TDengine JDBC adaptation
Several steps are required to adapt Matlab to TDengine. Taking adapting Matlab2017a on Windows 10 as an example:
1. Copy the file _JDBCDriver-1.0.0-dist.jar_ in the TDengine package to the directory _${matlab_root}\MATLAB\R2017a\java\jar\toolbox_
2. Copy the file _taos.lib_ in the TDengine package to _${matlab_root}\MATLAB\R2017a\lib\win64_
3. Add the .jar package just copied to the Matlab classpath. Append the line below to the end of the file _${matlab_root}\MATLAB\R2017a\toolbox\local\classpath.txt_
`$matlabroot/java/jar/toolbox/JDBCDriver-1.0.0-dist.jar`
4. Create a file called _javalibrarypath.txt_ in the directory _${user_home}\AppData\Roaming\MathWorks\MATLAB\R2017a\_, and add the _taos.dll_ path in the file. For example, if the file _taos.dll_ is in the directory _C:\Windows\System32_, then add the following line in the file *javalibrarypath.txt*:
`C:\Windows\System32`
### TDengine operations in Matlab
After correct configuration, open Matlab:
- build a connection:
`conn = database('db', 'root', 'taosdata', 'com.taosdata.jdbc.TSDBDriver', 'jdbc:TSDB://127.0.0.1:0/')`
- Query:
`sql0 = ['select * from tb']`
`data = select(conn, sql0);`
- Insert a record:
`sql1 = ['insert into tb values (now, 1)']`
`exec(conn, sql1)`
Please refer to the file _examples\Matlab\TDengineDemo.m_ for more information.
## R
Users can use R language to access the TDengine server with the JDBC interface. At first, install JDBC package in R:
```R
install.packages('RJDBC', repos='http://cran.us.r-project.org')
```
Then use _library_ function to load the package:
```R
library('RJDBC')
```
Then load the TDengine JDBC driver:
```R
drv<-JDBC("com.taosdata.jdbc.TSDBDriver","JDBCDriver-1.0.0-dist.jar", identifier.quote="\"")
```
If succeed, no error message will display. Then use the following command to try a database connection:
```R
conn<-dbConnect(drv,"jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata","root","taosdata")
```
Please replace the IP address in the command above with the correct one. If no error message is shown, the connection is established successfully. TDengine supports the following functions of the _RJDBC_ package:
- _dbWriteTable(conn, "test", iris, overwrite=FALSE, append=TRUE)_: write the data in a data frame _iris_ to the table _test_ in the TDengine server. The parameter _overwrite_ must be _FALSE_ and _append_ must be _TRUE_, and the schema of the data frame _iris_ should be the same as that of the table _test_.
- _dbGetQuery(conn, "select count(*) from test")_: run a query command
- _dbSendUpdate(conn, "use db")_: run any non-query command.
- _dbReadTable(conn, "test")_: read all the data in table _test_
- _dbDisconnect(conn)_: close a connection
- _dbRemoveTable(conn, "test")_: remove table _test_
The following functions are **not supported** currently:
- _dbExistsTable(conn, "test")_: check whether table _test_ exists
- _dbListTables(conn)_: list all tables in the connection
[Telegraf]: www.taosdata.com
[download link]: https://portal.influxdata.com/downloads
[Telegraf document]: www.taosdata.com
[Grafana]: https://grafana.com
[Grafana download page]: https://grafana.com/grafana/download
[Grafana official document]: https://grafana.com/docs/
# TaosData Contributor License Agreement
This TaosData Contributor License Agreement (CLA) applies to any contribution you make to any TaosData projects. If you are representing your employing organization to sign this agreement, please warrant that you have the authority to grant the agreement.
## Terms
**"TaosData"**, **"we"**, **"our"** and **"us"** means TaosData, inc.
**"You"** and **"your"** means you or the organization you are on behalf of to sign this agreement.
**"Contribution"** means any original work you, or the organization you represent submit to TaosData for any project in any manner.
## Copyright License
All rights of your Contribution submitted to TaosData in any manner are granted to TaosData and recipients of software distributed by TaosData. You waive any rights that may affect our ownership of the copyright and grant to us a perpetual, worldwide, transferable, non-exclusive, no-charge, royalty-free, irrevocable, and sublicensable license to use, reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Contributions and any derivative work created based on a Contribution.
## Patent License
With respect to any patents you own or that you can license without payment to any third party, you grant to us and to any recipient of software distributed by us, a perpetual, worldwide, transferable, non-exclusive, no-charge, royalty-free, irrevocable patent license to make, have made, use, sell, offer to sell, import, and otherwise transfer the Contribution in whole or in part, alone or included in any product under any patent you own, or license from a third party, that is necessarily infringed by the Contribution or by the combination of the Contribution with any Work.
## Your Representations and Warranties
You represent and warrant that:
- the Contribution you submit is an original work that you can legally grant the rights set out in this agreement.
- the Contribution you submit and the licenses you grant do not and will not infringe the rights of any third party.
- you are not aware of any pending or threatened claims, suits, actions, or charges pertaining to the contributions. You also warrant to notify TaosData immediately if you become aware of any such actual or potential claims, suits, actions, allegations or charges.
## Support
You are not obligated to support your Contribution unless you volunteer to provide support. If you want, you can provide support for a fee.
**I agree and accept on behalf of myself and behalf of my organization:**
# TDengine Documentation
TDengine is a highly efficient platform for storing, querying, and analyzing large-scale time-series data, designed and optimized for IoT, connected vehicles, Industrial IoT, DevOps monitoring, and similar scenarios. You can use it much like the relational database MySQL, but you are advised to read through the documentation below before using it, especially the [Data Model](data-model-and-architecture) and Data Modeling sections. Besides this documentation, you are welcome to [download the product white paper](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf).
## TDengine Introduction
- Introduction and highlights of TDengine
- Scenarios TDengine is suited for
- TDengine performance metrics and how to verify them
## Getting Started
- Quick installation: install from source, packages, or Docker in seconds
- Easy launch: start and stop TDengine with systemctl
- The TAOS command line program: a convenient way to access TDengine
- [Lightning-fast experience](https://www.taosdata.com/cn/getting-started/#TDengine-极速体验): run the sample program to quickly experience efficient data insertion and querying
## Data Model and Architecture
- Data model: a relational model, but one table per data collection point
- Cluster and basic logical units: absorbs the merits of NoSQL, supports horizontal scaling and high reliability
- Storage model and data partitioning: tag data and time-series data are fully separated; data is sharded by vnode and by time
- Data writing and replication flow: written first to WAL, then to cache, then acknowledged to the application; multiple replicas supported
- Caching and persistence: the latest data is cached in memory, but persisted in columnar storage with very high compression
- Efficient queries: functions of all kinds, time-axis aggregation, interpolation, multi-table aggregation
## Data Modeling
- Create a database: one database per set of data collection points with similar data characteristics
- Create a super table: one super table per type of data collection point
- Create tables: using the super table as a template, create a separate table for each concrete data collection point
## Efficient Data Writing
- SQL writing: use the SQL insert command to write one or more records into one or more tables
- Telegraf writing: configure Telegraf to write collected data directly, without any code
- Prometheus writing: configure Prometheus to write data directly, without any code
- EMQ X Broker: configure EMQ X to write MQTT data directly, without any code
## Efficient Data Querying
- Main query features: standard functions of all kinds, filter conditions, time-range queries
- Multi-table aggregation queries: efficient aggregation using super tables and tag filter conditions
- Downsampling queries: aggregation over time windows, with interpolation support
## Advanced Features
- Continuous Query: sliding-window based, automatic periodic query computation over data streams
- Data Subscription (Publisher/Subscriber): like a typical message queue, applications can subscribe to newly arrived data
- [Cache](https://www.taosdata.com/cn/documentation/advanced-features/#缓存-(Cache)): the latest data of every device is cached in memory for fast retrieval
- [Alarm monitoring](https://www.taosdata.com/blog/2020/04/14/1438.html/): automatically detect out-of-threshold data based on configured rules and push alerts proactively
## Connectors
- C/C++ Connector: the primary way to connect to TDengine servers, via the libtaos client library
- Java Connector (JDBC): connects Java applications to TDengine via the standard JDBC API
- Python Connector: a driver connecting Python applications to TDengine servers
- RESTful Connector: the simplest way to connect to TDengine servers
- Go Connector: a driver connecting Go applications to TDengine servers
- Node.js Connector: a driver connecting Node.js applications to TDengine servers
## Connections with Other Tools
- Grafana: retrieve and visualize data stored in TDengine
- Matlab: access data stored in TDengine by configuring a JDBC data source in Matlab
- R: access data stored in TDengine by configuring a JDBC data source in R
## TDengine Cluster Installation and Management
- Installation: the same as a single-node installation, but the parameter first in the configuration file must be set properly
- Node management: add, remove, and inspect cluster nodes
- mnode management: created automatically by the system, no manual intervention needed
- Load balancing: performed automatically whenever the number of nodes or the load changes
- Offline node handling: a node offline beyond a time threshold is removed from the cluster
- Arbitrator: prevents split brain when the number of replicas is even
## TDengine Operation and Maintenance
- Capacity planning: estimate hardware resources according to the scenario
- Fault tolerance and disaster recovery: set the correct WAL level and number of data replicas
- System configuration: ports, cache size, file block size, and other settings
- User management: add and delete TDengine users, change user passwords
- Data import: from script files or from data files
- Data export: per table from the shell, or in various forms with the taosdump tool
- System monitoring: inspect existing connections, queries, stream computations, logs, events, and more
- Directory and file layout: where TDengine data files, configuration files, etc. reside
## TAOS SQL
- Supported data types: timestamp, integer, floating point, boolean, string, and more
- Database management: add, drop, and inspect databases
- Table management: add, drop, inspect, and alter tables
- Super table management: add, drop, inspect, and alter super tables
- Tag management: add, drop, and modify tags
- Data writing: single or multiple records, into one or multiple tables, with historical data import
- Data querying: time ranges, value filters, sorting, manual paging of results, and more
- SQL functions: aggregation, selection, and computation functions such as avg, min, diff
- Time-dimension aggregation: split data into time windows and aggregate, for downsampling
## TDengine Technical Design
- System modules: the functions and modules of taosd
- Technical blog: more articles on analysis and architecture design
## Common Tools
- [TDengine sample data import tool](https://www.taosdata.com/cn/documentation/blog/2020/01/18/如何快速验证性能和主要功能?tdengine样例数据导入工/)
- [TDengine performance comparison test tool](https://www.taosdata.com/cn/documentation/blog/2020/01/13/用influxdb开源的性能测试工具对比influxdb和tdengine/)
## Comparison Tests of TDengine and Other Databases
- [Comparing InfluxDB and TDengine with InfluxDB's open-source performance test tool](https://www.taosdata.com/cn/documentation/blog/2020/01/13/用influxdb开源的性能测试工具对比influxdb和tdengine/)
- [Comparison test of TDengine and OpenTSDB](https://www.taosdata.com/cn/documentation/blog/2019/08/21/tdengine与opentsdb对比测试/)
- [Comparison test of TDengine and Cassandra](https://www.taosdata.com/cn/documentation/blog/2019/08/14/tdengine与cassandra对比测试/)
- [Comparison test of TDengine and InfluxDB](https://www.taosdata.com/cn/documentation/blog/2019/07/19/tdengine与influxdb对比测试/)
- [Comparison test report of TDengine vs. InfluxDB, OpenTSDB, Cassandra, MySQL, ClickHouse, and more](https://www.taosdata.com/downloads/TDengine_Testing_Report_cn.pdf)
## IoT Big Data
- [Characteristics of IoT and Industrial Internet big data](https://www.taosdata.com/blog/2019/07/09/物联网、工业互联网大数据的特点/)
- [Functions and features an IoT big data platform should have](https://www.taosdata.com/blog/2019/07/29/物联网大数据平台应具备的功能和特点/)
- [Why are general big data architectures unfit for IoT data?](https://www.taosdata.com/blog/2019/07/09/通用互联网大数据处理架构为什么不适合处理物联/)
- [Why is TDengine recommended for IoT, connected-vehicle, and Industrial Internet big data platforms?](https://www.taosdata.com/blog/2019/07/09/物联网、车联网、工业互联网大数据平台,为什么/)
## Tutorials & FAQ
- <a href='https://www.taosdata.com/en/faq'>FAQ</a>: frequently asked questions and answers
- <a href='https://www.taosdata.com/en/blog/?categories=4'>Use cases</a>: examples explaining how to use TDengine
# Documentation
TDengine is a highly efficient platform to store, query, and analyze time-series data. It works like a relational database, but you are strongly encouraged to read through the following documentation before you experience it.
## Getting Started
- Quick Start: download, install and experience TDengine in a few seconds
- TDengine Shell: command-line interface to access TDengine server
- Major Features: insert/query, aggregation, cache, pub/sub, continuous query
## Data Model and Architecture
- Data Model: relational database model, but one table for one device with static tags
- Architecture: Management Module, Data Module, Client Module
- Writing Process: records received are written to WAL, cache, then ack is sent back to client
- Data Storage: records are sharded in the time range, and stored column by column
## TAOS SQL
- Data Types: support timestamp, int, float, double, binary, nchar, bool, and other types
- Database Management: add, drop, check databases
- Table Management: add, drop, check, alter tables
- Inserting Records: insert one or more records into tables, historical records can be imported
- Data Query: query data with time range and filter conditions, support limit/offset
- SQL Functions: support aggregation, selector, transformation functions
- Downsampling: aggregate data in successive time windows, support interpolation
## STable: Super Table
- What is a Super Table: an innovative way to aggregate tables
- Create a STable: it is like creating a standard table, but with tags defined
- Create a Table via STable: use STable as the template, with tags specified
- Aggregate Tables via STable: group tables together by specifying the tags filter condition
- Create Table Automatically: create tables automatically with a STable as a template
- Management of STables: create/delete/alter super table just like standard tables
- Management of Tags: add/delete/alter tags on super tables or tables
## Advanced Features
- Continuous Query: query executed by TDengine periodically with a sliding window
- Publisher/Subscriber: subscribe to the newly arrived data like a typical messaging system
- Caching: the newly arrived data of each device/table will always be cached
## Connector
- C/C++ Connector: primary method to connect to the server through libtaos client library
- Java Connector: driver for connecting to the server from Java applications using the JDBC API
- Python Connector: driver for connecting to the server from Python applications
- RESTful Connector: a simple way to interact with TDengine via HTTP
- Go Connector: driver for connecting to the server from Go applications
- Node.js Connector: driver for connecting to the server from node applications
## Connections with Other Tools
- Telegraf: pass the collected DevOps metrics to TDengine
- Grafana: query the data saved in TDengine and visualize them
- Matlab: access TDengine server from Matlab via JDBC
- R: access TDengine server from R via JDBC
## Administrator
- Directory and Files: files and directories related with TDengine
- Configuration on Server: customize IP port, cache size, file block size and other settings
- Configuration on Client: customize locale, default user and others
- User Management: add/delete users, change passwords
- Import Data: import data into TDengine from either script or CSV file
- Export Data: export data either from TDengine shell or from tool taosdump
- Management of Connections, Streams, Queries: check or kill the connections, queries
- System Monitor: collect the system metric, and log important operations
## More on System Architecture
- Storage Design: column-based storage with optimization on time-series data
- Query Design: an efficient way to query time-series data
- Technical blogs to delve into the inside of TDengine
## More on IoT Big Data
- [Characteristics of IoT Big Data](https://www.taosdata.com/blog/2019/07/09/characteristics-of-iot-big-data/)
- [Why don’t General Big Data Platforms Fit IoT Scenarios?](https://www.taosdata.com/blog/2019/07/09/why-does-the-general-big-data-platform-not-fit-iot-data-processing/)
- [Why TDengine is the Best Choice for IoT Big Data Processing?](https://www.taosdata.com/blog/2019/07/09/why-tdengine-is-the-best-choice-for-iot-big-data-processing/)
## Tutorials & FAQ
- <a href='https://www.taosdata.com/en/faq'>FAQ</a>: a list of frequently asked questions and answers
- <a href='https://www.taosdata.com/en/blog/?categories=4'>Use cases</a>: a few typical cases to explain how to use TDengine in IoT platform
# Introduction to TDengine Applicable Scenarios (Draft)
## TDengine Introduction
<!-- This section comes from the white paper -->
TDengine is an innovative big data processing product launched by TAOS Data in the face of the fast-growing IoT big data market and its technical challenges. It does not depend on any third-party software, nor is it an optimized or repackaged open-source database or stream computing product. It is a product developed from scratch after absorbing the merits of many traditional relational databases, NoSQL databases, stream computing engines, and message queues, and it has unique advantages in time-series big data processing.
* __10x+ performance improvement__: with an innovative data storage structure, a single core can handle at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, more than ten times faster than existing general-purpose databases.
* __Hardware or cloud service cost reduced to 1/5__: thanks to the superior performance, the required computing resources are less than 1/5 of those of general big data solutions; with columnar storage and advanced compression algorithms, the storage space is less than 1/10 of that of general databases.
* __Full-stack time-series data processing engine__: database, message queue, cache, and stream computing are integrated, so applications no longer need to integrate Kafka/Redis/HBase/Spark/HDFS, which greatly reduces the complexity and cost of application development and maintenance.
* __Powerful analysis functions__: data from ten years ago or from one second ago can be queried just by specifying the time range. Data can be aggregated along the time axis or across multiple devices. Ad hoc queries can be made at any time via shell, Python, R, or Matlab.
* __Seamless integration with third-party tools__: integrates with Telegraf, Grafana, EMQ, Prometheus, Matlab, R, and more, without a single line of code. OPC, Hadoop, Spark, and others will follow, and BI tools will also connect seamlessly.
* __Zero operation cost, zero learning cost__: installation and clustering take seconds; no need to split databases or tables; real-time backup. Standard SQL with JDBC and RESTful support, plus Python/Java/C/C++/Go; similar to MySQL, so there is zero learning cost.
## Overall Applicable Scenarios of TDengine
As an IoT big data platform, TDengine's typical applicable scenarios fall within the IoT category, with a certain volume of data. The rest of this document focuses on systems in this category. Systems outside it, such as CRM or ERP, are beyond the scope of this document.
## Characteristics and Requirements of Data Sources
From the data source perspective, designers can assess the applicability of TDengine in a target application system from the following aspects.
|Data source characteristics and requirements|Not suitable|Possibly suitable|Very suitable|Notes|
|---|---|---|---|---|
|Huge total data volume| | | ✅ |TDengine offers excellent horizontal scalability in capacity, together with a matching highly compressed storage structure, achieving industry-leading storage efficiency.|
|Occasionally or continuously very high input rate| | | ✅ |TDengine's performance greatly exceeds that of comparable products; it can continuously process large volumes of input data on the same hardware, and it provides a performance evaluation tool that is easy to run in the user's own environment.|
|Huge number of data sources| | | ✅ |TDengine's design includes optimizations specifically for large numbers of data sources, covering both writes and queries, and is especially suited for efficiently handling massive numbers (tens of millions or more) of data sources.|
## System Architecture Requirements
|System architecture requirements|Not suitable|Possibly suitable|Very suitable|Notes|
|---|---|---|---|---|
|Simple and reliable architecture required| | | ✅ |TDengine's system architecture is very simple and reliable, with built-in message queue, cache, stream computing, and monitoring; no additional third-party products need to be integrated.|
|Fault tolerance and high reliability required| | | ✅ |TDengine's clustering automatically provides high-reliability features such as fault tolerance and disaster recovery.|
|Standards compliance required| | | ✅ |TDengine provides its main features through standard SQL and complies with established standards.|
## System Function Requirements
|System function requirements|Not suitable|Possibly suitable|Very suitable|Notes|
|---|---|---|---|---|
|Complete built-in data processing algorithms required| | ✅ | |TDengine implements general data processing algorithms, but does not yet cover every requirement of every industry, so special types of processing still need to be handled at the application level.|
|Heavy cross-table queries required| | ✅ | |This kind of processing is better handled by a relational database, or TDengine can be combined with a relational database to implement the system's functionality.|
## System Performance Requirements
|System performance requirements|Not suitable|Possibly suitable|Very suitable|Notes|
|---|---|---|---|---|
|High overall processing capacity required| | | ✅ |TDengine's clustering makes it easy to combine multiple servers to scale up processing capacity.|
|High-speed data processing required| | | ✅ |TDengine's storage and data processing design, optimized specifically for IoT, generally gives the system several times the processing speed of comparable products.|
|Fast processing of fine-grained data required| | | ✅ |Here TDengine's performance fully matches that of relational and NoSQL data processing systems.|
## System Maintenance Requirements
|System maintenance requirements|Not suitable|Possibly suitable|Very suitable|Notes|
|---|---|---|---|---|
|Reliable operation required| | | ✅ |TDengine's system architecture is very stable and reliable, routine maintenance is simple and convenient, and the demands on operators are clear and minimal, eliminating human errors and accidents as much as possible.|
|Controllable operations learning cost required| | | ✅ |Same as above.|
|Large talent pool in the market required| ✅ | | |As a new-generation product, experienced TDengine engineers are still limited in the market. However, the learning cost is low, and as the vendor we also provide operations training and assistance services.|
# Getting Started
## Quick Installation
TDengine consists of three parts: the server, the client, and the alarm module. Currently, version 2.0 can only be installed and run on Linux; Windows, macOS, and other systems will be supported later. If your application needs to run on Windows or macOS, it can currently only connect to the server through TDengine's RESTful interface. X64 hardware is supported; ARM, Loongson, and other CPU architectures will follow. You can install from [source code](https://www.taosdata.com/cn/getting-started/#通过源码安装) or from [installation packages](https://www.taosdata.com/cn/getting-started/#通过安装包安装) as needed.
### Install from Source
Please visit our [TDengine github page](https://github.com/taosdata/TDengine) to download the source code and install.
### Run with Docker
Please refer to [the release, download, and usage of the official TDengine Docker image](https://www.taosdata.com/blog/2020/05/13/1509.html).
### Install from Packages
For the server, we provide three installation packages to choose from. Installing TDengine is very simple: from download to successful installation, it takes only a few seconds.
<ul id='packageList'>
<li><a id='tdengine-rpm' style='color:var(--b2)'>TDengine-server-2.0.0.0-Linux-x64.rpm (5.3M)</a></li>
<li><a id='tdengine-deb' style='color:var(--b2)'>TDengine-server-2.0.0.0-Linux-x64.deb (2.5M)</a></li>
<li><a id='tdengine-tar' style='color:var(--b2)'>TDengine-server-2.0.0.0-Linux-x64.tar.gz (5.3M)</a></li>
</ul>
For the client, the Linux installation package is:
- TDengine-client-2.0.0.0-Linux-x64.tar.gz (3.4M)
The Linux installation package of the alarm module is as follows (see [how to use the alarm module](https://github.com/taosdata/TDengine/blob/master/alert/README_cn.md)):
- TDengine-alert-2.0.0-Linux-x64.tar.gz (8.1M)
Currently, TDengine can only be installed on Linux systems that use [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process and service management; support for other Linux systems is under development. Use the `which` command to check whether `systemd` is present on your system:
```cmd
which systemd
```
If the `systemd` command is not present on your system, please consider [installing from source](#通过源码安装).
For the detailed installation procedure, please see [installing and uninstalling the various TDengine packages](https://www.taosdata.com/blog/2019/08/09/566.html).
## Easy Launch
After a successful installation, start the TDengine service with the `systemctl` command:
```cmd
systemctl start taosd
```
Check whether the service is working:
```cmd
systemctl status taosd
```
If the TDengine service is working, you can access and experience TDengine through its command line program `taos`.
**Note: the _systemctl_ command requires _root_ privileges. If you are not the _root_ user, please prepend _sudo_ to the command.**
## The TDengine Command Line Program
To launch the TDengine command line program, just run `taos` in a Linux terminal:
```cmd
taos
```
If the TDengine shell connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message (see the [FAQ](https://www.taosdata.com/cn/faq/) for troubleshooting connection failures). The TDengine shell prompt is:
```cmd
taos>
```
In the TDengine shell, you can create and drop databases and tables, and run insert and query operations, all through SQL commands. A SQL statement must end with a semicolon to be executed. For example:
```mysql
create database db;
use db;
create table t (ts timestamp, cdata int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
ts | cdata |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
```
Besides executing SQL statements, system administrators can check system status, add or delete user accounts, and so on from the TDengine shell.
### Command Line Options
You can change the behavior of the TDengine shell via command line options. The commonly used ones are:
- -c, --config-dir: the configuration directory, _/etc/taos_ by default
- -h, --host: the IP address of the server to connect to; the local server by default
- -s, --commands: run TDengine commands without entering the shell
- -u, --user: the username for connecting to the TDengine server, root by default
- -p, --password: the password for connecting to the TDengine server, taosdata by default
- -?, --help: print all command line options
Example:
```cmd
taos -h 192.168.0.1 -s "use db; show tables;"
```
### Run SQL Command Scripts
The TDengine shell can run SQL command scripts with the `source` command:
```
taos> source <filename>;
```
### Shell Tips
- Use the up/down arrow keys to browse the command history
- Change a user's password with the alter user command in the shell (see the sketch after this list)
- Ctrl+C aborts a query in progress
- Run `RESET QUERY CACHE` to clear the locally cached table schemas
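For example, a password change in the shell might look like this (a sketch; the user name and new password are placeholders, see the user management documentation for the exact syntax):
```mysql
alter user demo_user pass 'demo_password';
```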
## Experience TDengine's Lightning Speed
After starting the TDengine service, run taosdemo in a Linux terminal:
```
> taosdemo
```
This command automatically creates a super table meters in the database test, with 10,000 tables under it named "t0" to "t9999". Each table has 100,000 records, and each record has three fields (f1, f2, f3); the timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:41:39 999". Each table carries the tags areaid and loc: areaid is set to 1 through 10, and loc is set to "beijing" or "shanghai".
Running this command takes about 10 minutes and inserts 1 billion records in total.
Enter queries in the TDengine client to experience the query speed:
- the total number of records under the super table:
```
taos>select count(*) from test.meters;
```
- the average, maximum, and minimum of the 1 billion records:
```
taos>select avg(f1), max(f2), min(f3) from test.meters;
```
- the total number of records where loc="beijing":
```
taos>select count(*) from test.meters where loc="beijing";
```
- the average, maximum, and minimum of all records where areaid=10:
```
taos>select avg(f1), max(f2), min(f3) from test.meters where areaid=10;
```
- 10-second average, maximum, and minimum aggregations on table t10:
```
taos>select avg(f1), max(f2), min(f3) from test.t10 interval(10s);
```
**Note:** taosdemo itself has many options to configure the number of tables, the number of records, and so on; run `taosdemo --help` for the full list. Feel free to experiment with different parameters.
# Getting Started
## Quick Start
At the moment, TDengine only runs on Linux. You can set up and install it either from the <a href='#Install-from-Source'>source code</a> or the <a href='#Install-from-Package'>packages</a>. It takes only a few seconds from download to run it successfully.
### Install from Source
Please visit our [github page](https://github.com/taosdata/TDengine) for instructions on installation from the source code.
### Install from Package
Three different packages are provided, please pick up the one you like.
<ul id='packageList'>
<li><a id='tdengine-rpm' style='color:var(--b2)'>TDengine RPM package (1.5M)</a></li>
<li><a id='tdengine-deb' style='color:var(--b2)'>TDengine DEB package (1.7M)</a></li>
<li><a id='tdengine-tar' style='color:var(--b2)'>TDengine Tarball (3.0M)</a></li>
</ul>
For the time being, TDengine only supports installation on Linux systems using [`systemd`](https://en.wikipedia.org/wiki/Systemd) as the service manager. To check if your system has *systemd*, use the _which_ command.
```cmd
which systemd
```
If the `systemd` command is not found, please [install from source code](#Install-from-Source).
### Running TDengine
After installation, start the TDengine service by the `systemctl` command.
```cmd
systemctl start taosd
```
Then check if the server is working now.
```cmd
systemctl status taosd
```
If the service is running successfully, you can play around with the TDengine shell `taos`, the command line interface tool located at /usr/local/bin/taos
**Note: The _systemctl_ command needs the root privilege. Use _sudo_ if you are not the _root_ user.**
## TDengine Shell
To launch TDengine shell, the command line interface, in a Linux terminal, type:
```cmd
taos
```
The welcome message is printed if the shell connects to TDengine server successfully, otherwise, an error message will be printed (refer to our [FAQ](../faq) page for troubleshooting the connection error). The TDengine shell prompt is:
```cmd
taos>
```
In the TDengine shell, you can create databases, create tables and insert/query data with SQL. Each query command ends with a semicolon. It works like MySQL, for example:
```mysql
create database db;
use db;
create table t (ts timestamp, cdata int);
insert into t values ('2019-07-15 10:00:00', 10);
insert into t values ('2019-07-15 10:01:05', 20);
select * from t;
ts | cdata |
===================================
19-07-15 10:00:00.000| 10|
19-07-15 10:01:05.000| 20|
Query OK, 2 row(s) in set (0.001700s)
```
Besides the SQL commands, the system administrator can check system status, add or delete accounts, and manage the servers.
### Shell Command Line Parameters
You can run `taos` command with command line options to fit your needs. Some frequently used options are listed below:
- -c, --config-dir: set the configuration directory. It is _/etc/taos_ by default
- -h, --host: set the IP address of the server it will connect to, Default is localhost
- -s, --commands: set the command to run without entering the shell
- -u, --user: user name to connect to server. Default is root
- -p, --password: password. Default is 'taosdata'
- -?, --help: get a full list of supported options
Examples:
```cmd
taos -h 192.168.0.1 -s "use db; show tables;"
```
### Run Batch Commands
Inside TDengine shell, you can run batch commands in a file with *source* command.
```
taos> source <filename>;
```
### Tips
- Use up/down arrow key to check the command history
- To change the default password, use "`alter user`" command
- ctrl+c to interrupt any queries
- To clean the cached schema of tables or STables, execute command `RESET QUERY CACHE`
## Major Features
The core functionality of TDengine is the time-series database. To reduce the development and management complexity, and to improve the system efficiency further, TDengine also provides caching, pub/sub messaging system, and stream computing functionalities. It provides a full stack for IoT big data platform. The detailed features are listed below:
- SQL like query language used to insert or explore data
- C/C++, Java(JDBC), Python, Go, RESTful, and Node.JS interfaces for development
- Ad hoc queries/analysis via Python/R/Matlab or TDengine shell
- Continuous queries to support sliding-window based stream computing
- Super table to aggregate multiple time-streams efficiently with flexibility
- Aggregation over a time window on one or multiple time-streams
- Built-in messaging system to support publisher/subscriber model
- Built-in cache for each time stream to make latest data available as fast as light speed
- Transparent handling of historical data and real-time data
- Integrating with Telegraf, Grafana and other tools seamlessly
- A set of tools or configuration to manage TDengine
For enterprise edition, TDengine provides more advanced features below:
- Linear scalability to deliver higher capacity/throughput
- High availability to guarantee the carrier-grade service
- Built-in replication between nodes which may span multiple geographical sites
- Multi-tier storage to make historical data management simpler and cost-effective
- Web-based management tools and other tools to make maintenance simpler
TDengine is specially designed and optimized for time-series data processing in IoT, connected cars, Industrial IoT, IT infrastructure and application monitoring, and other scenarios. Compared with other solutions, it is 10x faster on insert/query speed. With a single-core machine, over 20K requests can be processed, millions of data points can be ingested, and over 10 million data points can be retrieved in a second. Via column-based storage and tuned compression algorithms for different data types, less than 1/10 of the storage space is required.
## Explore More on TDengine
Please read through the whole <a href='../documentation'>documentation</a> to learn more about TDengine.
# Data Modeling
TDengine adopts a relational data model, so databases and tables must be created. For a concrete application scenario, you therefore need to consider the design of the databases, the super tables, and the regular tables. This section does not discuss detailed syntax rules, only the concepts.
## Create a Database
Different types of data collection points often have different data characteristics, including collection frequency, retention period, number of replicas, data block size, and so on. To let TDengine work at maximum efficiency in every scenario, it is recommended to create tables with different data characteristics in different databases, since each database can be configured with its own storage strategy. When creating a database, besides the standard SQL options, the application can specify the retention period, the number of replicas, the number of memory blocks, the time precision, the maximum and minimum number of records in a file block, whether data is compressed, the number of days covered by one data file, and other parameters. For example:
```mysql
CREATE DATABASE power KEEP 365 DAYS 10 REPLICA 3 BLOCKS 4;
```
The statement above creates a database named power whose data is kept for 365 days (data older than 365 days is automatically deleted), with one data file per 10 days, 3 replicas, and 4 memory blocks. See TAOS SQL for the detailed syntax and parameters.
Note: every table or super table belongs to one database, and the database must be created before any of its tables.
## Create a Super Table
An IoT system often contains multiple types of devices; a power grid, for example, has smart meters, transformers, bus bars, switches, and so on. To make aggregation across tables easy, TDengine requires a super table for each type of device. Taking the smart meters in Table 1 as an example, the super table can be created with the following SQL:
```mysql
CREATE TABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
```
As with creating a regular table, you provide the table name (meters in the example) and the schema, i.e., the definitions of the data columns that hold the collected metrics (ts, current, voltage, phase in the example); the data types can be integer, float, string, and so on. In addition, you provide the tag schema (location, groupId in the example); tag data types can also be integer, float, string, and so on. The static attributes of a collection point often make good tags, such as its geographic location, device model, device group ID, or administrator ID. The tag schema can be extended, reduced, or modified afterwards. See the TAOS SQL section for the precise definitions and details.
Each type of data collection point needs its own super table, so an IoT system often has multiple super tables. A system can have multiple databases, and a database can hold one or more super tables.
## Create a Table
TDengine requires a separate table for each data collection point. As in a standard relational database, a table has a name and a schema; in addition, it can carry one or more tags. A table is created using a super table as its template, with the concrete tag values specified. Taking the smart meters in Table 1 as an example, the table can be created with the following SQL:
```mysql
CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
```
Here d1001 is the table name and meters is the super table name, followed by the value "Beijing.Chaoyang" for the tag location and the value 2 for the tag groupId. Although tag values must be specified when the table is created, they can be modified afterwards. See TAOS SQL for the details.
TDengine recommends using the globally unique ID of a data collection point as the table name. If there is no unique ID, several IDs can be combined into one unique ID. It is not recommended to use a unique ID as a tag value.
**Automatic table creation**: in some special scenarios, it is not known in advance whether the table for a collection point exists when writing data. In this case, the auto-create syntax can be used on write to create a non-existent table; if the table already exists, no new table is created. For example:
```mysql
INSERT INTO d1001 USING meters TAGS ("Beijing.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);
```
The SQL statement above inserts the record (now, 10.2, 219, 0.32) into table d1001. If d1001 does not exist yet, it is automatically created using the super table meters as the template, with the tag values "Beijing.Chaoyang" and 2.
**Multi-column model**: TDengine supports a multi-column model: as long as several metrics are collected simultaneously, they can be placed as different columns in the same table. Some collection points have multiple groups of metrics that are collected at different times; for those, several tables must be created for the same collection point. There is also an extreme design, the single-column model, in which every collected metric gets its own table regardless of collection time. TDengine recommends the multi-column model whenever the collection times coincide, because insertion and storage are more efficient; see the sketch below.
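A sketch contrasting the two models with the smart-meter example above (table and column names are illustrative):
```mysql
-- Multi-column model: current, voltage and phase are sampled together,
-- so they live as three columns in one table created from the meters STable.
CREATE TABLE d1002 USING meters TAGS ("Beijing.Haidian", 2);

-- Single-column model (for comparison only): one table per metric,
-- regardless of whether the metrics share a timestamp.
CREATE TABLE d1002_current (ts timestamp, current float);
CREATE TABLE d1002_voltage (ts timestamp, voltage int);
```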
# TDengine System Architecture
## Storage Design
TDengine data mainly include **metadata** and **data**, which we will introduce in the following sections.
### Metadata Storage
Metadata include the information of databases, tables, etc. Metadata files are saved in _/var/lib/taos/mgmt/_ directory by default. The directory tree is as below:
```
/var/lib/taos/
+--mgmt/
+--db.db
+--meters.db
+--user.db
+--vgroups.db
```
A metadata structure (database, table, etc.) is saved as a record in a metadata file. All metadata files are appended only, and even a drop operation adds a deletion record at the end of the file.
### Data storage
Data in TDengine are sharded according to the time range. Data of tables in the same vnode in a certain time range are saved in the same filegroup, such as files v0f1804*. This sharding strategy can effectively improve data searching speed. By default, a group of files contains data of 10 days, which can be configured by *daysPerFile* in the configuration file or by the *DAYS* keyword in the *CREATE DATABASE* clause. Data in files are organized in blocks. A data block only contains one table's data. Records in the same data block are sorted according to the primary timestamp, which helps to improve the compression rate and save storage. The compression algorithms used in TDengine include simple8B, delta-of-delta, RLE, LZ4, etc.
By default, TDengine data are saved in the */var/lib/taos/data/* directory. The _/var/lib/taos/tsdb/_ directory contains vnode information and data file links.
```
/var/lib/taos/
+--tsdb/
| +--vnode0
| +--meterObj.v0
| +--db/
| +--v0f1804.head->/var/lib/taos/data/vnode0/v0f1804.head1
| +--v0f1804.data->/var/lib/taos/data/vnode0/v0f1804.data
| +--v0f1804.last->/var/lib/taos/data/vnode0/v0f1804.last1
| +--v0f1805.head->/var/lib/taos/data/vnode0/v0f1805.head1
| +--v0f1805.data->/var/lib/taos/data/vnode0/v0f1805.data
| +--v0f1805.last->/var/lib/taos/data/vnode0/v0f1805.last1
| :
+--data/
+--vnode0/
+--v0f1804.head1
+--v0f1804.data
+--v0f1804.last1
+--v0f1805.head1
+--v0f1805.data
+--v0f1805.last1
:
```
#### meterObj file
There is only one meterObj file in a vnode. Information about the vnode, such as its creation time, configuration, and statistics, is saved in this file. It has the structure below:
```
<start_of_file>
[file_header]
[table_record1_offset&length]
[table_record2_offset&length]
...
[table_recordN_offset&length]
[table_record1]
[table_record2]
...
[table_recordN]
<end_of_file>
```
The file header takes 512 bytes and mainly contains information about the vnode. Each table record is the representation of a table on disk.
#### head file
The _head_ files contain the index of data blocks in the _data_ file. The inner organization is as below:
```
<start_of_file>
[file_header]
[table1_offset]
[table2_offset]
...
[tableN_offset]
[table1_index_block]
[table2_index_block]
...
[tableN_index_block]
<end_of_file>
```
The table offset array in the _head_ file saves the information about the offsets of each table index block. Indices on data blocks in the same table are saved continuously. This also makes it efficient to load data indices on the same table. The data index block has a structure like:
```
[index_block_info]
[block1_index]
[block2_index]
...
[blockN_index]
```
The index block info part contains the information about the index block such as the number of index blocks, etc. Each block index corresponds to a real data block in the _data_ file or _last_ file. Information about the location of the real data block, the primary timestamp range of the data block, etc. are all saved in the block index part. The block indices are sorted in ascending order according to the primary timestamp. So we can apply algorithms such as the binary search on the data to efficiently search blocks according to time.
#### data file
The _data_ files store the real data block. They are append-only. The organization is as:
```
<start_of_file>
[file_header]
[block1]
[block2]
...
[blockN]
<end_of_file>
```
A data block in _data_ files only belongs to a table in the vnode and the records in a data block are sorted in ascending order according to the primary timestamp key. Data blocks are column-oriented. Data in the same column are stored contiguously, which improves reading speed and compression rate because of their similarity. A data block has the following organization:
```
[column1_info]
[column2_info]
...
[columnN_info]
[column1_data]
[column2_data]
...
[columnN_data]
```
The column info part includes information about column types, the column compression algorithm, the column data offset and length in the _data_ file, etc. Besides, pre-calculated results of the column data in the block are also in the column info part, which helps to improve reading speed by avoiding loading data blocks unnecessarily.
#### last file
To avoid storage fragmentation and to improve query speed and compression rate, TDengine introduces an extra file, the _last_ file. When the number of records in a data block is lower than a threshold, TDengine will flush the block to the _last_ file for temporary storage. When new data comes, the data in the _last_ file will be merged with the new data to form a larger data block, which is then written to the _data_ file. The organization of the _last_ file is similar to the _data_ file.
### Summary
The innovation in architecture and storage design of TDengine improves resource usage. On the one hand, the virtualization makes it easy to distribute resources between different vnodes and for future scaling. On the other hand, sorted and column-oriented storage makes TDengine have a great advantage in writing, querying and compression.
## Query Design
#### Introduction
TDengine provides a variety of query functions for both tables and super tables. In addition to regular aggregate queries, it also provides time window based query and statistical aggregation for time series data. TDengine's query processing requires the client app, management node, and data node to work together. The functions and modules involved in query processing included in each component are as follows:
Client (Client App). The client development kit, embedded in a client application, consists of the TAOS SQL parser and query executor, the second-stage aggregator (Result Merger), the continuous query manager, and other major functional modules. The SQL parser is responsible for parsing and verifying the SQL statement and converting it into an abstract syntax tree. The query executor is responsible for transforming the abstract syntax tree into the query execution logic and creates the metadata query according to the query condition of the SQL statement. Since TAOS SQL does not currently include complex nested queries or a pipeline query processing mechanism, there is no need for query plan optimization or physical query plan conversion. The second-stage aggregator is responsible for performing, at the client side, the aggregation of the independent results returned by the data nodes involved in a query, to generate the final result. The continuous query manager is dedicated to managing the continuous queries created by users, including issuing fixed-interval query requests and writing the results back to TDengine or returning them to the client application as needed. The client is also responsible for retrying after a query fails, canceling query requests, maintaining the connection heartbeat, and reporting query status to the management node.
Management Node. The management node keeps the metadata of all the data of the entire cluster system, provides the metadata required for queries to the client node, and divides query requests according to the load condition of the cluster. The super table contains information about all the tables created from it, so the query processor (Query Executor) of the management node is responsible for processing queries on the tags of tables and returns the table information satisfying the tag query. Besides, the management node maintains the query status of the cluster in the Query Status Manager component, in which the metadata of all currently executing queries is temporarily stored in an in-memory buffer. When the client issues the *show queries* command to the management node, information about the currently running queries is returned to the client.
Data Node. The data node, responsible for storing all data of the database, consists of the query executor, the query processing scheduler, the query task queue, and other related components. Once query requests from the client are received, they are put into the query task queue, waiting to be processed by the query executor. The query executor extracts a query request from the query task queue and invokes the query optimizer to perform basic optimization of the query execution plan. The query executor then scans the qualified data blocks in both cache and disk to obtain the qualified data and returns the calculated results. Besides, the data node also needs to respond to management information and commands from the management node. For example, after a *kill query* is received from the management node, the query task must be stopped immediately.
<center> <img src="../assets/fig1.png"> </center>
<center>Fig 1. System query processing architecture diagram (only query related components)</center>
#### Query Process Design
The client, the management node, and the data node cooperate to complete the entire query processing of TDengine. Let's take a concrete SQL query as an example to illustrate the whole query processing flow. The SQL statement queries the super table *FOO_SUPER_TABLE* for the total number of records generated on January 12, 2019, from tables whose TAG_LOC equals 'beijing'. The SQL statement is as follows:
```sql
SELECT COUNT(*)
FROM FOO_SUPER_TABLE
WHERE TAG_LOC = 'beijing' AND TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00'
```
First, the client invokes the TAOS SQL parser to parse and validate the SQL statement, then generates a syntax tree, and extracts the object of the query - the super table *FOO_SUPER_TABLE*, and then the parser sends requests with filtering information (TAG_LOC='beijing') to management node to get the corresponding metadata about *FOO_SUPER_TABLE*.
Once the management node receives the request for metadata acquisition, it first finds the basic information of the super table *FOO_SUPER_TABLE*, and then applies the query condition (TAG_LOC='beijing') to filter all the related tables created from it. Finally, the query executor returns the metadata that satisfies the query request to the client.
After the client obtains the metadata information of *FOO_SUPER_TABLE*, the query executor initiates a query request with the timestamp range filtering condition (TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00') to all nodes that hold the corresponding data according to the information about data distribution in the metadata.
The data node receives the query sent from the client, converts it into an internal structure and puts it into the query task queue to be executed by query executor after optimizing the execution plan. When the query result is obtained, the query result is returned to the client. It should be noted that the data nodes perform the query process independently of each other, and rely solely on their data and content for processing.
When all data nodes involved in the query return results, the client aggregates the result sets from each data node. In this case, all results are accumulated to generate the final query result. The second stage of aggregation is not always required for all queries. For example, a column selection query does not require a second-stage aggregation at all.
#### REST Query Process
In addition to C/C++, Python, and JDBC interface, TDengine also provides a REST interface based on the HTTP protocol, which is different from using the client application programming interface. When the user uses the REST interface, all the query processing is completed on the server-side, and the user's application is not involved in query processing anymore. After the query processing is completed, the result is returned to the client through the HTTP JSON string.
<center> <img src="../assets/fig2.png"> </center>
<center>Fig. 2 REST query architecture</center>
When a client uses an HTTP-based REST query interface, the client first establishes a connection with the HTTP connector at the data node and then uses the token to ensure the reliability of the request through the REST signature mechanism. For the data node, after receiving the request, the HTTP connector invokes the embedded client program to initiate query processing; the embedded client then parses the SQL statement from the HTTP connector and requests the management node to get metadata as needed. After that, the embedded client sends query requests to the same data node or other nodes in the cluster and aggregates the calculation results on demand. Finally, the embedded client converts the query result into a JSON string and returns it to the client via an HTTP response. After the HTTP connector receives the SQL request, the subsequent processing is identical to query processing using the client application development kit.
It should be noted that during the entire processing, the client application is no longer involved; it is only responsible for sending SQL requests through the HTTP protocol and receiving the results in JSON format. Besides, each data node is embedded with an HTTP connector and a client, so any data node in the cluster that receives a request from a client can initiate the query and return the result through the HTTP protocol, transferring the request to other data nodes as needed.
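As an illustration, a REST query can be issued with curl against any data node, assuming the HTTP connector listens on port 6020 and the default credentials apply (the IP, credentials, and table are placeholders; `/rest/sql` is the RESTful connector's SQL endpoint):
```
curl -u root:taosdata -d 'select count(*) from db.test' http://192.168.0.1:6020/rest/sql
```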
#### Technology
Because TDengine stores data and tag values separately, the tag value is kept in the management node and directly associated with each table instead of with records, which greatly reduces data storage. Therefore, the tag values can be managed by a fully in-memory structure. First, filtering on the tag data can drastically reduce the data size involved in the second phase of the query. The query processing for the data is performed at the data node. TDengine takes advantage of the immutable characteristics of IoT data by calculating the maximum, minimum, and other statistics of the data in each stored data block, to effectively improve the performance of query processing. If the query involves all the data of an entire data block, the pre-computed result is used directly, and the content of the data block is not needed. Since the disk space required to store the pre-computed results is much smaller than that of the actual data, pre-computation greatly reduces disk IO and speeds up query processing.
TDengine employs column-oriented data storage techniques. When a data block needs to be loaded from disk for computation, only the required columns are read according to the query condition, which minimizes read overhead. The data of one column is stored in a contiguous memory block and can therefore make full use of the CPU L2 cache to greatly speed up data scanning. Besides, TDengine utilizes an eager response mechanism and returns partial results before the complete result is acquired. For example, in a column select query, when the first batch of results is obtained, the data node immediately returns it directly to the client.
# Efficient Data Querying
## Main Query Features
TDengine uses SQL as its query language. Applications can send SQL queries through the C/C++, JDBC, Go, or Python connectors, and users can also run ad hoc SQL queries manually through the TAOS shell. The following query features are supported:
- query on a single column or on multiple columns
- value filters: \>, \<, =, \<> (greater than, less than, equal, not equal)
- fuzzy matching on tags
- Group by, Order by, Limit, Offset
- arithmetic operations between columns
- JOIN operations aligned on timestamps
- many functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
For example, in the TAOS shell, fetch from table d1001 the records with voltage > 215, sorted by time in descending order, and output only 2 of them:
```mysql
taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
ts | current | voltage | phase |
======================================================================================
2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
Query OK, 2 row(s) in set (0.001100s)
```
To meet the needs of IoT scenarios, TDengine supports several special functions, such as twa (time-weighted average), spread (difference between maximum and minimum), and last_row (the last record); more IoT-oriented functions will be added over time. TDengine also supports continuous queries.
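For instance, a sketch of these functions on the table d1001 used above (the time range is illustrative):
```mysql
taos> select twa(current), spread(voltage) from d1001 where ts >= '2018-10-03 14:38:00' and ts <= '2018-10-03 14:38:20';
taos> select last_row(*) from d1001;
```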
For the detailed query syntax, see TAOS SQL.
## Multi-table Aggregation Queries
TDengine creates a separate table for each data collection point, but applications frequently need to aggregate across collection points. To make such aggregation efficient, TDengine introduces the concept of the super table (STable). A super table represents a particular type of data collection point and is a collection of tables. Every table in the collection has the same schema but carries its own static tags; there can be multiple tags, and they can be added, deleted, or modified at any time.
By specifying tag filter conditions, an application can run aggregations or statistics over all or a subset of the tables under an STable, which greatly simplifies development. The process is shown in the figure below:
<center> <img src="../assets/stable.png"> </center>
<center> Diagram of a multi-table aggregation query </center>
1: the application sends a query to the system; 2: taosc sends the STable name to the Meta Node (management node); 3: the management node returns the list of vnodes owned by the STable to taosc; 4: taosc sends the computation request, together with the tag filter conditions, to the data nodes hosting those vnodes; 5: each vnode first finds, in memory, the set of its tables that satisfy the tag filters, then scans the stored time-series data, completes the aggregation, and returns the result to taosc; 6: taosc performs the final aggregation over the results from the data nodes and returns it to the application.
Because TDengine stores tag data separately from time-series data inside a vnode, filtering tags in memory first greatly reduces the data set to scan, which greatly speeds up aggregation. And since the data is spread across multiple vnodes/dnodes, the aggregation runs concurrently in multiple vnodes, speeding things up even further.
The aggregation functions and most other operations on regular tables also apply to super tables, with exactly the same syntax; see TAOS SQL for details.
For example, in the TAOS shell, find the average voltage of all smart meters, grouped by location:
```mysql
taos> select avg(voltage) from meters group by location;
avg(voltage) | location |
=============================================================
222.000000000 | Beijing.Haidian |
219.200000000 | Beijing.Chaoyang |
Query OK, 2 row(s) in set (0.002136s)
```
## Downsampling Queries and Interpolation
In IoT scenarios, downsampling is frequently needed: the collected data must be aggregated by time window. TDengine provides the simple interval keyword, which makes this very easy. For example, sum the current collected by smart meter d1001 every 10 seconds:
```mysql
taos> SELECT sum(current) FROM d1001 interval(10s) ;
ts | sum(current) |
======================================================
2018-10-03 14:38:00.000 | 10.300000191 |
2018-10-03 14:38:10.000 | 24.900000572 |
Query OK, 2 row(s) in set (0.000883s)
```
Downsampling also works on super tables; for example, sum the current collected by all smart meters every second:
```mysql
taos> SELECT sum(current) FROM meters interval(1s) ;
ts | sum(current) |
======================================================
2018-10-03 14:38:04.000 | 10.199999809 |
2018-10-03 14:38:05.000 | 32.900000572 |
2018-10-03 14:38:06.000 | 11.500000000 |
2018-10-03 14:38:15.000 | 12.600000381 |
2018-10-03 14:38:16.000 | 36.000000000 |
Query OK, 5 row(s) in set (0.001538s)
```
In IoT scenarios, it is hard to synchronize the collection times of different collection points, yet many analysis algorithms (e.g., FFT) require data strictly aligned to equal time intervals. In many systems, applications must write their own code for this, but with TDengine's downsampling it is trivial. And if a time window contains no collected data, TDengine provides interpolation.
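For example, a sketch of window aggregation with interpolation via FILL (the fill mode and time range are illustrative; see TAOS SQL for the exact options):
```mysql
taos> SELECT avg(current) FROM d1001 WHERE ts >= '2018-10-03 14:38:00' AND ts <= '2018-10-03 14:38:20' INTERVAL(5s) FILL(LINEAR);
```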
See TAOS SQL for the detailed syntax rules.
# Super Table (STable): Multi-table Aggregation
TDengine requires a separate table for each data collection point. This greatly improves insert and query performance, but it makes the number of tables soar, which complicates table maintenance and aggregation or statistics across tables for applications. To reduce the development burden, TDengine introduces the super table, STable (Super Table).
## What is a Super Table
An STable is an abstraction over data collection points of the same type, a collection of collection instances of the same kind, containing multiple child tables with identical data structures. For its child tables, an STable defines a table schema and a set of tags: the schema defines the data columns of a record and their types, while the tag names and types are defined by the STable, and the tag values record the static information of each child table, used for grouping and filtering child tables. A child table is in essence an ordinary table, consisting of a timestamp primary key and a number of data columns, each row recording concrete data, and data queries on it work exactly as on ordinary tables; the difference is that each child table belongs to one super table and carries a set of tag values defined by the STable. An STable can be defined for each type of collection device. The data model defines the type of each data column, such as temperature, pressure, voltage, current, or GPS position, while the tag information belongs to the metadata, such as the serial number, model, and location of the collection device: it is static and constitutes the table's metadata. When creating a table (a data collection point), besides specifying the STable (the collection type), the user can also specify the tag values, which can be added or modified later.
TDengine extends standard SQL to define STables, using the keyword tags to specify the tag information. The syntax is:
```mysql
CREATE TABLE <stable_name> (<field_name> TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …)
```
Here tag_name is the tag name and tag_type is the tag's data type. Tags can use any TDengine-supported data type except timestamp; at most 6 tags are allowed, and their names must not collide with system keywords or other column names. For example:
```mysql
create table thermometer (ts timestamp, degree float)
tags (location binary(20), type int)
```
The SQL above creates an STable named thermometer, with the tags location and type.
When creating the table for a specific collection point, you can specify the STable it belongs to and its tag values. The syntax is:
```mysql
CREATE TABLE <tb_name> USING <stb_name> TAGS (tag_value1,...)
```
Continuing the thermometer example above, the statement to create a single thermometer's data table from the super table thermometer is:
```mysql
create table t1 using thermometer tags ('beijing', 10)
```
The SQL above uses thermometer as the template to create a table named t1. The schema of t1 is that of thermometer, with the tag location set to 'beijing' and the tag type set to 10.
A user can create an unlimited number of tables with different tags from one STable; in this sense, an STable is a set of tables with the same data model but different tags. As with ordinary tables, users can create, drop, and inspect STables, and most queries that apply to ordinary tables also work on STables, including all kinds of aggregation and projection selection functions. In addition, tag filter conditions can be set so that an aggregation query runs over only part of the tables in the STable, greatly simplifying application development.
TDengine indexes the primary key (timestamp) of each table; for now, it provides no index on the other collected quantities in the data model (such as temperature or pressure). Each data collection point produces many data records, but its tags amount to only one record, so tag data adds no storage redundancy and its overall size is limited. TDengine stores tag data completely separated from the collected dynamic data and builds a high-performance in-memory index structure on STable tags, providing comprehensive, fast operations on tags. Users can create, retrieve, update, and delete (CRUD) them as needed.
An STable belongs to a database; it belongs to exactly one database, but one database can have one or more STables, and one STable can have many child tables.
## STable Management
- Create an STable
```mysql
CREATE TABLE <stable_name> (<field_name> TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …)
```
The syntax is similar to creating a regular table, but the names and types of the TAGS fields must be specified.
Notes:
1. The total length of the TAGS columns must not exceed 512 bytes;
2. TAGS columns cannot be of type timestamp or nchar;
3. TAGS column names must not collide with other column names;
4. TAGS column names must not be reserved keywords.
- Show created STables
```mysql
show stables;
```
Shows all STables in the database and related information: name, creation time, number of columns, number of tags, and the number of tables created from each STable.
- Drop an STable
```mysql
DROP TABLE <stable_name>
```
Note: dropping an STable does not cascade-drop the tables created from it; on the contrary, an STable can only be dropped after all tables created from it have been dropped.
- Show tables belonging to an STable that satisfy query conditions
```mysql
SELECT TBNAME,[TAG_NAME,…] FROM <stable_name> WHERE <tag_name> <[=|=<|>=|<>] values..> ([AND|OR] …)
```
Lists the tables that belong to the STable and satisfy the filter conditions. Note: TBNAME is a keyword that returns the names of the sub-tables created from the STable; tag conditions can be used in the query.
```mysql
SELECT COUNT(TBNAME) FROM <stable_name> WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
```
Counts the sub-tables that belong to the STable and satisfy the filter conditions.
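For example, using the thermometer STable created above, the following sketch counts its sub-tables located in Beijing (the tag value is illustrative):
```mysql
SELECT COUNT(TBNAME) FROM thermometer WHERE location='beijing';
```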
## Creating Sub-Tables Automatically on Write
In some special scenarios the application does not know, at write time, whether the table for a device already exists. In this case the auto-create syntax can be used: when writing data, a non-existent sub-table is created automatically with the schema defined by the super table; if the table already exists, no new table is created. Note: the auto-create statement can only create sub-tables, not super tables, so the super table must already be defined. The syntax is very similar to insert/import, the only difference being the added super table and tag information:
```mysql
INSERT INTO <tb_name> USING <stb_name> TAGS (<tag1_value>, ...) VALUES (field_value, ...) (field_value, ...) ...;
```
Inserts one or more records into table tb_name. If the table tb_name does not exist, a new table named tb_name is created with the schema defined by super table stb_name and the user-supplied tag values (tag1_value…), and the given values are written into it. If tb_name already exists, table creation is skipped; the system does not check whether tb_name's tags match the supplied tag values, i.e. the tags of an existing table are not updated.
```mysql
INSERT INTO <tb1_name> USING <stb1_name> TAGS (<tag1_value1>, ...) VALUES (<field1_value1>, ...) (<field1_value2>, ...) ... <tb_name2> USING <stb_name2> TAGS(<tag1_value2>, ...) VALUES (<field1_value1>, ...) ...;
```
Inserts one or more records into multiple tables tb1_name, tb2_name, etc., specifying for each table the super table used for auto-creation. A minimal sketch follows.
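A sketch reusing the thermometer STable created above (the table names and values are illustrative):
```mysql
INSERT INTO therm10 USING thermometer TAGS ('beijing', 3) VALUES (now, 22.5)
            therm11 USING thermometer TAGS ('tianjin', 3) VALUES (now, 21.0);
```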
## Managing the Tags of a STable
Except for changing a tag's value, which is done on a sub-table, all other tag operations (adding a tag, dropping a tag, etc.) can only be applied to the STable, not to an individual sub-table. After a tag is added to a STable, all tables built on that STable automatically gain the new tag; for numeric tags, the default value of the new tag is 0.
- Add a new tag
```mysql
ALTER TABLE <stable_name> ADD TAG <new_tag_name> <TYPE>
```
Adds a new tag to the STable and specifies its type. The total number of tags cannot exceed 6.
- Drop a tag
```mysql
ALTER TABLE <stable_name> DROP TAG <tag_name>
```
Drops a tag from the super table; after the tag is dropped, it is automatically removed from all sub-tables of the super table as well.
Note: the first tag cannot be dropped; a STable must keep at least one tag.
- Rename a tag
```mysql
ALTER TABLE <stable_name> CHANGE TAG <old_tag_name> <new_tag_name>
```
Renames a tag of the super table; after the rename, the tag name is automatically updated in all sub-tables of the super table.
- Change the tag value of a sub-table
```mysql
ALTER TABLE <table_name> SET TAG <tag_name>=<new_tag_value>
```
## Aggregation across a STable's Tables
Multi-table aggregation queries run over all the sub-tables created from a STable; filtering on any of the tag values is supported, and the results can be grouped by tag values. Fuzzy matching on binary-type tags is not supported yet. The syntax is:
```mysql
SELECT function<field_name>,…
FROM <stable_name>
WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
INTERVAL (<time range>)
GROUP BY <tag_name>, <tag_name>…
ORDER BY <tag_name> <asc|desc>
SLIMIT <group_limit>
SOFFSET <group_offset>
LIMIT <record_limit>
OFFSET <record_offset>
```
**Notes**
For super table aggregation queries, TDengine currently supports the following aggregation/selection functions: sum, count, avg, first, last, min, max, top, and bottom, plus projection over all or some columns, used exactly the same way as in single-table queries. Other kinds of aggregation and arithmetic expressions are not supported yet, and none of the functions or computations can currently be nested.
A query without GROUP BY aggregates over time across all tables of the super table that satisfy the filter conditions; the result set is output in ascending timestamp order by default, and ORDER BY _c0 ASC|DESC can be used to choose ascending or descending timestamp order. An aggregation query with GROUP BY <tag_name> groups by tags and aggregates the data within each group separately; the output is the aggregation result of each group, the group order can be specified with ORDER BY <tag_name>, and within each group the time series is monotonically increasing.
SLIMIT/SOFFSET paginate across groups, i.e. they set the maximum number of groups in the result set and the starting group. LIMIT/OFFSET paginate within a group, i.e. they set how many records each group outputs at most and the starting record. A sketch follows below.
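A sketch combining a tag filter, grouping, and both kinds of pagination, using the thermometer STable created earlier (the tag values and limits are illustrative):
```mysql
SELECT AVG(degree) FROM thermometer
WHERE type=1
GROUP BY location
SLIMIT 3 SOFFSET 0
LIMIT 100 OFFSET 0;
```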
## STable Usage Example
We use temperature sensors collecting time-series data to demonstrate the use of STables. In this example a table is created for each thermometer, named after the thermometer's ID; the collection time is recorded as ts and the collected value as degree. Through tags, each thermometer is labeled with its region and type, so that we can query by them later. All thermometers collect the same quantities, so we use a STable to define the table schema.
### Define the STable Schema and Create Sub-Tables with It
The statement creating the STable is:
```mysql
CREATE TABLE thermometer (ts timestamp, degree double)
TAGS(location binary(20), type int)
```
Assume there are 4 thermometers across the three regions Beijing, Tianjin, and Shanghai, of 3 different types. We can create a table for each as follows:
```mysql
CREATE TABLE therm1 USING thermometer TAGS ('beijing', 1);
CREATE TABLE therm2 USING thermometer TAGS ('beijing', 2);
CREATE TABLE therm3 USING thermometer TAGS ('tianjin', 1);
CREATE TABLE therm4 USING thermometer TAGS ('shanghai', 3);
```
Here therm1, therm2, therm3, and therm4 are four concrete sub-tables of the super table thermometer, i.e. ordinary tables. Taking therm1 as an example, it holds the data of collector therm1; its schema is fully defined by thermometer, and the tags location="beijing", type=1 mean that therm1 is in Beijing and is a type-1 thermometer.
### Write Data
Note that data cannot be written to the STable directly; it must be written to each sub-table. We write one record into each of the four tables therm1, therm2, therm3, and therm4:
```mysql
INSERT INTO therm1 VALUES ('2018-01-01 00:00:00.000', 20);
INSERT INTO therm2 VALUES ('2018-01-01 00:00:00.000', 21);
INSERT INTO therm3 VALUES ('2018-01-01 00:00:00.000', 24);
INSERT INTO therm4 VALUES ('2018-01-01 00:00:00.000', 23);
```
### Aggregate by Tags
Query the number of samples count(*), average temperature avg(degree), maximum temperature max(degree), and minimum temperature min(degree) of the temperature sensors located in Beijing (beijing) and Tianjin (tianjin), grouping the results by region (location) and sensor type (type).
```mysql
SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
FROM thermometer
WHERE location='beijing' or location='tianjin'
GROUP BY location, type
```
### Aggregate by Time Window
Query the number of samples count(*), average temperature avg(degree), maximum temperature max(degree), and minimum temperature min(degree) over the last 24 hours for the temperature sensors located anywhere except Beijing, aggregating the samples in 10-minute windows, and then group the results by region (location) and sensor type (type).
```mysql
SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
FROM thermometer
WHERE location<>'beijing' and ts>=now-1d
INTERVAL(10M)
GROUP BY location, type
```
# STable: Super Table
"One Table for One Device" design can improve the insert/query performance significantly for a single device. But it has a side effect, the aggregation of multiple tables becomes hard. To reduce the complexity and improve the efficiency, TDengine introduced a new concept: STable (Super Table).
## What is a Super Table
STable is an abstraction and a template for a type of device. A STable contains a set of devices (tables) that have the same schema or data structure. Besides the shared schema, a STable has a set of tags, like the model, serial number, and so on. Tags are used to record the static attributes of the devices and to group a set of devices (tables) for aggregation. Tags are metadata of a table and can be added, deleted, or changed.
TDengine does not save tags as a part of the data points collected. Instead, tags are saved as metadata. Each table has a set of tags. To improve query performance, tags are all cached and indexed. One table can only belong to one STable, but one STable may contain many tables.
Like a table, you can create, show, delete, and describe STables. Most query operations on tables can be applied to STables too, including the aggregation and selector functions. For queries on a STable, if there is no tag filter, the operations are applied to all the tables created via this STable. If there is a tag filter, the operations are applied only to a subset of the tables which satisfy the tag filter conditions. It is very convenient to use tags to put devices into different groups for aggregation.
## Create a STable
Similar to creating a standard table, the syntax is:
```mysql
CREATE TABLE <stable_name> (<field_name> TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …)
```
New keyword "tags" is introduced, where tag_name is the tag name, and tag_type is the associated data type.
Note:
1. The bytes of all tags together shall be less than 512
2. Tag's data type can not be time stamp or nchar
3. Tag name shall be different from the field name
4. Tag name shall not be the same as system keywords
5. Maximum number of tags is 6
For example:
```mysql
create table thermometer (ts timestamp, degree float)
tags (location binary(20), type int)
```
The above statement creates a STable thermometer with two tags, "location" and "type".
## Create a Table via STable
To create a table for a device, you can use a STable as its template and assign the tag values. The syntax is:
```mysql
CREATE TABLE <tb_name> USING <stb_name> TAGS (tag_value1,...)
```
You can create any number of tables via a STable, and each table may have different tag values. For example, the following creates five tables via the STable thermometer:
```mysql
create table t1 using thermometer tags ('beijing', 10);
create table t2 using thermometer tags ('beijing', 20);
create table t3 using thermometer tags ('shanghai', 10);
create table t4 using thermometer tags ('shanghai', 20);
create table t5 using thermometer tags ('new york', 10);
```
## Aggregate Tables via STable
You can group a set of tables together by specifying the tags filter condition, then apply the aggregation operations. The result set can be grouped and ordered based on tag value. Syntax is:
```mysql
SELECT function<field_name>,…
FROM <stable_name>
WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
INTERVAL (<time range>)
GROUP BY <tag_name>, <tag_name>…
ORDER BY <tag_name> <asc|desc>
SLIMIT <group_limit>
SOFFSET <group_offset>
LIMIT <record_limit>
OFFSET <record_offset>
```
For the time being, STable supports only the following aggregation/selection functions: *sum, count, avg, first, last, min, max, top, bottom*, and the projection operations, with the same syntax as a standard table. Arithmetic operations are not supported, nor are nested queries.
*INTERVAL* is used for the aggregation over a time range.
If *GROUP BY* is not used, the aggregation is applied to all the selected tables, and the result set is output in ascending order of the timestamp, but you can use "*ORDER BY _c0 ASC|DESC*" to specify the order you like.
If *GROUP BY <tag_name>* is used, the aggregation is applied to groups based on tags. Each group is aggregated independently. Result set is a group of aggregation results. The group order is decided by *ORDER BY <tag_name>*. Inside each group, the result set is in the ascending order of the time stamp.
*SLIMIT/SOFFSET* are used to limit the number of groups and starting group number.
*LIMIT/OFFSET* are used to limit the number of records in a group and the starting rows.
### Example 1:
Check the number of records and the average, maximum, and minimum temperatures of Beijing and Tianjin, and group the result set by location and type. The SQL statement shall be:
```mysql
SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
FROM thermometer
WHERE location='beijing' or location='tianjin'
GROUP BY location, type
```
### Example 2:
List the number of records, average, maximum, and minimum temperature every 10 minutes for the past 24 hours for all the thermometers located in Beijing with type 10. The SQL statement shall be:
```mysql
SELECT COUNT(*), AVG(degree), MAX(degree), MIN(degree)
FROM thermometer
WHERE location='beijing' and type=10 and ts>=now-1d
INTERVAL(10M)
```
## Create Table Automatically
Insert operation will fail if the table is not created yet. But for STable, TDengine can create the table automatically if the application provides the STable name, table name and tags' value when inserting data points. The syntax is:
```mysql
INSERT INTO <tb_name> USING <stb_name> TAGS (<tag1_value>, ...) VALUES (field_value, ...) (field_value, ...) ... <tb_name2> USING <stb_name2> TAGS(<tag1_value2>, ...) VALUES (<field1_value1>, ...) ...;
```
When inserting data points into table tb_name, the system will check if table tb_name is created or not. If it is already created, the data points will be inserted as usual. But if the table is not created yet, the system will create the table tb_name using STable stb_name as the template with the given tags. Multiple tables can be specified in the SQL statement.
## Management of STables
After you create a STable, you can describe, delete, or change STables. This section lists all the supported operations.
### Show STables in current DB
```mysql
show stables;
```
It lists all STables in the current DB, including the name, creation time, number of fields, number of tags, and number of tables created via this STable.
### Describe a STable
```mysql
DESCRIBE <stable_name>
```
It lists the STable's schema and tags.
### Drop a STable
```mysql
DROP TABLE <stable_name>
```
To delete a STable, all the tables created via this STable shall be deleted first, otherwise, it will fail.
### List the Associated Tables of a STable
```mysql
SELECT TBNAME,[TAG_NAME,…] FROM <stable_name> WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
```
It will list all the tables which satisfy the tag filter conditions. The tables are all created from this specific STable. TBNAME is a keyword that returns the name of a sub-table associated with the STable.
```mysql
SELECT COUNT(TBNAME) FROM <stable_name> WHERE <tag_name> <[=|<=|>=|<>] values..> ([AND|OR] …)
```
The above SQL statement will list the number of tables in a STable, which satisfy the filter condition.
## Management of Tags
You can add, delete and change the tags for a STable, and you can change the tag value of a table. The SQL commands are listed below.
### Add a Tag
```mysql
ALTER TABLE <stable_name> ADD TAG <new_tag_name> <TYPE>
```
It adds a new tag to the STable with a data type. The maximum number of tags is 6.
### Drop a Tag
```mysql
ALTER TABLE <stable_name> DROP TAG <tag_name>
```
It drops a tag from a STable. The first tag cannot be dropped, and there must be at least one tag left.
### Change a Tag's Name
```mysql
ALTER TABLE <stable_name> CHANGE TAG <old_tag_name> <new_tag_name>
```
It changes the name of a tag from old to new.
### Change the Tag's Value
```mysql
ALTER TABLE <table_name> SET TAG <tag_name>=<new_tag_value>
```
It changes a table's tag value to a new one.
# System Administration
## Capacity Planning
The system's processing capacity is finite, but by tuning TDengine's configuration parameters you can balance performance against capacity.
**Memory Requirements**
Each database can create a fixed number of vnodes; the default equals the number of CPU cores and can be configured with maxVgroupsPerDb. Each vnode occupies a fixed amount of memory, determined by the parameters blocks and cache; each table occupies memory proportional to the total size of its tags; in addition, the system has some fixed memory overhead. The memory needed by each database can therefore be estimated with the formula:
```
Memory Size = maxVgroupsPerDb * (blocks * cache + 10 MB) + numOfTables * (tagSizePerTable + 0.5 KB)
```
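A worked example under assumed values (maxVgroupsPerDb = 4, blocks = 4, cache = 16 MB, 100,000 tables, 128 bytes of tags per table):
```
Memory Size = 4 * (4 * 16 MB + 10 MB) + 100000 * (0.128 KB + 0.5 KB)
            = 296 MB + 62800 KB
            ≈ 357 MB
```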
**CPU Requirements**
The maximum number of tables a vnode can hold is set by the parameter maxTablesPerVnode, but performance is best when each vnode holds fewer than 10,000 tables. The total number of vnodes in the system should preferably not exceed twice the number of CPU cores.
**Storage Requirements**
In the vast majority of scenarios, TDengine's compression ratio is at least 5x. The raw, pre-compression data size can be computed as:
```
Raw DataSize = numOfTables * rowSizePerTable * rowsPerTable
```
The parameter keep sets the maximum number of days the data is kept on disk.
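A worked example under assumed values (100,000 tables, 20 bytes per row, one row per second per table, keep = 30 days, 5x compression):
```
Raw DataSize = 100000 * 20 bytes * 86400 * 30 ≈ 5.2 TB
Disk usage   ≈ 5.2 TB / 5 ≈ 1.04 TB
```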
## Fault Tolerance and Disaster Recovery
**Fault Tolerance**
TDengine supports a **WAL** (Write Ahead Log) mechanism to provide fault tolerance and high data availability.
When TDengine receives a request packet from an application, it first writes the original request to the database log file; only after the data has been successfully written to the database data files is the corresponding WAL deleted. This guarantees that TDengine can recover data from the log file after a service restart caused by, e.g., a power failure, avoiding data loss.
Two system configuration parameters are involved:
walLevel: WAL level. 1: write the WAL but do not execute fsync; 2: write the WAL and execute fsync.
fsync: the fsync period when walLevel is set to 2. Setting it to 0 means fsync is executed immediately on every write.
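A taos.cfg sketch for the strictest durability setting described above (the values are illustrative, not recommendations):
```
walLevel 2
fsync 0
```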
**Disaster Recovery**
A TDengine cluster provides high system reliability and disaster-recovery capability through a multi-replica mechanism.
A TDengine cluster is managed by mnodes. To keep the mnode highly reliable, multiple mnode replicas can be configured; the replica count is set by the system parameter numOfMnodes and must be greater than 1 for high reliability. To guarantee strong consistency of the metadata, mnode replicas replicate data synchronously.
The replica count of time-series data in a cluster is a property of each database; a cluster can contain multiple databases, each with its own replica count, specified with the parameter replica when the database is created. For high reliability, the replica count must be set greater than 1.
The number of nodes in a TDengine cluster must be greater than or equal to the replica count, otherwise table creation will fail.
When the nodes of a TDengine cluster are deployed on different physical machines (e.g. different racks or IDCs) with multiple replicas configured, geo-redundant disaster recovery is achieved, providing high system reliability without any additional software or tools.
## Directory and File Structure
After installing TDengine, the following directories and files are created in the operating system by default:
| Directory/File | Description |
| ---------------------- | :------------------------------------------------|
| /usr/local/taos/bin | TDengine executables. They are soft-linked into /usr/bin. |
| /usr/local/taos/connector | TDengine connectors. |
| /usr/local/taos/driver | TDengine dynamic link libraries. Soft-linked into /usr/lib. |
| /usr/local/taos/examples | TDengine application examples in various languages. |
| /usr/local/taos/include | Header files of TDengine's public C interface. |
| /etc/taos/taos.cfg | TDengine default [configuration file]. |
| /var/lib/taos | TDengine default data directory; the location can be changed in the [configuration file]. |
| /var/log/taos | TDengine default log directory; the location can be changed in the [configuration file]. |
**Executables**
All TDengine executables reside in _/usr/local/taos/bin_ by default, including:
- _taosd_: the TDengine server executable
- _taos_: the TDengine shell executable
- _taosdump_: the data import/export tool
- remove.sh: the script to uninstall TDengine; execute with care. It is linked as the rmtaos command under /usr/bin. It removes the installation directory /usr/local/taos but keeps /etc/taos, /var/lib/taos, and /var/log/taos.
You can configure different data and log directories by editing the system configuration file taos.cfg.
## Server Configuration
The TDengine background service is provided by taosd; its configuration parameters can be changed in the configuration file taos.cfg to suit different scenarios. The default location of the configuration file is the /etc/taos directory; a different directory can be given to taosd with the command-line parameter -c, e.g. taosd -c /home/user reads the configuration file from the /home/user directory.
Only some important configuration parameters are listed below; see the comments in the configuration file for more, and the preceding chapters for detailed descriptions of each parameter. **Note: after a configuration change, *taosd* must be restarted for it to take effect.**
- first: the end point of the first dnode in the cluster that taosd actively connects to at startup. Default: localhost:6030.
- second: the end point of the second dnode in the cluster to try if first cannot be reached at startup. Default: empty.
- fqdn: the FQDN of the data node. If empty, the first one configured in the operating system is obtained automatically. Default: empty.
- serverPort: the port taosd serves on after startup. Default: 6030.
- httpPort: the port used by the RESTful service; all HTTP (TCP) query/write requests are sent to it.
- dataDir: the data directory; all data files are written there. Default: /var/lib/taos.
- logDir: the log directory; client and server run-time logs are written there. Default: /var/log/taos.
- arbitrator: the end point of the arbitrator in the system. Default: empty.
- role: the allowed roles of the dnode. 0: any, can serve as an mnode and accept vnodes; 1: mgmt, can only serve as an mnode and cannot accept vnodes; 2: dnode, cannot serve as an mnode and can only accept vnodes.
- debugFlag: run-time log switch. 131: output error and warning logs; 135: output error, warning, and debug logs; 143: output error, warning, debug, and trace logs. Default: 131 or 135 (different modules have different defaults).
- numOfLogLines: the maximum number of lines in a single log file. Default: 10,000,000.
- maxSQLLength: the maximum allowed length of a single SQL statement. Default: 65380 bytes.
- maxBinaryDisplayWidth: the maximum display width of binary and nchar fields in the shell; anything beyond it is hidden. Default: 30. It can be changed dynamically in the shell with the command set max_binary_display_width nn.
Data in different scenarios often has different characteristics; e.g. retention days, replica count, collection interval, record size, number of collection points, and compression can all differ. To achieve the best storage efficiency, TDengine provides the following storage-related system configuration parameters:
- days: the time span of the data stored in one data file, in days. Default: 10.
- keep: the number of days data is retained in the database, in days. Default: 3650.
- minRows: the minimum number of records in a file block. Default: 100.
- maxRows: the maximum number of records in a file block. Default: 4096.
- comp: the compression flag. 0: off; 1: one-stage compression; 2: two-stage compression. Default: 2.
- walLevel: WAL level. 1: write the WAL without fsync; 2: write the WAL and execute fsync. Default: 1.
- fsync: the fsync period when walLevel is set to 2; 0 means fsync immediately on every write. In milliseconds; default: 3000.
- cache: the size of a memory block, in megabytes (MB). Default: 16.
- blocks: the number of cache-sized memory blocks per vnode (TSDB); a vnode therefore uses roughly (cache * blocks) of memory. Default: 4.
- replica: the number of replicas, in the range 1-3. Default: 1.
- precision: the timestamp precision; ms for milliseconds, us for microseconds. Default: ms.
An application scenario may hold data with several different characteristics. The best design is to put tables with the same characteristics into one database, so one application uses multiple databases, each configured with its own storage parameters, ensuring optimal system performance. TDengine allows an application to specify the above storage parameters when creating a database; if specified, they override the corresponding system configuration parameters. For example:
```
create database demo days 10 cache 32 blocks 8 replica 3
```
This SQL creates a database demo in which each data file stores 10 days of data, each memory block is 32 MB, each vnode uses 8 memory blocks, and the replica count is 3, while all other parameters stay identical to the system configuration.
When a new dnode joins a TDengine cluster, some cluster-related parameters must match the existing cluster's configuration, otherwise it cannot join successfully. The checked parameters are:
- numOfMnodes: the number of management nodes in the system. Default: 3.
- balance: whether to enable load balancing. 0: no, 1: yes. Default: 1.
- mnodeEqualVnodeNum: the number of vnodes one mnode is counted as consuming. Default: 4.
- offlineThreshold: the dnode offline threshold; a dnode offline longer than this is removed from the cluster. In seconds; default: 86400*10 (i.e. 10 days).
- statusInterval: how often a dnode reports its status to the mnode. In seconds; default: 1.
- maxTablesPerVnode: the maximum number of tables that can be created in each vnode. Default: 1000000.
- maxVgroupsPerDb: the maximum number of vnodes each database can use.
- arbitrator: the end point of the arbitrator in the system.
- timezone: the time zone, obtained dynamically from the current operating system setting.
- locale: the system locale and encoding, obtained dynamically from the system; if auto-detection fails, the user must set it in the configuration file or through the API.
- charset: the character set encoding, obtained dynamically from the system; if auto-detection fails, the user must set it in the configuration file or through the API.
## Client Configuration
TDengine's interactive client application is taos, which shares the configuration file taos.cfg with taosd. When running taos, use the parameter -c to specify the configuration file directory, e.g. taos -c /home/cfg uses the parameters in the taos.cfg file under /home/cfg/; the default directory is /etc/taos. For more on using taos, see [Shell command line program](#_TDengine_Shell命令行程序). This section explains the parameters used by the taos client in the configuration file taos.cfg.
Client configuration parameters and their meaning:
- first: the end point of the first taosd instance in the cluster that taos actively connects to at startup. Default: localhost:6030.
- second: the end point of the second taosd instance in the cluster to try if first cannot be reached at startup. Default: empty.
- charset: the character set encoding, obtained dynamically from the system; if auto-detection fails, the user must set it in the configuration file or through the API.
- locale: the system locale and encoding, obtained dynamically from the system; if auto-detection fails, the user must set it in the configuration file or through the API.
The log configuration parameters are exactly the same as on the server.
When starting taos, the end point of a taosd instance can also be given on the command line; otherwise it is read from taos.cfg. An illustrative invocation follows.
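A sketch of starting taos against a specific taosd instance with an explicit configuration directory (the host, port, and path are examples; -h, -P, and -c are the standard taos options):
```
taos -h 192.168.0.1 -P 6030 -c /home/cfg
```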
## User Management
The system administrator can add and delete users and change passwords in the CLI. The SQL syntax in the CLI is:
```
CREATE USER user_name PASS 'password'
```
Creates a user with the given user name and password; the password must be enclosed in single quotes.
```
DROP USER user_name
```
Deletes a user; restricted to the root user.
```
ALTER USER user_name PASS 'password'
```
Changes a user's password; enclose the password in single quotes to prevent it from being converted to lowercase.
```
SHOW USERS
```
Lists all users.
## Data Import
TDengine offers several convenient ways to import data: from a script file, from a data file, and with the taosdump tool using files it exported.
**Import from a script file**
The TDengine shell supports the source filename command for batch-running SQL statements from a file. Users can put the SQL commands for creating databases, creating tables, and writing data into one file, one command per line, and run source in the shell to execute the statements in order. Lines starting with '#' are treated as comments and ignored by the shell. A sketch follows below.
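A sketch of batch-running a script inside the taos shell (the file path is illustrative):
```
taos> source /home/user/init.sql
```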
**Import from a data file**
TDengine also supports importing data from a CSV file into an existing table in the shell. A CSV file holds data for one table only, and its data format must match the structure of the target table. The import syntax is:
```mysql
insert into tb1 file 'path/data.csv'
```
Note: if the first line of the CSV file contains header information, delete it manually before importing.
For example, suppose a sub-table d1001 exists with the following structure:
```mysql
taos> DESCRIBE d1001
Field | Type | Length | Note |
=================================================================================
ts | TIMESTAMP | 8 | |
current | FLOAT | 4 | |
voltage | INT | 4 | |
phase | FLOAT | 4 | |
location | BINARY | 64 | TAG |
groupid | INT | 4 | TAG |
```
The data.csv to import is formatted as follows:
```csv
'2018-10-04 06:38:05.000',10.30000,219,0.31000
'2018-10-05 06:38:15.000',12.60000,218,0.33000
'2018-10-06 06:38:16.800',13.30000,221,0.32000
'2018-10-07 06:38:05.000',13.30000,219,0.33000
'2018-10-08 06:38:05.000',14.30000,219,0.34000
'2018-10-09 06:38:05.000',15.30000,219,0.35000
'2018-10-10 06:38:05.000',16.30000,219,0.31000
'2018-10-11 06:38:05.000',17.30000,219,0.32000
'2018-10-12 06:38:05.000',18.30000,219,0.31000
```
Then the data can be imported with the following command:
```
taos> insert into d1001 file '~/data.csv';
Query OK, 9 row(s) affected (0.004763s)
```
**Import with taosdump**
TDengine provides the convenient database import/export tool taosdump. Data exported by taosdump from one system can be imported into another. For details, see the blog post [TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html).
## Data Export
To make exporting easy, TDengine offers two methods: exporting a table and exporting with taosdump.
**Export a table to a CSV file**
To export the data of a table or a STable, run the following in the shell:
```
select * from <tb_name> >> data.csv
```
The data of table tb_name is then exported to the file data.csv in CSV format. A sketch follows below.
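A concrete sketch following the pattern above, using the d1001 table from the import example (the output file name is illustrative):
```
select * from d1001 >> d1001.csv
```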
**Export data with taosdump**
TDengine provides the convenient database export tool taosdump. Users can choose to export all databases, one database, or one table in a database; all data or the data of a time range; or even just the table definitions. For details, see the blog post [TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html).
## Managing Connections, Queries, and Streams
The system administrator can inspect the system's connections, ongoing queries, and stream computations from the CLI, and can close connections and stop ongoing queries and stream computations. The SQL syntax in the CLI is:
```
SHOW CONNECTIONS
```
Shows the database connections; one column shows ip:port, the IP address and port of the connection.
```
KILL CONNECTION <connection-id>
```
Forcibly closes a database connection, where connection-id is the number in the first column shown by SHOW CONNECTIONS.
```
SHOW QUERIES
```
Shows the data queries; the two colon-separated numbers in the first column form the query-id: the connection-id of the connection that issued the query, and the query sequence number.
```
KILL QUERY <query-id>
```
Forcibly stops a data query, where query-id is the connection-id:query-no string shown by SHOW QUERIES, e.g. "105:2"; just copy and paste it.
```
SHOW STREAMS
```
Shows the stream computations; the two colon-separated numbers in the first column form the stream-id: the connection-id of the connection that started the stream, and the stream sequence number.
```
KILL STREAM <stream-id>
```
Forcibly stops a stream computation, where stream-id is the connection-id:stream-no string shown by SHOW STREAMS, e.g. 103:2; just copy and paste it.
## System Monitoring
After TDengine starts, it automatically creates a monitoring database SYS and periodically writes the server's CPU, memory, disk space, bandwidth, request count, disk I/O speed, slow queries, and other information into it. TDengine also records logs of important system operations (such as logins and creating or dropping databases) and various error and alarm messages in the SYS database. The system administrator can inspect this database directly from the CLI or view the monitoring information through a graphical web interface.
Collection of this monitoring information is enabled by default; it can be turned off or on with the enableMonitor option in the configuration file.
# Administrator
## Directory and Files
After TDengine is installed, by default, the following directories will be created:
| Directory/File | Description |
| ---------------------- | :------------------------------ |
| /etc/taos/taos.cfg | TDengine configuration file |
| /usr/local/taos/driver | TDengine dynamic link library |
| /var/lib/taos | TDengine default data directory |
| /var/log/taos | TDengine default log directory |
| /usr/local/taos/bin | TDengine executables |
### Executables
All TDengine executables are located at _/usr/local/taos/bin_ , including:
- `taosd`: TDengine server
- `taos`: TDengine Shell, the command line interface.
- `taosdump`: TDengine data export tool
- `rmtaos`: a script to uninstall TDengine
You can change the data directory and log directory settings through the system configuration file.
## Configuration on Server
`taosd` is running on the server side, you can change the system configuration file taos.cfg to customize its behavior. By default, taos.cfg is located at /etc/taos, but you can specify the path to configuration file via the command line parameter -c. For example: `taosd -c /home/user` means the configuration file will be read from directory /home/user.
This section lists only the most important configuration parameters. Please check taos.cfg to find all the configurable parameters. **Note: to make your new configurations work, you have to restart taosd after you change taos.cfg**.
- mgmtShellPort: TCP and UDP port between client and TDengine mgmt (default: 6030). Note: 5 successive UDP ports (6030-6034) starting from this number will be used.
- vnodeShellPort: TCP and UDP port between client and TDengine vnode (default: 6035). Note: 5 successive UDP ports (6035-6039) starting from this number will be used.
- httpPort: TCP port for RESTful service (default: 6020)
- dataDir: data directory, default is /var/lib/taos
- maxUsers: maximum number of users allowed
- maxDbs: maximum number of databases allowed
- maxTables: maximum number of tables allowed
- enableMonitor: turn on/off system monitoring, 0: off, 1: on
- logDir: log directory, default is /var/log/taos
- numOfLogLines: maximum number of lines in the log file
- debugFlag: log level, 131: only error and warnings, 135: all
In different scenarios, data characteristics are different. For example, the retention policy, data sampling period, record size, the number of devices, and data compression may be different. To gain the best performance, you can change the following configurations related to storage:
- days: number of days to cover for a data file
- keep: number of days to keep the data
- rows: number of rows of records in a block in data file.
- comp: compression algorithm, 0: off, 1: standard; 2: maximum compression
- ctime: period (seconds) to flush data to disk
- clog: flag to turn on/off Write Ahead Log, 0: off, 1: on
- tables: maximum number of tables allowed in a vnode
- cache: cache block size (bytes)
- tblocks: maximum number of cache blocks for a table
- ablocks: average number of cache blocks for a table
- precision: timestamp precision, us: microsecond, ms: millisecond; default is ms
For an application, there may be multiple data scenarios. The best design is to put all data with the same characteristics into one database. One application may have multiple databases, and every database has its own configuration to maximize the system performance. You can specify the above configurations related to storage when you create a database. For example:
```mysql
CREATE DATABASE demo DAYS 10 CACHE 16000 ROWS 2000
```
The above SQL statement will create a database demo, with 10 days for each data file, 16000 bytes for a cache block, and 2000 rows in a file block.
The configuration provided when creating a database will overwrite the configuration in taos.cfg.
## Configuration on Client
*taos* is the TDengine shell and is a client that connects to taosd. TDengine uses the same configuration file taos.cfg for the client, with default location at /etc/taos. You can change it by specifying command line parameter -c when you run taos. For example, *taos -c /home/user*, it will read the configuration file taos.cfg from directory /home/user.
The parameters related to client configuration are listed below:
- masterIP: IP address of TDengine server
- charset: character set; default is the system character set. For the data type nchar, TDengine uses Unicode to store the data, so the client needs to tell the server its character set.
- locale: system language setting
- defaultUser: default login user, default is root
- defaultPass: default password, default is taosdata
The TCP/UDP port and the debug/log configuration parameters are the same as on the server side.
The server IP, user name, and password can always be specified on the command line when you run taos; if they are not specified, they will be read from taos.cfg.
## User Management
System administrator (user root) can add, remove a user, or change the password from the TDengine shell. Commands are listed below:
Create a user, password shall be quoted with the single quote.
```mysql
CREATE USER user_name PASS 'password'
```
Remove a user
```mysql
DROP USER user_name
```
Change the password for a user
```mysql
ALTER USER user_name PASS 'password'
```
List all users
```mysql
SHOW USERS
```
## Import Data
Inside the TDengine shell, you can import data into TDengine from either a script or CSV file
**Import from Script**
```
source <filename>
```
Inside the file, you can put all SQL statements, one statement per line. Lines starting with "#" are comments and will be skipped. The system executes the SQL statements line by line automatically until the end of the file.
**Import from CSV**
```mysql
insert into tb1 file 'path/data.csv'
```
CSV file contains records for only one table, and the data structure shall be the same as the defined schema for the table. The header of CSV file shall be removed.
For example, the following is a sub-table d1001:
```mysql
taos> DESCRIBE d1001
Field | Type | Length | Note |
=================================================================================
ts | TIMESTAMP | 8 | |
current | FLOAT | 4 | |
voltage | INT | 4 | |
phase | FLOAT | 4 | |
location | BINARY | 64 | TAG |
groupid | INT | 4 | TAG |
```
The data format in data.csv is like this:
```csv
'2018-10-04 06:38:05.000',10.30000,219,0.31000
'2018-10-05 06:38:15.000',12.60000,218,0.33000
'2018-10-06 06:38:16.800',13.30000,221,0.32000
'2018-10-07 06:38:05.000',13.30000,219,0.33000
'2018-10-08 06:38:05.000',14.30000,219,0.34000
'2018-10-09 06:38:05.000',15.30000,219,0.35000
'2018-10-10 06:38:05.000',16.30000,219,0.31000
'2018-10-11 06:38:05.000',17.30000,219,0.32000
'2018-10-12 06:38:05.000',18.30000,219,0.31000
```
Then the data can be imported into the database with this command:
```
taos> insert into d1001 file '~/data.csv';
Query OK, 9 row(s) affected (0.004763s)
```
## Export Data
You can export data either from TDengine shell or from tool taosdump.
**Export from TDengine Shell**
```mysql
select * from <tb_name> >> data.csv
```
The above SQL statement will dump the query result set into data.csv file.
**Export Using taosdump**
TDengine provides a data dumping tool taosdump. You can choose to dump a database, a table, all data or only data within a time range, or even only the metadata. For example:
- Export one or more tables in a DB: taosdump [OPTION…] dbname tbname …
- Export one or more DBs: taosdump [OPTION…] --databases dbname…
- Export all DBs (excluding system DB): taosdump [OPTION…] --all-databases
Run *taosdump --help* to get a full list of the options.
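A sketch of dumping a single database to a directory (the -o output-path option is an assumption here; confirm the exact flags with taosdump --help):
```
taosdump --databases power -o /tmp/dump
```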
## Management of Connections, Streams, Queries
The system administrator can check and kill ongoing connections, streams, or queries.
```
SHOW CONNECTIONS
```
It lists all connections. The first column shows connection-id from the client.
```
KILL CONNECTION <connection-id>
```
It kills the connection, where connection-id is the number of the first column showed by "SHOW CONNECTIONS".
```
SHOW QUERIES
```
It shows the ongoing queries. The first column shows connection-id:query-no, where connection-id identifies the connection from the client and query-no is an id assigned by the system.
```
KILL QUERY <query-id>
```
It kills the query, where query-id is the connection-id:query-no showed by "SHOW QUERIES". You can copy and paste it.
```
SHOW STREAMS
```
It shows the continuous queries. The first column shows connection-id:stream-no, where connection-id identifies the connection from the client and stream-no is an id assigned by the system.
```
KILL STREAM <stream-id>
```
It kills the continuous query, where stream-id is the connection-id:stream-no showed by "SHOW STREAMS". You can copy and paste it.
## System Monitor
TDengine runs a system monitor in the background. Once it is started, it will create a database sys automatically. The system monitor periodically collects metrics like CPU, memory, network, disk, and number of requests, and writes them into the database sys. Also, TDengine will log all important actions, like login, logout, create database, and drop database, and write them into the database sys.
You can check all the saved monitor information from database sys. By default, system monitor is turned on. But you can turn it off by changing the parameter in the configuration file.
# Advanced Features
## Continuous Query
Continuous Query is a query executed by TDengine periodically with a sliding window; it is a simplified stream computing driven by timers, not by events. A continuous query can be applied to a table or a STable, and the result set can be passed to the application directly via a callback function, or written into a new table in TDengine. The query is always executed on a specified time window (the window size is specified by the parameter interval), and this window slides forward as time flows (the sliding period is specified by the parameter sliding).
A continuous query is defined in TAOS SQL; there is nothing special about it. One of the best applications is downsampling. Once it is defined, at the end of each cycle the system will execute the query and pass the result to the application or write it to a database.
If historical data points are inserted into the stream, the query won't be re-executed, and the result set won't be updated. If the result set is passed to the application, the application needs to keep the status of the continuous query; the server won't maintain it. If the application restarts, it needs to decide the time from which the stream computing shall resume.
### How to use a continuous query
- Pass result set to application
Application shall use API taos_stream (details in connector section) to start the stream computing. Inside the API, the SQL syntax is:
```sql
SELECT aggregation FROM [table_name | stable_name]
INTERVAL(window_size) SLIDING(period)
```
where the new keyword INTERVAL specifies the window size, and SLIDING specifies the sliding period. If parameter sliding is not specified, the sliding period will be the same as window size. The minimum window size is 10ms. The sliding period shall not be larger than the window size. If you set a value larger than the window size, the system will adjust it to window size automatically.
For example:
```sql
SELECT COUNT(*) FROM FOO_TABLE
INTERVAL(1M) SLIDING(30S)
```
The above SQL statement will count the number of records for the past 1-minute window every 30 seconds.
- Save the result into a database
If you want to save the result set of stream computing into a new table, the SQL shall be:
```sql
CREATE TABLE table_name AS
SELECT aggregation from [table_name | stable_name]
INTERVAL(window_size) SLIDING(period)
```
Also, you can set the time range to execute the continuous query. If no range is specified, the continuous query will be executed forever. For example, the following continuous query will be executed from now and will stop in one hour.
```sql
CREATE TABLE QUERY_RES AS
SELECT COUNT(*) FROM FOO_TABLE
WHERE TS > NOW AND TS <= NOW + 1H
INTERVAL(1M) SLIDING(30S)
```
### Manage the Continuous Query
Inside TDengine shell, you can use the command "show streams" to list the ongoing continuous queries, the command "kill stream" to kill a specific continuous query.
If you drop a table generated by the continuous query, the query will be removed too.
## Publisher/Subscriber
Time series data is a sequence of data points over time. Inside a table, the data points are stored in order of timestamp. Also, there is a data retention policy: data points are removed once their lifetime has passed. From another point of view, a table in TDengine is just a standard message queue.
To reduce the development complexity and improve data consistency, TDengine provides the pub/sub functionality. To publish a message, you simply insert a record into a table. Compared with the popular messaging tool Kafka, you subscribe to a table or a SQL query statement, instead of a topic. Once new data points arrive, TDengine will notify the application. The process is just like Kafka.
The detailed API will be introduced in the [connectors](https://www.taosdata.com/en/documentation/advanced-features/) section.
## Caching
TDengine allocates a fixed-size buffer in memory, and newly arrived data is written into the buffer first. Every device or table gets one or more memory blocks. For typical IoT scenarios, the hot data is always the newly arrived data, which is more important for timely analysis. Based on this observation, TDengine manages the cache blocks with a First-In-First-Out strategy. If there is not enough space in the buffer, the oldest data is saved to hard disk first and then overwritten by newly arrived data. TDengine also guarantees that every device can keep at least one block of data in the buffer.
By this design, the application can retrieve the latest data from each device super-fast, since it is all available in memory. You can use the last or last_row function to return the last data record. If a super table is used, it can return the last data records of all or a subset of devices. For example, to retrieve the latest temperature from thermometers located in Beijing, execute the following SQL:
```mysql
select last(*) from thermometers where location='beijing'
```
With this design, a caching tool like Redis is not needed in the system, which reduces system complexity.
TDengine creates one or more virtual nodes(vnode) in each data node. Each vnode contains data for multiple tables and has its own buffer. The buffer of a vnode is fully separated from the buffer of another vnode, not shared. But the tables in a vnode share the same buffer.
The system configuration parameter cacheBlockSize configures the cache block size in bytes, and another parameter cacheNumOfBlocks configures the number of cache blocks. The total buffer memory of a vnode is cacheBlockSize * cacheNumOfBlocks. Another system parameter numOfBlocksPerMeter configures the maximum number of cache blocks a table can use. You can specify these parameters when you create a database. A sketch follows below.
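A taos.cfg sketch using the parameter names from the paragraph above (the values are illustrative, not recommendations):
```
cacheBlockSize 16384
cacheNumOfBlocks 4
numOfBlocksPerMeter 4
```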
```cmd
which systemd
```
If the `systemd` command does not exist on the system, consider [building from source](#通过源码安装) to install TDengine.
For the detailed installation procedure, see <a href="https://www.taosdata.com/blog/2019/08/09/566.html">TDengine多种安装包的安装和卸载</a>.
## Easy to Launch
TDengine uses a relational data model, so databases and tables need to be created.
```cmd
CREATE DATABASE power KEEP 365 DAYS 10 REPLICA 3 BLOCKS 4;
```
The statement above creates a database named power whose data is kept for 365 days (data older than 365 days is deleted automatically), with one data file per 10 days, 3 replicas, and 4 memory blocks. For detailed syntax and parameters, see <a href="https://www.taosdata.com/cn/documentation20/taos-sql/">TAOS SQL</a>.
Note: every table or super table belongs to a database, and the database must be created before any table.
```cmd
CREATE TABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupdId int);
```
As with creating an ordinary table, you must provide the table name (meters in the example) and the table schema, i.e. the definitions of the data columns holding the collected quantities (ts, current, voltage, phase in the example), whose data types can be integer, float, string, and so on. In addition, you must provide the schema of the tags (location, groupId in the example), whose data types can also be integer, float, string, and so on. The static attributes of a collection point, such as its geographic location, device model, device group ID, or administrator ID, are good candidates for tags. The tag schema can be extended, reduced, or modified later. For the full definition and details, see the <a href="https://www.taosdata.com/cn/documentation20/taos-sql/">TAOS SQL</a> section.
Each type of data collection point needs its own super table, so an IoT system usually has multiple super tables. A system can have multiple DBs, and a DB can contain one or more super tables.
```cmake
IF ((TD_LINUX_64) OR (TD_LINUX_32 AND TD_ARM))
  INCLUDE_DIRECTORIES(inc)
  AUX_SOURCE_DIRECTORY(./src SRC)
  ADD_LIBRARY(monitor ${SRC})
  # TARGET_LINK_LIBRARIES(monitor taos_static)
  TARGET_LINK_LIBRARIES(monitor taos)
ENDIF ()
```