diff --git a/documentation/webdocs/assets/Picture2.png b/documentation/webdocs/assets/Picture2.png new file mode 100644 index 0000000000000000000000000000000000000000..715a8bd37ee9fe7e96eacce4e7ff563fedeefbee Binary files /dev/null and b/documentation/webdocs/assets/Picture2.png differ diff --git a/documentation/webdocs/assets/clip_image001-2474914.png b/documentation/webdocs/assets/clip_image001-2474914.png new file mode 100644 index 0000000000000000000000000000000000000000..eb369b1567c860b772e1bfdad64ff17aaac2534d Binary files /dev/null and b/documentation/webdocs/assets/clip_image001-2474914.png differ diff --git a/documentation/webdocs/assets/clip_image001-2474939.png b/documentation/webdocs/assets/clip_image001-2474939.png new file mode 100644 index 0000000000000000000000000000000000000000..53f00deea3a484986a5681ec9d00d8ae02e88fec Binary files /dev/null and b/documentation/webdocs/assets/clip_image001-2474939.png differ diff --git a/documentation/webdocs/assets/clip_image001-2474961.png b/documentation/webdocs/assets/clip_image001-2474961.png new file mode 100644 index 0000000000000000000000000000000000000000..20ae8d6f7724a4bddcf8c7eb3809d468aa4223ed Binary files /dev/null and b/documentation/webdocs/assets/clip_image001-2474961.png differ diff --git a/documentation/webdocs/assets/clip_image001-2474987.png b/documentation/webdocs/assets/clip_image001-2474987.png new file mode 100644 index 0000000000000000000000000000000000000000..3d09f7fc28e7a1fb7e3bb2b9b2bc7c20895e8bb4 Binary files /dev/null and b/documentation/webdocs/assets/clip_image001-2474987.png differ diff --git a/documentation/webdocs/assets/clip_image001.png b/documentation/webdocs/assets/clip_image001.png new file mode 100644 index 0000000000000000000000000000000000000000..78b6d06a9562b802e80f0ed5fdb8963b5e525589 Binary files /dev/null and b/documentation/webdocs/assets/clip_image001.png differ diff --git a/documentation/webdocs/assets/fig1.png b/documentation/webdocs/assets/fig1.png new file mode 
100644 index 0000000000000000000000000000000000000000..af9b74e0d1a872b8d93f71842dc0063bc8a86092 Binary files /dev/null and b/documentation/webdocs/assets/fig1.png differ diff --git a/documentation/webdocs/assets/fig2.png b/documentation/webdocs/assets/fig2.png new file mode 100644 index 0000000000000000000000000000000000000000..3bae70ba86964c3c341b72ea1d3af04201f7c6c1 Binary files /dev/null and b/documentation/webdocs/assets/fig2.png differ diff --git a/documentation/webdocs/assets/image-20190707124650780.png b/documentation/webdocs/assets/image-20190707124650780.png new file mode 100644 index 0000000000000000000000000000000000000000..9ebcac863e862d8b240c86dec29be1ebe7aa50f0 Binary files /dev/null and b/documentation/webdocs/assets/image-20190707124650780.png differ diff --git a/documentation/webdocs/assets/image-20190707124818590.png b/documentation/webdocs/assets/image-20190707124818590.png new file mode 100644 index 0000000000000000000000000000000000000000..dc1cb6325b2d4cd6f05c88b75b4d17ef85caa67f Binary files /dev/null and b/documentation/webdocs/assets/image-20190707124818590.png differ diff --git a/documentation/webdocs/assets/nodes.png b/documentation/webdocs/assets/nodes.png new file mode 100644 index 0000000000000000000000000000000000000000..d4ae5120c29b8cfacdc543df5a2a7104d77a2a7b Binary files /dev/null and b/documentation/webdocs/assets/nodes.png differ diff --git a/documentation/webdocs/assets/structure.png b/documentation/webdocs/assets/structure.png new file mode 100644 index 0000000000000000000000000000000000000000..801829b68580e1a46d0841a3d38e4885eb383991 Binary files /dev/null and b/documentation/webdocs/assets/structure.png differ diff --git a/documentation/webdocs/assets/vnode.png b/documentation/webdocs/assets/vnode.png new file mode 100644 index 0000000000000000000000000000000000000000..5247717f62118a8e690e80a3538c1a8dd1ab9416 Binary files /dev/null and b/documentation/webdocs/assets/vnode.png differ diff --git 
a/documentation/webdocs/assets/write_process.png b/documentation/webdocs/assets/write_process.png new file mode 100644 index 0000000000000000000000000000000000000000..f7d60864824a34af48df637026d704a921dc49f6 Binary files /dev/null and b/documentation/webdocs/assets/write_process.png differ diff --git a/documentation/webdocs/markdowndocs/Connections with other Tools-ch.md b/documentation/webdocs/markdowndocs/Connections with other Tools-ch.md new file mode 100644 index 0000000000000000000000000000000000000000..be036683d7a38c9da55065d877d905bb1117a2df --- /dev/null +++ b/documentation/webdocs/markdowndocs/Connections with other Tools-ch.md @@ -0,0 +1,158 @@ +#与其他工具的连接 + +## Telegraf + +TDengine能够与开源数据采集系统[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)快速集成,整个过程无需任何代码开发。 + +###安装Telegraf + +目前TDengine支持Telegraf 1.7.4以上的版本。用户可以根据当前的操作系统,到Telegraf官网下载安装包,并执行安装。下载地址如下:https://portal.influxdata.com/downloads + +###配置Telegraf + +修改Telegraf配置文件/etc/telegraf/telegraf.conf中与TDengine有关的配置项。 + +在output plugins部分,增加[[outputs.http]]配置项: + +- url:http://ip:6020/telegraf/udb,其中ip为TDengine集群的中任意一台服务器的IP地址,6020为TDengine RESTful接口的端口号,telegraf为固定关键字,udb为用于存储采集数据的数据库名称,可预先创建。 +- method: "POST" +- username: 登录TDengine的用户名 +- password: 登录TDengine的密码 +- data_format: "json" +- json_timestamp_units: "1ms" + +在agent部分: + +- hostname: 区分不同采集设备的机器名称,需确保其唯一性 +- metric_batch_size: 30,允许Telegraf每批次写入记录最大数量,增大其数量可以降低Telegraf的请求发送频率,但对于TDengine,该数值不能超过50 + +关于如何使用Telegraf采集数据以及更多有关使用Telegraf的信息,请参考Telegraf官方的[文档](https://docs.influxdata.com/telegraf/v1.11/)。 + +## Grafana + +TDengine能够与开源数据可视化系统[Grafana](https://www.grafana.com/)快速集成搭建数据监测报警系统,整个过程无需任何代码开发,TDengine中数据表中内容可以在仪表盘(DashBoard)上进行可视化展现。 + +###安装Grafana + +目前TDengine支持Grafana 5.2.4以上的版本。用户可以根据当前的操作系统,到Grafana官网下载安装包,并执行安装。下载地址如下:https://grafana.com/grafana/download + +###配置Grafana + +TDengine的Grafana插件在安装包的/usr/local/taos/connector/grafana目录下。 + +以CentOS 
7.2操作系统为例,将tdengine目录拷贝到/var/lib/grafana/plugins目录下,重新启动grafana即可。 + +###使用Grafana + +用户可以直接通过localhost:3000的网址,登录Grafana服务器(用户名/密码:admin/admin),配置TDengine数据源,如下图所示,此时可以在下拉列表中看到TDengine数据源。 + +![img](../assets/clip_image001.png) + +TDengine数据源中的HTTP配置里面的Host地址要设置为TDengine集群的中任意一台服务器的IP地址与TDengine RESTful接口的端口号(6020)。假设TDengine数据库与Grafana部署在同一机器,那么应输入:http://localhost:6020。 + +此外,还需配置登录TDengine的用户名与密码,然后点击下图中的Save&Test按钮保存。 + +![img](../assets/clip_image001-2474914.png) + + + +然后,就可以在Grafana的数据源列表中看到刚创建好的TDengine的数据源: + +![img](../assets/clip_image001-2474939.png) + + + +基于上面的步骤,就可以在创建Dashboard的时候使用TDengine数据源,如下图所示: + +![img](../assets/clip_image001-2474961.png) + + + +然后,可以点击Add Query按钮增加一个新查询。 + +在INPUT SQL输入框中输入查询SQL语句,该SQL语句的结果集应为两行多列的曲线数据,例如SELECT count(*) FROM sys.cpu WHERE ts>=from and ts<​to interval(interval)。其中,from、to和interval为TDengine插件的内置变量,表示从Grafana插件面板获取的查询范围和时间间隔。 + +ALIAS BY输入框为查询的别名,点击GENERATE SQL 按钮可以获取发送给TDengine的SQL语句。如下图所示: + +![img](../assets/clip_image001-2474987.png) + + + +关于如何使用Grafana创建相应的监测界面以及更多有关使用Grafana的信息,请参考Grafana官方的[文档](https://grafana.com/docs/)。 + +## Matlab + +MatLab可以通过安装包内提供的JDBC Driver直接连接到TDengine获取数据到本地工作空间。 + +###MatLab的JDBC接口适配 + +MatLab的适配有下面几个步骤,下面以Windows10上适配MatLab2017a为例: + +- 将TDengine安装包内的驱动程序JDBCDriver-1.0.0-dist.jar拷贝到${matlab_root}\MATLAB\R2017a\java\jar\toolbox +- 将TDengine安装包内的taos.lib文件拷贝至${matlab_ root _dir}\MATLAB\R2017a\lib\win64 +- 将新添加的驱动jar包加入MatLab的classpath。在${matlab_ root _dir}\MATLAB\R2017a\toolbox\local\classpath.txt文件中添加下面一行 + +​ `$matlabroot/java/jar/toolbox/JDBCDriver-1.0.0-dist.jar` + +- 在${user_home}\AppData\Roaming\MathWorks\MATLAB\R2017a\下添加一个文件javalibrarypath.txt, 并在该文件中添加taos.dll的路径,比如您的taos.dll是在安装时拷贝到了C:\Windows\System32下,那么就应该在javalibrarypath.txt中添加如下一行: + +​ `C:\Windows\System32` + +###在MatLab中连接TDengine获取数据 + +在成功进行了上述配置后,打开MatLab。 + +- 创建一个连接: + + `conn = database(‘db’, ‘root’, ‘taosdata’, ‘com.taosdata.jdbc.TSDBDriver’, ‘jdbc:TSDB://127.0.0.1:0/’)` + +- 执行一次查询: + + `sql0 = 
['select * from tb']`

  `data = select(conn, sql0);`

- 插入一条记录:

  `sql1 = ['insert into tb values (now, 1)']`

  `exec(conn, sql1)`

更多示例细节请参考安装包内examples\Matlab\TDengineDemo.m文件。

## R

R语言支持通过JDBC接口来连接TDengine数据库。首先需要安装R语言的JDBC包。启动R语言环境,然后执行以下命令安装R语言的JDBC支持库:

```R
install.packages('RJDBC', repos='http://cran.us.r-project.org')
```

安装完成以后,通过执行`library('RJDBC')`命令加载 _RJDBC_ 包。

然后加载TDengine的JDBC驱动:

```R
drv<-JDBC("com.taosdata.jdbc.TSDBDriver","JDBCDriver-1.0.0-dist.jar", identifier.quote="\"")
```

如果执行成功,不会出现任何错误信息。之后通过以下命令尝试连接数据库:

```R
conn<-dbConnect(drv,"jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata","root","taosdata")
```

注意将上述命令中的IP地址替换成正确的IP地址。如果没有任何错误信息,则连接数据库成功,否则需要根据错误提示调整连接命令。TDengine支持以下的 _RJDBC_ 包中函数:

- dbWriteTable(conn, "test", iris, overwrite=FALSE, append=TRUE):将数据框iris写入表test中,overwrite必须设置为FALSE,append必须设为TRUE,且数据框iris要与表test的结构一致。
- dbGetQuery(conn, "select count(*) from test"):执行查询语句
- dbSendUpdate(conn, "use db"):执行任何非查询SQL语句。例如dbSendUpdate(conn, "use db")、写入数据dbSendUpdate(conn, "insert into t1 values(now, 99)")等。
- dbReadTable(conn, "test"):读取表test中的数据
- dbDisconnect(conn):关闭连接
- dbRemoveTable(conn, "test"):删除表test

TDengine客户端暂不支持如下函数:
- dbExistsTable(conn, "test"):判断表test是否存在
- dbListTables(conn):显示连接中的所有表

diff --git a/documentation/webdocs/markdowndocs/Connections with other Tools.md b/documentation/webdocs/markdowndocs/Connections with other Tools.md
new file mode 100644
index 0000000000000000000000000000000000000000..8be05698497184aee2c41a60e32f39b636e2070e
--- /dev/null
+++ b/documentation/webdocs/markdowndocs/Connections with other Tools.md
@@ -0,0 +1,167 @@
# Connect with other tools

## Telegraf

TDengine is easy to integrate with [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/), an open-source server agent for collecting and sending metrics and events, with no extra development required.
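For reference, the output-plugin and agent settings described in the following section assemble into a *telegraf.conf* fragment roughly like the sketch below. The IP address, hostname, and credentials are placeholders to adapt to your deployment (root/taosdata are TDengine's default account mentioned later in this document), and the database `udb` must be created in TDengine beforehand:

```toml
# Hypothetical telegraf.conf fragment; adjust IP, credentials, and hostname.
[[outputs.http]]
  url = "http://192.168.1.3:6020/telegraf/udb"
  method = "POST"
  username = "root"
  password = "taosdata"
  data_format = "json"
  json_timestamp_units = "1ms"

[agent]
  hostname = "host-001"     # must be unique per collecting device
  metric_batch_size = 30    # must not exceed 50 for TDengine
```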
### Install Telegraf

At present, TDengine supports Telegraf version 1.7.4 and newer. Users can download the proper package for their operating system from the [download link].

### Configure Telegraf

Telegraf is configured by changing items in the configuration file */etc/telegraf/telegraf.conf*.

In the **output plugins** section, add an _[[outputs.http]]_ item:

- _url_: http://ip:6020/telegraf/udb, in which _ip_ is the IP address of any node in the TDengine cluster. Port 6020 is the RESTful API port used by TDengine. _udb_ is the name of the database to save the data to, which needs to be created beforehand.
- _method_: "POST"
- _username_: the username used to log in to TDengine
- _password_: the password used to log in to TDengine
- _data_format_: "json"
- _json_timestamp_units_: "1ms"

In the **agent** section:

- hostname: used to distinguish different collecting devices. It must be unique.
- metric_batch_size: 30, the maximum number of records Telegraf writes per batch. A larger value lowers the request frequency, but for TDengine the value must not exceed 50.

Please refer to the [Telegraf docs](https://docs.influxdata.com/telegraf/v1.11/) for more information.

## Grafana

[Grafana] is an open-source system for time-series data display. It is easy to integrate TDengine with Grafana to build a monitoring and alerting system. Data saved in TDengine can be fetched and shown on a Grafana dashboard.

### Install Grafana

At present, TDengine supports Grafana version 5.2.4 and newer. Users can go to the [Grafana download page] to download the proper package.

### Configure Grafana

The TDengine Grafana plugin is in the _/usr/local/taos/connector/grafana_ directory.
Taking CentOS 7.2 as an example, just copy the TDengine directory to the _/var/lib/grafana/plugins_ directory and restart Grafana.

### Use Grafana

Users can log in to the Grafana server (username/password: admin/admin) through localhost:3000 to configure TDengine as a data source. As shown in the picture below, TDengine appears as a data source option in the list:

![img](../assets/clip_image001.png)

When choosing TDengine as the data source, the Host in the HTTP configuration should be set to the IP address of any node of the TDengine cluster, and the port should be set to 6020. For example, when TDengine and Grafana are deployed on the same machine, it should be configured as _http://localhost:6020_.

Users should also set the username and password used to log in to TDengine, then click the _Save&Test_ button to save.

![img](../assets/clip_image001-2474914.png)

TDengine should then appear in the Grafana data source list:

![img](../assets/clip_image001-2474939.png)

Now users can create dashboards in Grafana using TDengine as the data source:

![img](../assets/clip_image001-2474961.png)

Click the _Add Query_ button to add a query, and input the SQL command you want to run in the _INPUT SQL_ text box. The SQL command should return curve data of two columns and multiple rows, such as _SELECT count(*) FROM sys.cpu WHERE ts>=from and ts<to interval(interval)_, in which _from_, _to_ and _interval_ are built-in variables of the TDengine plugin, representing the query time range and time interval fetched from the Grafana panel.

The _ALIAS BY_ field sets the query alias. Click _GENERATE SQL_ to see the command sent to TDengine:

![img](../assets/clip_image001-2474987.png)

Please refer to the [Grafana official document] for more information about Grafana.

## Matlab

Matlab can connect to TDengine and retrieve data into the local workspace through the TDengine JDBC Driver.

### Matlab and TDengine JDBC adaptation

Several steps are required to adapt Matlab to TDengine. Taking adapting Matlab2017a on Windows10 as an example:

1. Copy the file _JDBCDriver-1.0.0-dist.jar_ in the TDengine package to the directory _${matlab_root}\MATLAB\R2017a\java\jar\toolbox_
2. Copy the file _taos.lib_ in the TDengine package to _${matlab_root_dir}\MATLAB\R2017a\lib\win64_
3. Add the .jar package just copied to the Matlab classpath by appending the line below to the end of the file _${matlab_root_dir}\MATLAB\R2017a\toolbox\local\classpath.txt_:

   `$matlabroot/java/jar/toolbox/JDBCDriver-1.0.0-dist.jar`

4. Create a file called _javalibrarypath.txt_ in the directory _${user_home}\AppData\Roaming\MathWorks\MATLAB\R2017a\_, and add the _taos.dll_ path in the file. For example, if the file _taos.dll_ is in the directory _C:\Windows\System32_, then add the following line in the file *javalibrarypath.txt*:

   `C:\Windows\System32`

### TDengine operations in Matlab

After correct configuration, open Matlab:

- Build a connection:

  `conn = database('db', 'root', 'taosdata', 'com.taosdata.jdbc.TSDBDriver', 'jdbc:TSDB://127.0.0.1:0/')`

- Query:

  `sql0 = ['select * from tb']`

  `data = select(conn, sql0);`

- Insert a record:

  `sql1 = ['insert into tb values (now, 1)']`

  `exec(conn, sql1)`

Please refer to the file _examples\Matlab\TDengineDemo.m_ for more information.

## R

Users can use the R language to access the TDengine server through the JDBC interface. First, install the JDBC package in R:

```R
install.packages('RJDBC', repos='http://cran.us.r-project.org')
```

Then use the _library_ function to load the package:

```R
library('RJDBC')
```

Then load the TDengine JDBC driver:

```R
drv<-JDBC("com.taosdata.jdbc.TSDBDriver","JDBCDriver-1.0.0-dist.jar", identifier.quote="\"")
```

If it succeeds, no error message is displayed. Then use the following command to try a database connection:

```R
conn<-dbConnect(drv,"jdbc:TSDB://192.168.0.1:0/?user=root&password=taosdata","root","taosdata")
```

Please replace the IP address in the command above with the correct one. If no error message is shown, the connection is established successfully; otherwise, adjust the command according to the error message. TDengine supports the following functions in the _RJDBC_ package:

- _dbWriteTable(conn, "test", iris, overwrite=FALSE, append=TRUE)_: write the data in the data frame _iris_ to the table _test_ in the TDengine server. The parameter _overwrite_ must be _FALSE_, _append_ must be _TRUE_, and the schema of the data frame _iris_ must match the table _test_.
- _dbGetQuery(conn, "select count(*) from test")_: run a query command
- _dbSendUpdate(conn, "use db")_: run any non-query command, e.g. inserting data with _dbSendUpdate(conn, "insert into t1 values(now, 99)")_
- _dbReadTable(conn, "test")_: read all the data in table _test_
- _dbDisconnect(conn)_: close a connection
- _dbRemoveTable(conn, "test")_: remove table _test_

The following functions are **not supported** currently:
- _dbExistsTable(conn, "test")_: check whether table _test_ exists
- _dbListTables(conn)_: list all tables in the connection

[Telegraf]: https://www.influxdata.com/time-series-platform/telegraf/
[download link]: https://portal.influxdata.com/downloads
[Telegraf document]: https://docs.influxdata.com/telegraf/v1.11/
[Grafana]: https://grafana.com
[Grafana download page]: https://grafana.com/grafana/download
[Grafana official document]: https://grafana.com/docs/

diff --git a/documentation/webdocs/markdowndocs/Connector.md b/documentation/webdocs/markdowndocs/Connector.md
new file mode 100644
index 0000000000000000000000000000000000000000..91ac6e58c7d2871a9b4182afb4cba72c27402a39
--- /dev/null
+++ b/documentation/webdocs/markdowndocs/Connector.md
@@ -0,0 +1,508 @@
# TDengine connectors

TDengine provides many connectors for development, including C/C++, Java, Python, RESTful, Go, Node.js, etc.

## C/C++ API

C/C++ APIs are similar to the MySQL APIs. Applications should include the TDengine header file _taos.h_ to use the C/C++ APIs, by adding the following line in the code:
```C
#include <taos.h>
```
Make sure the TDengine library _libtaos.so_ is installed, and use the _-ltaos_ option to link the library when compiling. The return values of all APIs are _-1_ or _NULL_ for failure.
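For orientation, the sync APIs documented in the next section combine into a typical connect-query-fetch-close sequence like the sketch below. This is a hypothetical example: it assumes a TDengine server reachable on localhost with the default root/taosdata account and an existing database _db_ containing a table _tb_.

```c
#include <stdio.h>
#include <taos.h>   /* compile and link with: -ltaos */

int main() {
    /* Hypothetical example: assumes a local server, the default account,
     * and an existing table db.tb. */
    TAOS *taos = taos_connect("127.0.0.1", "root", "taosdata", "db", 0);
    if (taos == NULL) {
        printf("failed to connect\n");
        return 1;
    }
    if (taos_query(taos, "select * from tb") < 0) {
        printf("query failed: %s\n", taos_errstr(taos));
        taos_close(taos);
        return 1;
    }
    TAOS_RES *res = taos_use_result(taos);
    int nfields = taos_num_fields(res);
    TAOS_ROW row;
    while ((row = taos_fetch_row(res)) != NULL) {
        /* row is an array of nfields column values */
        printf("fetched a row with %d fields\n", nfields);
    }
    taos_free_result(res);   /* always free the result set */
    taos_close(taos);
    return 0;
}
```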
### C/C++ sync API

Sync APIs are those that wait for a response from the server after sending a request. TDengine has the following sync APIs:

- `TAOS *taos_connect(char *ip, char *user, char *pass, char *db, int port)`

  Open a connection to a TDengine server. The parameters are _ip_ (IP address of the server), _user_ (username to login), _pass_ (password to login), _db_ (database to use after connection) and _port_ (port number to connect). The parameter _db_ can be NULL for no database to use after connection; otherwise, the database must exist before connection or a connection error is reported. The handle returned by this API should be kept for future use.

- `void taos_close(TAOS *taos)`

  Close a connection to a TDengine server using the handle returned by _taos_connect_.

- `int taos_query(TAOS *taos, char *sqlstr)`

  Run a SQL command, which can be DQL or DML. The parameter _taos_ is the handle returned by _taos_connect_. A return value of _-1_ means failure.

- `TAOS_RES *taos_use_result(TAOS *taos)`

  Retrieve the result set after running _taos_query_. The handle returned should be kept for future fetches.

- `TAOS_ROW taos_fetch_row(TAOS_RES *res)`

  Fetch a row of results through _res_, the handle returned by _taos_use_result_.

- `int taos_num_fields(TAOS_RES *res)`

  Get the number of fields in the result set.

- `TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)`

  Fetch the description of each field. The description includes the data type, field name, and bytes. The API should be used together with _taos_num_fields_ to parse a row of data.

- `void taos_free_result(TAOS_RES *res)`

  Free the resources used by a result set. Make sure to call this API after fetching results, otherwise a memory leak will occur.

- `void taos_init()`

  Initialize the environment variables used by the TDengine client. Calling it is not strictly necessary since it is called in _taos_connect_ by default.

- `char *taos_errstr(TAOS *taos)`

  Return the reason for the last API call failure as a string.

- `int *taos_errno(TAOS *taos)`

  Return the error code of the last API call failure as an integer.

- `int taos_options(TSDB_OPTION option, const void * arg, ...)`

  Set client options. The parameter _option_ supports values of _TSDB_OPTION_CONFIGDIR_ (configuration directory), _TSDB_OPTION_SHELL_ACTIVITY_TIMER_, _TSDB_OPTION_LOCALE_ (client locale) and _TSDB_OPTION_TIMEZONE_ (client timezone).

The 12 APIs above are the most important and most frequently used. Users can check the _taos.h_ file for more API information.

**Note**: A connection to a TDengine server is not multi-thread safe, so a connection can only be used by one thread.

### C/C++ async API

In addition to the sync APIs, TDengine also provides async APIs, which are more efficient. Async APIs return right away without waiting for a response from the server, allowing the application to continue with other tasks without blocking. They are especially useful over a slow or unstable network.

All async APIs require callback functions with the following format:
```C
void fp(void *param, TAOS_RES * res, TYPE param3)
```
The first two parameters of the callback function are the same for all async APIs; the third differs between APIs. Generally, the first parameter is the user-supplied context passed to the API, and the second parameter is a result handle.

- `void taos_query_a(TAOS *taos, char *sqlstr, void (*fp)(void *param, TAOS_RES *, int code), void *param);`

  The async query interface. _taos_ is the handle returned by the _taos_connect_ interface. _sqlstr_ is the SQL command to run. _fp_ is the callback function. _param_ is the parameter passed to the callback function.
  The third parameter of the callback function, _code_, is _0_ for success or a negative number for failure (call taos_errstr to get the error as a string). Applications mainly work with the second parameter, the returned result set.

- `void taos_fetch_rows_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, int numOfRows), void *param);`

  The async API to fetch a batch of rows, which should only be used together with a _taos_query_a_ call. The parameter _res_ is the result handle returned by _taos_query_a_. _fp_ is the callback function. _param_ is a user-defined structure to pass to _fp_. The parameter _numOfRows_ is the number of result rows in the current fetch cycle. In the callback function, applications should call _taos_fetch_row_ to get records from the result handle. After getting a batch of results, applications should continue to call _taos_fetch_rows_a_ to handle the next batch, until _numOfRows_ is _0_ (no more data to fetch) or _-1_ (failure).

- `void taos_fetch_row_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row), void *param);`

  The async API to fetch a single result row. _res_ is the result handle. _fp_ is the callback function. _param_ is a user-defined structure to pass to _fp_. The third parameter of the callback function is a single result row, which differs from that of the _taos_fetch_rows_a_ API. With this API, it is not necessary to call _taos_fetch_row_ to retrieve each result row, which is handier than _taos_fetch_rows_a_ but less efficient.

Applications may apply operations on multiple tables. However, **it is important to make sure the operations on the same table are serialized**: after sending an insert request for a table to the server, no operations on that table are allowed before a response is received.

### C/C++ continuous query interface

TDengine provides time-driven continuous query APIs, which run queries periodically in the background. There are only two APIs:

- `TAOS_STREAM *taos_open_stream(TAOS *taos, char *sqlstr, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row), int64_t stime, void *param, void (*callback)(void *));`

  The API is used to create a continuous query.
  * _taos_: the connection handle returned by _taos_connect_.
  * _sqlstr_: the SQL string to run. Only query commands are allowed.
  * _fp_: the callback function to run after a query
  * _param_: a parameter passed to _fp_
  * _stime_: the start time of the stream, in epoch milliseconds. If _0_ is given, the start time is set to the current time.
  * _callback_: a callback function to run when the continuous query stops automatically.

  The API returns a handle on success; otherwise, a NULL pointer is returned.

- `void taos_close_stream (TAOS_STREAM *tstr)`

  Close the continuous query using the handle returned by _taos_open_stream_. Make sure to call this API when the continuous query is no longer needed.

### C/C++ subscription API

For the time being, TDengine supports subscription on one table. It is implemented through periodic pulling from a TDengine server.

- `TAOS_SUB *taos_subscribe(char *host, char *user, char *pass, char *db, char *table, long time, int mseconds)`
  The API starts a subscription session and returns a handle. The parameters required are _host_ (IP address of a TDengine server), _user_ (username), _pass_ (password), _db_ (database to use), _table_ (name of the table to subscribe to), _time_ (start time of the subscription, 0 for now) and _mseconds_ (pulling period). If the subscription session fails to open, a _NULL_ pointer is returned.

- `TAOS_ROW taos_consume(TAOS_SUB *tsub)`
  The API to get new data from a TDengine server. It should be called in a loop. The parameter _tsub_ is the handle returned by _taos_subscribe_. If new data arrives, the API returns a row of the result; otherwise, the API blocks until new data arrives. If a _NULL_ pointer is returned, an error occurred.

- `void taos_unsubscribe(TAOS_SUB *tsub)`
  Stop a subscription session using the handle returned by _taos_subscribe_.

- `int taos_num_subfields(TAOS_SUB *tsub)`
  Get the number of fields in a row.

- `TAOS_FIELD *taos_fetch_subfields(TAOS_RES *res)`
  Get the description of each column.

## Java Connector

### JDBC Interface

TDengine provides a JDBC driver `taos-jdbcdriver-x.x.x.jar` for Enterprise Java developers. TDengine's JDBC Driver is implemented as a subset of the standard JDBC 3.0 Specification and supports the most common Java development frameworks. The driver is currently not published to online dependency repositories such as the Maven Central Repository, so users should manually add the `.jar` file to their local dependency repository.

Please note the JDBC driver itself relies on a native library written in C. On a Linux OS, the driver relies on a `libtaos.so` native library, where .so stands for "Shared Object". After the successful installation of TDengine on Linux, `libtaos.so` should be automatically copied to `/usr/local/lib/taos` and added to the system's default search path. On a Windows OS, the driver relies on a `taos.dll` native library, where .dll stands for "Dynamic Link Library". After the successful installation of the TDengine client on Windows, the `taos-jdbcdriver.jar` file can be found in `C:/TDengine/driver/JDBC`; the `taos.dll` file can be found in `C:/TDengine/driver/C` and should have been automatically copied to the system's search path `C:/Windows/System32`.

Developers can refer to Oracle's official JDBC API documentation for detailed usage of classes and methods. However, there are some differences in connection configuration and supported methods between the TDengine driver implementation and those of traditional relational databases.
For database connections, TDengine's JDBC driver has the following configurable parameters in the JDBC URL. The standard format of a TDengine JDBC URL is:

`jdbc:TSDB://{host_ip}:{port}/{database_name}?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`

where `{}` marks the required parameters and `[]` marks the optional ones. The usage of each parameter is pretty straightforward:

* user - login user name for TDengine; by default, it's `root`
* password - login password; by default, it's `taosdata`
* charset - the client-side charset; by default, it's the operating system's charset
* cfgdir - the directory of the TDengine client configuration file; by default it's `/etc/taos` on Linux and `C:\TDengine\cfg` on Windows
* locale - the language environment of the TDengine client; by default, it's the operating system's locale
* timezone - the timezone of the TDengine client; by default, it's the operating system's timezone

All parameters can be configured at connection time using the java.sql.DriverManager class, for example:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import com.taosdata.jdbc.TSDBDriver;

public Connection getConn() throws Exception{
    Class.forName("com.taosdata.jdbc.TSDBDriver");
    String jdbcUrl = "jdbc:TSDB://127.0.0.1:0/db?user=root&password=taosdata";
    Properties connProps = new Properties();
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_USER, "root");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdata");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_CONFIG_DIR, "/etc/taos");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIMEZONE, "UTC-8");
    Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
    return conn;
}
```
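With a connection obtained as above, queries run through the standard JDBC `Statement`/`ResultSet` interfaces. The sketch below is a hypothetical usage example: the table name `db.tb` and its leading timestamp column are assumptions, and `getConn()` refers to a helper like the one shown above.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryDemo {
    // Hypothetical sketch: assumes an existing table db.tb whose first
    // column is the timestamp, and a connection from a getConn() helper.
    public static void printRows(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from db.tb")) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                // the first column of a TDengine table is the timestamp
                System.out.println(rs.getTimestamp(1) + " (" + cols + " columns)");
            }
        } // try-with-resources closes the result set and statement
    }
}
```

Note that, as explained in the list of differences, each connection should keep only one result set open at a time, so close the previous `ResultSet` before issuing another query.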
+Except `cfgdir`, all the parameters listed above can also be configured in the configuration file. The properties specified when calling DriverManager.getConnection() has the highest priority among all configuration methods. The JDBC URL has the second-highest priority, and the configuration file has the lowest priority. The explicitly configured parameters in a method with higher priorities always overwrite that same parameter configured in methods with lower priorities. For example, if `charset` is explicitly configured as "UTF-8" in the JDBC URL and "GKB" in the `taos.cfg` file, then "UTF-8" will be used. + +Although the JDBC driver is implemented following the JDBC standard as much as possible, there are major differences between TDengine and traditional databases in terms of data models that lead to the differences in the driver implementation. Here is a list of head-ups for developers who have plenty of experience on traditional databases but little on TDengine: + +* TDengine does NOT support updating or deleting a specific record, which leads to some unsupported methods in the JDBC driver +* TDengine currently does not support `join` or `union` operations, and thus, is lack of support for associated methods in the JDBC driver +* TDengine supports batch insertions which are controlled at the level of SQL statement writing instead of API calls +* TDengine doesn't support nested queries and neither does the JDBC driver. Thus for each established connection to TDengine, there should be only one open result set associated with it + +All the error codes and error messages can be found in `TSDBError.java` . For a more detailed coding example, please refer to the demo project `JDBCDemo` in TDengine's code examples. + +## Python Connector + +### Install TDengine Python client + +Users can find python client packages in our source code directory _src/connector/python_. There are two directories corresponding two python versions. 
Please choose the correct package to install. Users can use _pip_ command to install: + +```cmd +pip install src/connector/python/python2/ +``` + +or + +``` +pip install src/connector/python/python3/ +``` + +If _pip_ command is not installed on the system, users can choose to install pip or just copy the _taos_ directory in the python client directory to the application directory to use. + +### Python client interfaces + +To use TDengine Python client, import TDengine module at first: + +```python +import taos +``` + +Users can get module information from Python help interface or refer to our [python code example](). We list the main classes and methods below: + +- _TDengineConnection_ class + + Run `help(taos.TDengineConnection)` in python terminal for details. + +- _TDengineCursor_ class + + Run `help(taos.TDengineCursor)` in python terminal for details. + +- connect method + + Open a connection. Run `help(taos.connect)` in python terminal for details. + +## RESTful Connector + +TDengine also provides RESTful API to satisfy developing on different platforms. Unlike other databases, TDengine RESTful API applies operations to the database through the SQL command in the body of HTTP POST request. What users are required to provide is just a URL. + + +For the time being, TDengine RESTful API uses a _\_ generated from username and password for identification. Safer identification methods will be provided in the future. + + +### HTTP URL encoding + +To use TDengine RESTful API, the URL should have the following encoding format: +``` +http://:/rest/sql +``` +- _ip_: IP address of any node in a TDengine cluster +- _PORT_: TDengine HTTP service port. It is 6020 by default. + +For example, the URL encoding _http://192.168.0.1:6020/rest/sql_ used to send HTTP request to a TDengine server with IP address as 192.168.0.1. + +It is required to add a token in an HTTP request header for identification. 
+ +``` +Authorization: Basic <TOKEN> +``` + +The HTTP request body contains the SQL command to run. If the SQL command contains a table name, it should also provide the database name it belongs to, in the form of `<db_name>.<table_name>`. Otherwise, an error code is returned. + +For example, use the _curl_ command to send an HTTP request: + +``` +curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql +``` + +or use + +``` +curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql +``` + +where `TOKEN` is the encrypted string of `{username}:{password}` using the Base64 algorithm, e.g. `root:taosdata` will be encoded as `cm9vdDp0YW9zZGF0YQ==` + +### HTTP response + +The HTTP response is in JSON format as below: + +``` +{ + "status": "succ", + "head": ["column1","column2", …], + "data": [ + ["2017-12-12 23:44:25.730", 1], + ["2017-12-12 22:44:25.728", 4] + ], + "rows": 2 +} +``` +Specifically, +- _status_: the result of the operation, success or failure +- _head_: description of the returned result columns +- _data_: the returned data array. If no data is returned, only an _affected_rows_ field is listed +- _rows_: the number of rows returned + +### Example + +- Use the _curl_ command to query all the data in table _t1_ of database _demo_: + + `curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6020/rest/sql` + +The return value is like: + +``` +{ + "status": "succ", + "head": ["column1","column2","column3"], + "data": [ + ["2017-12-12 23:44:25.730", 1, 2.3], + ["2017-12-12 22:44:25.728", 4, 5.6] + ], + "rows": 2 +} +``` + +- Use HTTP to create a database: + + `curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6020/rest/sql` + + The return value should be: + +``` +{ + "status": "succ", + "head": ["affected_rows"], + "data": [[1]], + "rows": 1 +} +``` + +## Go Connector + +TDengine also provides a Go client package named _taosSql_ for users to access TDengine with Go.
The package is in _/usr/local/taos/connector/go/src/taosSql_ by default if you installed TDengine. You can copy the directory _/usr/local/taos/connector/go/src/taosSql_ to the _src_ directory of your project and import the package in the source code for use. + +```Go +import ( + "database/sql" + _ "taosSql" +) +``` + +The _taosSql_ package is in _cgo_ form, which calls the TDengine C/C++ sync interfaces, so a connection can only be used by one thread at a time. Users can open multiple connections for multi-thread operations. + +Please refer to the demo code in the package for more information. + +## Node.js Connector + +TDengine also provides a node.js connector package that is installable through [npm](https://www.npmjs.com/). The package is also in our source code at *src/connector/nodejs/*. The following instructions are also available [here](https://github.com/taosdata/tdengine/tree/master/src/connector/nodejs) + +To get started, just type in the following to install the connector through [npm](https://www.npmjs.com/). + +```cmd +npm install td-connector +``` + +It is highly suggested you use npm. If you don't have it installed, you can also just copy the nodejs folder from *src/connector/nodejs/* into your node project folder. + +To interact with TDengine, we make use of the [node-gyp](https://github.com/nodejs/node-gyp) library. To install, you will need to install the following depending on your platform (the following instructions are quoted from node-gyp) + +### On Unix + +- `python` (`v2.7` recommended, `v3.x.x` is **not** supported) +- `make` +- A proper C/C++ compiler toolchain, like [GCC](https://gcc.gnu.org) + +### On macOS + +- `python` (`v2.7` recommended, `v3.x.x` is **not** supported) (already installed on macOS) + +- Xcode + + - You also need to install the + + ``` + Command Line Tools + ``` + + via Xcode.
You can find this under the menu + + ``` + Xcode -> Preferences -> Locations + ``` + + (or by running + + ``` + xcode-select --install + ``` + + in your Terminal) + + - This step will install `gcc` and the related toolchain containing `make` + +### On Windows + +#### Option 1 + +Install all the required tools and configurations using Microsoft's [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools) using `npm install --global --production windows-build-tools` from an elevated PowerShell or CMD.exe (run as Administrator). + +#### Option 2 + +Install tools and configuration manually: + +- Install Visual C++ Build Environment: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) (using "Visual C++ build tools" workload) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) (using the "Desktop development with C++" workload) +- Install [Python 2.7](https://www.python.org/downloads/) (`v3.x.x` is not supported), and run `npm config set python python2.7` (or see below for further instructions on specifying the proper Python version and path.) +- Launch cmd, `npm config set msvs_version 2017` + +If the above steps didn't work for you, please visit [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules) for additional tips. + +To target native ARM64 Node.js on Windows 10 on ARM, add the components "Visual C++ compilers and libraries for ARM64" and "Visual C++ ATL for ARM64". + +### Usage + +The following is a short summary of the basic usage of the connector, the full api and documentation can be found [here](http://docs.taosdata.com/node) + +#### Connection + +To use the connector, first require the library ```td-connector```. 
Running the function ```taos.connect``` with the connection options passed in as an object will return a TDengine connection object. The required connection option is ```host```; other options, if not set, will take the default values shown below. + +A cursor also needs to be initialized in order to interact with TDengine from Node.js. + +```javascript +const taos = require('td-connector'); +var conn = taos.connect({host:"127.0.0.1", user:"root", password:"taosdata", config:"/etc/taos",port:0}) +var cursor = conn.cursor(); // Initializing a new cursor +``` + +To close a connection, run + +```javascript +conn.close(); +``` + +#### Queries + +We can now start executing simple queries through the ```cursor.query``` function, which returns a TaosQuery object. + +```javascript +var query = cursor.query('show databases;') +``` + +We can get the results of the queries through the ```query.execute()``` function, which returns a promise that resolves with a TaosResult object, which contains the raw data and additional functionalities such as pretty printing the results. + +```javascript +var promise = query.execute(); +promise.then(function(result) { + result.pretty(); //logs the results to the console as if you were in the taos shell +}); +``` + +You can also query by binding parameters to a query by filling in the question marks in a string like so. The query will automatically parse what was bound and convert it to the proper format for use with TDengine. + +```javascript +var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?;').bind(new Date(), 5); +query.execute().then(function(result) { + result.pretty(); +}) +``` + +The TaosQuery object can also be immediately executed upon creation by passing true as the second argument, returning a promise instead of a TaosQuery.
+ +```javascript +var promise = cursor.query('select * from meterinfo.meters where v1 = 30;', true) +promise.then(function(result) { + result.pretty(); +}) +``` +#### Async functionality + +Async queries can be performed using the same functions, such as `cursor.execute` and `cursor.query`, but with `_a` appended to them. + +Say you want to execute two async queries on two separate tables. Using `cursor.query_a`, you can get a TaosQuery object which, upon executing with the `execute_a` function, returns a promise that resolves with a TaosResult object. + +```javascript +var promise1 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter1;').execute_a() +var promise2 = cursor.query_a('select count(*), avg(v1), avg(v2) from meter2;').execute_a(); +promise1.then(function(result) { + result.pretty(); +}) +promise2.then(function(result) { + result.pretty(); +}) +``` + + +### Example + +An example of using the NodeJS connector to create a table with weather data and create and execute queries can be found [here](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example.js) (The preferred method for using the connector) + +An example of using the NodeJS connector to achieve the same things, but without the object wrappers around the returned data, can be found [here](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example-raw.js) + diff --git a/documentation/webdocs/markdowndocs/Contributor_License_Agreement.md b/documentation/webdocs/markdowndocs/Contributor_License_Agreement.md new file mode 100644 index 0000000000000000000000000000000000000000..8c158da4c5958384064b9993de6643be86b94fee --- /dev/null +++ b/documentation/webdocs/markdowndocs/Contributor_License_Agreement.md @@ -0,0 +1,35 @@ +# TaosData Contributor License Agreement + +This TaosData Contributor License Agreement (CLA) applies to any contribution you make to any TaosData projects.
If you are representing your employing organization to sign this agreement, please warrant that you have the authority to grant the agreement. + +## Terms + +**"TaosData"**, **"we"**, **"our"** and **"us"** means TaosData, inc. + +**"You"** and **"your"** means you, or the organization on whose behalf you sign this agreement. + +**"Contribution"** means any original work you, or the organization you represent, submit to TaosData for any project in any manner. + +## Copyright License + +All rights of your Contribution submitted to TaosData in any manner are granted to TaosData and recipients of software distributed by TaosData. You waive any rights that may affect our ownership of the copyright and grant to us a perpetual, worldwide, transferable, non-exclusive, no-charge, royalty-free, irrevocable, and sublicensable license to use, reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Contributions and any derivative work created based on a Contribution. + +## Patent License + +With respect to any patents you own or that you can license without payment to any third party, you grant to us and to any recipient of software distributed by us, a perpetual, worldwide, transferable, non-exclusive, no-charge, royalty-free, irrevocable patent license to make, have made, use, sell, offer to sell, import, and otherwise transfer the Contribution in whole or in part, alone or included in any product under any patent you own, or license from a third party, that is necessarily infringed by the Contribution or by combination of the Contribution with any Work. + +## Your Representations and Warranties + +You represent and warrant that: + +- the Contribution you submit is an original work that you can legally grant the rights set out in this agreement. + +- the Contribution you submit and the licenses you grant do not, and will not, infringe the rights of any third party.
+ +- you are not aware of any pending or threatened claims, suits, actions, or charges pertaining to the contributions. You also warrant to notify TaosData immediately if you become aware of any such actual or potential claims, suits, actions, allegations or charges. + +## Support + +You are not obligated to support your Contribution unless you volunteer to provide support. If you want, you may provide support for a fee. + +**I agree and accept on behalf of myself and on behalf of my organization:** \ No newline at end of file diff --git a/documentation/webdocs/markdowndocs/Data model and architecture-ch.md b/documentation/webdocs/markdowndocs/Data model and architecture-ch.md new file mode 100644 index 0000000000000000000000000000000000000000..f17b015172095be051d6fe78c47db458ca2c797f --- /dev/null +++ b/documentation/webdocs/markdowndocs/Data model and architecture-ch.md @@ -0,0 +1,100 @@ +# 数据模型和设计 + +## 数据模型 + +### 物联网典型场景 + +在典型的物联网、车联网、运维监测场景中,往往有多种不同类型的数据采集设备,采集一个到多个不同的物理量。而同一种采集设备类型,往往又有多个具体的采集设备分布在不同的地点。大数据处理系统就是要将各种采集的数据汇总,然后进行计算和分析。对于同一类设备,其采集的数据类似如下的表格: + +| Device ID | Time Stamp | Value 1 | Value 2 | Value 3 | Tag 1 | Tag 2 | +| :-------: | :-----------: | :-----: | :-----: | :-----: | :---: | :---: | +| D1001 | 1538548685000 | 10.3 | 219 | 0.31 | Red | Tesla | +| D1002 | 1538548684000 | 10.2 | 220 | 0.23 | Blue | BMW | +| D1003 | 1538548686500 | 11.5 | 221 | 0.35 | Black | Honda | +| D1004 | 1538548685500 | 13.4 | 223 | 0.29 | Red | Volvo | +| D1001 | 1538548695000 | 12.6 | 218 | 0.33 | Red | Tesla | +| D1004 | 1538548696600 | 11.8 | 221 | 0.28 | Black | Honda | + +每一条记录都有设备ID,时间戳,采集的物理量,还有与每个设备相关的静态标签。每个设备是受外界的触发,或按照设定的周期采集数据。采集的数据点是时序的,是一个数据流。 + +### 数据特征 + +除时序特征外,仔细研究发现,物联网、车联网、运维监测类数据还具有很多其他明显的特征。 + +1. 数据是结构化的; +2. 数据极少有更新或删除操作; +3. 无需传统数据库的事务处理; +4. 相对互联网应用,写多读少; +5. 流量平稳,根据设备数量和采集频次,可以预测出来; +6. 用户关注的是一段时间的趋势,而不是某一特定时间点的值; +7. 数据是有保留期限的; +8. 数据的查询分析一定是基于时间段和地理区域的; +9. 除存储查询外,还往往需要各种统计和实时计算操作; +10.
数据量巨大,一天采集的数据就可以超过100亿条。 + +充分利用上述特征,TDengine采取了一种特殊的优化的存储和计算设计来处理时序数据,能将系统处理能力显著提高。 + +### 关系型数据库模型 + +因为采集的数据一般是结构化数据,而且为降低学习门槛,TDengine采用传统的关系型数据库模型管理数据。因此用户需要先创建库,然后创建表,之后才能插入或查询数据。 + +### 一个设备一张表 + +为充分利用其数据的时序性和其他数据特点,TDengine要求**对每个数据采集点单独建表**(比如有一千万个智能电表,就需创建一千万张表,上述表格中的D1001, D1002, D1003, D1004都需单独建表),用来存储这个采集点所采集的时序数据。这种设计能保证一个采集点的数据在存储介质上是一块一块连续的,大幅减少随机读取操作,成数量级的提升读取和查询速度。而且由于不同数据采集设备产生数据的过程完全独立,每个设备只产生属于自己的数据,一张表也就只有一个写入者。这样每个表就可以采用无锁方式来写,写入速度就能大幅提升。同时,对于一个数据采集点而言,其产生的数据是时序的,因此写的操作可用追加的方式实现,进一步大幅提高数据写入速度。 + +### 数据建模最佳实践 + +**表(Table)**:TDengine 建议用数据采集点的名字(如上表中的D1001)来做表名。每个数据采集点可能同时采集多个物理量(如上表中的value1, value2, value3),每个物理量对应一张表中的一列,数据类型可以是整型、浮点型、字符串等。除此之外,表的第一列必须是时间戳,即数据类型为 timestamp。有的设备有多组采集量,每一组的采集频次是不一样的,这时需要对同一个设备建多张表。对采集的数据,TDengine将自动按照时间戳建立索引,但对采集的物理量不建任何索引。数据是用列式存储方式保存。 + +**超级表(Super Table)**:对于同一类型的采集点,为保证Schema的一致性,而且为便于聚合统计操作,可以先定义超级表STable(详见第10章),然后再定义表。每个采集点往往还有静态标签信息(如上表中的Tag 1, Tag 2),比如设备型号、颜色等,这些静态信息不会保存在存储采集数据的数据节点中,而是通过超级表保存在元数据节点中。这些静态标签信息将作为过滤条件,用于采集点之间的数据聚合统计操作。 + +**库(DataBase)**:不同的数据采集点往往具有不同的数据特征,包括数据采集频率高低,数据保留时间长短,备份数目,单个字段大小等等。为让各种场景下TDengine都能最大效率的工作,TDengine建议将不同数据特征的表创建在不同的库里。创建一个库时,除SQL标准的选项外,应用还可以指定保留时长、数据备份的份数、cache大小、文件块大小、是否压缩等多种参数(详见第19章)。 + +**Schemaless vs Schema**: 与NoSQL的各种引擎相比,由于应用需要定义schema,插入数据的灵活性降低。但对于物联网、金融这些典型的时序数据场景,schema会很少变更,因此这个灵活性不够的设计就不成问题。相反,TDengine采用结构化数据来进行处理的方式将让查询、分析的性能成数量级的提升。 + +TDengine对库的数量、超级表的数量以及表的数量没有做任何限制,而且其多少不会对性能产生影响,应用按照自己的场景创建即可。 + +## 主要模块 +如图所示,TDengine服务主要包含两大模块:**管理节点模块(MGMT)** 和 **数据节点模块(DNODE)**。整个TDengine还包含**客户端模块**。 + +
+
图 1 TDengine架构示意图
+ +### 管理节点模块 +管理节点模块主要负责元数据的存储和查询等工作,其中包括用户信息的管理、数据库和表信息的创建、删除以及查询等。应用连接TDengine时会首先连接到管理节点。在创建/删除数据库和表时,请求也会首先发送到管理节点模块。由管理节点模块首先创建/删除元数据信息,然后发送请求到数据节点模块进行分配/删除所需要的资源。在数据写入和查询时,应用同样会首先访问管理节点模块,获取元数据信息。然后根据元数据管理信息访问数据节点模块。 + +### 数据节点模块 +写入数据的存储和查询工作是由数据节点模块负责。 为了更高效地利用资源,以及方便将来进行水平扩展,TDengine内部对数据节点进行了虚拟化,引入了虚拟节点(virtual node, 简称vnode)的概念,作为存储、资源分配以及数据备份的单元。如图2所示,在一个dnode上,通过虚拟化,可以将该dnode视为多个虚拟节点的集合。 + +创建一个库时,系统会自动分配vnode。每个vnode存储一定数量的表中的数据,但一个表只会存在于一个vnode里,不会跨vnode。一个vnode只会属于一个库,但一个库会有一到多个vnode。不同的vnode之间资源互不共享。每个虚拟节点都有自己的缓存,在硬盘上也有自己的存储目录。而同一vnode内部无论是缓存还是硬盘的存储都是共享的。通过虚拟化,TDengine可以将dnode上有限的物理资源合理地分配给不同的vnode,大大提高资源的利用率和并发度。一台物理机器上的虚拟节点个数可以根据其硬件资源进行配置。 + +
+
图 2 TDengine虚拟化
+ +### 客户端模块 +TDengine客户端模块主要负责将应用传来的请求(SQL语句)进行解析,转化为内部结构体再发送到服务端。TDengine的各种接口都是基于TDengine的客户端模块进行开发的。客户端模块与管理模块使用TCP/UDP通讯,端口号由系统参数mgmtShellPort配置, 缺省值为6030。客户端与数据节点模块也是使用TCP/UDP通讯,端口号由系统参数vnodeShellPort配置, 缺省值为6035。两个端口号均可通过系统配置文件taos.cfg进行个性化设置。 + +## 写入流程 +TDengine的完整写入流程如图3所示。为了保证写入数据的安全性和完整性,TDengine在写入数据时采用[预写日志算法]。客户端发来的数据在经过验证以后,首先会写入预写日志中,以保证TDengine能够在断电等因素导致的服务重启时从预写日志中恢复数据,避免数据的丢失。写入预写日志后,数据会被写到对应的vnode的缓存中。随后,服务端会发送确认信息给客户端表示写入成功。TDengine中存在两种机制可以促使缓存中的数据写入到硬盘上进行持久化存储: + +
+
图 3 TDengine写入流程
+ +1. **时间驱动的落盘**:TDengine服务会定时将vnode缓存中的数据写入到硬盘上,默认为一个小时落一次盘。落盘间隔可在配置文件taos.cfg中通过参数commitTime配置。 +2. **数据驱动的落盘**:当vnode中缓存的数据达到一定规模时,为了不阻塞后续数据的写入,TDengine也会拉起落盘线程将缓存中的数据清空。数据驱动的落盘会刷新定时落盘的时间。 + +TDengine在数据落盘时会打开新的预写日志文件,在落盘后则会删除老的预写日志文件,避免日志文件无限制的增长。TDengine对缓存按照先进先出的原则进行管理,以保证每个表的最新数据都在缓存中。 + +## 数据存储 + +TDengine将所有数据存储在/var/lib/taos/目录下,您可以通过系统配置参数dataDir进行个性化配置。 + +TDengine中的元数据信息包括TDengine中的数据库、表、用户等信息。每个超级表、以及每个表的标签数据也存放在这里。为提高访问速度,元数据全部有缓存。 + +TDengine中写入的数据在硬盘上是按时间维度进行分片的。同一个vnode中的表在同一时间范围内的数据都存放在同一文件组中。这一数据分片方式可以大大简化数据在时间维度的查询,提高查询速度。在默认配置下,硬盘上的每个数据文件存放10天数据。用户可根据需要修改系统配置参数daysPerFile进行个性化配置。 + +表中的数据都有保存时间,一旦超过保存时间(缺省是3650天),数据将被系统自动删除。您可以通过系统配置参数daysToKeep进行个性化设置。 + +数据在文件中是按块存储的。每个数据块只包含一张表的数据,且数据是按照时间主键递增排列的。数据在数据块中按列存储,这样使得同列的数据存放在一起,对于不同的数据类型还采用不同的压缩方法,大大提高压缩的比例,节省存储空间。 + +数据文件总共有三类文件,一类是data文件,它存放了真实的数据块,该文件只进行追加操作;一类文件是head文件, 它存放了其对应的data文件中数据块的索引信息;第三类是last文件,专门存储最后写入的数据,每次落盘操作时,这部分数据会与内存里的数据合并,并决定是否写入data文件还是last文件。 \ No newline at end of file diff --git a/documentation/webdocs/markdowndocs/Data model and architecture.md b/documentation/webdocs/markdowndocs/Data model and architecture.md new file mode 100644 index 0000000000000000000000000000000000000000..3a91f1e8dc24314f66acf906e69d3dcd0df8e370 --- /dev/null +++ b/documentation/webdocs/markdowndocs/Data model and architecture.md @@ -0,0 +1,101 @@ +# Data Model and Architecture +## Data Model + +### A Typical IoT Scenario + +In a typical IoT scenario, there are many types of devices. Each device is collecting one or multiple metrics. 
For a specific type of device, the collected data looks like the table below: + +| Device ID | Time Stamp | Value 1 | Value 2 | Value 3 | Tag 1 | Tag 2 | +| :-------: | :-----------: | :-----: | :-----: | :-----: | :---: | :---: | +| D1001 | 1538548685000 | 10.3 | 219 | 0.31 | Red | Tesla | +| D1002 | 1538548684000 | 10.2 | 220 | 0.23 | Blue | BMW | +| D1003 | 1538548686500 | 11.5 | 221 | 0.35 | Black | Honda | +| D1004 | 1538548685500 | 13.4 | 223 | 0.29 | Red | Volvo | +| D1001 | 1538548695000 | 12.6 | 218 | 0.33 | Red | Tesla | +| D1004 | 1538548696600 | 11.8 | 221 | 0.28 | Black | Honda | + +Each data record has a device ID, a timestamp, the collected metrics, and the static tags associated with the device. Each device generates a data record on a pre-defined timer or when triggered by an event. It is a sequence of data points, like a stream. + +### Data Characteristics + +Being a series of data points over time, data points generated by devices, sensors, servers, or applications have strong common characteristics. + +1. metrics are always structured data; +2. there are rarely delete/update operations on collected data; +3. there is only one single data source for one device or sensor; +4. the ratio of read/write is much lower than in a typical Internet application; +5. the user pays attention to the trend of data, not the specific value at a specific time; +6. there is always a data retention policy; +7. the data query is always executed in a given time range and a subset of devices; +8. real-time aggregation or analytics is mandatory; +9. traffic is predictable based on the number of devices and sampling frequency; +10. data volume is huge, a system may generate 10 billion data points in a day. + +By utilizing the above characteristics, TDengine designs its storage and computing engine in a way specially optimized for time-series data, which improves system efficiency significantly.
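Characteristics 9 and 10 make capacity planning straightforward. The sketch below, with hypothetical device counts and sampling intervals (not from any real deployment), shows how daily data volume follows directly from the number of devices and their sampling frequency:

```python
# Daily data volume follows directly from device count and sampling interval.
# The numbers below are made up, chosen only to illustrate the scale involved.
def daily_data_points(num_devices, sample_interval_s):
    """Data points generated per day: one point per device per interval."""
    seconds_per_day = 24 * 60 * 60
    return num_devices * (seconds_per_day // sample_interval_s)

# One million devices, each reporting every 10 seconds:
print(daily_data_points(1_000_000, 10))  # 8640000000 -- on the order of 10 billion/day
```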
+ +### Relational Database Model + +Since time-series data is more likely to be structured data, TDengine adopts the traditional relational database model to process it. You need to create a database, create tables with a schema definition, then insert data points and execute queries to explore the data. Standard SQL is used, so there is no learning curve. + +### One Table for One Device + +Due to different network latency, the data points from different devices may arrive at the server out of order. But for the same device, data points will arrive at the server in order if the system is designed well. To utilize this special feature, TDengine requires the user to create a table for each device (time-stream). For example, if there are over 10,000 smart meters, 10,000 tables shall be created. For the table above, 4 tables shall be created for devices D1001, D1002, D1003 and D1004, to store the data collected. + +This strong requirement guarantees that the data points from a device can be saved in a continuous memory/hard disk space block by block. If queries are applied only on one device in a time range, this design will reduce the read latency significantly since a whole block is owned by one single device. Write latency can be significantly reduced too: since the data points generated by the same device arrive in order, a new data point is simply appended to a block. Cache block size and the rows of records in a file block can be configured to fit the scenarios. + +### Best Practices + +**Table**: TDengine suggests using the device ID as the table name (like D1001 in the above diagram). Each device may collect one or more metrics (like value1, value2, value3 in the diagram). Each metric has a column in the table, and the metric name can be used as the column name. The data type for a column can be int, float, double, tinyint, bigint, bool or binary.
Sometimes, a device may have multiple metric groups, each with a different sampling period; in that case, you shall create a table for each group of each device. The first column in the table must be the time stamp. TDengine uses the time stamp as the index, and won’t build an index on any metrics stored. + +**Tags:** to support aggregation over multiple tables efficiently, the [STable(Super Table)](../super-table) concept is introduced by TDengine. A STable is used to represent the same type of device. Its schema defines the collected metrics (like value1, value2, value3 in the diagram), and its tags define the static attributes for each table or device (like tag1, tag2 in the diagram). A table is created via a STable with specific tag values. All or a subset of tables in a STable can be aggregated by filtering tag values. + +**Database:** different types of devices may generate data points in different patterns and shall be processed differently. For example, the sampling frequency, data retention policy, replication number, cache size, record size, and compression algorithm may be different. To make the system more efficient, TDengine suggests creating a different database with its own configuration for each scenario. + +**Schemaless vs Schema:** compared with a NoSQL database, since a table with a schema definition shall be created before data points can be inserted, flexibility is reduced, especially when the schema is changed. But in most IoT scenarios, the schema is well defined and is rarely changed, so the loss of flexibility won’t be a big pain to developers or the administrator. TDengine allows the application to change the schema in a second even if there is a huge amount of historical data. + +TDengine does not impose a limitation on the number of tables, [STables](../super-table), or databases. You can create any number of STables or databases to fit the scenarios.
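As a concrete illustration of the practices above, the snippet below builds the DDL for a hypothetical `meters` STable and a per-device table following the one-table-per-device rule. The statement shapes follow the modeling advice in this section; the names and column types are made up for the example, and in a real application the generated strings would be handed to a TDengine client for execution rather than just printed.

```python
# Build DDL strings matching the modeling advice: one STable per device type,
# one table per device created from it with the device's static tag values.
# Names and types are hypothetical; statements are only built and printed here.
def create_stable(name, metrics, tags):
    cols = ", ".join(f"{c} {t}" for c, t in metrics)
    tag_cols = ", ".join(f"{c} {t}" for c, t in tags)
    # first column must be the timestamp, which TDengine uses as the index
    return f"CREATE TABLE {name} (ts TIMESTAMP, {cols}) TAGS ({tag_cols})"

def create_device_table(table, stable, tag_values):
    vals = ", ".join(repr(v) for v in tag_values)
    return f"CREATE TABLE {table} USING {stable} TAGS ({vals})"

# Super table for the meter-like devices in the example table above.
print(create_stable("meters",
                    [("value1", "FLOAT"), ("value2", "INT"), ("value3", "FLOAT")],
                    [("tag1", "BINARY(10)"), ("tag2", "BINARY(10)")]))
# One table per data collection point, e.g. device D1001 with its tag values.
print(create_device_table("D1001", "meters", ("Red", "Tesla")))
```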
+ +## Architecture + +There are two main modules in TDengine server as shown in Picture 1: **Management Module (MGMT)** and **Data Module(DNODE)**. The whole TDengine architecture also includes a **TDengine Client Module**. + +
+
Picture 1 TDengine Architecture
+### MGMT Module +The MGMT module deals with the storage and querying of metadata, which includes information about users, databases, and tables. Applications will connect to the MGMT module first when connecting to the TDengine server. When creating/dropping databases/tables, the request is sent to the MGMT module first to create/delete the metadata. Then the MGMT module will send requests to the data module to allocate/free the required resources. In the case of writing or querying, applications still need to visit the MGMT module to get metadata, according to which they then access the DNODE module. + +### DNODE Module +The DNODE module is responsible for storing and querying data. For the sake of future scaling and highly efficient resource usage, TDengine applies virtualization to the resources it uses. TDengine introduces the concept of virtual node (vnode), which is the unit of storage, resource allocation and data replication (enterprise edition). As is shown in Picture 2, TDengine treats each data node as an aggregation of vnodes. + +When a DB is created, the system will allocate a vnode. Each vnode contains multiple tables, but a table belongs to only one vnode. Each DB has one or more vnodes, but one vnode belongs to only one DB. Each vnode contains all the data in a set of tables. Vnodes have their own cache and directory to store data. Resources of different vnodes are exclusive to each other, whether cache or file directory. However, resources in the same vnode are shared between all the tables in it. By virtualization, TDengine can distribute resources reasonably to each vnode and improve resource usage and concurrency. The number of vnodes on a dnode is configurable according to its hardware resources. + +
+
Picture 2 TDengine Virtualization
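The containment rules just described (a table lives in exactly one vnode, a vnode belongs to exactly one DB, and vnodes do not share resources) can be captured in a toy model. This is an illustration of the invariants only, not TDengine's actual implementation:

```python
# Toy model of the vnode containment rules described above; illustration only.
class Vnode:
    def __init__(self, db):
        self.db = db        # a vnode belongs to exactly one DB
        self.tables = []    # ...but holds many tables
        self.cache = {}     # each vnode has its own private cache

class Catalog:
    def __init__(self):
        self.table_vnode = {}  # table name -> the single vnode holding it

    def place_table(self, name, vnode):
        if name in self.table_vnode:
            # a table never spans vnodes, so placing it twice is an error
            raise ValueError(f"table {name} is already placed")
        self.table_vnode[name] = vnode
        vnode.tables.append(name)

cat = Catalog()
v0, v1 = Vnode(db="demo"), Vnode(db="demo")  # one DB may own several vnodes
cat.place_table("D1001", v0)
cat.place_table("D1002", v0)  # tables in the same vnode share its resources
cat.place_table("D1003", v1)
print(len(v0.tables), cat.table_vnode["D1003"] is v1)  # 2 True
```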
+ +### Client Module +The TDengine client module accepts requests (mainly in SQL form) from applications, converts the requests to internal representations, and sends them to the server side. TDengine supports multiple interfaces, which are all built on top of the TDengine client module. + +For the communication between the client and the MGMT module, TCP/UDP is used; the port is set by the parameter mgmtShellPort in the system configuration file taos.cfg, default 6030. For the communication between the client and the DNODE module, TCP/UDP is used; the port is set by the parameter vnodeShellPort in the system configuration file, default 6035. + +## Writing Process +Picture 3 shows the full writing process of TDengine. TDengine uses a Write Ahead Log (WAL) strategy to assure data security and integrity. Data received from the client is written to the commit log first. When TDengine recovers from crashes caused by power loss or other situations, the commit log is used to recover data. After writing to the commit log, data will be written to the corresponding vnode cache, then an acknowledgment is sent to the application. There are two mechanisms that can flush data in the cache to disk for persistent storage: + +1. **Flush driven by timer**: there is a backend timer which periodically flushes data in the cache to disks. The period is configurable via the parameter commitTime in the system configuration file taos.cfg. +2. **Flush driven by data**: data in the cache is also flushed to disks when the remaining buffer size falls below a threshold. A data-driven flush also resets the periodic flush timer. + +
+
Picture 3 TDengine Writing Process
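The commit-log-first write path shown in Picture 3 can be sketched as follows. This is a simplified illustration of the ordering (log, then cache, then acknowledgment) and of log-based recovery, not TDengine's actual code:

```python
# Simplified illustration of the write path described above: append to the
# commit log first, then update the in-memory cache; after a crash, the cache
# is rebuilt by replaying the commit log.
class TinyVnode:
    def __init__(self):
        self.commit_log = []   # stands in for the on-disk commit log (WAL)
        self.cache = []        # in-memory write cache

    def write(self, record):
        self.commit_log.append(record)  # 1. write-ahead log, for safety
        self.cache.append(record)       # 2. then the vnode cache
        return "ok"                     # 3. acknowledge the client

    def recover(self):
        """After a crash wipes the cache, replay the commit log."""
        self.cache = list(self.commit_log)

v = TinyVnode()
v.write(("2017-12-12 23:44:25.730", 1))
v.cache = []          # simulate a crash losing the in-memory cache
v.recover()           # data is restored from the commit log
print(len(v.cache))   # 1
```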
+ +A new commit log file will be opened when the committing process begins. When the committing process finishes, the old commit log file will be removed. + +## Data Storage + +TDengine data are saved in the _/var/lib/taos_ directory by default. This can be changed to another directory by setting the parameter dataDir in the system configuration file taos.cfg. + +TDengine's metadata includes the database, table, user, super table and tag information. To reduce the latency, metadata are all buffered in the cache. + +Data records saved in tables are sharded according to the time range. Data of tables in the same vnode in a certain time range are saved in the same file group. This sharding strategy can effectively improve data searching speed. By default, one group of files contains 10 days of data, which can be configured by *daysPerFile* in the configuration file or by the *DAYS* keyword in the *CREATE DATABASE* clause. + +Data records are removed automatically once their lifetime has passed. The lifetime is configurable via the parameter daysToKeep in the system configuration file. The default value is 3650 days. + +Data in files are stored block by block. A data block only contains one table's data. Records in the same data block are sorted according to the primary timestamp. To improve the compression ratio, records are stored column by column, and a different compression algorithm is applied based on each column's data type.
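A short sketch of the time-range sharding just described: with *daysPerFile* = 10, each record maps to the file group covering its 10-day window, so a time-range query only needs to open the file groups overlapping the range. The group-index arithmetic here is illustrative, not TDengine's actual on-disk numbering.

```python
# Illustrative mapping of a record's timestamp to a 10-day file group, to show
# why time-range queries only touch a few file groups. Not TDengine's actual
# on-disk numbering scheme.
import datetime

DAYS_PER_FILE = 10  # default, configurable via daysPerFile / the DAYS keyword

def file_group(ts: datetime.datetime) -> int:
    days_since_epoch = ts.date().toordinal()
    return days_since_epoch // DAYS_PER_FILE  # index of the 10-day window

a = file_group(datetime.datetime(2019, 1, 12))
b = file_group(datetime.datetime(2019, 1, 13))
c = file_group(datetime.datetime(2019, 3, 1))
print(a == b, a == c)  # nearby timestamps share a file group; distant ones don't
```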
\ No newline at end of file diff --git a/documentation/webdocs/markdowndocs/More on System Architecture-ch.md b/documentation/webdocs/markdowndocs/More on System Architecture-ch.md new file mode 100644 index 0000000000000000000000000000000000000000..8e5eeee1c5a6c96ddda1281110766a12a56b8d12 --- /dev/null +++ b/documentation/webdocs/markdowndocs/More on System Architecture-ch.md @@ -0,0 +1,248 @@ +# TDengine的技术设计 + +## 存储设计 + +TDengine的数据存储主要包含**元数据的存储**和**写入数据的存储**。以下章节详细介绍了TDengine各种数据的存储结构。 + +### 元数据的存储 + +TDengine中的元数据信息包括TDengine中的数据库,表,超级表等信息。元数据信息默认存放在 _/var/lib/taos/mgmt/_ 文件夹下。该文件夹的目录结构如下所示: +``` +/var/lib/taos/ + +--mgmt/ + +--db.db + +--meters.db + +--user.db + +--vgroups.db +``` +元数据在文件中按顺序排列。文件中的每条记录代表TDengine中的一个元数据机构(数据库、表等)。元数据文件只进行追加操作,即便是元数据的删除,也只是在数据文件中追加一条删除的记录。 + +### 写入数据的存储 + +TDengine中写入的数据在硬盘上是按时间维度进行分片的。同一个vnode中的表在同一时间范围内的数据都存放在同一文件组中,如下图中的v0f1804*文件。这一数据分片方式可以大大简化数据在时间维度的查询,提高查询速度。在默认配置下,硬盘上的每个文件存放10天数据。用户可根据需要调整数据库的 _daysPerFile_ 配置项进行配置。 数据在文件中是按块存储的。每个数据块只包含一张表的数据,且数据是按照时间主键递增排列的。数据在数据块中按列存储,这样使得同类型的数据存放在一起,可以大大提高压缩的比例,节省存储空间。TDengine对不同类型的数据采用了不同的压缩算法进行压缩,以达到最优的压缩结果。TDengine使用的压缩算法包括simple8B、delta-of-delta、RLE以及LZ4等。 + +TDengine的数据文件默认存放在 */var/lib/taos/data/* 下。而 */var/lib/taos/tsdb/* 文件夹下存放了vnode的信息、vnode中表的信息以及数据文件的链接等。其完整目录结构如下所示: +``` +/var/lib/taos/ + +--tsdb/ + | +--vnode0 + | +--meterObj.v0 + | +--db/ + | +--v0f1804.head->/var/lib/taos/data/vnode0/v0f1804.head1 + | +--v0f1804.data->/var/lib/taos/data/vnode0/v0f1804.data + | +--v0f1804.last->/var/lib/taos/data/vnode0/v0f1804.last1 + | +--v0f1805.head->/var/lib/taos/data/vnode0/v0f1805.head1 + | +--v0f1805.data->/var/lib/taos/data/vnode0/v0f1805.data + | +--v0f1805.last->/var/lib/taos/data/vnode0/v0f1805.last1 + | : + +--data/ + +--vnode0/ + +--v0f1804.head1 + +--v0f1804.data + +--v0f1804.last1 + +--v0f1805.head1 + +--v0f1805.data + +--v0f1805.last1 + : +``` + +#### meterObj文件 +每个vnode中只存在一个 _meterObj_ 文件。该文件中存储了vnode的基本信息(创建时间,配置信息,vnode的统计信息等)以及该vnode中表的信息。其结构如下所示: +``` 
+<文件开始> +[文件头] +[表记录1偏移量和长度] +[表记录2偏移量和长度] +... +[表记录N偏移量和长度] +[表记录1] +[表记录2] +... +[表记录N] +[表记录] +<文件结尾> +``` +其中,文件头大小为512字节,主要存放vnode的基本信息。每条表记录代表属于该vnode中的一张表在硬盘上的表示。 + +#### head文件 +head文件中存放了其对应的data文件中数据块的索引信息。该文件组织形式如下: +``` +<文件开始> +[文件头] +[表1偏移量] +[表2偏移量] +... +[表N偏移量] +[表1数据索引] +[表2数据索引] +... +[表N数据索引] +<文件结尾> +``` +文件开头的偏移量列表表示对应表的数据索引块的开始位置在文件中的偏移量。每张表的数据索引信息在head文件中都是连续存放的。这也使得TDengine在读取单表数据时,可以将该表所有的数据块索引一次性读入内存,大大提高读取速度。表的数据索引块组织如下: +``` +[索引块信息] +[数据块1索引] +[数据块2索引] +... +[数据块N索引] +``` +其中,索引块信息中记录了数据块的个数等描述信息。每个数据块索引对应一个在data文件或last文件中的一个单独的数据块。索引信息中记录了数据块存放的文件、数据块起始位置的偏移量、数据块中数据时间主键的范围等。索引块中的数据块索引是按照时间范围顺序排放的,这也就是说,索引块M对应的数据块中的数据时间范围都大于索引块M-1的。这种预先排序的存储方式使得在TDengine在进行按照时间戳进行查询时可以使用折半查找算法,大大提高查询速度。 + +#### data文件 +data文件中存放了真实的数据块。该文件只进行追加操作。其文件组织形式如下: +``` +<文件开始> +[文件头] +[数据块1] +[数据块2] +... +[数据块N] +<文件结尾> +``` +每个数据块只属于vnode中的一张表,且数据块中的数据按照时间主键排列。数据块中的数据按列组织排放,使得同一类型的数据排放在一起,方便压缩和读取。每个数据块的组织形式如下所示: +``` +[列1信息] +[列2信息] +... +[列N信息] +[列1数据] +[列2数据] +... +[列N数据] +``` +列信息中包含该列的类型,列的压缩算法,列数据在文件中的偏移量以及长度等。除此之外,列信息中也包含该内存块中该列数据的预计算结果,从而在过滤查询时根据预计算结果判定是否读取数据块,大大提高读取速度。 + +#### last文件 +为了防止数据块的碎片化,提高查询速度和压缩率,TDengine引入了last文件。当要落盘的数据块中的数据条数低于某个阈值时,TDengine会先将该数据块写入到last文件中进行暂时存储。当有新的数据需要落盘时,last文件中的数据会被读取出来与新数据组成新的数据块写入到data文件中。last文件的组织形式与data文件类似。 + +### TDengine数据存储小结 +TDengine通过其创新的架构和存储结构设计,有效提高了计算机资源的使用率。一方面,TDengine的虚拟化使得TDengine的水平扩展及备份非常容易。另一方面,TDengine将表中数据按时间主键排序存储且其列式存储的组织形式都使TDengine在写入、查询以及压缩方面拥有非常大的优势。 + + +## 查询处理 + +### 概述 + +TDengine提供了多种多样针对表和超级表的查询处理功能,除了常规的聚合查询之外,还提供针对时序数据的窗口查询、统计聚合等功能。TDengine的查询处理需要客户端、管理节点、数据节点协同完成。 各组件包含的与查询处理相关的功能和模块如下: + +客户端(Client App)。客户端包含TAOS SQL的解析(SQL Parser)和查询请求执行器(Query Executor),第二阶段聚合器(Result Merger),连续查询管理器(Continuous Query Manager)等主要功能模块构成。SQL解析器负责对SQL语句进行解析校验,并转化为抽象语法树,查询执行器负责将抽象语法树转化查询执行逻辑,并根据SQL语句查询条件,将其转换为针对管理节点元数据查询和针对数据节点的数据查询两级查询处理。由于TAOS 
SQL当前不提供复杂的嵌套查询和pipeline查询处理机制,所以不再需要查询计划优化、逻辑查询计划到物理查询计划转换等过程。第二阶段聚合器负责将各数据节点查询返回的独立结果进行二阶段聚合,生成最后的结果。连续查询管理器则负责对用户建立的连续查询进行管理,定时拉起查询请求并按需将结果写回TDengine或返回给客户应用。此外,客户端还负责查询失败后重试、取消查询请求、维持连接心跳、向管理节点上报查询状态等工作。 + +管理节点(Management Node)。管理节点保存了整个集群系统的全部数据的元数据信息,向客户端节点提供查询所需的数据的元数据,并根据集群的负载情况切分查询请求。由于超级表包含了通过该超级表创建的所有表的信息,管理节点的查询处理器(Query Executor)负责针对标签(TAG)的查询处理,并将满足标签查询请求的表信息返回给客户端。此外,管理节点还负责集群查询状态(Query Status Manager)的维护,查询状态管理器在内存中临时保存有当前正在执行的全部查询,当客户端使用 *show queries* 命令的时候,将当前系统正在运行的查询信息返回客户端。 + +数据节点(Data Node)。数据节点保存了数据库中全部数据内容,并通过查询执行器、查询处理调度器、查询任务队列(Query Task Queue)完成查询处理的调度执行:从客户端接收到的查询处理请求统一放置到查询任务队列中,查询执行器从队列中取出查询请求,先通过查询优化器(Query Optimizer)对查询进行基本的优化处理,再扫描符合条件的数据单元并将计算结果返回客户端。同时,数据节点还需要响应来自管理节点的管理信息和命令,例如接收到 *kill query* 命令以后,需要即刻停止正在执行的查询任务。 +
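上述 *show queries* 与 *kill query* 两条管理命令可在TAOS Shell中直接使用。以下为一个示意用法(其中的查询ID为假设的示例值,实际取值以 *show queries* 的输出为准):

```mysql
-- 查看当前集群中正在执行的全部查询
SHOW QUERIES;

-- 终止某个正在执行的查询,查询ID取自SHOW QUERIES的输出(此处为示例值)
KILL QUERY 1:3;
```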
+![fig1](../assets/fig1.png)
图 1. 系统查询处理架构图(只包含查询相关组件)
+ +### 普通查询处理 + +客户端、管理节点、数据节点协同完成TDengine的查询处理全流程。我们以一个具体的SQL查询为例,说明TDengine的查询处理流程。该SQL语句向超级表 *FOO_SUPER_TABLE* 发起查询,获取时间范围为2019年1月12日整天、标签TAG_LOC为'beijing'的所有表所包含的记录总数,SQL语句如下: + +```sql +SELECT COUNT(*) +FROM FOO_SUPER_TABLE +WHERE TAG_LOC = 'beijing' AND TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00' +``` + +首先,客户端调用TAOS SQL解析器对SQL语句进行解析及合法性检查,然后生成语法树,并从中提取查询的对象,即超级表 *FOO_SUPER_TABLE* ,然后解析器向管理节点(Management Node)请求其相应的元数据信息,并将过滤信息(TAG_LOC='beijing')同时发送到管理节点。 + +管理节点接收元数据获取的请求,首先找到超级表 *FOO_SUPER_TABLE* 的基础信息,然后应用查询条件过滤通过该超级表创建的全部表,筛选出满足查询条件(即 *TAG_LOC* 标签列值为 'beijing')的表,最后通过其查询执行器将这些满足查询要求的对象(表或超级表)的元数据信息返回给客户端。 + +客户端获得了 *FOO_SUPER_TABLE* 的元数据信息后,查询执行器根据元数据中的数据分布,分别向保存有相应数据的节点发起查询请求,此时时间戳范围过滤条件(TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00')需要同时发送给全部的数据节点。 + +数据节点接收到发自客户端的查询,转化为内部结构并进行优化以后将其放入任务执行队列,等待查询执行器执行。当查询结果获得以后,将查询结果返回客户端。数据节点执行查询的过程均相互独立,完全只依赖于自身的数据和内容进行计算。 + +当所有查询涉及的数据节点返回结果后,客户端将每个数据节点查询的结果集再次进行聚合(针对本案例,即将所有结果再次进行累加),累加的结果即为最后的查询结果。第二阶段聚合并不是所有的查询都需要。例如,针对数据的列选取操作,实际上是不需要第二阶段聚合的。 + +### REST查询处理 + +在 C/C++ 、Python接口、JDBC 接口之外,TDengine 还提供基于 HTTP 协议的 REST 接口。不同于使用客户端开发程序的方式,当用户使用 REST 接口的时候,所有的查询处理过程都在服务器端完成,用户的应用服务不参与数据库的计算过程,查询处理完成后,结果以 JSON 格式通过 HTTP 响应返回给用户。 +
+![fig2](../assets/fig2.png)
图 2. REST查询架构
+ +当用户使用基于HTTP的REST查询接口时,HTTP请求首先与位于数据节点的HTTP连接器(Connector)建立连接,然后通过REST的签名机制,使用Token来确保请求的可靠性。对于数据节点,HTTP连接器接收到请求后,调用内嵌的客户端程序发起查询请求,内嵌客户端将解析通过HTTP连接器传递过来的SQL语句,并按需向管理节点请求元数据信息,然后向本机或集群中其他节点发送查询请求,最后按需聚合计算结果。HTTP连接器接收到请求SQL以后,后续的流程处理与采用应用客户端方式的查询处理完全一致。最后,还需要将查询的结果转换为JSON格式字符串,并通过HTTP响应返回给客户端。 + +可以看到,在整个HTTP请求的处理过程中,用户应用不再参与查询处理,只负责通过HTTP协议发送SQL请求并接收JSON格式的结果。同时还需要注意的是,每个数据节点均内嵌了一个HTTP连接器和客户端程序,因此请求集群中任何一个数据节点,该数据节点均能够通过HTTP协议返回用户的查询结果。 + +### 技术特征 + +由于TDengine采用数据和标签分离存储的模式,能够极大地降低标签数据存储的冗余度。标签数据直接关联到每个表,并采用全内存的结构进行管理和维护,全内存的结构提供快速的查询处理,千万级别规模的标签数据查询可以在毫秒级别返回。首先,针对标签数据的过滤可以有效地降低第二阶段查询涉及的数据规模。为有效地提升查询处理的性能,针对物联网数据不可更改的特点,TDengine在每个保存的数据块上,都记录下该数据块中数据的最大值、最小值、和等统计数据。如果查询处理涉及整个数据块的全部数据,则直接使用预计算结果,不再读取数据块的内容。由于预计算结果的大小远小于磁盘上存储的具体数据的大小,对于以磁盘IO为瓶颈的查询处理,使用预计算结果可以极大地减小读取IO,并加速查询处理的流程。 + +由于TDengine采用按列存储数据,当从磁盘中读取数据块进行计算时,按照查询列信息读取该列数据,并不需要读取其他不相关的数据,可以最小化读取的数据量。此外,由于采用列存储结构,数据节点针对数据的扫描按该列数据块进行,可以充分利用CPU L2高速缓存,极大地加速数据扫描的速度。此外,对于某些查询,并不会等全部查询结果生成后再返回结果。例如,列选取查询,当第一批查询结果获得以后,数据节点直接将其返回客户端。同时,在查询处理过程中,数据节点接收到查询请求以后会马上向客户端返回查询确认信息,并同时拉起查询处理过程,在查询执行完成后才将最终结果返回给用户。 + +## TDengine集群设计 + +### 1:集群与主要逻辑单元 + +TDengine是基于硬件、软件系统不可靠、一定会有故障的假设进行设计的,是基于任何单台计算机都无足够能力处理海量数据的假设进行设计的。因此TDengine从研发的第一天起,就按照分布式高可靠架构进行设计,是完全去中心化的,是水平扩展的,这样任何单台或多台服务器宕机或软件错误都不影响系统的服务。通过节点虚拟化并辅以自动化负载均衡技术,TDengine能最大限度地利用异构集群中的计算和存储资源。而且只要数据副本数大于一,无论是硬软件的升级、还是IDC的迁移等都无需停止集群的服务,极大地保证系统的正常运行,并且降低了系统管理员和运维人员的工作量。 + +下面的示例图上有八个物理节点,每个物理节点被逻辑地划分为多个虚拟节点。下面对系统的基本概念进行介绍。 + + + +![assets/nodes.png](../assets/nodes.png) + +**物理节点(dnode)**:集群中的一台物理服务器或云平台上的一台虚拟机。为安全以及通讯效率,一个物理节点可配置两张网卡,或两个IP地址。其中一张网卡用于集群内部通讯,其IP地址为**privateIp**, 另外一张网卡用于与集群外部应用的通讯,其IP地址为**publicIp**。在一些云平台(如阿里云),对外的IP地址是映射过来的,因此publicIp还有一个对应的内部IP地址**internalIp**(与privateIp不同)。对于只有一个IP地址的物理节点,publicIp, privateIp以及internalIp都是同一个地址,没有任何区别。一个dnode上有且只有一个taosd实例运行。 + 
+**虚拟数据节点(vnode)**:在物理节点之上的可独立运行的基础逻辑单元,时序数据写入、存储、查询等操作逻辑都在虚拟节点中进行(图中V),采集的时序数据就存储在vnode上。一个vnode包含固定数量的表。当创建一张新表时,系统会检查是否需要创建新的vnode。一个物理节点上能创建的vnode的数量取决于物理节点的硬件资源。一个vnode只属于一个DB,但一个DB可以有多个vnode。 + +**虚拟数据节点组(vgroup)**: 位于不同物理节点的vnode可以组成一个虚拟数据节点组vnode group(如上图dnode0中的V0, dnode1中的V1, dnode6中的V2属于同一个虚拟节点组)。归属于同一个vgroup的虚拟节点采取master/slave的方式进行管理。写只能在master上进行,但采用asynchronous的方式将数据同步到slave,这样确保了一份数据在多个物理节点上有拷贝。如果master节点宕机,其他节点监测到后,将重新选举vgroup里的master, 新的master能继续处理数据请求,从而保证系统运行的可靠性。一个vgroup里虚拟节点个数就是数据的副本数。如果一个DB的副本数为N,系统必须有至少N个物理节点。副本数在创建DB时通过参数replica可以指定,缺省为1。使用TDengine, 数据的安全依靠多副本解决,因此不再需要昂贵的磁盘阵列等存储设备。 + +**虚拟管理节点(mnode)**:负责所有节点运行状态的监控和维护,以及节点之间的负载均衡(图中M)。同时,虚拟管理节点也负责元数据(包括用户、数据库、表、静态标签等)的存储和管理,因此也称为Meta Node。TDengine集群中可配置多个(最多不超过5个) mnode,它们自动构建成为一个管理节点集群(图中M0, M1, M2)。mnode间采用master/slave的机制进行管理,而且采取强一致方式进行数据同步。mnode集群的创建由系统自动完成,无需人工干预。每个dnode上至多有一个mnode,而且每个dnode都知道整个集群中所有mnode的IP地址。 + +**taosc**:一个软件模块,是TDengine给应用提供的驱动程序(driver),内嵌于JDBC、ODBC driver中,或者C语言连接库里。应用都是通过taosc而不是直接来与整个集群进行交互的。这个模块负责获取并缓存元数据;将插入、查询等请求转发到正确的虚拟节点;在把结果返回给应用时,还需要负责最后一级的聚合、排序、过滤等操作。对于JDBC, ODBC, C/C++接口而言,这个模块是在应用所处的计算机上运行,但消耗的资源很小。为支持全分布式的REST接口,taosc在TDengine集群的每个dnode上都有一运行实例。 + +**对外服务地址**:TDengine集群可以容纳单台、多台甚至几千台物理节点。应用只需要向集群中任何一个物理节点的publicIp发起连接即可。启动CLI应用taos时,选项-h需要提供的就是publicIp。 + +**master/secondIp**:每一个dnode都需要配置一个masterIp。dnode启动后,将对配置的masterIp发起加入集群的连接请求。masterIp是已经创建的集群中的任何一个节点的privateIp,对于集群中的第一个节点,就是它自己的privateIp。为保证连接成功,每个dnode还可配置secondIp, 该IP地址也是已创建的集群中的任何一个节点的privateIp。如果一个节点连接masterIp失败,它将试图链接secondIp。 + +dnode启动后,会获知集群的mnode IP列表,并且定时向mnode发送状态信息。 + +vnode与mnode只是逻辑上的划分,都是执行程序taosd里的不同线程而已,无需安装不同的软件,做任何特殊的配置。最小的系统配置就是一个物理节点,vnode,mnode和taosc都存在而且都正常运行,但单一节点无法保证系统的高可靠。 + +### 2:一典型的操作流程 + +为解释vnode, mnode, taosc和应用之间的关系以及各自扮演的角色,下面对写入数据这个典型操作的流程进行剖析。 + + + +![Picture1](../assets/Picture2.png) + + + +1. 应用通过JDBC、ODBC或其他API接口发起插入数据的请求。 +2. taosc会检查缓存,看是有保存有该表的meta data。如果有,直接到第4步。如果没有,taosc将向mnode发出get meta-data请求。 +3. 
mnode将该表的meta-data返回给taosc。Meta-data包含有该表的schema, 而且还有该表所属的vgroup信息(vnode ID以及所在的dnode的IP地址,如果副本数为N,就有N组vnodeID/IP)。如果taosc迟迟得不到mnode回应,而且存在多个mnode,taosc将向下一个mnode发出请求。 +4. taosc向master vnode发起插入请求。 +5. vnode插入数据后,给taosc一个应答,表示插入成功。如果taosc迟迟得不到vnode的回应,taosc会认为该节点已经离线。这种情况下,如果被插入的数据库有多个副本,taosc将向vgroup里下一个vnode发出插入请求。 +6. taosc通知APP,写入成功。 + +对于第二和第三步,taosc启动时,并不知道mnode的IP地址,因此会直接向配置的集群对外服务的IP地址发起请求。如果接收到该请求的dnode并没有配置mnode,该dnode会在回复的消息中告知mnode的IP地址列表(如果有多个dnodes,mnode的IP地址可以有多个),这样taosc会重新向新的mnode的IP地址发出获取meta-data的请求。 + +对于第四和第五步,没有缓存的情况下,taosc无法知道虚拟节点组里谁是master,就假设第一个vnodeID/IP就是master,向它发出请求。如果接收到请求的vnode并不是master,它会在回复中告知谁是master,这样taosc就向建议的master vnode发出请求。一旦得到插入成功的回复,taosc会缓存住master节点的信息。 + +上述是插入数据的流程,查询、计算的流程也完全一致。taosc把这些复杂的流程全部封装屏蔽了,因此应用无需处理重定向、获取meta data等细节,完全是透明的。 + +通过taosc缓存机制,只有在第一次对一张表操作时,才需要访问mnode, 因此mnode不会成为系统瓶颈。但因为schema有可能变化,而且vgroup有可能发生改变(比如负载均衡发生),因此taosc需要定时自动刷新缓存。 + +### 3:数据分区 + +vnode(虚拟数据节点)保存采集的时序数据,而且查询、计算都在这些节点上进行。为便于负载均衡、数据恢复、支持异构环境,TDengine将一个物理节点根据其计算和存储资源切分为多个vnode。这些vnode的管理是TDengine自动完成的,对应用完全透明。 + +对于单独一个数据采集点,无论其数据量多大,一个vnode(或vnode group, 如果副本数大于1)有足够的计算资源和存储资源来处理(如果每秒生成一条16字节的记录,一年产生的原始数据不到0.5G),因此TDengine将一张表的所有数据都存放在一个vnode里,而不会让同一个采集点的数据分布到两个或多个dnode上。而且一个vnode可存储多张表的数据,一个vnode可容纳的表的数目由配置参数tables指定,缺省为2000。设计上,一个vnode里所有的表都属于同一个DB。因此一个数据库DB需要的vnode或vgroup的个数等于:数据库表的数目/tables。 + +创建DB时,系统并不会马上分配资源。但当创建一张表时,系统将看是否有已经分配的vnode, 而且是否有空位,如果有,立即在该有空位的vnode创建表。如果没有,系统将从集群中,根据当前的负载情况,在一个dnode上创建一新的vnode, 然后创建表。如果DB有多个副本,系统不是只创建一个vnode,而是一个vgroup(虚拟数据节点组)。系统对vnode的数目没有任何限制,仅仅受限于物理节点本身的计算和存储资源。 + +参数tables的设置需要考虑具体场景,创建DB时,可以个性化指定该参数。该参数不宜过大,也不宜过小。过小,极端情况,就是每个数据采集点一个vnode, 这样导致系统数据文件过多。过大,虚拟化带来的优势就会丧失。给定集群计算资源的情况下,整个系统vnode的个数应该是CPU核的数目的两倍以上。 + +### 4:负载均衡 + +每个dnode(物理节点)都定时向 mnode(虚拟管理节点)报告其状态(包括硬盘空间、内存大小、CPU、网络、虚拟节点个数等),因此mnode了解整个集群的状态。基于整体状态,当mnode发现某个dnode负载过重,它会将dnode上的一个或多个vnode挪到其他dnode。在挪动过程中,对外服务继续进行,数据插入、查询和计算操作都不受影响。负载均衡操作结束后,应用也无需重启,将自动连接新的vnode。 + 
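前述“数据分区”一节提到的 replica 与 tables 参数均可在创建数据库时个性化指定。以下为一个示意性的建库语句(参数关键字以实际版本的SQL文档为准):

```mysql
-- 创建一个3副本的库(集群中至少需要3个物理节点),
-- 每个vnode最多容纳2000张表,每个数据文件组存放10天数据
CREATE DATABASE demo REPLICA 3 TABLES 2000 DAYS 10;
```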
+如果mnode一段时间没有收到dnode的状态报告,mnode会认为这个dnode已经离线。如果离线时间超过一定时长(时长由配置参数offlineThreshold决定),该dnode将被mnode强制剔除出集群。该dnode上的vnodes如果副本数大于一,系统将自动在其他dnode上创建新的副本,以保证数据的副本数。 + + + +**Note:**目前集群功能仅仅限于企业版 \ No newline at end of file diff --git a/documentation/webdocs/markdowndocs/More on System Architecture.md b/documentation/webdocs/markdowndocs/More on System Architecture.md new file mode 100644 index 0000000000000000000000000000000000000000..d7a38b99a3ae5a630509f3ef0f0ffdc97d3aaaf1 --- /dev/null +++ b/documentation/webdocs/markdowndocs/More on System Architecture.md @@ -0,0 +1,176 @@ +# TDengine System Architecture + +## Storage Design + +TDengine data mainly include **metadata** and **data** that we will introduce in the following sections. + +### Metadata Storage + +Metadata include the information of databases, tables, etc. Metadata files are saved in _/var/lib/taos/mgmt/_ directory by default. The directory tree is as below: +``` +/var/lib/taos/ + +--mgmt/ + +--db.db + +--meters.db + +--user.db + +--vgroups.db +``` + +A metadata structure (database, table, etc.) is saved as a record in a metadata file. All metadata files are appended only, and even a drop operation adds a deletion record at the end of the file. + +### Data storage + +Data in TDengine are sharded according to the time range. Data of tables in the same vnode in a certain time range are saved in the same filegroup, such as files v0f1804*. This sharding strategy can effectively improve data searching speed. By default, a group of files contains data in 10 days, which can be configured by *daysPerFile* in the configuration file or by *DAYS* keyword in *CREATE DATABASE* clause. Data in files are blockwised. A data block only contains one table's data. Records in the same data block are sorted according to the primary timestamp, which helps to improve the compression rate and save storage. The compression algorithms used in TDengine include simple8B, delta-of-delta, RLE, LZ4, etc. 
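As noted above, the time range covered by each file group can be set per database with the *DAYS* keyword at creation time. A minimal sketch (the database name is illustrative):

```mysql
-- Create a database whose data file groups each cover 10 days of data
CREATE DATABASE demo DAYS 10;
```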
+ +By default, TDengine data are saved in */var/lib/taos/data/* directory. _/var/lib/taos/tsdb/_ directory contains vnode information and links to the data files. + +``` +/var/lib/taos/ + +--tsdb/ + | +--vnode0 + | +--meterObj.v0 + | +--db/ + | +--v0f1804.head->/var/lib/taos/data/vnode0/v0f1804.head1 + | +--v0f1804.data->/var/lib/taos/data/vnode0/v0f1804.data + | +--v0f1804.last->/var/lib/taos/data/vnode0/v0f1804.last1 + | +--v0f1805.head->/var/lib/taos/data/vnode0/v0f1805.head1 + | +--v0f1805.data->/var/lib/taos/data/vnode0/v0f1805.data + | +--v0f1805.last->/var/lib/taos/data/vnode0/v0f1805.last1 + | : + +--data/ + +--vnode0/ + +--v0f1804.head1 + +--v0f1804.data + +--v0f1804.last1 + +--v0f1805.head1 + +--v0f1805.data + +--v0f1805.last1 + : +``` + +#### meterObj file +There is only one meterObj file in each vnode. Information about the vnode, such as its creation time, configuration, and statistics, is saved in this file. It has the structure like below: + +``` + +[file_header] +[table_record1_offset&length] +[table_record2_offset&length] +... +[table_recordN_offset&length] +[table_record1] +[table_record2] +... +[table_recordN] + +``` +The file header takes 512 bytes, which mainly contains information about the vnode. Each table record is the representation of a table on disk. + +#### head file +The _head_ files contain the index of data blocks in the _data_ file. The inner organization is as below: +``` + +[file_header] +[table1_offset] +[table2_offset] +... +[tableN_offset] +[table1_index_block] +[table2_index_block] +... +[tableN_index_block] + +``` +The table offset array in the _head_ file saves the information about the offsets of each table index block. Indices on data blocks in the same table are saved continuously. This also makes it efficient to load data indices on the same table. The data index block has a structure like: + +``` +[index_block_info] +[block1_index] +[block2_index] +... 
+[blockN_index] +``` +The index block info part contains the information about the index block, such as the number of index blocks, etc. Each block index corresponds to a real data block in the _data_ file or _last_ file. Information about the location of the real data block, the primary timestamp range of the data block, etc. are all saved in the block index part. The block indices are sorted in ascending order according to the primary timestamp, so we can apply algorithms such as binary search to efficiently locate blocks according to time. + +#### data file +The _data_ files store the real data blocks. They are append-only. The organization is as follows: +``` + +[file_header] +[block1] +[block2] +... +[blockN] + +``` +A data block in _data_ files only belongs to a table in the vnode, and the records in a data block are sorted in ascending order according to the primary timestamp key. Data blocks are column-oriented. Data in the same column are stored contiguously, which improves the reading speed and compression rate because of their similarity. A data block has the following organization: + +``` +[column1_info] +[column2_info] +... +[columnN_info] +[column1_data] +[column2_data] +... +[columnN_data] +``` +The column info part includes information about the column type, the column compression algorithm, the column data offset and length in the _data_ file, etc. Besides, pre-calculated results of the column data in the block are also kept in the column info part, which helps to improve the reading speed by avoiding loading data blocks unnecessarily. + +#### last file +To avoid storage fragmentation and to improve the query speed and compression rate, TDengine introduces an extra file, the _last_ file. When the number of records in a data block is lower than a threshold, TDengine will flush the block to the _last_ file for temporary storage. When new data comes, the data in the _last_ file will be merged with the new data to form a larger data block and written to the _data_ file. 
The organization of the _last_ file is similar to that of the _data_ file. + +### Summary +The innovation in the architecture and storage design of TDengine improves resource usage. On the one hand, the virtualization makes it easy to distribute resources between different vnodes and allows for future scaling. On the other hand, the sorted, column-oriented storage gives TDengine a great advantage in writing, querying and compression. + +## Query Design + +#### Introduction + +TDengine provides a variety of query functions for both tables and super tables. In addition to regular aggregate queries, it also provides time window based queries and statistical aggregation for time series data. TDengine's query processing requires the client app, management node, and data node to work together. The functions and modules involved in query processing included in each component are as follows: + +Client (Client App). The client development kit, embedded in a client application, consists of the TAOS SQL parser, the query executor, the second-stage aggregator (Result Merger), the continuous query manager and other major functional modules. The SQL parser is responsible for parsing and verifying the SQL statement and converting it into an abstract syntax tree. The query executor is responsible for transforming the abstract syntax tree into the query execution logic and creates the metadata query according to the query condition of the SQL statement. Since TAOS SQL does not currently include complex nested queries or a pipeline query processing mechanism, there is no need for query plan optimization or logical-to-physical query plan conversion. The second-stage aggregator is responsible for performing, at the client side, the aggregation of the independent results returned by the data nodes involved in the query, to generate the final results. 
The continuous query manager is dedicated to managing the continuous queries created by users, including issuing fixed-interval query requests and writing the results back to TDengine or returning them to the client application as needed. Also, the client is responsible for retrying after a query fails, canceling the query request, maintaining the connection heartbeat, and reporting the query status to the management node. + +Management Node. The management node keeps the metadata of all the data of the entire cluster system, provides the metadata required for queries to the client node, and divides the query request according to the load condition of the cluster. The super table contains information about all the tables created from it, so the query processor (Query Executor) of the management node is responsible for processing queries on the tags of tables and returns the table information satisfying the tag query. Besides, the management node maintains the query status of the cluster in the Query Status Manager component, in which the metadata of all currently executing queries are temporarily kept in an in-memory buffer. When the client issues the *show queries* command to the management node, information about the currently running queries is returned to the client. + +Data Node. The data node, responsible for storing all data of the database, consists of the query executor, the query processing scheduler, the query task queue, and other related components. Once query requests from the client are received, they are put into the query task queue, waiting to be processed by the query executor. The query executor extracts the query request from the query task queue and invokes the query optimizer to perform basic optimization of the query execution plan. The query executor then scans the qualified data blocks in both cache and disk to obtain the qualified data and returns the calculated results. 
Besides, the data node also needs to respond to management information and commands from the management node. For example, after the *kill query* command is received from the management node, the query task needs to be stopped immediately. +
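The *show queries* and *kill query* commands mentioned above can be issued directly from the TAOS shell. A minimal sketch (the query id below is a placeholder; real values come from the *show queries* output):

```mysql
-- List all queries currently running in the cluster
SHOW QUERIES;

-- Stop a running query; the id is taken from the SHOW QUERIES output (placeholder value)
KILL QUERY 1:3;
```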
+![fig1](../assets/fig1.png)
Fig 1. System query processing architecture diagram (only query related components)
+ +#### Query Process Design + +The client, the management node, and the data node cooperate to complete the entire query processing of TDengine. Let's take a concrete SQL query as an example to illustrate the whole query processing flow. The SQL statement queries the super table *FOO_SUPER_TABLE* to get the total number of records generated on January 12, 2019, from the tables whose TAG_LOC equals 'beijing'. The SQL statement is as follows: + +```sql +SELECT COUNT(*) +FROM FOO_SUPER_TABLE +WHERE TAG_LOC = 'beijing' AND TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00' +``` + +First, the client invokes the TAOS SQL parser to parse and validate the SQL statement, then generates a syntax tree, and extracts the object of the query, the super table *FOO_SUPER_TABLE*. The parser then sends a request with the filtering information (TAG_LOC='beijing') to the management node to get the corresponding metadata about *FOO_SUPER_TABLE*. + +Once the management node receives the request for metadata acquisition, it first finds the basic information of the super table *FOO_SUPER_TABLE*, and then applies the query condition (TAG_LOC='beijing') to filter all the related tables created from it. Finally, the query executor returns the metadata information that satisfies the query request to the client. + +After the client obtains the metadata information of *FOO_SUPER_TABLE*, the query executor initiates a query request with the timestamp range filtering condition (TS >= '2019-01-12 00:00:00' AND TS < '2019-01-13 00:00:00') to all nodes that hold the corresponding data according to the information about data distribution in the metadata. + +The data node receives the query sent from the client, converts it into an internal structure and puts it into the query task queue to be executed by the query executor after optimizing the execution plan. When the query result is obtained, the query result is returned to the client. 
It should be noted that the data nodes perform the query process independently of each other, and rely solely on their data and content for processing. + +When all data nodes involved in the query return results, the client aggregates the result sets from each data node. In this case, all results are accumulated to generate the final query result. The second stage of aggregation is not always required for all queries. For example, a column selection query does not require a second-stage aggregation at all. + +#### REST Query Process + +In addition to C/C++, Python, and JDBC interface, TDengine also provides a REST interface based on the HTTP protocol, which is different from using the client application programming interface. When the user uses the REST interface, all the query processing is completed on the server-side, and the user's application is not involved in query processing anymore. After the query processing is completed, the result is returned to the client through the HTTP JSON string. + +
+![fig2](../assets/fig2.png)
Fig. 2 REST query architecture
+ +When a client uses the HTTP-based REST query interface, the client first establishes a connection with the HTTP connector at the data node and then uses the token to ensure the reliability of the request through the REST signature mechanism. After receiving the request, the HTTP connector of the data node invokes the embedded client program to initiate query processing: the embedded client parses the SQL statement passed in through the HTTP connector, requests the management node to get metadata as needed, sends query requests to the same data node or other nodes in the cluster, and aggregates the calculation results on demand. Finally, the query result is converted into a JSON format string and returned to the client via an HTTP response. After the HTTP connector receives the request SQL, the subsequent processing is completely consistent with the query processing using the client application development kit. + +It should be noted that during the entire processing, the client application is no longer involved; it is only responsible for sending SQL requests through the HTTP protocol and receiving the results in JSON format. Besides, each data node is embedded with an HTTP connector and a client, so when any data node in the cluster receives a request from a client, that data node can initiate the query, transfer the request to other data nodes as needed, and return the result to the client through the HTTP protocol. + +#### Technology + +Because TDengine stores data and tag values separately, tag values are kept in the management node and directly associated with each table instead of with records, which greatly reduces the data storage. Therefore, the tag values can be managed by a fully in-memory structure. First, the filtering of the tag data can drastically reduce the data size involved in the second phase of the query. The query processing for the data is performed at the data node. 
TDengine takes advantage of the immutable characteristics of IoT data by recording, on each saved data block, the maximum, minimum, and other statistics of the data in that block, to effectively improve the performance of query processing. If the query involves all the data of the entire data block, the pre-computed result is used directly, and the content of the data block is no longer needed. Since the size of the disk space required to store the pre-computed results is much smaller than the size of the specific data, the pre-computed results can greatly reduce the disk IO and speed up query processing. + +TDengine employs column-oriented data storage techniques. When a data block needs to be loaded from the disk for calculation, only the required columns are read according to the query condition, so the read overhead is minimized. The data of one column is stored in a contiguous memory block and therefore can make full use of the CPU L2 cache to greatly speed up data scanning. Besides, TDengine utilizes an eagerly responding mechanism and returns a partial result before the complete result is acquired. For example, in the case of a column select query, when the first batch of results is obtained, the data node immediately returns it directly to the client. 
\ No newline at end of file diff --git a/documentation/webdocs/markdowndocs/Super Table-ch.md b/documentation/webdocs/markdowndocs/Super Table-ch.md new file mode 100644 index 0000000000000000000000000000000000000000..14145cbb70aa421b6c1d3340ce8139d8aa4b642c --- /dev/null +++ b/documentation/webdocs/markdowndocs/Super Table-ch.md @@ -0,0 +1,224 @@ +# 超级表STable:多表聚合 + +TDengine要求每个数据采集点单独建表,这样能极大提高数据的插入/查询性能,但是导致系统中表的数量猛增,让应用对表的维护以及聚合、统计操作难度加大。为降低应用的开发难度,TDengine引入了超级表STable (Super Table)的概念。 + +## 什么是超级表 + +STable是同一类型数据采集点的抽象,是同类型采集实例的集合,包含多张数据结构一样的子表。每个STable为其子表定义了表结构和一组标签:表结构即表中记录的数据列及其数据类型;标签名和数据类型由STable定义,标签值记录着每个子表的静态信息,用以对子表进行分组过滤。子表本质上就是普通的表,由一个时间戳主键和若干个数据列组成,每行记录着具体的数据,数据查询操作与普通表完全相同;但子表与普通表的区别在于每个子表从属于一张超级表,并带有一组由STable定义的标签值。每种类型的采集设备可以定义一个STable。数据模型定义表的每列数据的类型,如温度、压力、电压、电流、GPS实时位置等,而标签信息属于Meta Data,如采集设备的序列号、型号、位置等,是静态的,是表的元数据。用户在创建表(数据采集点)时指定STable(采集类型)外,还可以指定标签的值,也可事后增加或修改。 + +TDengine扩展标准SQL语法用于定义STable,使用关键词tags指定标签信息。语法如下: + +```mysql +CREATE TABLE ( TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …) +``` + +其中tag_name是标签名,tag_type是标签的数据类型。标签可以使用时间戳之外的其他TDengine支持的数据类型,标签的个数最多为6个,名字不能与系统关键词相同,也不能与其他列名相同。如: + +```mysql +create table thermometer (ts timestamp, degree float) +tags (location binary(20), type int) +``` + +上述SQL创建了一个名为thermometer的STable,带有标签location和标签type。 + +为某个采集点创建表时,可以指定其所属的STable以及标签的值,语法如下: + +```mysql +CREATE TABLE USING TAGS (tag_value1,...) 
+``` + +沿用上面温度计的例子,使用超级表thermometer建立单个温度计数据表的语句如下: + +```mysql +create table t1 using thermometer tags (‘beijing’, 10) +``` + +上述SQL以thermometer为模板,创建了名为t1的表,这张表的Schema就是thermometer的Schema,但标签location值为‘beijing’,标签type值为10。 + +用户可以使用一个STable创建数量无上限的具有不同标签的表,从这个意义上理解,STable就是若干具有相同数据模型,不同标签的表的集合。与普通表一样,用户可以创建、删除、查看超级表STable,大部分适用于普通表的查询操作都可运用到STable上,包括各种聚合和投影选择函数。除此之外,可以设置标签的过滤条件,仅对STbale中部分表进行聚合查询,大大简化应用的开发。 + +TDengine对表的主键(时间戳)建立索引,暂时不提供针对数据模型中其他采集量(比如温度、压力值)的索引。每个数据采集点会采集若干数据记录,但每个采集点的标签仅仅是一条记录,因此数据标签在存储上没有冗余,且整体数据规模有限。TDengine将标签数据与采集的动态数据完全分离存储,而且针对STable的标签建立了高性能内存索引结构,为标签提供全方位的快速操作支持。用户可按照需求对其进行增删改查(Create,Retrieve,Update,Delete,CRUD)操作。 + +STable从属于库,一个STable只属于一个库,但一个库可以有一到多个STable, 一个STable可有多个子表。 + +## 超级表管理 + +- 创建超级表 + + ```mysql + CREATE TABLE ( TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …) + ``` + + 与创建表的SQL语法相似。但需指定TAGS字段的名称和类型。 + + 说明: + + 1. TAGS列总长度不能超过512 bytes; + 2. TAGS列的数据类型不能是timestamp和nchar类型; + 3. TAGS列名不能与其他列名相同; + 4. TAGS列名不能为预留关键字. + +- 显示已创建的超级表 + + ```mysql + show stables; + ``` + + 查看数据库内全部STable,及其相关信息,包括STable的名称、创建时间、列数量、标签(TAG)数量、通过该STable建表的数量。 + +- 删除超级表 + + ```mysql + DROP TABLE + ``` + + Note: 删除STable不会级联删除通过STable创建的表;相反删除STable时要求通过该STable创建的表都已经被删除。 + +- 查看属于某STable并满足查询条件的表 + + ```mysql + SELECT TBNAME,[TAG_NAME,…] FROM WHERE <[=|=<|>=|<>] values..> ([AND|OR] …) + ``` + + 查看属于某STable并满足查询条件的表。说明:TBNAME为关键词,显示通过STable建立的子表表名,查询过程中可以使用针对标签的条件。 + + ```mysql + SELECT COUNT(TBNAME) FROM WHERE <[=|=<|>=|<>] values..> ([AND|OR] …) + ``` + + 统计属于某个STable并满足查询条件的子表的数量 + +## 写数据时自动建子表 + +在某些特殊场景中,用户在写数据时并不确定某个设备的表是否存在,此时可使用自动建表语法来实现写入数据时里用超级表定义的表结构自动创建不存在的子表,若该表已存在则不会建立新表。注意:自动建表语句只能自动建立子表而不能建立超级表,这就要求超级表已经被事先定义好。自动建表语法跟insert/import语法非常相似,唯一区别是语句中增加了超级表和标签信息。具体语法如下: + +```mysql +INSERT INTO USING TAGS (, ...) VALUES (field_value, ...) (field_value, ...) 
...; +``` + +向表tb_name中插入一条或多条记录,如果tb_name这张表不存在,则会用超级表stb_name定义的表结构以及用户指定的标签值(即tag1_value…)来创建名为tb_name新表,并将用户指定的值写入表中。如果tb_name已经存在,则建表过程会被忽略,系统也不会检查tb_name的标签是否与用户指定的标签值一致,也即不会更新已存在表的标签。 + +```mysql +INSERT INTO USING TAGS (, ...) VALUES (, ...) (, ...) ... USING TAGS(, ...) VALUES (, ...) ...; +``` + +向多张表tb1_name,tb2_name等插入一条或多条记录,并分别指定各自的超级表进行自动建表。 + +## STable中TAG管理 + +除了更新标签的值的操作是针对子表进行,其他所有的标签操作(添加标签、删除标签等)均只能作用于STable,不能对单个子表操作。对STable添加标签以后,依托于该STable建立的所有表将自动增加了一个标签,对于数值型的标签,新增加的标签的默认值是0. + +- 添加新的标签 + + ```mysql + ALTER TABLE ADD TAG + ``` + + 为STable增加一个新的标签,并指定新标签的类型。标签总数不能超过6个。 + +- 删除标签 + + ```mysql + ALTER TABLE DROP TAG + ``` + + 删除超级表的一个标签,从超级表删除某个标签后,该超级表下的所有子表也会自动删除该标签。 + + 说明:第一列标签不能删除,至少需要为STable保留一个标签。 + +- 修改标签名 + + ```mysql + ALTER TABLE CHANGE TAG + ``` + + 修改超级表的标签名,从超级表修改某个标签名后,该超级表下的所有子表也会自动更新该标签名。 + +- 修改子表的标签值 + + ```mysql + ALTER TABLE SET TAG = + ``` + +## STable多表聚合 + +针对所有的通过STable创建的子表进行多表聚合查询,支持按照全部的TAG值进行条件过滤,并可将结果按照TAGS中的值进行聚合,暂不支持针对binary类型的模糊匹配过滤。语法如下: + +```mysql +SELECT function,… + FROM + WHERE <[=|<=|>=|<>] values..> ([AND|OR] …) + INTERVAL (