Commit 6f541853 authored by dengyihao

[TD-6217]<feature> merge develop

[submodule "src/connector/go"] [submodule "src/connector/go"]
path = src/connector/go path = src/connector/go
url = git@github.com:taosdata/driver-go.git url = https://github.com/taosdata/driver-go.git
[submodule "src/connector/grafanaplugin"] [submodule "src/connector/grafanaplugin"]
path = src/connector/grafanaplugin path = src/connector/grafanaplugin
url = git@github.com:taosdata/grafanaplugin.git url = https://github.com/taosdata/grafanaplugin.git
[submodule "src/connector/hivemq-tdengine-extension"] [submodule "src/connector/hivemq-tdengine-extension"]
path = src/connector/hivemq-tdengine-extension path = src/connector/hivemq-tdengine-extension
url = git@github.com:taosdata/hivemq-tdengine-extension.git url = https://github.com/taosdata/hivemq-tdengine-extension.git
[submodule "tests/examples/rust"] [submodule "tests/examples/rust"]
path = tests/examples/rust path = tests/examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git url = https://github.com/songtianyi/tdengine-rust-bindings.git
......
@@ -235,11 +235,28 @@ pipeline {
npm install td2.0-connector > /dev/null 2>&1
node nodejsChecker.js host=localhost
node test1970.js
cd ${WKC}/tests/connectorTest/nodejsTest/nanosupport
npm install td2.0-connector > /dev/null 2>&1
node nanosecondTest.js
'''
sh '''
cd ${WKC}/src/connector/node-rest/
npm install
npm run build
npm run build:test
npm run test
'''
sh '''
cd ${WKC}/tests/examples/C#/taosdemo
mcs -out:taosdemo *.cs > /dev/null 2>&1
echo '' |./taosdemo -c /etc/taos
cd ${WKC}/tests/connectorTest/C#Test/nanosupport
mcs -out:nano *.cs > /dev/null 2>&1
echo '' |./nano
'''
sh '''
cd ${WKC}/tests/gotest
@@ -264,12 +281,12 @@ pipeline {
'''
}
timeout(time: 60, unit: 'MINUTES'){
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_crash_gen_val_log.sh
'''
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
......
@@ -7,6 +7,7 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)

Simplified Chinese | [English](./README.md)

We are hiring; see the open positions [here](https://www.taosdata.com/cn/careers/)

# Introduction to TDengine
@@ -107,6 +108,12 @@ The Go connector and Grafana plugin are kept in separate repositories; to install them,
git submodule update --init --recursive
```

If downloading over the https protocol is slow, you can add the following two lines to ~/.gitconfig to download over the ssh protocol instead. You need to upload your ssh key to GitHub first; see the official GitHub documentation for the detailed steps.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Build TDengine

### On Linux platform
......
@@ -7,6 +7,7 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)

English | [简体中文](./README-CN.md)

We are hiring; check [here](https://www.taosdata.com/en/careers/)

# What is TDengine?
@@ -101,6 +102,12 @@ so you should run this command in the TDengine directory to install them:
git submodule update --init --recursive
```

You can modify the file ~/.gitconfig to use the ssh protocol instead of https for faster downloads. You need to upload your ssh public key to GitHub first; please refer to the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Build TDengine

### On Linux platform
@@ -195,6 +202,19 @@ taos
If the TDengine shell connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.

## Install TDengine by apt-get

If you use a Debian or Ubuntu system, you can use the 'apt-get' command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[Optional] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-get policy tdengine
sudo apt-get install tdengine
```
## Quick Run

If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal: (We take Linux as an example; the command on Windows will be `taosd.exe`)
......
@@ -45,6 +45,10 @@ IF (TD_TQ)
  ADD_DEFINITIONS(-D_TD_TQ_)
ENDIF ()
IF (TD_PRO)
ADD_DEFINITIONS(-D_TD_PRO_)
ENDIF ()
IF (TD_MEM_CHECK)
  ADD_DEFINITIONS(-DTAOS_MEM_CHECK)
ENDIF ()
@@ -120,6 +124,10 @@ IF (TD_APLHINE)
  MESSAGE(STATUS "aplhine is defined")
ENDIF ()
IF (TD_BUILD_HTTP)
ADD_DEFINITIONS(-DHTTP_EMBEDDED)
ENDIF ()
IF (TD_LINUX)
  ADD_DEFINITIONS(-DLINUX)
  ADD_DEFINITIONS(-D_LINUX)
@@ -133,8 +141,10 @@ IF (TD_LINUX)
  IF (TD_MEMORY_SANITIZER)
    SET(DEBUG_FLAGS "-fsanitize=address -fsanitize=undefined -fno-sanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=null -fno-sanitize=alignment -static-libasan -O0 -g3 -DDEBUG")
MESSAGE(STATUS "memory sanitizer detected as true")
  ELSE ()
    SET(DEBUG_FLAGS "-O0 -g3 -DDEBUG")
MESSAGE(STATUS "memory sanitizer detected as false")
  ENDIF ()
  SET(RELEASE_FLAGS "-O3 -Wno-error")
......
@@ -49,6 +49,9 @@ IF (${DBNAME} MATCHES "power")
ELSEIF (${DBNAME} MATCHES "tq")
  SET(TD_TQ TRUE)
  MESSAGE(STATUS "tq is true")
ELSEIF (${DBNAME} MATCHES "pro")
SET(TD_PRO TRUE)
MESSAGE(STATUS "pro is true")
ENDIF ()
IF (${DLLTYPE} MATCHES "go")
@@ -87,6 +90,12 @@ IF (${BUILD_JDBC} MATCHES "false")
  SET(TD_BUILD_JDBC FALSE)
ENDIF ()
SET(TD_BUILD_HTTP TRUE)
IF (${BUILD_HTTP} MATCHES "false")
SET(TD_BUILD_HTTP FALSE)
ENDIF ()
SET(TD_MEMORY_SANITIZER FALSE)
IF (${MEMORY_SANITIZER} MATCHES "true")
  SET(TD_MEMORY_SANITIZER TRUE)
......
@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
  #INSTALL(TARGETS taos RUNTIME DESTINATION driver)
  #INSTALL(TARGETS shell RUNTIME DESTINATION .)
  IF (TD_MVN_INSTALLED)
    INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.35-dist.jar DESTINATION connector/jdbc)
  ENDIF ()
ELSEIF (TD_DARWIN)
  SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
......
@@ -86,7 +86,7 @@ ENDIF ()
MESSAGE(STATUS "============= compile version parameter information start ============= ")
MESSAGE(STATUS "ver number:" ${TD_VER_NUMBER})
MESSAGE(STATUS "compatible ver number:" ${TD_VER_COMPATIBLE})
MESSAGE(STATUS "community commit id:" ${TD_VER_GIT})
MESSAGE(STATUS "internal commit id:" ${TD_VER_GIT_INTERNAL})
MESSAGE(STATUS "build date:" ${TD_VER_DATE})
MESSAGE(STATUS "ver type:" ${TD_VER_VERTYPE})
......
@@ -40,17 +40,19 @@ TDengine is an efficient platform for storing, querying, and analyzing big time-series data, designed
* [STable management](/taos-sql#super-table): add, delete, show, and alter STables
* [Tag management](/taos-sql#tags): add, delete, and modify tags
* [Data writing](/taos-sql#insert): single-record and multi-record writes into one or more tables, including historical data
* [Data querying](/taos-sql#select): time-range and value filtering, sorting, nested queries, UNION, JOIN, manual pagination of query results, and more
* [SQL functions](/taos-sql#functions): aggregate, selection, and computation functions such as avg, min, and diff
* [Window-sliced aggregation](/taos-sql#aggregation): slice table data by time window and other criteria, then aggregate, for dimensionality reduction
* [Boundary limits](/taos-sql#limitation): boundary limits on databases, tables, SQL, and more
* [UDF](/taos-sql/udf): how to create and manage user-defined functions
* [Error codes](/taos-sql/error-code): TDengine 2.0 error codes and their decimal values

## [Efficient data writing](/insert)

* [SQL writing](/insert#sql): insert one or more records into one or more tables with the SQL insert command
* [Schemaless writing](/insert#schemaless): no need to create tables in advance; metadata structures are maintained automatically as data is written
* [Prometheus writing](/insert#prometheus): configure Prometheus to write data directly, without any code
* [Telegraf writing](/insert#telegraf): configure Telegraf to write collected data directly, without any code
* [EMQ X Broker](/insert#emq): configure EMQ X to write MQTT data directly, without any code
* [HiveMQ Broker](/insert#hivemq): configure HiveMQ to write MQTT data directly, without any code
......
@@ -2,28 +2,27 @@
## <a class="anchor" id="intro"></a>Introduction to TDengine

TDengine is an innovative big data processing product launched by TAOS Data to meet the fast-growing IoT big data market and its technical challenges. It does not depend on any third-party software, nor is it an optimized or repackaged open-source database or stream computing product. Instead, it was developed independently after absorbing the strengths of many traditional relational databases, NoSQL databases, stream computing engines, and message queues. TDengine has its own unique advantages in processing time-series big data.

One module of TDengine is a time-series database. Beyond that, to reduce development complexity and maintenance difficulty, TDengine also provides caching, message queuing, subscription, stream computing, and other functions, offering a full-stack technical solution for processing IoT and Industrial Internet big data. It is an efficient, easy-to-use IoT big data platform. Compared with typical big data platforms such as Hadoop, TDengine has the following distinctive features:

* __More than 10x performance improvement__: with an innovative data storage structure, a single core can process at least 20,000 requests per second, insert millions of data points, and read out more than 10 million data points, over 10 times faster than existing general-purpose databases.
* __Hardware or cloud service cost reduced to 1/5__: thanks to its superior performance, the computing resources needed are less than 1/5 of a general big data solution; through columnar storage and advanced compression algorithms, storage consumption is less than 1/10 of a general database.
* __Full-stack time-series data processing engine__: database, message queue, cache, and stream computing are integrated into one product, so applications no longer need to integrate software such as Kafka/Redis/HBase/Spark/HDFS, which greatly reduces the complexity and cost of application development and maintenance.
* __Powerful analysis functions__: whether the data is from ten years ago or one second ago, it can be queried simply by specifying a time range. Data can be aggregated along the time axis or across multiple devices. Ad-hoc queries can be run at any time via Shell, Python, R, or MATLAB.
* __High availability and horizontal scalability__: through its distributed architecture and consistency algorithm, and via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support mission-critical applications.
* __Zero operation cost, zero learning cost__: installing a cluster is quick and simple, with no need for database or table sharding, and with real-time backup. The SQL-like syntax supports RESTful and Python/Java/C/C++/C#/Go/Node.js, and resembles MySQL, so there is zero learning cost.
* __Core is open source__: except for some auxiliary features, the core of TDengine is open source. Enterprises are no longer locked into a database, which makes the ecosystem stronger, the product more stable, and the developer community more active.

By adopting TDengine, the total cost of ownership of a typical IoT, connected-vehicle, or Industrial Internet big data platform can be greatly reduced. Note, however, that because it takes full advantage of the characteristics of IoT time-series data, TDengine cannot be used to process general-purpose data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and the like.

![TDengine technology ecosystem](../images/eco_system.png)
<center>Figure 1. TDengine technology ecosystem</center>

## <a class="anchor" id="scenes"></a>Overall applicable scenarios of TDengine

As an IoT big data platform, TDengine's typical applicable scenarios are in the IoT category, with a certain amount of data. The rest of this document mainly targets systems in this category; systems outside of it, such as CRM or ERP, are beyond the scope of this document.

### Characteristics and requirements of data sources

From the data source perspective, designers can evaluate the applicability of TDengine in a target system from the following aspects.
@@ -64,4 +63,3 @@ One module of TDengine is a time-series database. Beyond that, to reduce development
|Require reliable system operation| | | √ | TDengine's system architecture is very stable and reliable, daily maintenance is simple and convenient, and the requirements on maintainers are clear and minimal, eliminating human errors and accidents to the greatest extent.|
|Require controllable operation and learning cost| | | √ |Same as above.|
|Require a large talent pool in the market| √ | | | As a new-generation product, experienced TDengine engineers are still limited in the market. However, the learning cost is low, and as the vendor we also provide operation training and support services.|
@@ -22,6 +22,18 @@ Installing TDengine is very simple, taking only a few seconds from download to successful installation
For the specific installation process, please see [Installation and uninstallation of various TDengine packages](https://www.taosdata.com/blog/2019/08/09/566.html) and the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1941.html)

### Installing with apt-get

If you use a Debian or Ubuntu system, you can also install TDengine from the official repository with apt-get. Set it up as follows:

```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[ Optional: beta repository ] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-get policy tdengine
sudo apt-get install tdengine
```
<a class="anchor" id="start"></a>
## Easy to Launch
......
@@ -6,7 +6,7 @@
taosd contains modules such as rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, and monitor, as shown in the figure below:

![modules.png](../../images/architecture/modules.png)

The entry point of taosd is the dnode module, which then starts the other modules, including the optionally configured http and monitor modules. Messages exchanged between taosc and dnodes all go through the rpc module. According to the type of a received message, the dnode module dispatches it to the message queue of a vnode or of the mnode, or consumes it itself. The dnode's worker threads consume the messages in the queues and hand them to the mnode or a vnode for processing. The individual modules are briefly described below.
@@ -41,13 +41,13 @@ The RPC module also provides data compression: if a packet's byte count exceeds the system
Message consumption in taosd is controlled by the dnode through read/write thread pools; it is the hub of the system. The structure diagram inside this module is as follows:

![dnode.png](../../images/architecture/dnode.png)

## VNODE module

A vnode is an independent unit of data storage and query logic. Because a vnode can hold only one DB, there is no concept of account, DB, or user inside a vnode. To achieve better modularity and encapsulation and to allow future extension, it has many sub-modules: TSDB for storage, Query for queries, sync for data replication, wal for database logging, cq (continuous query) for continuous queries, and event for event-triggered stream computing. These sub-modules interact only with the vnode module and have no calling relationship with any other module. The module diagram is as follows:

![vnode.png](../../images/architecture/vnode.png)

Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its sub-modules. Its main functions are:
......
@@ -90,7 +90,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus
The detailed flow chart is as follows:

![replica-master.png](../../images/architecture/replica-master.png)

The specific rules for electing a Master are as follows:
@@ -105,7 +105,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus
If vnode A is the master and vnode B is a slave, vnode A can accept write requests from clients while vnode B cannot. When vnode A receives a write request, it follows the flow below:

![replica-forward.png](../../images/architecture/replica-forward.png)

1. The application performs a basic validity check on the write request; if it passes, the request packet is stamped with a version number (version, monotonically increasing)
2. The application wraps the versioned write request in a WAL Head and writes it to the WAL (Write Ahead Log)
@@ -140,7 +140,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus
The whole data recovery flow consists of two major steps: first recover the archived data (files), then recover the wal. The detailed flow is as follows:

![replica-restore.png](../../images/architecture/replica-restore.png)

1. Send a sync req to the master node over the already-established TCP connection
2. After receiving the sync req, the master, acting as a client, actively establishes a new TCP connection (syncFd) to vnode B dedicated to synchronization
......
@@ -156,7 +156,7 @@ TDengine is designed on the assumption that no single hardware or software system is reliable and that no
The logical structure diagram of TDengine's distributed architecture is as follows:

![TDengine architecture diagram](../images/architecture/structure.png)
<center> Figure 1: TDengine architecture diagram </center>

A complete TDengine system runs on one or more physical nodes. Logically, it comprises data nodes (dnode), the TDengine application driver (taosc), and applications (app). The system contains one or more data nodes, which form a cluster. Applications interact with the TDengine cluster through taosc's API. Each logical unit is briefly introduced below.
@@ -207,7 +207,7 @@ The logical structure diagram of TDengine's distributed architecture is as follows:
To explain the relationship among vnode, mnode, taosc, and applications, and the roles they each play, the following analyzes the workflow of a typical operation: writing data.

![Typical TDengine operation flow](../images/architecture/message.png)
<center> Figure 2: Typical TDengine operation flow </center>

1. The application initiates a request to insert data through JDBC, ODBC, or another API interface.
@@ -250,7 +250,7 @@ A vnode (virtual data node) is responsible for providing writing, querying, and
When a DB is created, the system does not allocate resources immediately. When a table is created, the system checks whether there is an already-allocated vnode with spare table space; if so, the table is created in that vnode immediately. If not, the system creates a new vnode on one dnode, chosen from the cluster according to the current load, and then creates the table. If a DB has multiple replicas, the system creates not a single vnode but a vgroup (virtual data node group). The system imposes no limit on the number of vnodes; it is limited only by the compute and storage resources of the physical node.

The meta data of each table (including schema, tags, and so on) is also stored in the vnode rather than centrally in the mnode. This is in fact sharding of the meta data, which enables efficient parallel tag filtering operations.

### Data partitioning
@@ -278,7 +278,7 @@ Besides vnode sharding, TDengine also partitions time-series data by time range.
A master vnode follows the write flow below:

![TDengine master write flow](../images/architecture/write_master.png)
<center> Figure 3: TDengine master write flow </center>

1. The master vnode receives the application's data insertion request, verifies it, and moves on to the next step;
@@ -292,7 +292,7 @@ A master vnode follows the write flow below:
For a slave vnode, the write flow is:

![TDengine slave write flow](../images/architecture/write_slave.png)
<center> Figure 4: TDengine slave write flow </center>

1. The slave vnode receives the data insertion request forwarded by the master vnode and checks whether its last version matches the master's; if so, it moves on to the next step. If not, it needs to enter the sync state.
@@ -434,7 +434,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine creates a separate table for each data collection point, but in practice data from different collection points often needs to be aggregated. To perform aggregation efficiently, TDengine introduces the STable (super table) concept. An STable represents a specific type of data collection point: it is a collection of tables whose schemas are completely identical, while each table carries its own static tags. There can be multiple tags, and tags can be added, deleted, or modified at any time. By specifying tag filter conditions, an application can run aggregations or statistics over all or a subset of the tables under one STable, which greatly simplifies application development. The process is shown in the figure below:

![Multi-table aggregation query diagram](../images/architecture/multi_tables.png)
<center> Figure 5: Multi-table aggregation query diagram </center>

1. The application sends a query condition to the system;
......
@@ -43,7 +43,7 @@ CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAG
Each type of data collection point needs its own STable, so an IoT system often has multiple STables. For a power grid, STables would be created for smart meters, transformers, busbars, switches, and so on. In IoT, one device may host multiple data collection points (for example, on a wind turbine, one collection point captures electrical parameters such as current and voltage, while another captures environmental parameters such as temperature, humidity, and wind direction); in that case, multiple STables are needed for that type of device. All the physical quantities contained in one STable must be collected at the same time (with identical timestamps).

An STable allows at most 1024 columns. If a collection point captures more than 1024 physical quantities, multiple STables are needed. A system can have multiple DBs, and one DB can contain one or more STables. (From version 2.1.7.0, the column limit is raised from 1024 to 4096.)

## <a class="anchor" id="create-table"></a>Creating tables
......
@@ -2,9 +2,9 @@
TDengine supports writing data through multiple interfaces, including SQL, Prometheus, Telegraf, EMQ MQTT Broker, HiveMQ Broker, and CSV files, with Kafka, OPC, and other interfaces to follow. Data can be inserted one record at a time or in batches, for one data collection point or for multiple data collection points at once. Multi-threaded insertion, insertion of data with out-of-order timestamps, and insertion of historical data are all supported.

## <a class="anchor" id="sql"></a>SQL writing

Applications insert data by executing SQL insert statements through the C/C++, Java, Go, C#, Python, or Node.js connectors; users can also enter SQL insert statements manually through the TAOS Shell. For example, the following insert writes one record into table d1001:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
```
@@ -27,11 +27,74 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
- For the same table, if a newly inserted record carries a timestamp that already exists, by default (UPDATE=0) the new record is discarded; that is, timestamps must be unique within a table. If the application generates records automatically, duplicate timestamps are likely, so the number of successfully inserted records may be smaller than the number the application tried to insert. If the database was created with the UPDATE 1 option, a new record with a duplicate timestamp overwrites the existing record.
- The timestamp of written data must be later than the current time minus the keep parameter. If keep is configured as 3650 days, data older than 3650 days cannot be written. The timestamp also cannot be later than the current time plus the days parameter. If days is 2, data more than 2 days in the future cannot be written.

## <a class="anchor" id="schemaless"></a>Schemaless writing

In IoT applications, many data items are often collected for intelligent control, business analytics, device monitoring, and so on. Because of version upgrades of the application logic or hardware changes to the devices themselves, the collected data items may change rather frequently. To make recording data convenient in such situations, TDengine provides Schemaless writing starting from version 2.2.0.0: instead of pre-creating STables and data sub-tables, the matching storage structures are created automatically as data is written, and when necessary Schemaless automatically adds the required data columns so that the data written by the user is stored correctly. Currently, TDengine's C/C++ Connector provides the API supporting Schemaless; see the [Schemaless writing API](https://www.taosdata.com/cn/documentation/connector#schemaless) section for details. The Schemaless data format is described below.
### Schemaless data-row protocol

Schemaless uses a single string to express one stored data row (multiple strings can be passed to the Schemaless write API in one call to batch-write multiple rows). The format is as follows:

```json
measurement,tag_set field_set timestamp
```

Where:

* measurement is used as the table name. It is separated from tag_set by a comma.
* tag_set is the tag data, in the form `<tag_key>=<tag_value>,<tag_key>=<tag_value>`; multiple tags are separated by commas. It is separated from field_set by a space.
* field_set is the regular column data, in the form `<field_key>=<field_value>,<field_key>=<field_value>`; again multiple columns are separated by commas. It is separated from timestamp by a space.
* timestamp is the primary-key timestamp of this row.

In the Schemaless row protocol, every data item in tag_set and field_set must describe its own data type. Specifically:

* A value enclosed in double quotes is of type BINARY(32), for example `"abc"`.
* A value enclosed in double quotes and prefixed with L is of type NCHAR(32), for example `L"报错信息"`.
* A space, equals sign (=), comma (,), or double quote (") must be escaped with a preceding backslash (\). (All of these refer to ASCII half-width characters.)
* Numeric values are typed by suffix:
  - no suffix: FLOAT;
  - suffix f32: FLOAT;
  - suffix f64: DOUBLE;
  - suffix i8: TINYINT (INT8);
  - suffix i16: SMALLINT (INT16);
  - suffix i32: INT (INT32);
  - suffix i64: BIGINT (INT64);
* t, T, true, True, TRUE, f, F, false, and False are treated directly as BOOL.

The timestamp declares its precision via a suffix, as follows:

* a long integer without any suffix is treated as microseconds;
* suffix s: a timestamp in seconds;
* suffix ms: a timestamp in milliseconds;
* suffix us: a timestamp in microseconds;
* suffix ns: a timestamp in nanoseconds;
* a timestamp of 0 means the client's current time (therefore, within one submitted batch, timestamp 0 is interpreted as the same moment, which may lead to duplicate timestamps).

For example, the following Schemaless data row writes, into the data sub-table of STable st whose tag t1 is 3 (BIGINT), tag t2 is 4 (DOUBLE), and tag t3 is "t3" (BINARY), one row whose column c1 is 3 (BIGINT), c2 is false (BOOL), c3 is "passit" (NCHAR), c4 is 4 (DOUBLE), and whose primary-key timestamp is 1626006833639000000 (nanosecond precision).

```json
st,t1=3i64,t2=4f64,t3="t3" c1=3i64,c3=L"passit",c2=false,c4=4f64 1626006833639000000ns
```

Note that an incorrectly cased type suffix, or a wrong data type specified for a value, may trigger an error message and cause the write to fail.

### Schemaless processing logic

Schemaless processes row data according to the following rules:

1. If tag_set contains an ID field, its value is used as the sub-table name.
2. Without an ID field, the md5 hash of `measurement + tag_value1 + tag_value2 + ...` is used as the sub-table name.
3. If the specified STable does not exist, Schemaless creates it.
4. If the specified data sub-table does not exist, Schemaless creates it using the sub-table name determined by rule 1 or 2.
5. If a tag column or regular column in the data row does not exist yet, Schemaless adds the corresponding column to the STable (columns are only added, never removed).
6. If some tag columns or regular columns of the STable are not given a value in a data row, their values are set to NULL in that row.
7. For a BINARY or NCHAR column, if a value's length exceeds the column type's limit, Schemaless raises the column's maximum character length (only raised, never lowered) to guarantee the data is stored intact.
8. If the specified data sub-table already exists and a tag value given this time differs from the stored value, the value in the latest data row overwrites the old tag value.
9. Any error encountered during processing aborts the write and returns an error code.

**Note:** all of Schemaless's processing logic still respects TDengine's underlying limits on data structures; for example, the total length of one data row cannot exceed 16k bytes. See the [TAOS SQL boundary limits](https://www.taosdata.com/cn/documentation/taos-sql#limitation) section for these constraints.

String encoding and timezone settings for Schemaless follow those of the TAOSC client.
## <a class="anchor" id="prometheus"></a>Direct writing from Prometheus

[Prometheus](https://www.prometheus.io/), a graduated project of the Cloud Native Computing Foundation, is very widely used for performance monitoring and K8S performance monitoring. TDengine provides a small tool, [Bailongma](https://github.com/taosdata/Bailongma): with simple Prometheus configuration and no code at all, data collected by Prometheus can be written directly into TDengine, with databases and related tables created automatically by rule. The blog post [Quickly build a DevOps monitoring demo with Docker containers](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine; it can serve as a reference.

### Build blm_prometheus from source

Download the [Bailongma](https://github.com/taosdata/Bailongma) source from GitHub and compile it into an executable with the Go compiler. Before building, prepare the following:

- A server running the Linux operating system
@@ -46,11 +109,11 @@ go build
If everything goes well, an executable named blm_prometheus is produced in the corresponding directory.

### Install Prometheus

Download and install from the Prometheus official site. See the [download page](https://prometheus.io/download/)

### Configure Prometheus

Following the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/), add the following to the <remote_write> section of the Prometheus configuration file:
@@ -60,7 +123,8 @@ go build
After starting Prometheus, you can query through the taos client to confirm that data is being written successfully.

### Start the blm_prometheus program

blm_prometheus supports the following options, which can be set when starting blm_prometheus to configure it.
```bash
--tdengine-name
@@ -94,7 +158,8 @@ remote_write:
- url: "http://10.1.2.3:8088/receive"
```

### Query data written by prometheus

The data produced by prometheus has the following format:
```json
{
@@ -105,10 +170,10 @@ The data produced by prometheus has the following format:
instance="192.168.99.116:8443",
job="kubernetes-apiservers",
le="125000",
resource="persistentvolumes",
scope="cluster",
verb="LIST",
version="v1"
}
}
```
@@ -118,11 +183,11 @@ use prometheus;
select * from apiserver_request_latencies_bucket;
```

## <a class="anchor" id="telegraf"></a>Direct writing from Telegraf

[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a popular open-source tool for collecting IT operations data. TDengine provides a small tool, [Bailongma](https://github.com/taosdata/Bailongma): with simple Telegraf configuration and no code at all, data collected by Telegraf can be written directly into TDengine, with databases and related tables created automatically by rule. The blog post [Quickly build a DevOps monitoring demo with Docker containers](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using bailongma to write Prometheus and Telegraf data into TDengine; it can serve as a reference.

### Build blm_telegraf from source

Download the [Bailongma](https://github.com/taosdata/Bailongma) source from GitHub and compile it into an executable with the Go compiler. Before building, prepare the following:
@@ -139,11 +204,11 @@ go build
If everything goes well, an executable named blm_telegraf is produced in the corresponding directory.

### Install Telegraf

TDengine currently supports Telegraf 1.7.4 and above. Download the installation package for your operating system from the Telegraf official site and install it. Download link: https://portal.influxdata.com/downloads .

### Configure Telegraf

Modify the TDengine-related configuration items in the Telegraf configuration file /etc/telegraf/telegraf.conf.
@@ -160,7 +225,8 @@ go build
For how to collect data with Telegraf and more information about using Telegraf, see the official Telegraf [documentation](https://docs.influxdata.com/telegraf/v1.11/)

### Start the blm_telegraf program

blm_telegraf supports the following options, which can be set when starting blm_telegraf to configure it.
```bash
@@ -196,7 +262,7 @@ the port on which blm_telegraf serves telegraf.
url = "http://10.1.2.3:8089/telegraf"
```

### Query data written by telegraf

The data produced by telegraf has the following format:
```json
......
@@ -3,7 +3,7 @@
## <a class="anchor" id="queries"></a>Main query features

TDengine uses SQL as its query language. Applications can send SQL statements through the C/C++, Java, Go, C#, Python, or Node.js connectors, and users can run ad-hoc SQL queries manually through the TAOS Shell, the command line interface (CLI) tool provided by TDengine. TDengine supports the following query features:

- Queries on single or multiple columns
- Multiple filter conditions on tags and values: >, <, =, <>, like, and so on
......
@@ -4,7 +4,7 @@
`taos-jdbcdriver` is implemented in two forms: JDBC-JNI and JDBC-RESTful (JDBC-RESTful is supported from taos-jdbcdriver-2.0.18). JDBC-JNI works by calling native methods of the client library libtaos.so (or taos.dll), while JDBC-RESTful internally encapsulates the RESTful interface.

![tdengine-connector](../../images/tdengine-jdbc-connector.png)

The figure above shows three ways Java applications can access TDengine through the connector:
@@ -68,18 +68,18 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
TDengine currently supports timestamp, numeric, character, and boolean types, which map to Java types as follows:

| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
| ----------------- | ---------------------------------- | ----------------------------------- |
| TIMESTAMP         | java.lang.Long                     | java.sql.Timestamp                  |
| INT               | java.lang.Integer                  | java.lang.Integer                   |
| BIGINT            | java.lang.Long                     | java.lang.Long                      |
| FLOAT             | java.lang.Float                    | java.lang.Float                     |
| DOUBLE            | java.lang.Double                   | java.lang.Double                    |
| SMALLINT          | java.lang.Short                    | java.lang.Short                     |
| TINYINT           | java.lang.Byte                     | java.lang.Byte                      |
| BOOL              | java.lang.Boolean                  | java.lang.Boolean                   |
| BINARY            | java.lang.String                   | byte array                          |
| NCHAR             | java.lang.String                   | java.lang.String                    |

## Install the Java Connector
......
@@ -2,7 +2,7 @@
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to facilitate rapid application development.

![image-connector](../images/connector.png)

TDengine connectors currently support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32 development environments. The compatibility matrix is as follows:
@@ -64,7 +64,9 @@ TDengine provides a rich set of application development interfaces, including C/C++, Java,
Edit the taos.cfg file (default path /etc/taos/taos.cfg) and set firstEP to the End Point of the TDengine server, for example: h1.taos.com:6030

**Tips:**
1. **If no TDengine service is deployed on this machine and only the application driver is installed, only firstEP needs to be configured in taos.cfg; there is no need to configure the FQDN.**
2. **To prevent the "unable to resolve FQDN" error when connecting to the server, make sure the client's hosts file is configured with the correct FQDN entry.**

**Windows x64/x86**
@@ -96,7 +98,7 @@ TDengine provides a rich set of application development interfaces, including C/C++, Java,
**Tips:**
1. **If you connect to the server via FQDN, make sure DNS is configured correctly in the local network environment, or add an FQDN entry to the hosts file, e.g. edit C:\Windows\system32\drivers\etc\hosts and add a record such as: `192.168.1.99 h1.taos.com` **
2. **Uninstall: run unins000.exe to uninstall the TDengine application driver.**

### Installation verification
@@ -309,7 +311,7 @@ TDengine's asynchronous APIs all use a non-blocking calling mode. Applications can use multiple threads
<a class="anchor" id="stmt"></a>
### Parameter binding API

In addition to calling `taos_query` directly for queries, TDengine also provides a Prepare API supporting parameter binding. Like MySQL, these APIs currently only support using a question mark `?` to represent the parameter to bind. The documentation sometimes also refers to this feature as "native interface writing".

Starting from versions 2.1.1.0 and 2.1.2.0, TDengine greatly improved the parameter binding interface's support for the data writing (INSERT) scenario. Writing data through the parameter binding interface avoids the resource cost of SQL syntax parsing and thus significantly improves write performance in most cases. The typical steps are as follows (a minimal sketch of the whole flow appears at the end of this section):

1. Call `taos_stmt_init` to create a parameter binding object;
@@ -400,6 +402,25 @@ typedef struct TAOS_MULTI_BIND {
(added in version 2.1.3.0)
Used to obtain error information when another stmt API returns an error (an error code or a null pointer).
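Assembling the steps above, here is a minimal sketch of the prepare/bind/execute flow. It assumes the TAOS_STMT calls of TDengine 2.x's C API (taos.h); the table t1 and its (ts TIMESTAMP, current FLOAT) layout are hypothetical, and error handling is reduced to a single check for brevity:

```c
#include <taos.h>
#include <stdio.h>
#include <string.h>

int insert_with_stmt(TAOS* taos) {
  TAOS_STMT* stmt = taos_stmt_init(taos);
  if (taos_stmt_prepare(stmt, "INSERT INTO t1 VALUES (?, ?)", 0) != 0) {
    fprintf(stderr, "prepare failed: %s\n", taos_stmt_errstr(stmt));
    taos_stmt_close(stmt);
    return -1;
  }

  int64_t ts = 1626006833000;          /* millisecond-precision timestamp */
  float   current = 10.3f;

  TAOS_BIND params[2];
  memset(params, 0, sizeof(params));   /* zeroed is_null/length: plain non-NULL values */
  params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
  params[0].buffer        = &ts;
  params[0].buffer_length = sizeof(ts);
  params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
  params[1].buffer        = &current;
  params[1].buffer_length = sizeof(current);

  taos_stmt_bind_param(stmt, params);  /* bind one row's values to the two '?' */
  taos_stmt_add_batch(stmt);           /* queue the bound row */
  int rc = taos_stmt_execute(stmt);    /* run the batched insert */
  taos_stmt_close(stmt);
  return rc;
}
```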
<a class="anchor" id="schemaless"></a>
### Schemaless writing API

Besides writing data with SQL or with the parameter binding API, writes can also be performed in a Schemaless manner. Schemaless removes the need to pre-create the data structures of STables and data sub-tables; data can be written directly, and the TDengine system automatically creates and maintains the required table structures according to the written content. See the [Schemaless writing](https://www.taosdata.com/cn/documentation/insert#schemaless) section for how Schemaless is used; this section introduces the C/C++ API that accompanies it.

- `int taos_insert_lines(TAOS* taos, char* lines[], int numLines)`

(added in version 2.2.0.0)

Writes multiple rows of data in the Schemaless format, where:
  * taos: the database connection returned by taos_connect.
  * lines: an array of char string pointers, pointing to the rows of data to be written in this batch.
  * numLines: the total number of rows in lines.

A return value of 0 means the write succeeded; a non-zero value indicates an error. See the [taoserror.h](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h) file for the specific error codes.

Notes:
1. This API is synchronous and blocking; it is used in the same situations as `taos_query()`.
2. Before calling this API, `taos_select_db()` must be called to determine which DB the data is currently being written into (see the sketch below).
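A rough usage illustration, where the host, credentials, and database name are placeholders and the line reuses the example row from the Schemaless protocol section:

```c
#include <taos.h>
#include <stdio.h>

int main(void) {
  TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (taos == NULL) return 1;
  taos_select_db(taos, "test");   /* the target DB must be selected first */

  char* lines[] = {
    "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000ns"
  };
  int code = taos_insert_lines(taos, lines, 1);
  if (code != 0) {
    fprintf(stderr, "schemaless insert failed, code: %d\n", code);
  }
  taos_close(taos);
  return code;
}
```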
### Continuous query API

TDengine provides time-driven real-time stream computing APIs. At a specified time interval, they perform real-time aggregate computations over one or more tables (data streams) in the database. The usage is simple: there are only APIs to open and to close a stream. Details are as follows:
@@ -754,7 +775,7 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
- data: the actual returned data, presented row by row; if no result set is returned, only [[affected_rows]] is present. The order of the data columns in each row of data is exactly the same as the column order described in column_meta.
- rows: the total number of rows.

<a class="anchor" id="column_meta"></a>Column types in column_meta:
* 1: BOOL
* 2: TINYINT
* 3: SMALLINT
@@ -1144,7 +1165,7 @@ var affectRows = cursor.execute('insert into test.weather values(now, 22.3, 34);
The return value of the execute method is the number of rows affected by the statement. The SQL above inserts one record into the weather table of the test database, so affectRows is 1.

TDengine does not yet support the delete statement. However, starting from version 2.0.8.0, updates to data rows can be enabled through the UPDATE parameter specified at `CREATE DATABASE` time.

#### Query
......
@@ -32,15 +32,15 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Users can log in to the Grafana server directly at localhost:3000 (username/password: admin/admin) and add a data source through `Configuration -> Data Sources` on the left, as shown below:

![img](../images/connections/add_datasource1.jpg)

Click `Add data source` to enter the new data source page, type TDengine in the search box and select it, as shown below:

![img](../images/connections/add_datasource2.jpg)

On the data source configuration page, modify the settings following the default prompts:

![img](../images/connections/add_datasource3.jpg)

* Host: the IP address of any server in the TDengine cluster plus the TDengine RESTful port (6041), default http://localhost:6041 .
* User: the TDengine user name.
@@ -48,13 +48,13 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Click `Save & Test`; on success you will see the following prompt:

![img](../images/connections/add_datasource4.jpg)

#### Create a Dashboard

Back on the main page, create a Dashboard and click Add Query to enter the panel query page:

![img](../images/connections/create_dashboard1.jpg)

As shown above, select the `TDengine` data source in Query and enter the SQL to run in the query box below, where:
@@ -65,7 +65,7 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Following the default prompt, querying the average system memory usage over the specified interval on the server where TDengine is deployed looks like this:

![img](../images/connections/create_dashboard2.jpg)

> For how to build monitoring dashboards with Grafana and more information about using Grafana, see the official Grafana [documentation](https://grafana.com/docs/).
@@ -75,11 +75,11 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Click the `Import` button on the left and upload the `tdengine-grafana.json` file:

![img](../images/connections/import_dashboard1.jpg)

After the import completes, you will see the following:

![img](../images/connections/import_dashboard2.jpg)

## <a class="anchor" id="matlab"></a>MATLAB
......
@@ -119,9 +119,14 @@ taos>
The above describes how to build a cluster from scratch. Once the cluster is in place, new data nodes can be added at any time for scaling out, data nodes can be removed, and the current state of the cluster can be checked.

**Tips:**

- All of the following commands require logging into the TDengine system first; use root privileges when necessary.

### Adding a data node

Run the CLI program taos and execute:
```
CREATE DNODE "fqdn:port";
@@ -131,7 +136,7 @@ CREATE DNODE "fqdn:port";
### Removing a data node

Run the CLI program taos and execute:
```mysql
DROP DNODE "fqdn:port | dnodeID";
@@ -153,7 +158,7 @@ DROP DNODE "fqdn:port | dnodeID";
Manually migrate a vnode to a specified dnode.

Run the CLI program taos and execute:
```mysql
ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
@@ -169,7 +174,7 @@ ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
### Showing data nodes

Run the CLI program taos and execute:
```mysql
SHOW DNODES;
```
@@ -180,8 +185,9 @@ SHOW DNODES;
To make full use of multi-core hardware and to provide scalability, data needs to be sharded. TDengine therefore splits the data of one DB into multiple parts stored in multiple vnodes. These vnodes may be distributed across multiple data nodes (dnodes), thereby achieving horizontal scaling. A vnode belongs to exactly one DB, but one DB can have multiple vnodes. vnodes are allocated automatically by the mnode according to the current system resources, without any manual intervention.

Run the CLI program taos and execute:
```mysql
USE SOME_DATABASE;
SHOW VGROUPS;
```
......
@@ -216,7 +216,7 @@ taosd -C
| 98 | maxBinaryDisplayWidth | | **C** | | Upper limit of the display width of binary and nchar fields in the taos shell; parts beyond this limit are hidden | 5 - | 30 | The actual limit is computed as follows: if a field value is longer than maxBinaryDisplayWidth, the limit is the larger of the **field name length** and **maxBinaryDisplayWidth**; otherwise it is the larger of the **field name length** and the **field value length**. Can be changed dynamically in the shell with set max_binary_display_width nn |
| 99 | queryBufferSize | | **S** | MB | Memory reserved for all concurrent queries. | | | As a rule of thumb, multiply the expected maximum number of concurrent requests by the number of tables involved, then by 170. (Before version 2.0.15 the unit of this parameter was bytes.) |
| 100 | ratioOfQueryCores | | **S** | | Sets the maximum number of query threads. | | | The minimum value 0 means a single query thread; the maximum value 2 allows up to twice as many query threads as CPU cores. The default 1 creates at most as many query threads as CPU cores. Fractional values are allowed, e.g. 0.5 allows up to half as many query threads as CPU cores. |
| 101 | update | | **S** | | Allow updates to existing data rows | 0: no updates; 1: whole-row updates; 2: partial-column updates. (From version 2.1.7.0 this parameter may be set to 2; before that only [0, 1] were allowed) | 0 | Versions before 2.0.8.0 do not support this parameter. |
| 102 | cacheLast | | **S** | | Whether to cache the latest data of sub-tables in memory | 0: off; 1: cache the last row of each sub-table; 2: cache the latest non-NULL value of each column of each sub-table; 3: enable both. (From version 2.1.2.0 this parameter accepts 0~3; before that only [0, 1]) | 0 | Not supported in taos.cfg before versions 2.1.2.0 and 2.0.20.7. |
| 103 | numOfCommitThreads | YES | **S** | | Sets the maximum number of commit threads | | | |
| 104 | maxWildCardsLength | | **C** | bytes | Sets the maximum allowed length of a wildcard string for the LIKE operator | 0-16384 | 100 | Added in version 2.1.6.1. |
@@ -239,7 +239,7 @@ taosd -C
| 10 | fsync | milliseconds | The fsync period when wal is set to 2; 0 means fsync is executed immediately on every write. | | 3000 |
| 11 | replica | | (modifiable via alter database) Number of replicas | 1-3 | 1 |
| 12 | precision | | Timestamp precision (not supported in taos.cfg before versions 2.1.2.0 and 2.0.20.7; nanosecond precision is supported from version 2.1.5.0) | ms for milliseconds, us for microseconds, ns for nanoseconds | ms |
| 13 | update | | Whether data updates are allowed (from version 2.1.7.0 this parameter accepts 0~2; before that only [0, 1], and versions before 2.0.8.0 do not support this parameter in SQL.) | 0: no updates; 1: whole-row updates; 2: partial-column updates. | 0 |
| 14 | cacheLast | | (modifiable via alter database) Whether to cache the latest data of sub-tables in memory (from version 2.1.2.0 this parameter accepts 0~3; before that only [0, 1], and versions before 2.0.11.0 do not support this parameter in SQL; not supported in taos.cfg before versions 2.1.2.0 and 2.0.20.7.) | 0: off; 1: cache the last row of each sub-table; 2: cache the latest non-NULL value of each column of each sub-table; 3: enable both | 0 |

For one application scenario, data with different characteristics may coexist. The best design is to put tables with the same data characteristics into the same database. An application may therefore use multiple databases, each configured with its own storage parameters, so that the system delivers optimal performance. TDengine allows an application to specify the above storage parameters when creating a database; if specified, they override the corresponding system configuration parameters. For example, take the following SQL:
@@ -365,9 +365,9 @@ taos -C or taos --dump-config
- timezone

Default: obtained dynamically from the system, i.e. the time zone of the system on which the client is currently running.

To handle data writing and querying across multiple time zones, TDengine uses Unix timestamps to record and store time. The nature of Unix timestamps guarantees that at any given moment the generated timestamp is identical in all time zones. Note that the conversion to a Unix timestamp is done on the client; to ensure that other time formats are converted to the correct Unix timestamp, the correct time zone must be set.

On Linux, the client automatically reads the time zone configured in the system. Users can also set the time zone in the configuration file in several ways. For example:
```
@@ -568,6 +568,35 @@ The COMPACT command starts defragmentation for one or more specified VGroups; the system
Note that defragmentation heavily consumes disk I/O. While it is in progress, it may affect the node's write and query performance, and in extreme cases may even briefly block writes.
<a class="anchor" id="tsz_compress"></a>
## Lossy compression for floating-point numbers

In IoT scenarios such as connected vehicles, massive amounts of floating-point data are often collected and stored. Compressing this kind of data more efficiently not only saves storage hardware but also improves system performance by reducing the disk I/O volume.

Starting from version 2.1.6.0, TDengine provides a new data compression algorithm named TSZ. Whether configured as lossy or lossless, it significantly improves the compression ratio of floating-point data. The feature is currently released as an optional module that can be enabled with a specific build parameter (i.e. it is not included in the regular installation packages).

**Note that once enabled, the effect is global: it applies to all FLOAT and DOUBLE data in the system. Also, data written with lossy floating-point compression enabled cannot be loaded by versions without the feature, which may even cause the database service to exit with an error.**

### Build a TDengine version supporting the TSZ compression algorithm

The TSZ module lives in a separate repository, https://github.com/taosdata/TSZ . A TDengine build containing this module can be created as follows:

1. TDengine plugins can currently only be fetched and built over SSH, so first set up an environment that can pull GitHub code over SSH.
2. `git clone git@github.com:taosdata/TDengine -b your_branchname --recurse-submodules`; `--recurse-submodules` downloads the source of the dependent modules as well.
3. `mkdir debug && cd debug` to enter a separate build directory.
4. `cmake .. -DTSZ_ENABLED=true`; the parameter `-DTSZ_ENABLED=true` adds support for the TSZ plugin to the build. If the TSZ module is successfully activated, the CMAKE output will also show `build with TSZ enabled`.
5. After a successful build, the TSZ floating-point compression plugin is compiled into TDengine and can be used by adjusting the configuration parameters in taos.cfg.

### Enable the TSZ compression algorithm via the configuration file

To enable the TSZ compression algorithm, besides enabling the TSZ module in the TDengine build, the following parameters must be set in the taos.cfg configuration file:

* lossyColumns: which floating-point data types are compressed lossily. The value is a string: empty turns lossy compression off; float applies it only to FLOAT; double applies it only to DOUBLE; float|double applies it to both FLOAT and DOUBLE. The default is empty, i.e. lossy compression is off.
* fPrecision: the compression precision for FLOAT values; mantissa parts smaller than this value are truncated. The value is a FLOAT, minimum 0.0, maximum 100,000.0, default 0.00000001 (1E-8).
* dPrecision: the compression precision for DOUBLE values; mantissa parts smaller than this value are truncated. The value is a DOUBLE, minimum 0.0, maximum 100,000.0, default 0.0000000000000001 (1E-16).
* maxRange: the maximum fluctuation range of the data. Usually no adjustment is needed; for data with specific characteristics it can be combined with the range parameter to achieve very high compression ratios. Default 500.
* range: the approximate fluctuation range of the data. Usually no adjustment is needed; for data with specific characteristics it can be combined with the maxRange parameter to achieve very high compression ratios. Default 100.

**Note:** any change to the parameter values in the cfg configuration file takes effect only after taosd is restarted. These are global configuration options; once set, they apply to the FLOAT and DOUBLE fields of all tables in all databases.
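Putting these parameters together, a hypothetical taos.cfg fragment that enables lossy compression for both floating-point types (assuming a build with the TSZ module enabled) could look like this; the precision and range values shown are just the documented defaults:

```
lossyColumns   float|double
fPrecision     0.00000001
dPrecision     0.0000000000000001
maxRange       500
range          100
```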
## <a class="anchor" id="directories"></a>Directory and file structure

After installing TDengine, the following directories or files are generated in the operating system by default:
@@ -652,7 +681,7 @@ rmtaos
- Table names: must not contain "." or special characters, and together with the database name must not exceed 192 characters; each data row is at most 16k characters
- Table column names: must not contain special characters, at most 64 characters
- Database names, table names, and column names must not begin with a digit; the legal character set is English letters, digits, and underscore
- Number of columns per table: at most 1024 columns and at least 2, with the first column being a timestamp (from version 2.1.7.0 the limit is raised to 4096 columns)
- Maximum record length: including the 8-byte timestamp, must not exceed 16KB (each BINARY/NCHAR column takes an extra 2 bytes of storage)
- Default maximum string length of a single SQL statement: 65480 bytes, configurable up to 1048576 bytes via the maxSQLLength system parameter
- Number of database replicas: at most 3
@@ -665,7 +694,7 @@ rmtaos
- Number of databases: limited only by the number of nodes
- Number of virtual nodes in a single database: at most 64
- The numbers of databases, STables, and tables are not limited by the system, only by system resources
- A SELECT query result may return at most 1024 columns (function calls in the statement may also occupy some column space); when the limit is exceeded, explicitly specify fewer return columns to avoid an execution error. (From version 2.1.7.0 the limit is raised to 4096 columns)

TDengine currently has nearly 200 internal reserved keywords, which cannot be used as database names, table names, STable names, data column names, or tag column names regardless of case. The keywords are listed below:
@@ -806,7 +835,7 @@ taos -n sync -P 6042 -h <fqdn of server>
-h: the FQDN or IP address of the server to connect to. If not set, the FQDN parameter in the local taos.cfg file is used as the default.
-P: the network port of the server. Default 6030.
-N: the total number of network packets used during diagnosis. Minimum 1, maximum 10000, default 100.
-l: the size of a single network packet (in bytes). Minimum 1024, maximum 1024 * 1024 * 1024, default 1000.
-S: the network packet type, TCP or UDP. Default TCP.

#### FQDN resolution speed diagnosis
......
# UDF (User-Defined Functions)

In some application scenarios, the queries an application needs cannot be expressed directly with the system's built-in functions. With the UDF feature, TDengine can plug in user-written processing code and use it in queries, which conveniently addresses the needs of special application scenarios.

Starting from version 2.2.0.0, TDengine supports UDFs defined in C/C++. The following explains how to use UDFs through examples.

## Defining a UDF in C/C++

TDengine provides three UDF source code examples:
* [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
* [abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
* [sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c)
### A scalar function with no intermediate state

[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is the structurally simplest UDF implementation. For every item of the input data column (possibly filtered by a WHERE clause), it outputs the value plus 1, and it requires the input column's data type to be INT.

The concrete processing logic is defined in the function `void add_one(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`. A function of this kind, implementing the basic computation logic of a UDF, is called a udfNormalFunc: the scalar computation function over a block of rows. Note that the parameter list of udfNormalFunc is fixed; it is used to exchange data with the engine according to the convention.

- The parameters of udfNormalFunc have the following meanings:
  * data: the input data.
  * itype: the type of the input data, expressed in the short-integer notation; the values corresponding to each data type are listed in [column types in column_meta](https://www.taosdata.com/cn/documentation/connector#column_meta). For example, 4 represents INT.
  * iBytes: the number of bytes occupied by each value of the input data.
  * numOfRows: the total number of input rows.
  * ts: the primary-key timestamp column of the input.
  * dataOutput: the buffer for the output data.
  * interBuf: an intermediate temporary buffer used by the system; user logic normally does not need to handle interBuf.
  * tsOutput: the primary-key timestamp column of the output.
  * numOfOutput: the number of output values.
  * oType: the type of the output data, with the same value semantics as itype.
  * oBytes: the number of bytes occupied by each value of the output data.
  * buf: a buffer for intermediate variables of the computation.

The buf parameter uses a user-defined struct SUdfInit. In this example, because the computation of add_one needs no intermediate-variable cache, SUdfInit can be defined as an empty struct.
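To make the calling convention concrete, below is a minimal sketch in the spirit of add_one.c; it is not the exact repository source and omits the NULL handling a production UDF should add:

```c
/* Minimal sketch of an add_one-style udfNormalFunc (assumption: input is INT,
 * no NULL handling). The empty SUdfInit relies on a GNU C extension, which
 * the gcc command shown later in this document accepts. */
typedef struct SUdfInit {
} SUdfInit;

void add_one(char* data, short itype, short ibytes, int numOfRows,
             long long* ts, char* dataOutput, char* interBuf, char* tsOutput,
             int* numOfOutput, short otype, short obytes, SUdfInit* buf) {
  int* in  = (int*)data;        /* the input column is required to be INT (itype == 4) */
  int* out = (int*)dataOutput;
  for (int i = 0; i < numOfRows; ++i) {
    out[i] = in[i] + 1;         /* emit every input value incremented by one */
  }
  *numOfOutput = numOfRows;     /* scalar UDF: one output row per input row */
}
```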
### Aggregate Functions Without Intermediate Variables
[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) implements an aggregate function that returns the maximum absolute value over a group of data.
The computation proceeds as follows: the data relevant to the query is split into several row blocks, and udfNormalFunc (actually named `abs_max` in this example) is invoked on each block; then udfMergeFunc (here named `abs_max_merge`) aggregates the per-block results into the aggregate result of each subtable. If the query involves a super table, udfFinalizeFunc (here named `abs_max_finalize`) finally aggregates the subtable results into the super-table result.
Note that udfNormalFunc, udfMergeFunc, and udfFinalizeFunc must share the same name prefix, namely the actual function name of udfNormalFunc. The `_merge` suffix of udfMergeFunc and the `_finalize` suffix of udfFinalizeFunc are part of the UDF implementation rules; the system calls the corresponding functions by these suffixes.
- udfMergeFunc aggregates intermediate results. In this example it is implemented by `void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`, whose parameters mean:
  * data: the outputs of udfNormalFunc combined together, which become the input of udfMergeFunc.
  * numOfRows: the number of rows in data.
  * dataOutput: the buffer for the output data.
  * numOfOutput: the number of output values.
  * buf: a buffer for intermediate variables used during the computation.
- udfFinalizeFunc performs the final aggregation of the results. In this example it is implemented by `void abs_max_finalize(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`, whose parameters mean:
  * dataOutput: the buffer for the output data; for udfFinalizeFunc, its input also comes from here.
  * interBuf: an intermediate buffer used by the system, with the same meaning as the parameter of that name in udfNormalFunc.
  * numOfOutput: the number of output values.
  * buf: a buffer for intermediate variables used during the computation.
Again, since the computation of abs_max needs no intermediate state, SUdfInit can likewise be defined as an empty struct; a sketch of the merge step follows.
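For orientation, a udfMergeFunc matching the signature above might look like this sketch; it assumes the per-block results produced by abs_max are stored as BIGINT values, and NULL handling is omitted:

```c
#include <stdint.h>

typedef struct SUdfInit {
} SUdfInit;  // abs_max also keeps no intermediate state

// Sketch of a udfMergeFunc: reduces numOfRows per-block results in `data`
// to the single largest value, written out as one row.
void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput,
                   int32_t* numOfOutput, SUdfInit* buf) {
  int64_t* input  = (int64_t*)data;
  int64_t* output = (int64_t*)dataOutput;
  if (numOfRows <= 0) {
    *numOfOutput = 0;
    return;
  }
  int64_t max = input[0];
  for (int32_t i = 1; i < numOfRows; ++i) {
    if (input[i] > max) max = input[i];
  }
  output[0]    = max;
  *numOfOutput = 1;
}
```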
### Aggregate Functions Using Intermediate Variables
[sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c) is also an aggregate function; it outputs a multiple of the sum of a group of values.
For demonstration purposes, this implementation does use the intermediate buffer buf. Accordingly, in this source file SUdfInit is no longer an empty struct but defines what the buffer actually stores.
Because an intermediate buffer is used, it must be initialized and released. This is the job of udfInitFunc (here named `sum_double_init`) and udfDestroyFunc (here named `sum_double_destroy`). Their names again take the actual name of udfNormalFunc as the prefix, with the suffixes `_init` and `_destroy`; the system calls the functions of these names at initialization and release time.
- udfInitFunc initializes the variables and contents of the intermediate buffer. In this example it is implemented by `int sum_double_init(SUdfInit* buf)`, whose parameter means:
  * buf: a buffer for intermediate variables used during the computation.
- udfDestroyFunc releases the variables and contents of the intermediate buffer. In this example it is implemented by `void sum_double_destroy(SUdfInit* buf)`, whose parameter means:
  * buf: a buffer for intermediate variables used during the computation.
Note that a UDF implementation must handle the intermediate buffer with care; misuse may lead to memory leaks, excessive resource consumption, or even a crash of the server process. A sketch of such an init/destroy pair follows.
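The sketch below shows what such an init/destroy pair could look like; the struct members are illustrative placeholders, not the actual fields defined in sum_double.c:

```c
// Illustrative SUdfInit carrying real state between row blocks.
typedef struct SUdfInit {
  int       valueSet;  // whether `sum` has been written yet
  long long sum;       // intermediate accumulation kept between blocks
} SUdfInit;

// udfInitFunc: prepare the intermediate buffer before the query runs.
int sum_double_init(SUdfInit* buf) {
  buf->valueSet = 0;
  buf->sum      = 0;
  return 0;  // a non-zero return would signal an initialization failure
}

// udfDestroyFunc: release whatever init acquired. Nothing is heap-allocated
// in this sketch; any malloc'd members would be freed here.
void sum_double_destroy(SUdfInit* buf) {
  (void)buf;
}
```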
### Summary of UDF Implementation Rules
Depending on the type of UDF being implemented, the required functions differ:
* Scalar function without intermediate variables: SUdfInit may be empty; implement udfNormalFunc.
* Aggregate function without intermediate variables: SUdfInit may be empty; implement udfNormalFunc, udfMergeFunc, and udfFinalizeFunc.
* Scalar function with intermediate variables: define SUdfInit concretely; implement udfNormalFunc, udfInitFunc, and udfDestroyFunc.
* Aggregate function with intermediate variables: define SUdfInit concretely; implement udfNormalFunc, udfInitFunc, udfDestroyFunc, udfMergeFunc, and udfFinalizeFunc.
## Compiling a UDF
The C source code of a UDF cannot be used by TDengine directly; it must first be compiled into a .so shared library and then loaded into TDengine.
For example, once the source file add_one.c has been prepared according to the rules of the previous section, the following command builds the shared library:
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
```
This produces the shared library file add_one.so, ready to be used when creating the UDF below.
## Managing and Using UDFs in the System
### Creating a UDF
Users can load a UDF library residing on the client host into the system via SQL (this cannot be done through the RESTful interface or the HTTP management console). Once created, the functions are available in SQL to all users of the current TDengine cluster. UDFs are stored on the MNode, so they remain available even after a TDengine restart.
When creating a UDF, scalar functions and aggregate functions must be distinguished. Declaring the wrong category may cause errors when the function is invoked via SQL.
- Create a scalar function: `CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize B;`
  * ids(X): the function name to be used later in SQL, which must match the actual name of udfNormalFunc in the implementation;
  * ids(Y): the path, on the client host, of the shared library containing the UDF implementation (usually pointing to a .so file), enclosed in single or double quotes;
  * typename(Z): the data type of the function's result; unlike the itype parameter of udfNormalFunc, the type is written out by name rather than as a number;
  * B: the size in bytes of the intermediate buffer used by the system, between 0 and 512; 128 is usually a reasonable setting.
For example, the following statement creates add_one.so as a UDF available in the system:
```sql
CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT bufsize 128;
```
- Create an aggregate function: `CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize B;`
  * ids(X): the function name to be used later in SQL, which must match the actual name of udfNormalFunc in the implementation;
  * ids(Y): the path, on the client host, of the shared library containing the UDF implementation (usually pointing to a .so file), enclosed in single or double quotes;
  * typename(Z): the data type of the function's result; unlike the itype parameter of udfNormalFunc, the type is written out by name rather than as a number;
  * B: the size in bytes of the intermediate buffer used by the system, between 0 and 512; 128 is usually a reasonable setting.
For example, the following statement creates abs_max.so as a UDF available in the system:
```sql
CREATE FUNCTION abs_max AS "/home/taos/udf_example/abs_max.so" OUTPUTTYPE BIGINT bufsize 128;
```
### Managing UDFs
- Drop a user-defined function by name: `DROP FUNCTION ids(X);`
  * ids(X): same meaning as the ids(X) parameter of the CREATE statement, i.e., the name of the function to drop, e.g. `DROP FUNCTION add_one;`
- Show all UDFs currently available in the system: `SHOW FUNCTIONS;`
### Invoking a UDF
In SQL, a user-defined function is invoked directly by the name given when it was created in the system. For example:
```sql
SELECT X(c) FROM table/stable;
```
This applies the user-defined function named X to the column named c. User-defined functions in SQL can be combined with query features such as WHERE.
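For instance, with the add_one function created earlier, a call could look like the sketch below (the table name t1 and column name f1 are hypothetical):

```sql
SELECT add_one(f1) FROM t1 WHERE ts >= '2021-07-01 00:00:00.000';
```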
## Current Restrictions on UDFs
In the current version, UDFs are subject to the following restrictions:
1. UDFs can be created and invoked only on Linux, for both server and client;
2. UDFs cannot be mixed with built-in SQL functions;
3. A UDF accepts only a single column as input;
4. Once created successfully, a UDF is persisted on the MNode;
5. UDFs cannot be created through the RESTful interface;
6. The function name given in SQL must match the prefix of the interface functions in the .so library, i.e., the name of udfNormalFunc, and must not collide with any built-in SQL function of TDengine.
@@ -70,7 +70,7 @@ TDengine's default timestamp precision is milliseconds, but it can be changed by a parameter passed at CREATE DATABASE time
1) KEEP is the number of days the database retains data; the default is 3650 days (10 years), and data beyond the limit is deleted automatically;<!-- REPLACE_OPEN_TO_ENTERPRISE__KEEP_PARAM_DESCRIPTION -->
2) UPDATE marks whether the database supports updating data that carries an identical timestamp. (Starting from version 2.1.7.0 this parameter can also be set to 2, allowing partial-column updates: columns not supplied when updating a row keep their previous values.) (This parameter is supported since version 2.0.8.0. Note that it cannot be changed via `ALTER DATABASE`; see the sketch after this list.)
3) The maximum length of a database name is 33;
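A sketch of the choice described in note 2 (the database names are placeholders):

```sql
-- UPDATE must be chosen at creation time; it cannot be changed afterwards.
CREATE DATABASE db_overwrite KEEP 3650 UPDATE 1;  -- whole-row updates: unsupplied columns become NULL
CREATE DATABASE db_partial   KEEP 3650 UPDATE 2;  -- 2.1.7.0+: unsupplied columns keep their old values
```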
@@ -233,7 +233,7 @@ TDengine's default timestamp precision is milliseconds, but it can be changed by a parameter passed at CREATE DATABASE time
```
Notes:
1) The maximum number of columns is 1024 and the minimum is 2; (starting from version 2.1.7.0, up to 4096 columns are allowed)
2) The maximum length of a column name is 64.
@@ -573,16 +573,24 @@ Query OK, 2 row(s) in set (0.003112s)
Note: the wildcard * of an ordinary table does not include _tag columns_.
#### Getting Distinct Values of Tag Columns or Ordinary Columns
Starting from version 2.0.15.0, the DISTINCT keyword can be specified when querying the tag columns of a super table, returning all non-duplicate values of the given tag columns. Note that before version 2.1.6.0, DISTINCT handled only a single tag column; from version 2.1.6.0 on, DISTINCT can handle multiple tag columns and outputs the distinct combinations of their values.
```sql
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
```
Starting from version 2.1.7.0, DISTINCT can also be applied to subtables and ordinary tables, i.e., it can return the distinct values of a single ordinary column, or the distinct combinations of values of multiple ordinary columns.
```sql
SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
```
Note that DISTINCT currently cannot be applied to the ordinary columns of a super table. To do that, put the super table into a subquery and apply DISTINCT to the subquery's result, as sketched after the notes below.
Notes:
1. The configuration parameter maxNumOfDistinctRes in the cfg file limits how many rows DISTINCT can output. Its minimum is 100000, maximum 100000000, default 10000000. If the actual result exceeds the limit, only the part within this range is returned.
2. Due to the inherent precision mechanism of floating-point numbers, DISTINCT on FLOAT and DOUBLE columns cannot guarantee completely unique output values in certain cases.
3. In the current version, DISTINCT cannot be used in the subquery of a nested query, nor mixed with aggregate functions, GROUP BY, or JOIN in the same statement.
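The subquery workaround mentioned above could look like the following sketch (stb_name and col_name are placeholders):

```sql
-- DISTINCT over an ordinary column of a super table, via a subquery:
SELECT DISTINCT col_name FROM (SELECT col_name FROM stb_name);
```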
#### Column Names of the Result Set
@@ -730,6 +738,34 @@ Query OK, 1 row(s) in set (0.001091s)
5. Starting from version 2.0.17.0, filtering supports the BETWEEN AND syntax; for example, `WHERE col2 BETWEEN 1.5 AND 3.25` means "1.5 ≤ col2 ≤ 3.25".
6. Starting from version 2.1.4.0, filtering supports the IN operator, e.g. `WHERE city IN ('Beijing', 'Shanghai')`. Notes: BOOL values may be written as `{true, false}` or `{0, 1}`, but not as integers other than 0 and 1; FLOAT and DOUBLE values are subject to floating-point precision, so a set value matches only if it equals the row value within that precision; TIMESTAMP is supported for non-primary-key columns.<!-- REPLACE_OPEN_TO_ENTERPRISE__IN_OPERATOR_AND_UNSIGNED_INTEGER -->
<a class="anchor" id="join"></a>
### JOIN Clause
Starting from version 2.2.0.0, TDengine fully supports the natural join flavor of inner joins (INNER JOIN): natural joins between ordinary tables, between super tables, and between subqueries are all supported. The main difference between a natural join and a general inner join is that a natural join requires the joined fields in the different tables/super tables to carry the same name; that is, TDengine requires the join relation to be expressed as equality between identically named data columns/tag columns.
In a JOIN between ordinary tables, only equality of the primary-key timestamps can be used. For example:
```sql
SELECT *
FROM temp_tb_1 t1, pressure_tb_1 t2
WHERE t1.ts = t2.ts
```
In a JOIN between super tables, besides the equality condition on the primary-key timestamp, equality conditions on tag columns that establish a one-to-one correspondence are also required. For example:
```sql
SELECT *
FROM temp_stable t1, temp_stable t2
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
```
Similarly, JOINs can also be performed on the results of multiple subqueries, as sketched after the list below.
Note that JOINs are subject to the following restrictions:
1. At most 10 tables/super tables may participate in the JOINs of one statement.
2. FILL is not supported in a query statement containing JOIN.
3. Arithmetic on aggregates across the joined tables is not yet supported.
4. GROUP BY over only a subset of the joined tables is not supported.
5. The filter conditions on different joined tables must not be combined with OR.
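A subquery JOIN follows the same pattern; the sketch below shows the intended form only, with illustrative table names, column names, and subquery aliases:

```sql
SELECT t1.ts, t1.current, t2.pressure
FROM (SELECT ts, current  FROM temp_tb_1)     t1,
     (SELECT ts, pressure FROM pressure_tb_1) t2
WHERE t1.ts = t2.ts;
```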
<a class="anchor" id="nested"></a> <a class="anchor" id="nested"></a>
### 嵌套查询 ### 嵌套查询
...@@ -757,7 +793,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...; ...@@ -757,7 +793,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
* 外层查询不支持 GROUP BY。 * 外层查询不支持 GROUP BY。
<a class="anchor" id="union"></a> <a class="anchor" id="union"></a>
### UNION ALL 操作符 ### UNION ALL 子句
```mysql ```mysql
SELECT ... SELECT ...
@@ -1064,7 +1100,7 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions
```mysql
SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the last non-NULL value written to a column of a table/super table.
Return type: the same as the field it is applied to.
@@ -1074,9 +1110,11 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions
Notes:
1) To return the last non-NULL value (the one with the largest timestamp) of every column, use LAST(\*);
2) If a column of the result set is entirely NULL, the returned value of that column is also NULL; if all columns of the result set are entirely NULL, no result is returned.
3) When applied to a super table, multiple rows may share the same, largest timestamp; in that case one of them is returned at random, with no guarantee that repeated runs pick the same row.
Example:
```mysql
@@ -1225,7 +1263,9 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions
Applicable to: **tables, super tables**.
Restriction: LAST_ROW() cannot be used together with INTERVAL.
Note: when applied to a super table, multiple rows may share the same, largest timestamp; one of them is returned at random, with no guarantee that repeated runs pick the same row.
Example:
```mysql
@@ -1254,9 +1294,13 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions
Applicable to: **tables, super tables**.
Notes: (this function is available since version 2.0.15.0)
1) INTERP must be given a specific time slice; if no data exists exactly at that time, interpolation is performed according to the FILL setting. Filter conditions, such as on tags or tbname, may also be attached.
2) INTERP requires the queried time point to lie within the time span of all records of the data set (table). If the given timestamp falls outside that range, no result is returned even with a FILL clause.
3) A single INTERP call queries one time point only; to obtain slices at fixed time intervals, use INTERP together with EVERY (rather than INTERVAL), which means interpolating once per fixed time step.
Example:
```sql
@@ -1280,6 +1324,18 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions
Query OK, 1 row(s) in set (0.003056s)
```
The following computes a slice every 5 milliseconds within the time range `['2017-7-14 18:40:00', '2017-7-14 18:40:00.014']`:
```sql
taos> SELECT INTERP(current) FROM d636 WHERE ts>='2017-7-14 18:40:00' AND ts<='2017-7-14 18:40:00.014' EVERY(5a);
ts | interp(current) |
=================================================
2017-07-14 18:40:00.000 | 10.04179 |
2017-07-14 18:40:00.010 | 10.16123 |
Query OK, 2 row(s) in set (0.003487s)
```
### Calculation Functions
- **DIFF**
@@ -1405,8 +1461,6 @@ SELECT function_list FROM tb_name
SELECT function_list FROM stb_name
  [WHERE where_condition]
  [INTERVAL(interval [, offset]) [SLIDING sliding]]
  [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
  [GROUP BY tags]
@@ -1417,8 +1471,8 @@ SELECT function_list FROM stb_name
1. Time window: the width of the aggregation window is specified by the keyword INTERVAL, with a minimum interval of 10 milliseconds (10a); an offset is also supported (the offset must be smaller than the interval), i.e., how far the window partition is shifted relative to "UTC moment 0". SLIDING specifies the forward increment of the aggregation window, i.e., how far the window slides forward each time. When SLIDING equals INTERVAL, the sliding window becomes a tumbling window.
  * Starting from version 2.1.5.0, the minimum interval allowed by INTERVAL is lowered to 1 microsecond (1u); if the time precision of the queried DATABASE is milliseconds, the minimum allowed interval is 1 millisecond (1a).
  * **Note:** when INTERVAL is used, except in very special cases, the timezone parameter in taos.cfg should be set to the same value on both client and server, to avoid the severe performance impact of time-processing functions repeatedly converting across time zones.
2. State window: an integer or boolean column marks the state of the device when each record was produced; consecutive records with the same state value belong to the same state window, which closes when the value changes. The column holding the state value is passed as the parameter of STATE_WINDOW. (State windows are not yet supported on super tables.)
3. Session window: the timestamp column is given by the ts_col parameter of SESSION; whether two adjacent records belong to the same session is decided by the difference of their timestamps: within tol_val they belong to the same window, and once the gap exceeds tol_val a new window starts automatically. (Session windows are not yet supported on super tables.)
- The WHERE clause specifies the start and end time of the query and other filter conditions.
- The FILL clause specifies the fill mode when data is missing in a window. The fill modes include:
  1. No fill: NONE (the default).
@@ -1454,10 +1508,10 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P
- The maximum length of a database name is 32.
- The maximum length of a table name is 192, and each row of data is at most 16k characters (note: each BINARY/NCHAR column in a row additionally occupies 2 bytes of storage).
- The maximum length of a column name is 64; at most 1024 columns are allowed, with a minimum of 2, and the first column must be a timestamp. (Starting from version 2.1.7.0, up to 4096 columns are allowed.)
- The maximum length of a tag name is 64; at most 128 tags are allowed, with a minimum of 1, and the total length of the tag values of a table must not exceed 16k characters.
- The maximum length of a SQL statement is 65480 characters, adjustable up to 1M via the system configuration parameter maxSQLLength.
- The result set of a SELECT statement may contain at most 1024 columns (function calls in the statement may also take up some column slots); when the limit is exceeded, explicitly select fewer output columns to avoid execution errors. (Starting from version 2.1.7.0, up to 4096 columns are allowed.)
- The numbers of databases, super tables, and tables are not limited by the system, only by system resources.
## Other Conventions of TAOS SQL
@@ -1466,12 +1520,6 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P
TAOS SQL supports GROUP BY on tags and TBNAME, and also on an ordinary column, provided only one column is used and it has fewer than 100,000 distinct values.
**Scope of IS NOT NULL and the Non-Empty Expression**
IS NOT NULL works for columns of all types. The non-empty expression is <>"", which applies only to columns of non-numeric types.
@@ -96,9 +96,11 @@ TDengine does not support deletion yet; it may be supported in the future depending on user needs
Note also that when UPDATE is set to 0, later data carrying an identical timestamp is silently discarded without an error, yet it is still counted in the affected rows (so the return value of an INSERT cannot be used to detect duplicate timestamps). The main reason for this design is that TDengine treats written data as a data stream: whether or not timestamps collide, TDengine regards the data as genuinely produced by the originating device. The UPDATE parameter only controls how such a stream is persisted: with UPDATE 0, data written first overrides data written later; with UPDATE 1, data written later overrides data written first. Which override relation to choose depends on whether subsequent usage and statistics should favor the earlier- or later-generated data.
Moreover, starting from version 2.1.7.0, UPDATE can be set to 2, meaning "partial-column update". That is, with UPDATE 1, updating a row without supplying some columns sets those columns to NULL; with UPDATE 2, the unsupplied columns keep their values from the existing row.
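A sketch of the difference, assuming a database created with UPDATE 2 and a hypothetical table t1 with columns (ts, c1, c2):

```sql
INSERT INTO t1 VALUES ('2021-07-14 18:40:00.000', 5, 9);
-- Re-insert the same timestamp supplying only c1. With UPDATE 1, c2 would
-- become NULL; with UPDATE 2, c2 keeps its previous value 9.
INSERT INTO t1 (ts, c1) VALUES ('2021-07-14 18:40:00.000', 7);
```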
## 10. How do I create a table with more than 1024 columns?
With version 2.0 and above, 1024 columns are supported by default; before 2.0, TDengine allowed at most 250 columns per table. If you genuinely exceed the limit, we recommend logically splitting the wide table into several smaller ones according to the data characteristics. (Starting from version 2.1.7.0, the maximum number of columns has been raised to 4096.)
## 11. What is the most efficient way to write data?
# TDengine Documentation
TDengine is a highly efficient platform to store, query, and analyze time-series data. It is specially designed and optimized for IoT, Internet of Vehicles, Industrial IoT, IT Infrastructure and Application Monitoring, etc. It works like a relational database, such as MySQL, but you are strongly encouraged to read through the following documentation before you experience it, especially the Data Modeling sections. In addition to this document, you should also download and read the technology white paper. For the older TDengine version 1.6 documentation, please click [here](https://www.taosdata.com/en/documentation16/).
## [TDengine Introduction](/evaluation)
@@ -10,28 +10,41 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
## [Getting Started](/getting-started)
* [Quick Install](/getting-started#install): install via source code/package / Docker within seconds
* [Quick Launch](/getting-started#start): start / stop TDengine quickly with systemctl
* [Command-line](/getting-started#console): an easy way to access TDengine server
* [Experience Lightning Speed](/getting-started#demo): running a demo, inserting/querying data to experience faster speed
* [List of Supported Platforms](/getting-started#platforms): a list of platforms supported by TDengine server and client
* [Deploy to Kubernetes](https://taosdata.github.io/TDengine-Operator/en/index.html): a detailed guide for TDengine deployment in Kubernetes environment
## [Overall Architecture](/architecture)
- [Data Model](/architecture#model): relational database model, but one table for one data collection point with static tags
- [Cluster and Primary Logical Unit](/architecture#cluster): take advantage of NoSQL architecture, high availability and horizontal scalability
- [Storage Model and Data Partitioning/Sharding](/architecture#sharding): tag data is separated from time-series data, sharded by vnodes and partitioned by time
- [Data Writing and Replication Process](/architecture#replication): records received are written to WAL, cached, with acknowledgement sent back to client, while supporting data replications
- [Caching and Persistence](/architecture#persistence): latest records are cached in memory, but are written in columnar format with an ultra-high compression ratio
- [Data Query](/architecture#query): support various SQL functions, downsampling, interpolation, and multi-table aggregation
## [Data Modeling](/model)
- [Create a Database](/model#create-db): create a database for all data collection points with similar data characteristics
- [Create a Super Table (STable)](/model#create-stable): create a STable for all data collection points with the same type
- [Create a Table](/model#create-table): use STable as the template to create a table for each data collecting point
## [Efficient Data Ingestion](/insert)
- [Data Writing via SQL](/insert#sql): write one or multiple records into one or multiple tables via SQL insert command
- [Data Writing via Prometheus](/insert#prometheus): Configure Prometheus to write data directly without any code
- [Data Writing via Telegraf](/insert#telegraf): Configure Telegraf to write collected data directly without any code
- [Data Writing via EMQ X](/insert#emq): Configure EMQ X to write MQTT data directly without any code
- [Data Writing via HiveMQ Broker](/insert#hivemq): Configure HiveMQ to write MQTT data directly without any code
## [Efficient Data Querying](/queries)
- [Major Features](/queries#queries): support various standard query functions, setting filter conditions, and querying per time segment
- [Multi-table Aggregation](/queries#aggregation): use STable and set tag filter conditions to perform efficient aggregation
- [Downsampling](/queries#sampling): aggregate data in successive time windows, support interpolation
## [TAOS SQL](/taos-sql)
@@ -40,27 +53,13 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Table Management](/taos-sql#table): add, drop, check, alter tables
- [STable Management](/taos-sql#super-table): add, drop, check, alter STables
- [Tag Management](/taos-sql#tags): add, drop, alter tags
- [Inserting Records](/taos-sql#insert): write single/multiple records into a table, multiple records across tables, and historical data
- [Data Query](/taos-sql#select): support time segment, value filtering, sorting, manual paging of query results, etc
- [SQL Function](/taos-sql#functions): support various aggregation functions, selection functions, and calculation functions, such as avg, min, diff, etc
- [Cutting and Aggregation](/taos-sql#aggregation): aggregate and reduce the dimension after cutting table data by time segment
- [Boundary Restrictions](/taos-sql#limitation): restrictions for the library, table, SQL, and others
- [Error Code](/taos-sql/error-code): TDengine 2.0 error codes and corresponding decimal codes
## [Advanced Features](/advanced-features)
- [Continuous Query](/advanced-features#continuous-query): based on sliding windows, the data stream is automatically queried and calculated at regular intervals
@@ -88,12 +87,12 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
## [Installation and Management of TDengine Cluster](/cluster)
- [Preparation](/cluster#prepare): important steps before deploying TDengine for production usage
- [Create the First Node](/cluster#node-one): just follow the steps in quick start
- [Create Subsequent Nodes](/cluster#node-other): configure taos.cfg for new nodes to add more to the existing cluster
- [Node Management](/cluster#management): add, delete, and check nodes in the cluster
- [High-availability of Vnode](/cluster#high-availability): implement high-availability of Vnode through replicas
- [Mnode Management](/cluster#mnode): mnodes are created automatically without any manual intervention
- [Load Balancing](/cluster#load-balancing): automatically performed once the number of nodes or load changes
- [Offline Node Processing](/cluster#offline): any node offline for more than a certain period will be removed from the cluster
- [Arbitrator](/cluster#arbitrator): used in the case of an even number of replicas to prevent split-brain
@@ -108,27 +107,14 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Export Data](/administrator#export): export data either from TDengine shell or from the taosdump tool
- [System Monitor](/administrator#status): monitor the system connections, queries, streaming calculation, logs, and events
- [File Directory Structure](/administrator#directories): directories where TDengine data files and configuration files are located
- [Parameter Limits and Reserved Keywords](/administrator#keywords): TDengine's list of parameter limits and reserved keywords
## Performance: TDengine vs Others
- [Performance: TDengine vs OpenTSDB](https://www.taosdata.com/blog/2019/09/12/710.html)
- [Performance: TDengine vs Cassandra](https://www.taosdata.com/blog/2019/09/12/708.html)
- [Performance: TDengine vs InfluxDB](https://www.taosdata.com/blog/2019/09/12/706.html)
- [Performance Test Reports of TDengine vs InfluxDB/OpenTSDB/Cassandra/MySQL/ClickHouse](https://www.taosdata.com/downloads/TDengine_Testing_Report_en.pdf)
## More on IoT Big Data
@@ -136,7 +122,8 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Features and Functions of IoT Big Data platforms](https://www.taosdata.com/blog/2019/07/29/542.html)
- [Why don’t General Big Data Platforms Fit IoT Scenarios?](https://www.taosdata.com/blog/2019/07/09/why-does-the-general-big-data-platform-not-fit-iot-data-processing/)
- [Why TDengine is the best choice for IoT, Internet of Vehicles, and Industry Internet Big Data platforms?](https://www.taosdata.com/blog/2019/07/09/why-tdengine-is-the-best-choice-for-iot-big-data-processing/)
- [Technical Blog](https://www.taosdata.com/cn/blog/?categories=3): More technical analysis and architecture design articles
## FAQ
- [FAQ: Common questions and answers](/faq)
@@ -2,20 +2,20 @@
## <a class="anchor" id="intro"></a> About TDengine
TDengine is an innovative Big Data processing product launched by TAOS Data in the face of the fast-growing Internet of Things (IoT) Big Data market and technical challenges. It does not rely on any third-party software, nor does it optimize or package any open-source database or stream computing product. Instead, it is a product independently developed after absorbing the advantages of many traditional relational databases, NoSQL databases, stream computing engines, message queues, and other software. TDengine has its own unique Big Data processing advantages in time-series space.
One of the modules of TDengine is the time-series database. However, in addition to this, to reduce the complexity of research and development and the difficulty of system operation, TDengine also provides functions such as caching, message queuing, subscription, stream computing, etc. TDengine provides a full-stack technical solution for the processing of IoT and Industrial Internet BigData. It is an efficient and easy-to-use IoT Big Data platform. Compared with typical Big Data platforms such as Hadoop, TDengine has the following distinct characteristics:
- **Performance improvement over 10 times**: An innovative data storage structure is defined, with every single core able to process at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, which is more than 10 times faster than other existing general databases.
- **Reduce the cost of hardware or cloud services to 1/5**: Due to its ultra-performance, TDengine’s computing resources consumption is less than 1/5 of other common Big Data solutions; through columnar storage and advanced compression algorithms, the storage consumption is less than 1/10 of other general databases.
- **Full-stack time-series data processing engine**: Integrate database, message queue, cache, stream computing, and other functions, and the applications do not need to integrate with software such as Kafka/Redis/HBase/Spark/HDFS, thus greatly reducing the complexity cost of application development and maintenance.
- **Highly Available and Horizontally Scalable**: With the distributed architecture and consistency algorithm, via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support mission-critical applications.
- **Zero operation cost & zero learning cost**: Installing clusters is simple and quick, with real-time backup built-in, and no need to split libraries or tables. Similar to standard SQL, TDengine can support RESTful, Python/Java/C/C++/C#/Go/Node.js, and similar to MySQL with zero learning cost.
- **Core is Open Sourced**: Except for some auxiliary features, the core of TDengine is open-sourced. Enterprises won't be locked in by the database anymore. The ecosystem is stronger, products are more stable, and developer communities are more active.
With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
![TDengine Technology Ecosystem](../images/eco_system.png)
<center>Figure 1. TDengine Technology Ecosystem</center>
@@ -62,4 +62,4 @@ From the perspective of data sources, designers can analyze the applicability of
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Require system with high-reliability | | | √ | TDengine has a very robust and reliable system architecture to implement simple and convenient daily operation with streamlined experiences for operators, thus human errors and accidents are eliminated to the greatest extent. |
| Require controllable operation learning cost | | | √ | As above. |
| Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from the market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counseling services. |
@@ -2,7 +2,7 @@
## <a class="anchor" id="install"></a>Quick Install
TDengine software consists of 3 parts: server, client, and alarm module. At the moment, the TDengine server only runs on Linux (support for Windows, macOS and more operating systems will come soon), while the client can be installed and run on either Windows or Linux. Applications based on any OS can connect to the server taosd via a RESTful interface. As for CPU architectures, TDengine supports X64/ARM64/MIPS64/Alpha64 as well as ARM32 and RISC-V; more CPU architectures will be supported soon. You can set up and install the TDengine server either from the [source code](https://www.taosdata.com/en/getting-started/#Install-from-Source) or the [packages](https://www.taosdata.com/en/getting-started/#Install-from-Package).
### <a class="anchor" id="source-install"></a>Install from Source
@@ -16,10 +16,23 @@ Please refer to the detailed operation in [Quickly experience TDengine through D
### <a class="anchor" id="package-install"></a>Install from Package
Three different packages for the TDengine server are provided; please pick the one you prefer. (Lite packages contain only the executables and the C/C++ connector, while standard packages support connectors for nearly all programming languages.) The beta version has more features, but we suggest you install the stable version for production or testing.
Click [here](https://www.taosdata.com/en/getting-started/#Install-from-Package) to download the install package.
### Install TDengine by apt-get
If you use a Debian or Ubuntu system, you can use the `apt-get` command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[Optional] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-get policy tdengine
sudo apt-get install tdengine
```
## <a class="anchor" id="start"></a>Quick Launch
After installation, you can start the TDengine service by the `systemctl` command.
@@ -131,7 +144,7 @@ After starting the TDengine server, you can execute the command `taosdemo` in th
$ taosdemo
```
Using this command, a STable named `meters` will be created in the database `test`. There are 10k tables under this STable, named from `t0` to `t9999`. Each table holds 100k rows of records, each row with columns `f1`, `f2` and `f3`. The timestamps run from "2017-07-14 10:40:00 000" to "2017-07-14 10:41:39 999". Each table also has tags `areaid` and `loc`: `areaid` is set from 1 to 10, `loc` is set to "beijing" or "shanghai".
It takes about 10 minutes to execute this command. Once finished, 1 billion rows of records will be inserted.
@@ -201,7 +214,7 @@ Note: ● has been verified by official tests; ○ has been verified by unoffici
List of platforms supported by TDengine client and connectors
At the moment, TDengine connectors can support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and operating systems such as Linux/Win64/Win32.
The comparison matrix is as follows:
@@ -218,4 +231,4 @@ Comparison matrix as following:
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
Please visit the Connectors section for more detailed information.
@@ -2,17 +2,15 @@
TDengine adopts a relational data model, so we need to build the "database" and "table". Therefore, for a specific application scenario, it is necessary to consider the design of the database, STables and ordinary tables. This section does not discuss detailed syntax rules, only concepts.
## <a class="anchor" id="create-db"></a> Create a Database
Different types of data collection points often have different data characteristics, including data sampling rate, length of data retention time, number of replicas, size of data blocks, whether to update data or not, and so on. To ensure TDengine works with great efficiency in various scenarios, TDengine suggests creating tables with different data characteristics in different databases, because each database can be configured with different storage strategies. When creating a database, in addition to SQL standard options, the application can also specify a variety of parameters such as retention duration, number of replicas, number of memory blocks, time resolution, max and min number of records in a file block, whether it is compressed or not, and the number of days covered by a data file. For example:
```mysql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```
The above statement will create a database named "power". The data of this database will be kept for 365 days (data will be automatically deleted 365 days later), one data file will be created per 10 days, the number of memory blocks is 6, and data updating is allowed. For detailed syntax and parameters, please refer to the [Data Management section of TAOS SQL](https://www.taosdata.com/en/documentation/taos-sql#management).
After the database is created, please use the SQL command USE to switch to the new database, for example:
@@ -20,13 +18,12 @@ After the database created, please use SQL command USE to switch to the new data
USE power;
```
This makes "power" the database in use for the current connection; otherwise, before operating on a specific table, you would need to specify the database with the "database-name.table-name" form.
**Note:**
- Any table or STable belongs to a database. Before creating a table, a database must be created first.
- Tables in two different databases cannot be JOINed.
## <a class="anchor" id="create-stable"></a> Create a STable
@@ -38,11 +35,11 @@ CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAG
**Note:** The STABLE keyword in this instruction needs to be written as TABLE in versions before 2.0.15.
Just like creating an ordinary table, you need to provide the table name ('meters' in the example) and the table structure schema, that is, the definition of the data columns. The first column must be a timestamp ('ts' in the example), and the other columns are the metrics collected (current, voltage, phase in the example), whose data types can be int, float, string, etc. In addition, you need to provide the schema of the tags (location, groupId in the example), whose data types can be int, float, string and so on. Static attributes of data collection points can often be used as tags, such as the geographic location of the collection point, device model, device group ID, administrator ID, etc. The schema of the tags can be added, deleted and modified afterwards. Please refer to the [STable Management section of TAOS SQL](https://www.taosdata.com/cn/documentation/taos-sql#super-table) for specific definitions and details.
A STable must be created for each type of data collection point, so an IoT system often has multiple STables. For the power grid, we need to build a STable for smart meters, a STable for transformers, a STable for buses, a STable for switches, etc. For IoT, a device may have multiple data collection points (for example, for a fan in a wind-driven generator, one data collection point captures metrics such as current and voltage, and another captures environmental parameters such as temperature, humidity and wind direction). In this case, multiple STables need to be established for the corresponding types of devices. All metrics contained in a STable must be collected at the same time (with the same timestamp).
A STable allows up to 1024 columns. If the number of metrics collected at a data collection point exceeds 1024, multiple STables need to be built to process them. A system can have multiple DBs, and a DB can have one or more STables.
## <a class="anchor" id="create-table"></a> Create a Table ## <a class="anchor" id="create-table"></a> Create a Table
...@@ -54,22 +51,23 @@ CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2); ...@@ -54,22 +51,23 @@ CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
Where d1001 is the table name and meters is the name of the STable, followed by the tag values: "Beijing.Chaoyang" for the tag location and 2 for the tag groupId. Although the tag values need to be specified when creating the table, they can be modified afterwards. Please refer to the [Table Management section of TAOS SQL](https://www.taosdata.com/en/documentation/taos-sql#table) for details.
**Note:** At present, TDengine does not technically prevent using a STable of one database (dbA) as the template to create a sub-table in another database (dbB). This usage will be prohibited in a later version, and it is not recommended to create tables this way. A sketch of the discouraged pattern follows.
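For illustration only, this is what the discouraged cross-database pattern looks like (the database and table names here are hypothetical):

```mysql
-- Discouraged: the sub-table lives in dbB while its template STable lives in dbA.
CREATE TABLE dbB.d2001 USING dbA.meters TAGS ("Beijing.Haidian", 3);
```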
TDengine suggests using the globally unique ID of a data collection point as the table name (such as the device serial number). However, in some scenarios there is no unique ID; in that case, multiple IDs can be combined into a unique one. It is not recommended to use a unique ID as a tag value.
**Automatic table creation**: In some special scenarios, an application may not be sure whether the table for a certain data collection point exists when writing data. In this case, the non-existent table can be created with the automatic table creation syntax when writing data. If the table already exists, no new table will be created. For example:
```mysql
INSERT INTO d1001 USING meters TAGS ("Beijing.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);
```
The SQL statement above inserts the record (now, 10.2, 219, 0.32) into table d1001. If table d1001 does not exist yet, the STable meters is used as the template to create it automatically, and the tag values "Beijing.Chaoyang" and 2 are set at the same time.
For the detailed syntax of automatic table creation, please refer to the "[Automatic Table Creation When Inserting Records](https://www.taosdata.com/en/documentation/taos-sql#auto_create_table)" section.
## Multi-column Model vs Single-column Model
TDengine supports the multi-column model. As long as metrics are collected simultaneously by a data collection point (with the same timestamp), they can be placed in a STable as different columns. However, there is also an extreme design, the single-column model, in which a STable is created for each metric. For the smart meter example, 3 STables would be needed: one for current, one for voltage and one for phase.
TDengine recommends using the multi-column model as much as possible because of its higher insertion and storage efficiency. However, in some scenarios the types of collected metrics change frequently; the multi-column model would then require frequent modifications to the STable schema, complicating the application. In such cases, the single-column model is recommended.
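A minimal sketch of the single-column model for the smart meter example (the STable names below are hypothetical):

```mysql
-- One STable per metric; all three share the same tag schema.
CREATE TABLE current_stb (ts TIMESTAMP, current FLOAT) TAGS (location BINARY(64), groupId INT);
CREATE TABLE voltage_stb (ts TIMESTAMP, voltage INT) TAGS (location BINARY(64), groupId INT);
CREATE TABLE phase_stb (ts TIMESTAMP, phase FLOAT) TAGS (location BINARY(64), groupId INT);
```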
# Efficient Data Writing
TDengine supports multiple ways to write data, including SQL, Prometheus, Telegraf, EMQ MQTT Broker, HiveMQ Broker, CSV file, etc. Kafka, OPC and other interfaces will be provided in the future. Data can be inserted as a single record or in batches, and data from one or multiple data collection points can be inserted at the same time. TDengine supports multi-thread insertion, out-of-order data insertion, and historical data insertion.
## <a class="anchor" id="sql"></a> Data Writing via SQL
Applications insert data by executing SQL insert statements through the C/C++, Java, Go, C#, Python or Node.js connectors, and users can manually enter SQL insert statements through the TAOS Shell. For example, the following statement writes a record to table d1001:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
```
TDengine supports writing multiple records in a single statement. For example, the following command writes two records to table d1001:
```mysql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
```
TDengine also supports writing data to multiple tables in a single statement. For example, the following command writes two records to d1001 and one record to d1002:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
...@@ -26,22 +26,22 @@ For the SQL INSERT Grammar, please refer to [Taos SQL insert](https://www.taosd
**Tips:**
- To improve writing efficiency, batch writing is needed. The more records written in a batch, the higher the insertion efficiency. However, a single record cannot exceed 16K bytes, and the total length of an SQL statement cannot exceed 64K bytes (configurable via the parameter maxSQLLength, up to a maximum of 1M).
- TDengine supports multi-thread parallel writing. To further improve writing speed, a client may need to open more than 20 threads writing in parallel. However, beyond a certain threshold, adding threads no longer increases the speed, and may even decrease it, because excessive thread switching brings extra overhead.
- For the same table, if the timestamp of a newly inserted record already exists, the new record is discarded by default (database option update = 0), that is, the timestamp must be unique within a table. If an application automatically generates records, the generated timestamps are very likely to collide, so the number of records successfully inserted may be smaller than the number the application tried to insert. If the UPDATE 1 option is used when creating the database, inserting a new record with an existing timestamp overwrites the original record (see the example after these tips).
- The timestamp of written data must be greater than the current time minus the value of the configuration parameter keep. If keep is set to 3650 days, data older than 3650 days cannot be written. The timestamp also cannot be greater than the current time plus the configuration parameter days. If days is set to 2, data more than 2 days later than the current time cannot be written.
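As referenced in the tips above, a sketch of enabling overwrite-on-duplicate-timestamp (the database name `power` is hypothetical):

```mysql
-- With update = 1, a record repeating an existing timestamp overwrites the old one
-- instead of being discarded.
CREATE DATABASE power UPDATE 1;
```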
## <a class="anchor" id="prometheus"></a> Data Writing via Prometheus
As a graduated project of the Cloud Native Computing Foundation, [Prometheus](https://www.prometheus.io/) is widely used for performance monitoring and K8S performance monitoring. TDengine provides a simple tool, [Bailongma](https://github.com/taosdata/Bailongma), which requires only simple configuration in Prometheus and no code at all to write the data collected by Prometheus directly into TDengine, automatically creating databases and related tables in TDengine according to rules. The blog post [Use Docker Container to Quickly Build a Devops Monitoring Demo](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine.
### Compile blm_prometheus From Source
Users need to download the source code of [Bailongma](https://github.com/taosdata/Bailongma) from GitHub, then compile it into an executable with the Golang compiler. Before you start compiling, you need to prepare:
- A server running Linux OS
- Golang version 1.10 or higher installed
- Since the client dynamic link library of TDengine is used, the same version of TDengine as the server side must be installed; for example, if the server version is TDengine 2.0.0, install the same version on the Linux server where Bailongma is located (it can be the same server as TDengine, or a different one)
The Bailongma project has a folder, blm_prometheus, which holds the Prometheus writing API. The compilation process is as follows:
...@@ -134,7 +134,7 @@ The format of generated data by Prometheus is as follows:
}
```
Where apiserver_request_latencies_bucket is the name of the time series collected by Prometheus, and the tags of the time series are in the following {}. blm_prometheus automatically creates a STable in TDengine named after the time series, converts the tags in {} into TDengine tag values, and uses Timestamp as the timestamp and value as the value of the time series. Therefore, in the TDengine client, you can check whether this data was successfully written with the following statement.
```mysql
use prometheus;
...@@ -144,7 +144,7 @@ select * from apiserver_request_latencies_bucket;
## <a class="anchor" id="telegraf"></a> Data Writing via Telegraf
[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a popular open-source tool for IT operation data collection. TDengine provides a simple tool, [Bailongma](https://github.com/taosdata/Bailongma), which requires only simple configuration in Telegraf and no code at all to write the data collected by Telegraf directly into TDengine, automatically creating databases and related tables in TDengine according to rules. The blog post [Use Docker Container to Quickly Build a Devops Monitoring Demo](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine.
...@@ -271,12 +271,12 @@ select * from cpu;
MQTT is a popular data transmission protocol in the IoT. TDengine can easily access the data received by an MQTT Broker and write it to TDengine.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
[EMQ](https://github.com/emqx/emqx) is an open-source MQTT Broker. With no coding required, MQTT data can be written directly into TDengine simply by configuring "rules" in the EMQ Dashboard. EMQ X supports storing data in TDengine by sending it to a web service, and its Enterprise Edition also provides a native TDengine driver for direct data storage. Please refer to the [EMQ official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
## <a class="anchor" id="hivemq"></a> Data Writing via HiveMQ Broker
[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides Free Personal and Enterprise Edition versions. It is mainly used for enterprise and emerging machine-to-machine (M2M) communication and internal transmission, meeting requirements for scalability, easy management and security. You can store data in TDengine via the HiveMQ extension-TDengine. Refer to the [HiveMQ extension-TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README.md) for more details.
\ No newline at end of file
...@@ -2,7 +2,7 @@
## <a class="anchor" id="queries"></a> Main Query Features
TDengine uses SQL as the query language. Applications can send SQL statements through the C/C++, Java, Go, C#, Python or Node.js connectors, and users can manually execute ad-hoc SQL queries through the Command Line Interface (CLI) tool TAOS Shell provided by TDengine. TDengine supports the following query functions:
- Single-column and multi-column data query
- Multiple filters for tags and numeric values: >, <, =, <>, like, etc.
...@@ -28,7 +28,7 @@ For specific query syntax, please see the [Data Query section of TAOS SQL](https
## <a class="anchor" id="aggregation"></a> Multi-table Aggregation Query
In IoT scenarios, there are often multiple data collection points of the same type. TDengine uses the concept of a STable to describe a type of data collection point, and ordinary tables to describe specific data collection points. At the same time, TDengine uses tags to describe the static attributes of data collection points. A given data collection point has specific tag values. By specifying tag filters, TDengine provides an efficient method to aggregate and query the sub-tables of a STable (the data collection points of a certain type). Aggregation functions and most operations on ordinary tables are applicable to STables, and the syntax is exactly the same.
**Example 1**: In TAOS Shell, look up the average voltages collected by all smart meters in Beijing and group them by location
...@@ -55,7 +55,7 @@ TDengine only allows aggregation queries between tables belonging to a same STab
## <a class="anchor" id="sampling"></a> Down Sampling Query, Interpolation
In IoT scenarios, it is often necessary to aggregate the collected data over time intervals through down sampling. TDengine provides a simple keyword, `interval`, which makes queries by time window extremely simple. For example, the following sums the current values collected by smart meter d1001 every 10 seconds.
```mysql
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
...@@ -94,6 +94,6 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
In IoT scenarios, it is difficult to synchronize the timestamps of the data collected at different points, but many analysis algorithms (such as FFT) need the data to be strictly aligned at equal time intervals. In many systems users have to write their own programs to handle this, but the down sampling operation of TDengine solves the problem easily. If an interval contains no collected data, TDengine also provides an interpolation calculation function, sketched below.
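A minimal sketch of interpolation using the FILL clause (table d1001 comes from the examples above; the time range is hypothetical, and FILL requires one):

```mysql
-- FILL(PREV) repeats the previous value for any 10-second window with no data.
SELECT AVG(current) FROM d1001 WHERE ts >= '2018-10-03 14:38:00' AND ts <= '2018-10-03 14:39:00' INTERVAL(10s) FILL(PREV);
```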
For details of syntax rules, please refer to the [Time-dimension Aggregation section of TAOS SQL](https://www.taosdata.com/en/documentation/taos-sql#aggregation).
\ No newline at end of file
...@@ -9,8 +9,8 @@ Continuous query of TDengine adopts time-driven mode, which can be defined direc
The continuous query provided by TDengine differs from the time window calculation in ordinary stream computing in the following ways:
- Unlike stream computing, which feeds back calculated results in real time, continuous query only starts calculating after a time window closes. For example, if the time period is 1 day, the results of that day are only generated after 23:59:59.
- If a historical record is written into a time interval that has already been calculated, the continuous query will not recalculate it and will not push new results to the user again.
- The TDengine server does not cache or save the client's status, nor does it provide an exactly-once semantic guarantee. If the application crashes, the continuous query must be pulled up again, and the starting time must be provided by the application.
### How to use continuous query
...@@ -29,7 +29,7 @@ We already know that the average voltage of these meters can be counted with one
select avg(voltage) from meters interval(1m) sliding(30s);
```
Every time this statement is executed, all data will be recalculated. If you need to execute it every 30 seconds to incrementally calculate the data of the latest minute, you can improve the above statement as follows, using a different `startTime` each time and executing it regularly:
```sql
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
...@@ -65,7 +65,7 @@ It should be noted that now in the above example refers to the time when continu
### Manage the Continuous Query
Users can view all continuous queries running in the system through the `show streams` command in the console, and can kill a continuous query through the `kill stream` command, as sketched below. Subsequent versions will provide finer-grained and more convenient continuous query management commands.
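A minimal sketch of these management commands (the stream id 1:1 is a hypothetical value taken from the output of `show streams`):

```mysql
show streams;
kill stream 1:1;
```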
## <a class="anchor" id="subscribe"></a> Publisher/Subscriber
...@@ -101,7 +101,7 @@ Another method is to query the STable. In this way, no matter how many meters th
select * from meters where ts > {last_timestamp} and current > 10;
```
However, how to choose `last_timestamp` becomes a new problem. On the one hand, the time of data generation (the data timestamp) and the time the data is written are generally not the same, and sometimes the deviation is very large; on the other hand, the time at which the data of different meters arrives at TDengine also varies. Therefore, if we use the timestamp of the data from the slowest meter as `last_timestamp` in the query, we may repeatedly read the data of other meters; if we use the timestamp of the fastest meter, the data of other meters may be missed.
The subscription function of TDengine provides a thorough solution to the above problem.
...@@ -357,4 +357,4 @@ This SQL statement will obtain the last recorded voltage value of all smart mete
In TDengine scenarios, alarm monitoring is a common requirement. Conceptually, it requires the program to filter out the data that meet certain conditions from the data of the latest period, calculate a result from them according to a defined formula and, when the result meets certain conditions and lasts for a certain period of time, notify the user in some form.
In order to meet the needs of users for alarm monitoring, TDengine provides this function in the form of an independent module. For its installation and use, please refer to the blog [How to Use TDengine for Alarm Monitoring](https://www.taosdata.com/blog/2020/04/14/1438.html).
\ No newline at end of file
...@@ -4,7 +4,7 @@
The taos-jdbcdriver is implemented in two forms: JDBC-JNI and JDBC-RESTful (supported from taos-jdbcdriver-2.0.18). JDBC-JNI is implemented by calling the local methods of libtaos.so (or taos.dll) on the client, while JDBC-RESTful encapsulates the RESTful interface implementation internally.
![tdengine-connector](../../images/tdengine-jdbc-connector.png)
The figure above shows the three ways Java applications can access TDengine:
...@@ -69,18 +69,18 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
TDengine data types map to Java data types as follows:
| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
| ----------------- | ---------------------------------- | ----------------------------------- |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
## Install Java connector
......
...@@ -2,7 +2,7 @@
TDengine provides many connectors for development, including C/C++, Java, Python, RESTful, Go, Node.js, etc.
![image-connector](../images/connector.png)
At present, TDengine connectors support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and development environments such as Linux/Win64/Win32. The comparison matrix is as follows:
...@@ -66,7 +66,11 @@ Run install_client.sh to install.
Edit the taos.cfg file (default path /etc/taos/taos.cfg) and change firstEP to the End Point of the TDengine server, for example: [h1.taos.com](http://h1.taos.com/):6030.
**Tips:**
**1. If no TDengine service is deployed on this machine and only the application driver is installed, only firstEP needs to be configured in taos.cfg; FQDN does not.**
**2. To prevent an "unable to resolve FQDN" error when connecting to the server, ensure that the client's hosts file contains the correct FQDN value.**
**Windows x64/x86**
...@@ -128,7 +132,7 @@ taos>
**Windows (x64/x86) environment:**
Under cmd, enter the C:\TDengine directory and execute taos.exe directly; you should be able to connect to the TDengine service normally and enter the taos shell interface. For example:
```mysql
C:\TDengine>taos
...@@ -179,7 +183,7 @@ Clean up the running environment and call this API before the application exits.
- `int taos_options(TSDB_OPTION option, const void * arg, ...)`
Set client options; currently only the time zone setting (`TSDB_OPTION_TIMEZONE`) and encoding setting (`TSDB_OPTION_LOCALE`) are supported. The time zone and encoding default to the current operating system settings.
- `char *taos_get_client_info()`
...@@ -403,17 +407,17 @@ See [video tutorials](https://www.taosdata.com/blog/2020/11/11/1963.html) for th
- Python 2.7 or >= 3.4 installed
- pip or pip3 installed
### Python connector installation
#### Linux
Users can find the connector packages for Python 2 and Python 3 in the source code folder src/connector/python (or tar.gz/connector/python) and install them with the `pip` command:
`pip install src/connector/python/`
or
`pip3 install src/connector/python/`
#### Windows
...@@ -536,7 +540,7 @@ Refer to help (taos.TDengineCursor) in python. This class corresponds to the wri
Used to generate an instance of taos.TDengineConnection.
### Python connector code sample
In tests/examples/python, we provide a sample Python program, read_example.py, to guide you in designing your own write and query programs. After installing the corresponding connector, import the taos module through `import taos`. The steps are as follows:
...@@ -606,11 +610,11 @@ The return value is in JSON format, as follows:
```json
{
    "status": "succ",
    "head": ["ts","current", ...],
    "column_meta": [["ts",9,8],["current",6,4], ...],
    "data": [
        ["2018-10-03 14:38:05.000", 10.3, ...],
        ["2018-10-03 14:38:15.000", 12.6, ...]
    ],
    "rows": 2
}
......
...@@ -2,7 +2,7 @@
## <a class="anchor" id="grafana"></a> Grafana
TDengine can be quickly integrated with [Grafana](https://www.grafana.com/), an open source data visualization system, to build a data monitoring and alarming system. The whole process requires no coding. The contents of TDengine data tables can be visually displayed on a dashboard.
### Install Grafana
...@@ -26,15 +26,15 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tde
You can log in to the Grafana server (username/password: admin/admin) through localhost:3000, and add data sources through `Configuration -> Data Sources` on the left panel, as shown in the following figure:
![img](../images/connections/add_datasource1.jpg)
Click `Add data source` to enter the Add Data Source page, and type TDengine in the search box to find and add it, as shown in the following figure:
![img](../images/connections/add_datasource2.jpg)
Enter the data source configuration page and modify the corresponding configuration according to the default prompts:
![img](../images/connections/add_datasource3.jpg)
- Host: the IP address of any server in the TDengine cluster and the port number of the TDengine RESTful interface (6041), default [http://localhost:6041](http://localhost:6041/)
- User: TDengine username.
...@@ -42,13 +42,13 @@ Enter the data source configuration page and modify the corresponding configurat
Click `Save & Test` to test the connection. On success, the following prompt appears:
![img](../images/connections/add_datasource4.jpg)
#### Create Dashboard
Go back to the home page to create a dashboard, and click `Add Query` to enter the panel query page:
![img](../images/connections/create_dashboard1.jpg)
As shown in the figure above, select the TDengine data source in Query, and enter the corresponding SQL in the query box below. Details are as follows:
...@@ -58,7 +58,7 @@ As shown in the figure above, select the TDengine data source in Query, and ente
Following the default prompt, query the average system memory usage of the server where TDengine is deployed, at the specified interval, as follows:
![img](../images/connections/create_dashboard2.jpg)
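For reference, a query of this kind would look roughly like the sketch below (the table `log.dn` and the column `mem_system` are assumptions based on a default TDengine installation; `$from`, `$to` and `$interval` are Grafana template variables):

```mysql
-- Average system memory usage per interval over the dashboard's time range.
select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)
```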
> Please refer to Grafana [documents](https://grafana.com/docs/) for how to use Grafana to create the corresponding monitoring interface and for more about Grafana usage.
...@@ -68,11 +68,11 @@ A `tdengine-grafana.json` importable dashboard is provided under the Grafana plu
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
![img](../images/connections/import_dashboard1.jpg)
After the dashboard is imported, you can see the following:
![img](../images/connections/import_dashboard2.jpg)
## <a class="anchor" id="matlab"></a> MATLAB
......
# TDengine Cluster Management
Multiple TDengine servers, that is, multiple running instances of taosd, can form a cluster to ensure highly reliable operation of TDengine and provide scale-out capability. To understand cluster management in TDengine 2.0, it is necessary to understand the basic concepts of clustering; please refer to the chapter "Overall Architecture of TDengine 2.0". Before installing a cluster, please follow the chapter ["Getting started"](https://www.taosdata.com/en/documentation/getting-started/) to install and experience single-node TDengine.
Each data node of the cluster is uniquely identified by its End Point, which is composed of FQDN (Fully Qualified Domain Name) plus port, such as [h1.taosdata.com](http://h1.taosdata.com/):6030. In general, the FQDN is the hostname of the server, which can be obtained through the Linux command `hostname -f` (for how to configure FQDN, please refer to: [All about FQDN of TDengine](https://www.taosdata.com/blog/2020/09/11/1824.html)). The port is the external service port number of this data node, 6030 by default, but it can be modified through the parameter serverPort in taos.cfg. A physical node may be configured with multiple hostnames, and TDengine automatically picks the first one, but the hostname can also be specified through the configuration parameter `fqdn` in taos.cfg. If you want to access the node via its IP address directly, you can set the parameter `fqdn` to the IP address of this node.
The cluster management of TDengine is extremely simple. Except for manual intervention in adding and deleting nodes, all other tasks are completed automatically, thus minimizing the workload of operation. This chapter describes the operations of cluster management in detail.
...@@ -12,7 +12,7 @@ Please refer to the [video tutorial](https://www.taosdata.com/blog/2020/11/11/19
**Step 0:** Plan the FQDN of all physical nodes in the cluster and add each planned FQDN to /etc/hostname of the respective physical node; modify /etc/hosts of each physical node and add the corresponding IP and FQDN of all cluster physical nodes. [If DNS is deployed, contact your network administrator to configure it on the DNS.]
**Step 1:** If the physical nodes have previous test data, were installed with version 1.x, or were installed with another version of TDengine, please back up all data first, then delete the installation and drop all data. For specific steps, please refer to the blog "[Installation and Uninstallation of Various Packages of TDengine](https://www.taosdata.com/blog/2019/08/09/566.html)"
**Note 1:** Because FQDN information is written into a file, if FQDN has not been configured or is changed after TDengine has been started, be sure to clean up the previous data (`rm -rf /var/lib/taos/*`) on the premise that the data is useless or has been backed up;
...@@ -136,7 +136,7 @@ Execute the CLI program taos, log in to the TDengine system using the root accou
DROP DNODE "fqdn:port";
```
Where fqdn is the FQDN of the node to be deleted, and port is its server port number. For example:
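A usage sketch (the host name below is hypothetical):

```mysql
-- Remove the data node whose End Point is h2.taosdata.com:6030 from the cluster.
DROP DNODE "h2.taosdata.com:6030";
```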
<font color=green>**【Note】**</font>
...@@ -185,7 +185,7 @@ Because of the introduction of vnode, it is impossible to simply draw a conclusi
The TDengine cluster is managed by mnode (a module of taosd, the management node). To ensure high availability of the mnode, multiple mnode replicas can be configured. The number of replicas is determined by the system configuration parameter numOfMnodes, with a valid range of 1-3. To ensure strong consistency of metadata, the mnode replicas are replicated synchronously.
A cluster has multiple data nodes (dnodes), but a dnode runs at most one mnode instance. With multiple dnodes, which dnode serves as an mnode? This is automatically selected by the system based on overall resource usage. Users can execute the following command in the TDengine console through the CLI program taos:
```
SHOW MNODES;
...@@ -213,7 +213,7 @@ When the above three situations occur, the system will start a load computing of
If a data node is offline, the TDengine cluster will automatically detect it. There are two possible situations:
- If the data node is offline for more than a certain period of time (the configuration parameter `offlineThreshold` in taos.cfg controls the duration), the system will automatically delete the data node, generate system alarm information and trigger the load balancing process. If the deleted data node comes online again, it will not be able to join the cluster, and the system administrator will need to add it to the cluster again.
- If the data node comes online again within the duration of offlineThreshold, the system will automatically start the data recovery process. After the data is fully recovered, the node starts working normally.
**Note:** If every data node belonging to a virtual node group (including the mnode group) is in offline or unsynced state, a master can only be elected after all data nodes in the virtual node group are online and can exchange status information; only then can the virtual node group serve externally. For example, suppose the whole cluster has 3 data nodes with 3 replicas. If all 3 data nodes go down and then only 2 restart, the group will not work. Only when all 3 data nodes restart successfully can it serve externally again.
...@@ -229,7 +229,7 @@ The name of the executable for Arbitrator is tarbitrator. The executable has alm
1. Click [Package Download](https://www.taosdata.com/cn/all-downloads/), and in the TDengine Arbitrator Linux section, select the appropriate version to download and install.
2. The command-line parameter -p of this application can specify the port number of its service; the default is 6042.
3. Modify the configuration file of each taosd instance, and in taos.cfg set the parameter arbitrator to the End Point corresponding to the tarbitrator. (If this parameter is configured, when the number of replicas is even, the system will automatically connect to the configured Arbitrator. If the number of replicas is odd, the system will not establish a connection even if the Arbitrator is configured.)
4. The Arbitrator configured in the configuration file will appear in the return result of the instruction `SHOW DNODES`; the value of the corresponding role column will be "arb".
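After the configuration takes effect, a quick way to verify it from the taos shell is the check below (a minimal sketch; per item 4 above, the tarbitrator should be listed with role "arb"):

```mysql
SHOW DNODES;
```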
...@@ -22,8 +22,8 @@ If there is plenty of memory, the configuration of Blocks can be increased so th
CPU requirements depend on the following two aspects:
- **Data insertion**: A single TDengine core can handle at least 10,000 insertion requests per second. Each insertion request can carry multiple records, and inserting one record at a time consumes almost the same computing resources as inserting 10 records at once. Therefore, the larger the number of records per insert, the higher the insertion efficiency. If an insert request has more than 200 records, a single core can insert 1 million records per second. However, the faster the insertion speed, the higher the requirement for front-end data collection, because records need to be cached and then inserted in batches.
- **Query**: TDengine provides efficient queries, but the queries in each scenario vary greatly, as does the query frequency, making it difficult to give objective figures. Users need to write some query statements for their own scenarios to estimate.
Therefore, only the CPU needed for data insertion can be estimated; the computing resources consumed by queries cannot be estimated as precisely. In actual operation, it is not recommended to let CPU utilization exceed 50%; beyond that, new nodes need to be added to provide more computing resources.
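As a rough worked example of the figures above: ingesting 2 million records per second in batches of 200+ records needs about 2 cores for writes alone (2,000,000 ÷ 1,000,000 records per core per second), so keeping utilization under 50% suggests provisioning at least 4 cores for the write path.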
...@@ -78,7 +78,7 @@ When the nodes in TDengine cluster are deployed on different physical machines a
## <a class="anchor" id="config"></a> Server-side Configuration
The background service of the TDengine system is provided by taosd, and the configuration parameters can be modified in the configuration file taos.cfg to meet the requirements of different scenarios. The default location of the configuration file is the /etc/taos directory, which can be overridden with the `-c` parameter on the taosd command line. For example, `taosd -c /home/user` specifies that the configuration file is located in the /home/user directory.
You can also use `-C` to show the current server configuration parameters:
...@@ -88,14 +88,14 @@ taosd -C
Only some important configuration parameters are listed below. For more parameters, please refer to the instructions in the configuration file. Please refer to the previous chapters for a detailed introduction to the function of each parameter; the default values of these parameters work and generally do not need to be set. **Note: After the configuration is modified, the \*taosd service\* needs to be restarted to take effect.**
- firstEp: end point of the first dnode in the cluster to be connected when taosd starts; the default value is localhost:6030.
- fqdn: FQDN of the data node, which defaults to the first hostname configured by the operating system. If you want to access it directly via IP address, you can set it to the IP address of the node.
- serverPort: the port number of taosd's external service after startup; the default value is 6030.
- httpPort: the port number used by the RESTful service; all HTTP requests (TCP) for query/write operations are sent to this port. The default value is 6041.
- dataDir: the data file directory to which all data files will be written. Default: /var/lib/taos.
- logDir: the log file directory to which the running log files of the client and server will be written. Default: /var/log/taos.
- arbitrator: the end point of the arbitrator in the system; the default value is null.
- role: optional role for the dnode. 0: any (it can be used as an mnode and can allocate vnodes); 1: mgmt (it can only be an mnode and cannot allocate vnodes); 2: dnode (it cannot be an mnode; only vnodes can be allocated).
- debugFlag: the log switch. 131 (output error and warning logs), 135 (output error, warning, and debug logs), 143 (output error, warning, debug, and trace logs). Default value: 131 or 135 (different modules have different default values).
- numOfLogLines: the maximum number of lines allowed for a single log file. Default: 10,000,000 lines.
- logKeepDays: the maximum retention time of the log file. When it is greater than 0, the log file will be renamed to taosdlog.xxx, where xxx is the timestamp of the last modification of the log file in seconds. Default: 0 days.
...@@ -161,18 +161,18 @@ For example:
## <a class="anchor" id="client"></a> Client Configuration
The foreground interactive client application of the TDengine system consists of taos and the application driver, which share the same configuration file, taos.cfg, with taosd. When running taos, use the parameter `-c` to specify the configuration file directory, such as `taos -c /home/cfg`, which means using the parameters in the taos.cfg configuration file under the /home/cfg/ directory. The default directory is /etc/taos. For more information on how to use taos, see the help information `taos --help`. This section mainly describes the parameters used by the taos client application in the configuration file taos.cfg.
**Versions after 2.0.10.0 support the following parameters on the command line to display the current client configuration parameters:**
```bash
taos -C
taos --dump-config
```
Client configuration parameters:
- firstEp: end point of the first taosd instance in the cluster to be connected when taos starts; the default value is localhost:6030.
- secondEp: when taos starts, if it is unable to connect to firstEp, it will try to connect to secondEp.
- locale
Default value: obtained dynamically from the system. If automatic acquisition fails, the user needs to set it in the configuration file or through the API.
...@@ -493,4 +493,4 @@ At the moment, TDengine has nearly 200 internal reserved keywords, which cannot
| CONCAT | GLOB | METRICS | SET | VIEW |
| CONFIGS | GRANTS | MIN | SHOW | WAVG |
| CONFLICT | GROUP | MINUS | SLASH | WHERE |
| CONNECTION | | | | |
\ No newline at end of file
# TAOS SQL
TDengine provides a SQL-style language, TAOS SQL, to insert or query data. This document introduces TAOS SQL along with other common tips. To read through this document, readers should have a basic understanding of SQL.
TAOS SQL is the main tool for users to write data into and query data from TDengine. TAOS SQL provides a syntax style similar to standard SQL to help users get started quickly. Strictly speaking, TAOS SQL is not and does not attempt to provide SQL standard syntax. In addition, since TDengine does not provide deletion functionality for time-series data, the corresponding data-deletion functions are unsupported in TAOS SQL.
Let's take a look at the conventions used for syntax descriptions.
...@@ -37,7 +37,7 @@ With TDengine, the most important thing is timestamp. When creating and insertin
- Epoch Time: a timestamp value can also be a long integer representing milliseconds since 1970-01-01 08:00:00.000.
- Arithmetic operations can be applied to timestamps. For example, now-2h represents a timestamp 2 hours before the current server time. Units include u (microseconds), a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks). The query `select * from t1 where ts > now-2w and ts <= now-1w` returns the data of the whole week two weeks ago. To specify the interval for down-sampling, you can also use n (calendar month) and y (calendar year) as time units.
TDengine's timestamp is set to millisecond precision by default. Microsecond/nanosecond precision can be set using CREATE DATABASE with the PRECISION parameter. (Nanosecond resolution is supported from version 2.1.5.0 onwards.)
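For example, a database with nanosecond-precision timestamps could be created like this (a sketch; the database name is hypothetical):

```mysql
CREATE DATABASE power_db PRECISION 'ns';
```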
In TDengine, the following 10 data types can be used in the data model of an ordinary table.
...@@ -75,7 +75,7 @@ Note:
2. UPDATE marks whether the database supports updating data with the same timestamp;
3. The maximum length of the database name is 33;
4. The maximum length of a SQL statement is 65480 characters;
5. Databases have more storage-related configuration parameters; see [Server-side Configuration](https://www.taosdata.com/en/documentation/administrator#config).
- **Show current system parameters**
...@@ -88,7 +88,7 @@ Note:
```mysql
USE db_name;
```
Use/switch database (invalid when accessing through a RESTful connection).
- **Drop a database**
```mysql
...@@ -127,7 +127,7 @@ Note:
ALTER DATABASE db_name CACHELAST 0;
```
The CACHELAST parameter controls whether the last_row of data sub-tables is cached in memory. The default value is 0, and the value range is [0, 1], where 0 means not enabled and 1 means enabled. (Supported from version 2.0.11.)
**Tips**: After any of the above parameters is modified, SHOW DATABASES can be used to confirm whether the modification succeeded.
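Following that tip, enabling the cache and verifying the change could look like this:

```mysql
ALTER DATABASE db_name CACHELAST 1;
SHOW DATABASES;
```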
- **Show all databases in system**
...@@ -138,14 +138,17 @@ Note:
## <a class="anchor" id="table"></a> Table Management
- **Create a table**

```mysql
CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
```

Note:
1. The first field must be a timestamp, and the system will set it as the primary key;
2. The maximum length of a table name is 192;
3. The length of each row of the table cannot exceed 16k characters;
4. Sub-table names can only consist of letters, numbers, and underscores, and cannot begin with a number;
5. If the data type binary or nchar is used, the maximum number of bytes should be specified, such as binary(20), which means 20 bytes.
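As a concrete illustration of the syntax above (the table and column names are hypothetical, echoing the meter examples used later in this document):

```mysql
CREATE TABLE IF NOT EXISTS d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```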
- **Create a table via STable**
...@@ -171,10 +174,10 @@ Note:
Note:
1. The method of batch-creating tables requires that the data tables use an STable as a template.
2. Provided the length limit of SQL statements is not exceeded, it is suggested to keep the number of tables in a single statement between 1000 and 3000 to obtain an ideal table-creation speed (see the sketch below).
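A batch-creation sketch under those suggestions (the sub-table names and tag values are hypothetical; `meters` is assumed to be an existing STable):

```mysql
CREATE TABLE IF NOT EXISTS d2001 USING meters TAGS ("Beijing.Chaoyang", 2)
                           d2002 USING meters TAGS ("Beijing.Haidian", 3);
```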
- **Drop a table**
```mysql
DROP TABLE [IF EXISTS] tb_name;
```
...@@ -218,7 +221,7 @@ Note:
## <a class="anchor" id="super-table"></a> STable Management
Note: In versions 2.0.15.0 and later, the STABLE keyword is supported. That is, in the commands described later in this section, CREATE, DROP and ALTER are written with STABLE; in older versions, TABLE must be written instead.
- **Create a STable**
...@@ -290,7 +293,7 @@ Note: In 2.0.15.0 and later versions, STABLE reserved words are supported. That
Modify a tag name of an STable. After the modification, all sub-tables under the STable automatically use the new tag name.
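A minimal sketch of this operation (the tag names are hypothetical; in versions before 2.0.15.0, write TABLE instead of STABLE):

```mysql
ALTER STABLE stb_name CHANGE TAG old_tag_name new_tag_name;
```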
- **Modify a tag value of sub-table**
```mysql
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
```
...@@ -306,7 +309,7 @@ Note: In 2.0.15.0 and later versions, STABLE reserved words are supported. That
Insert a record into table tb_name.
- **Insert a record with data corresponding to a given column**
```mysql
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value1, ...);
```
...@@ -320,14 +323,14 @@ Note: In 2.0.15.0 and later versions, STABLE reserved words are supported. That
Insert multiple records into table tb_name.
- **Insert multiple records into a given column**
```mysql
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into a given column of table tb_name.
- **Insert multiple records into multiple tables**
```mysql
INSERT INTO tb1_name VALUES (field1_value1, ...) (field1_value2, ...) ...
            tb2_name VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into tables tb1_name and tb2_name at the same time.
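A concrete sketch of the multi-table form (the table names, timestamps and values are hypothetical):

```mysql
INSERT INTO d1001 VALUES ('2018-10-03 14:38:05.000', 10.2, 219, 0.32) ('2018-10-03 14:38:15.000', 12.6, 218, 0.33)
            d1002 VALUES ('2018-10-03 14:38:16.650', 10.3, 218, 0.25);
```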
...@@ -421,7 +424,7 @@ taos> SELECT * FROM d1001;
Query OK, 3 row(s) in set (0.001165s)
```
For STables, wildcards contain *tag columns*.
```mysql
taos> SELECT * FROM meters;
```
...@@ -720,7 +723,7 @@ TDengine supports aggregations over data, they are listed below:
================================================
9 |
Query OK, 1 row(s) in set (0.004475s)

taos> SELECT COUNT(*), COUNT(voltage) FROM d1001;
count(*) | count(voltage) |
================================================
...@@ -758,7 +761,7 @@ TDengine supports aggregations over data, they are listed below:
```
- **TWA**
```mysql
SELECT TWA(field_name) FROM tb_name WHERE clause;
```
...@@ -799,7 +802,7 @@ TDengine supports aggregations over data, they are listed below:
================================================================================
35.200000763 | 658 | 0.950000018 |
Query OK, 1 row(s) in set (0.000980s)
```
- **STDDEV**
...@@ -896,7 +899,7 @@ TDengine supports aggregations over data, they are listed below:
======================================
13.40000 | 223 |
Query OK, 1 row(s) in set (0.001123s)

taos> SELECT MAX(current), MAX(voltage) FROM d1001;
max(current) | max(voltage) |
======================================
...@@ -937,8 +940,6 @@ TDengine supports aggregations over data, they are listed below:
Query OK, 1 row(s) in set (0.001023s)
```
- **LAST**
```mysql
...@@ -972,7 +973,7 @@ TDengine supports aggregations over data, they are listed below:
```
- **TOP**
```mysql
SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
```
...@@ -1029,7 +1030,7 @@ TDengine supports aggregations over data, they are listed below:
2018-10-03 14:38:15.000 | 218 |
2018-10-03 14:38:16.650 | 218 |
Query OK, 2 row(s) in set (0.001332s)

taos> SELECT BOTTOM(current, 2) FROM d1001;
ts | bottom(current, 2) |
=================================================
...@@ -1092,7 +1093,7 @@ TDengine supports aggregations over data, they are listed below:
=======================
12.30000 |
Query OK, 1 row(s) in set (0.001238s)

taos> SELECT LAST_ROW(current) FROM d1002;
last_row(current) |
=======================
...@@ -1146,7 +1147,7 @@ TDengine supports aggregations over data, they are listed below:
============================
5.000000000 |
Query OK, 1 row(s) in set (0.001792s)

taos> SELECT SPREAD(voltage) FROM d1001;
spread(voltage) |
============================
...@@ -1172,7 +1173,7 @@ TDengine supports aggregations over data, they are listed below:
## <a class="anchor" id="aggregation"></a> Time-dimension Aggregation
TDengine supports aggregating by intervals (time ranges). Data in a table can be partitioned by intervals and aggregated to generate results. For example, a temperature sensor collects data once per second, but the average temperature needs to be queried every 10 minutes. This aggregation is suitable for down-sampling operations, and the syntax is as follows:
```mysql
SELECT function_list FROM tb_name
...@@ -1235,11 +1236,11 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P
**Restrictions on group by**
TAOS SQL supports GROUP BY on tags, tbname, and ordinary columns, with the restriction that only one column may be used and it must have fewer than 100,000 unique values.
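For instance, grouping by a tag column might look like this (a sketch; the `location` tag on the `meters` STable is assumed from this document's examples):

```mysql
SELECT AVG(voltage) FROM meters GROUP BY location;
```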
**Restrictions on join operation**
TAOS SQL supports joining columns of two tables by the primary-key timestamp between them, and for the time being does not support the four arithmetic operations on results after tables are aggregated.
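A join sketch under that restriction (the table and column names are hypothetical):

```mysql
SELECT t1.ts, t1.current, t2.voltage FROM t1, t2 WHERE t1.ts = t2.ts;
```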
**Availability of is not null**
...@@ -45,6 +45,7 @@ echo "version=${version}"
#docker manifest rm tdengine/tdengine:${version}
if [ "$verType" == "beta" ]; then
  docker manifest inspect tdengine/tdengine-beta:latest
docker manifest create -a tdengine/tdengine-beta:latest tdengine/tdengine-amd64-beta:latest tdengine/tdengine-aarch64-beta:latest tdengine/tdengine-aarch32-beta:latest
  docker manifest rm tdengine/tdengine-beta:latest
  docker manifest create -a tdengine/tdengine-beta:${version} tdengine/tdengine-amd64-beta:${version} tdengine/tdengine-aarch64-beta:${version} tdengine/tdengine-aarch32-beta:${version}
  docker manifest create -a tdengine/tdengine-beta:latest tdengine/tdengine-amd64-beta:latest tdengine/tdengine-aarch64-beta:latest tdengine/tdengine-aarch32-beta:latest
...@@ -54,6 +55,7 @@ if [ "$verType" == "beta" ]; then
elif [ "$verType" == "stable" ]; then
  docker manifest inspect tdengine/tdengine:latest
docker manifest create -a tdengine/tdengine:latest tdengine/tdengine-amd64:latest tdengine/tdengine-aarch64:latest tdengine/tdengine-aarch32:latest
  docker manifest rm tdengine/tdengine:latest
  docker manifest create -a tdengine/tdengine:${version} tdengine/tdengine-amd64:${version} tdengine/tdengine-aarch64:${version} tdengine/tdengine-aarch32:${version}
  docker manifest create -a tdengine/tdengine:latest tdengine/tdengine-amd64:latest tdengine/tdengine-aarch64:latest tdengine/tdengine-aarch32:latest
...@@ -22,7 +22,7 @@ cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...]
osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | Ningsi60 | Ningsi80 |...]
pagMode=full # [full | lite]
soMode=dynamic # [static | dynamic]
dbName=taos # [taos | power | tq | pro]
allocator=glibc # [glibc | jemalloc]
verNumber=""
verNumberComp="1.0.0.0"
...@@ -78,7 +78,7 @@ do
  echo " -l [full | lite] "
  echo " -a [glibc | jemalloc] "
  echo " -s [static | dynamic] "
  echo " -d [taos | power | tq | pro] "
  echo " -n [version number] "
  echo " -m [compatible version number] "
  exit 0
...@@ -253,6 +253,10 @@ if [ "$osType" != "Darwin" ]; then
  ${csudo} ./makepkg_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
  ${csudo} ./makeclient_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
  ${csudo} ./makearbi_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
elif [[ "$dbName" == "pro" ]]; then
${csudo} ./makepkg_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
${csudo} ./makeclient_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
${csudo} ./makearbi_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
else
  ${csudo} ./makepkg_power.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
  ${csudo} ./makeclient_power.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
...@@ -262,4 +266,3 @@ else
  cd ${script_dir}/tools
  ./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${dbName}
fi
...@@ -102,6 +102,12 @@ elif echo $osinfo | grep -qwi "centos" ; then
elif echo $osinfo | grep -qwi "fedora" ; then
  # echo "This is fedora system"
  os_type=2
elif echo $osinfo | grep -qwi "Linx" ; then
# echo "This is Linx system"
os_type=1
service_mod=0
initd_mod=0
service_config_dir="/etc/systemd/system"
else
  echo " osinfo: ${osinfo}"
  echo " This is an officially unverified linux system,"
#!/bin/bash
#
# This file is used to install database on linux systems. The operating system
# is required to use systemd to manage services at boot
set -e
#set -x
# -----------------------Variables definition---------------------
script_dir=$(dirname $(readlink -f "$0"))
bin_link_dir="/usr/bin"
#inc_link_dir="/usr/include"
#install main path
install_main_dir="/usr/local/tarbitrator"
# old bin dir
bin_dir="/usr/local/tarbitrator/bin"
service_config_dir="/etc/systemd/system"
# Color setting
RED='\033[0;31m'
GREEN='\033[1;32m'
GREEN_DARK='\033[0;32m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
update_flag=0
initd_mod=0
service_mod=2
if pidof systemd &> /dev/null; then
service_mod=0
elif $(which service &> /dev/null); then
service_mod=1
service_config_dir="/etc/init.d"
if $(which chkconfig &> /dev/null); then
initd_mod=1
elif $(which insserv &> /dev/null); then
initd_mod=2
elif $(which update-rc.d &> /dev/null); then
initd_mod=3
else
service_mod=2
fi
else
service_mod=2
fi
# get the operating system type for using the corresponding init file
# ubuntu/debian(deb), centos/fedora(rpm), others: opensuse, redhat, ..., no verification
#osinfo=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
if [[ -e /etc/os-release ]]; then
osinfo=$(cat /etc/os-release | grep "NAME" | cut -d '"' -f2) ||:
else
osinfo=""
fi
#echo "osinfo: ${osinfo}"
os_type=0
if echo $osinfo | grep -qwi "ubuntu" ; then
# echo "This is ubuntu system"
os_type=1
elif echo $osinfo | grep -qwi "debian" ; then
# echo "This is debian system"
os_type=1
elif echo $osinfo | grep -qwi "Kylin" ; then
# echo "This is Kylin system"
os_type=1
elif echo $osinfo | grep -qwi "centos" ; then
# echo "This is centos system"
os_type=2
elif echo $osinfo | grep -qwi "fedora" ; then
# echo "This is fedora system"
os_type=2
else
echo " osinfo: ${osinfo}"
echo " This is an officially unverified linux system,"
echo " if there are any problems with the installation and operation, "
echo " please feel free to contact hanatech.com.cn for support."
os_type=1
fi
function kill_tarbitrator() {
pid=$(ps -ef | grep "tarbitrator" | grep -v "grep" | awk '{print $2}')
if [ -n "$pid" ]; then
${csudo} kill -9 $pid || :
fi
}
function install_main_path() {
#create install main dir and all sub dir
${csudo} rm -rf ${install_main_dir} || :
${csudo} mkdir -p ${install_main_dir}
${csudo} mkdir -p ${install_main_dir}/bin
#${csudo} mkdir -p ${install_main_dir}/include
${csudo} mkdir -p ${install_main_dir}/init.d
}
function install_bin() {
# Remove links
${csudo} rm -f ${bin_link_dir}/rmtarbitrator || :
${csudo} rm -f ${bin_link_dir}/tarbitrator || :
${csudo} cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo} chmod 0555 ${install_main_dir}/bin/*
#Make link
[ -x ${install_main_dir}/bin/remove_arbi_prodb.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_arbi_prodb.sh ${bin_link_dir}/rmtarbitrator || :
[ -x ${install_main_dir}/bin/tarbitrator ] && ${csudo} ln -s ${install_main_dir}/bin/tarbitrator ${bin_link_dir}/tarbitrator || :
}
function install_header() {
${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
${csudo} cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
}
function clean_service_on_sysvinit() {
#restart_config_str="taos:2345:respawn:${service_config_dir}/taosd start"
#${csudo} sed -i "\|${restart_config_str}|d" /etc/inittab || :
if pidof tarbitrator &> /dev/null; then
${csudo} service tarbitratord stop || :
fi
if ((${initd_mod}==1)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} chkconfig --del tarbitratord || :
fi
elif ((${initd_mod}==2)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} insserv -r tarbitratord || :
fi
elif ((${initd_mod}==3)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} update-rc.d -f tarbitratord remove || :
fi
fi
${csudo} rm -f ${service_config_dir}/tarbitratord || :
if $(which init &> /dev/null); then
${csudo} init q || :
fi
}
function install_service_on_sysvinit() {
clean_service_on_sysvinit
sleep 1
# Install the tarbitratord service
if ((${os_type}==1)); then
${csudo} cp -f ${script_dir}/init.d/tarbitratord.deb ${install_main_dir}/init.d/tarbitratord
${csudo} cp ${script_dir}/init.d/tarbitratord.deb ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
elif ((${os_type}==2)); then
${csudo} cp -f ${script_dir}/init.d/tarbitratord.rpm ${install_main_dir}/init.d/tarbitratord
${csudo} cp ${script_dir}/init.d/tarbitratord.rpm ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
fi
if ((${initd_mod}==1)); then
${csudo} chkconfig --add tarbitratord || :
${csudo} chkconfig --level 2345 tarbitratord on || :
elif ((${initd_mod}==2)); then
${csudo} insserv tarbitratord || :
${csudo} insserv -d tarbitratord || :
elif ((${initd_mod}==3)); then
${csudo} update-rc.d tarbitratord defaults || :
fi
}
function clean_service_on_systemd() {
tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
if systemctl is-active --quiet tarbitratord; then
echo "tarbitrator is running, stopping it..."
${csudo} systemctl stop tarbitratord &> /dev/null || echo &> /dev/null
fi
${csudo} systemctl disable tarbitratord &> /dev/null || echo &> /dev/null
${csudo} rm -f ${tarbitratord_service_config}
}
function install_service_on_systemd() {
clean_service_on_systemd
tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
${csudo} bash -c "echo '[Unit]' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'Description=ProDB arbitrator service' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'After=network-online.target' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'Wants=network-online.target' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo >> ${tarbitratord_service_config}"
${csudo} bash -c "echo '[Service]' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'Type=simple' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'ExecStart=/usr/bin/tarbitrator' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'TimeoutStopSec=1000000s' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'LimitNOFILE=infinity' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'LimitNPROC=infinity' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'LimitCORE=infinity' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'TimeoutStartSec=0' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'StandardOutput=null' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'Restart=always' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'StartLimitBurst=3' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'StartLimitInterval=60s' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo >> ${tarbitratord_service_config}"
${csudo} bash -c "echo '[Install]' >> ${tarbitratord_service_config}"
${csudo} bash -c "echo 'WantedBy=multi-user.target' >> ${tarbitratord_service_config}"
${csudo} systemctl enable tarbitratord
}
function install_service() {
if ((${service_mod}==0)); then
install_service_on_systemd
elif ((${service_mod}==1)); then
install_service_on_sysvinit
else
# must manually stop tarbitrator
kill_tarbitrator
fi
}
function update_prodb() {
# Start to update
echo -e "${GREEN}Start to update ProDB's arbitrator ...${NC}"
# Stop the service if running
if pidof tarbitrator &> /dev/null; then
if ((${service_mod}==0)); then
${csudo} systemctl stop tarbitratord || :
elif ((${service_mod}==1)); then
${csudo} service tarbitratord stop || :
else
kill_tarbitrator
fi
sleep 1
fi
install_main_path
#install_header
install_bin
install_service
echo
#echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/taos/taos.cfg"
if ((${service_mod}==0)); then
echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} systemctl start tarbitratord${NC}"
elif ((${service_mod}==1)); then
echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} service tarbitratord start${NC}"
else
echo -e "${GREEN_DARK}To start arbitrator ${NC}: ./tarbitrator${NC}"
fi
echo
echo -e "\033[44;32;1mProDB's arbitrator is updated successfully!${NC}"
}
function install_prodb() {
# Start to install
echo -e "${GREEN}Start to install ProDB's arbitrator ...${NC}"
install_main_path
#install_header
install_bin
install_service
echo
#echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/taos/taos.cfg"
if ((${service_mod}==0)); then
echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} systemctl start tarbitratord${NC}"
elif ((${service_mod}==1)); then
echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} service tarbitratord start${NC}"
else
echo -e "${GREEN_DARK}To start arbitrator ${NC}: tarbitrator${NC}"
fi
echo -e "\033[44;32;1mProDB's arbitrator is installed successfully!${NC}"
echo
}
## ==============================Main program starts from here============================
# Install or update the arbitrator
if [ -x ${bin_dir}/tarbitrator ]; then
update_flag=1
update_prodb
else
install_prodb
fi
...@@ -128,8 +128,12 @@ function install_lib() {
    ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib
    ${csudo} ln -s ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
  fi
  if [ "$osType" != "Darwin" ]; then
${csudo} ldconfig
else
${csudo} update_dyld_shared_cache
fi
}
function install_header() {
#!/bin/bash
#
# This file is used to install ProDB client on linux systems. The operating system
# is required to use systemd to manage services at boot
set -e
#set -x
# -----------------------Variables definition---------------------
osType=Linux
pagMode=full
if [ "$osType" != "Darwin" ]; then
script_dir=$(dirname $(readlink -f "$0"))
# Dynamic directory
data_dir="/var/lib/ProDB"
log_dir="/var/log/ProDB"
else
script_dir=`dirname $0`
cd ${script_dir}
script_dir="$(pwd)"
data_dir="/var/lib/ProDB"
log_dir="~/ProDB/log"
fi
log_link_dir="/usr/local/ProDB/log"
cfg_install_dir="/etc/ProDB"
if [ "$osType" != "Darwin" ]; then
bin_link_dir="/usr/bin"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64"
inc_link_dir="/usr/include"
else
bin_link_dir="/usr/local/bin"
lib_link_dir="/usr/local/lib"
inc_link_dir="/usr/local/include"
fi
#install main path
install_main_dir="/usr/local/ProDB"
# old bin dir
bin_dir="/usr/local/ProDB/bin"
# Color setting
RED='\033[0;31m'
GREEN='\033[1;32m'
GREEN_DARK='\033[0;32m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
update_flag=0
function kill_client() {
pid=$(ps -ef | grep "prodbc" | grep -v "grep" | awk '{print $2}')
if [ -n "$pid" ]; then
${csudo} kill -9 $pid || :
fi
}
function install_main_path() {
#create install main dir and all sub dir
${csudo} rm -rf ${install_main_dir} || :
${csudo} mkdir -p ${install_main_dir}
${csudo} mkdir -p ${install_main_dir}/cfg
${csudo} mkdir -p ${install_main_dir}/bin
${csudo} mkdir -p ${install_main_dir}/connector
${csudo} mkdir -p ${install_main_dir}/driver
${csudo} mkdir -p ${install_main_dir}/examples
${csudo} mkdir -p ${install_main_dir}/include
}
function install_bin() {
# Remove links
${csudo} rm -f ${bin_link_dir}/prodbc || :
if [ "$osType" != "Darwin" ]; then
${csudo} rm -f ${bin_link_dir}/prodemo || :
${csudo} rm -f ${bin_link_dir}/prodump || :
fi
${csudo} rm -f ${bin_link_dir}/rmprodb || :
${csudo} rm -f ${bin_link_dir}/set_core || :
${csudo} cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo} chmod 0555 ${install_main_dir}/bin/*
#Make link
[ -x ${install_main_dir}/bin/prodbc ] && ${csudo} ln -s ${install_main_dir}/bin/prodbc ${bin_link_dir}/prodbc || :
if [ "$osType" != "Darwin" ]; then
[ -x ${install_main_dir}/bin/prodemo ] && ${csudo} ln -s ${install_main_dir}/bin/prodemo ${bin_link_dir}/prodemo || :
[ -x ${install_main_dir}/bin/prodump ] && ${csudo} ln -s ${install_main_dir}/bin/prodump ${bin_link_dir}/prodump || :
fi
[ -x ${install_main_dir}/bin/remove_client_prodb.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_client_prodb.sh ${bin_link_dir}/rmprodb || :
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
}
function clean_lib() {
sudo rm -f /usr/lib/libtaos.* || :
sudo rm -rf ${lib_dir} || :
}
function install_lib() {
# Remove links
${csudo} rm -f ${lib_link_dir}/libtaos.* || :
${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
#${csudo} rm -rf ${v15_java_app_dir} || :
${csudo} cp -rf ${script_dir}/driver/* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
if [ "$osType" != "Darwin" ]; then
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo} ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
if [ -d "${lib64_link_dir}" ]; then
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
${csudo} ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
fi
else
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib
${csudo} ln -s ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
fi
${csudo} ldconfig
}
function install_header() {
${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
${csudo} cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
}
function install_config() {
#${csudo} rm -f ${install_main_dir}/cfg/taos.cfg || :
if [ ! -f ${cfg_install_dir}/taos.cfg ]; then
${csudo} mkdir -p ${cfg_install_dir}
[ -f ${script_dir}/cfg/taos.cfg ] && ${csudo} cp ${script_dir}/cfg/taos.cfg ${cfg_install_dir}
${csudo} chmod 644 ${cfg_install_dir}/*
fi
${csudo} cp -f ${script_dir}/cfg/taos.cfg ${install_main_dir}/cfg/taos.cfg.org
${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
}
function install_log() {
${csudo} rm -rf ${log_dir} || :
if [ "$osType" != "Darwin" ]; then
${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
else
mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
fi
${csudo} ln -s ${log_dir} ${install_main_dir}/log
}
function install_connector() {
${csudo} cp -rf ${script_dir}/connector/* ${install_main_dir}/connector
}
function install_examples() {
if [ -d ${script_dir}/examples ]; then
${csudo} cp -rf ${script_dir}/examples/* ${install_main_dir}/examples
fi
}
function update_prodb() {
# Start to update
if [ ! -e prodb.tar.gz ]; then
echo "File prodb.tar.gz does not exist"
exit 1
fi
tar -zxf prodb.tar.gz
echo -e "${GREEN}Start to update ProDB client...${NC}"
# Stop the client shell if running
if pidof prodbc &> /dev/null; then
kill_client
sleep 1
fi
install_main_path
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
echo
echo -e "\033[44;32;1mProDB client is updated successfully!${NC}"
rm -rf $(tar -tf prodb.tar.gz)
}
function install_prodb() {
# Start to install
if [ ! -e prodb.tar.gz ]; then
echo "File prodb.tar.gz does not exist"
exit 1
fi
tar -zxf prodb.tar.gz
echo -e "${GREEN}Start to install ProDB client...${NC}"
install_main_path
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
echo
echo -e "\033[44;32;1mProDB client is installed successfully!${NC}"
rm -rf $(tar -tf prodb.tar.gz)
}
## ==============================Main program starts from here============================
# Install or update the client
# if the server is already installed, don't install the client
if [ -e ${bin_dir}/prodbs ]; then
    echo -e "\033[44;32;1mProDB server is already installed; no need to install the client!${NC}"
exit 0
fi
if [ -x ${bin_dir}/prodbc ]; then
update_flag=1
update_prodb
else
install_prodb
fi
...@@ -20,44 +20,33 @@ fi
# Dynamic directory
if [ "$osType" != "Darwin" ]; then
    data_dir="/var/lib/taos"
    log_dir="/var/log/taos"

    cfg_install_dir="/etc/taos"

    bin_link_dir="/usr/bin"
    lib_link_dir="/usr/lib"
    lib64_link_dir="/usr/lib64"
    inc_link_dir="/usr/include"

    install_main_dir="/usr/local/taos"
    bin_dir="/usr/local/taos/bin"
else
    data_dir="/usr/local/var/lib/taos"
    log_dir="/usr/local/var/log/taos"
    cfg_install_dir="/usr/local/etc/taos"

    bin_link_dir="/usr/local/bin"
    lib_link_dir="/usr/local/lib"
    inc_link_dir="/usr/local/include"

    install_main_dir="/usr/local/Cellar/tdengine/${verNumber}"
    bin_dir="/usr/local/Cellar/tdengine/${verNumber}/bin"
fi
service_config_dir="/etc/systemd/system"
...@@ -254,7 +243,12 @@ function install_lib() {
        ${csudo} ln -sf ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so
    fi
else
    ${csudo} cp -Rf ${binary_dir}/build/lib/libtaos.${verNumber}.dylib ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
${csudo} ln -sf ${install_main_dir}/driver/libtaos.* ${install_main_dir}/driver/libtaos.1.dylib || :
${csudo} ln -sf ${install_main_dir}/driver/libtaos.1.dylib ${install_main_dir}/driver/libtaos.dylib || :
${csudo} ln -sf ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib || :
${csudo} ln -sf ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib || :
fi
install_jemalloc
#!/bin/bash
#
# Generate arbitrator's tar.gz setup package for all os system
set -e
#set -x
curr_dir=$(pwd)
compile_dir=$1
version=$2
build_time=$3
cpuType=$4
osType=$5
verMode=$6
verType=$7
pagMode=$8
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
# create compressed install file.
build_dir="${compile_dir}/build"
code_dir="${top_dir}/src"
release_dir="${top_dir}/release"
#package_name='linux'
if [ "$verMode" == "cluster" ]; then
install_dir="${release_dir}/ProDB-enterprise-arbitrator-${version}"
else
install_dir="${release_dir}/ProDB-arbitrator-${version}"
fi
# Directories and files.
bin_files="${build_dir}/bin/tarbitrator ${script_dir}/remove_arbi_pro.sh"
install_files="${script_dir}/install_arbi_pro.sh"
#header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
init_file_tarbitrator_deb=${script_dir}/../deb/tarbitratord
init_file_tarbitrator_rpm=${script_dir}/../rpm/tarbitratord
# make directories.
mkdir -p ${install_dir} && cp ${install_files} ${install_dir} && chmod a+x ${install_dir}/install_arbi_pro.sh || :
#mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc || :
mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || :
mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_deb} ${install_dir}/init.d/tarbitratord.deb || :
mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_rpm} ${install_dir}/init.d/tarbitratord.rpm || :
cd ${release_dir}
if [ "$verMode" == "cluster" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
else
echo "unknow verMode, nor cluster or edge"
exit 1
fi
if [ "$verType" == "beta" ]; then
pkg_name=${pkg_name}-${verType}
elif [ "$verType" == "stable" ]; then
pkg_name=${pkg_name}
else
echo "unknow verType, nor stabel or beta"
exit 1
fi
tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
exitcode=$?
if [ "$exitcode" != "0" ]; then
echo "tar ${pkg_name}.tar.gz error !!!"
exit $exitcode
fi
cd ${curr_dir}
#!/bin/bash
#
# Generate tar.gz package for the client on all OS systems (Linux and Darwin)
set -e
#set -x
curr_dir=$(pwd)
compile_dir=$1
version=$2
build_time=$3
cpuType=$4
osType=$5
verMode=$6
verType=$7
pagMode=$8
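# Hypothetical example invocation (script name and argument values are
# illustrative only; all eight arguments are positional, and osType may be
# Linux or Darwin):
#   ./makeclient_pro.sh /path/to/debug 2.1.7.2 "$(date +%F)" x64 Darwin edge stable full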
if [ "$osType" != "Darwin" ]; then
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
else
script_dir=`dirname $0`
cd ${script_dir}
script_dir="$(pwd)"
top_dir=${script_dir}/../..
fi
# create compressed install file.
build_dir="${compile_dir}/build"
code_dir="${top_dir}/src"
release_dir="${top_dir}/release"
#package_name='linux'
if [ "$verMode" == "cluster" ]; then
install_dir="${release_dir}/ProDB-enterprise-client-${version}"
else
install_dir="${release_dir}/ProDB-client-${version}"
fi
# Directories and files.
if [ "$osType" != "Darwin" ]; then
lib_files="${build_dir}/lib/libtaos.so.${version}"
else
bin_files="${build_dir}/bin/taos ${script_dir}/remove_client_pro.sh"
lib_files="${build_dir}/lib/libtaos.${version}.dylib"
fi
header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
if [ "$verMode" == "cluster" ]; then
cfg_dir="${top_dir}/../enterprise/packaging/cfg"
else
cfg_dir="${top_dir}/packaging/cfg"
fi
install_files="${script_dir}/install_client_pro.sh"
# make directories.
mkdir -p ${install_dir}
mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/taos.cfg ${install_dir}/cfg/taos.cfg
sed -i '/dataDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
sed -i '/logDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
sed -i "s/TDengine/ProDB/g" ${install_dir}/cfg/taos.cfg
mkdir -p ${install_dir}/bin
if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/prodemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/prodump
cp ${script_dir}/set_core.sh ${install_dir}/bin
cp ${script_dir}/get_client.sh ${install_dir}/bin
cp ${script_dir}/taosd-dump-cfg.gdb ${install_dir}/bin
fi
else
cp ${bin_files} ${install_dir}/bin
fi
chmod a+x ${install_dir}/bin/* || :
if [ -f ${build_dir}/bin/jemalloc-config ]; then
mkdir -p ${install_dir}/jemalloc/{bin,lib,lib/pkgconfig,include/jemalloc,share/doc/jemalloc,share/man/man3}
cp ${build_dir}/bin/jemalloc-config ${install_dir}/jemalloc/bin
if [ -f ${build_dir}/bin/jemalloc.sh ]; then
cp ${build_dir}/bin/jemalloc.sh ${install_dir}/jemalloc/bin
fi
if [ -f ${build_dir}/bin/jeprof ]; then
cp ${build_dir}/bin/jeprof ${install_dir}/jemalloc/bin
fi
if [ -f ${build_dir}/include/jemalloc/jemalloc.h ]; then
cp ${build_dir}/include/jemalloc/jemalloc.h ${install_dir}/jemalloc/include/jemalloc
fi
if [ -f ${build_dir}/lib/libjemalloc.so.2 ]; then
cp ${build_dir}/lib/libjemalloc.so.2 ${install_dir}/jemalloc/lib
ln -sf libjemalloc.so.2 ${install_dir}/jemalloc/lib/libjemalloc.so
fi
if [ -f ${build_dir}/lib/libjemalloc.a ]; then
cp ${build_dir}/lib/libjemalloc.a ${install_dir}/jemalloc/lib
fi
if [ -f ${build_dir}/lib/libjemalloc_pic.a ]; then
cp ${build_dir}/lib/libjemalloc_pic.a ${install_dir}/jemalloc/lib
fi
if [ -f ${build_dir}/lib/pkgconfig/jemalloc.pc ]; then
cp ${build_dir}/lib/pkgconfig/jemalloc.pc ${install_dir}/jemalloc/lib/pkgconfig
fi
if [ -f ${build_dir}/share/doc/jemalloc/jemalloc.html ]; then
cp ${build_dir}/share/doc/jemalloc/jemalloc.html ${install_dir}/jemalloc/share/doc/jemalloc
fi
if [ -f ${build_dir}/share/man/man3/jemalloc.3 ]; then
cp ${build_dir}/share/man/man3/jemalloc.3 ${install_dir}/jemalloc/share/man/man3
fi
fi
cd ${install_dir}
if [ "$osType" != "Darwin" ]; then
tar -zcv -f prodb.tar.gz * --remove-files || :
else
tar -zcv -f prodb.tar.gz * || :
mv prodb.tar.gz ..
rm -rf ./*
mv ../prodb.tar.gz .
fi
cd ${curr_dir}
cp ${install_files} ${install_dir}
if [ "$osType" == "Darwin" ]; then
sed 's/osType=Linux/osType=Darwin/g' ${install_dir}/install_client_pro.sh >> install_client_prodb_temp.sh
mv install_client_prodb_temp.sh ${install_dir}/install_client_pro.sh
fi
if [ "$pagMode" == "lite" ]; then
sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install_client_pro.sh >> install_client_prodb_temp.sh
mv install_client_prodb_temp.sh ${install_dir}/install_client_pro.sh
fi
chmod a+x ${install_dir}/install_client_pro.sh
# Copy example code
mkdir -p ${install_dir}/examples
examples_dir="${top_dir}/tests/examples"
cp -r ${examples_dir}/c ${install_dir}/examples
sed -i '/passwd/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
cp -r ${examples_dir}/JDBC ${install_dir}/examples
cp -r ${examples_dir}/matlab ${install_dir}/examples
mv ${install_dir}/examples/matlab/TDengineDemo.m ${install_dir}/examples/matlab/ProDBDemo.m
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/matlab/ProDBDemo.m
cp -r ${examples_dir}/python ${install_dir}/examples
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/python/read_example.py
cp -r ${examples_dir}/R ${install_dir}/examples
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/R/command.txt
cp -r ${examples_dir}/go ${install_dir}/examples
mv ${install_dir}/examples/go/taosdemo.go ${install_dir}/examples/go/prodemo.go
sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/go/prodemo.go
fi
# Copy driver
mkdir -p ${install_dir}/driver
cp ${lib_files} ${install_dir}/driver
# Copy connector
connector_dir="${code_dir}/connector"
mkdir -p ${install_dir}/connector
if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
if [ "$osType" != "Darwin" ]; then
cp ${build_dir}/lib/*.jar ${install_dir}/connector ||:
fi
if [ -d "${connector_dir}/grafanaplugin/dist" ]; then
cp -r ${connector_dir}/grafanaplugin/dist ${install_dir}/connector/grafanaplugin
else
echo "WARNING: grafanaplugin bunlded dir not found, please check if want to use it!"
fi
if find ${connector_dir}/go -mindepth 1 -maxdepth 1 | read; then
cp -r ${connector_dir}/go ${install_dir}/connector
else
echo "WARNING: go connector not found, please check if want to use it!"
fi
cp -r ${connector_dir}/python ${install_dir}/connector
mv ${install_dir}/connector/python/taos ${install_dir}/connector/python/prodb
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/cinterface.py
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/subscription.py
sed -i '/self._password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/connection.py
fi
cd ${release_dir}
if [ "$verMode" == "cluster" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
else
echo "unknow verMode, nor cluster or edge"
exit 1
fi
if [ "$pagMode" == "lite" ]; then
pkg_name=${pkg_name}-Lite
fi
if [ "$verType" == "beta" ]; then
pkg_name=${pkg_name}-${verType}
elif [ "$verType" == "stable" ]; then
pkg_name=${pkg_name}
else
echo "unknow verType, nor stable or beta"
exit 1
fi
if [ "$osType" != "Darwin" ]; then
tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
else
tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) || :
mv "$(basename ${pkg_name}).tar.gz" ..
rm -rf ./*
mv ../"$(basename ${pkg_name}).tar.gz" .
fi
cd ${curr_dir}
#!/bin/bash
#
# Generate tar.gz package for all OS systems
set -e
#set -x
curr_dir=$(pwd)
compile_dir=$1
version=$2
build_time=$3
cpuType=$4
osType=$5
verMode=$6
verType=$7
pagMode=$8
versionComp=$9
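# Hypothetical example invocation (script name and argument values are
# illustrative only; all nine arguments are positional, and versionComp is the
# minimum compatible driver version recorded in driver/vercomp.txt below):
#   ./makepkg_pro.sh /path/to/debug 2.1.7.2 "$(date +%F)" x64 Linux edge stable full 2.0.0.0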
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
# create compressed install file.
build_dir="${compile_dir}/build"
code_dir="${top_dir}/src"
release_dir="${top_dir}/release"
#package_name='linux'
if [ "$verMode" == "cluster" ]; then
install_dir="${release_dir}/ProDB-enterprise-server-${version}"
else
install_dir="${release_dir}/ProDB-server-${version}"
fi
lib_files="${build_dir}/lib/libtaos.so.${version}"
header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
if [ "$verMode" == "cluster" ]; then
cfg_dir="${top_dir}/../enterprise/packaging/cfg"
else
cfg_dir="${top_dir}/packaging/cfg"
fi
install_files="${script_dir}/install_pro.sh"
nginx_dir="${code_dir}/../../enterprise/src/plugins/web"
# make directories.
mkdir -p ${install_dir}
mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/taos.cfg ${install_dir}/cfg/taos.cfg
#mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || :
mkdir -p ${install_dir}/bin
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taosd
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${build_dir}/bin/taosd ${install_dir}/bin/prodbs
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${build_dir}/bin/taosd ${install_dir}/bin/prodbs
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/prodemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/prodump
cp ${build_dir}/bin/tarbitrator ${install_dir}/bin
cp ${script_dir}/set_core.sh ${install_dir}/bin
cp ${script_dir}/get_client.sh ${install_dir}/bin
cp ${script_dir}/startPre.sh ${install_dir}/bin
cp ${script_dir}/taosd-dump-cfg.gdb ${install_dir}/bin
fi
chmod a+x ${install_dir}/bin/* || :
if [ "$verMode" == "cluster" ]; then
sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove_pro.sh >> remove_prodb_temp.sh
mv remove_prodb_temp.sh ${install_dir}/bin/remove_pro.sh
mkdir -p ${install_dir}/nginxd && cp -r ${nginx_dir}/* ${install_dir}/nginxd
cp ${nginx_dir}/png/taos.png ${install_dir}/nginxd/admin/images/taos.png
rm -rf ${install_dir}/nginxd/png
  # Replace the OEM name in the bundled web UI (added by yangzy@2021-09-22)
sed -i -e 's/www.taosdata.com/www.hanatech.com.cn/g' $(grep -r 'www.taosdata.com' ${install_dir}/nginxd | sed -r "s/(.*\.html):\s*(.*)/\1/g")
sed -i -e 's/TAOS Data/Hanatech/g' $(grep -r 'TAOS Data' ${install_dir}/nginxd | sed -r "s/(.*\.html):\s*(.*)/\1/g")
sed -i -e 's/taosd/prodbs/g' `grep -r 'taosd' ${install_dir}/nginxd | grep -E '*\.js\s*.*' | sed -r -e 's/(.*\.js):\s*(.*)/\1/g' | sort | uniq`
sed -i -e 's/<th style="font-weight: normal">taosd<\/th>/<th style="font-weight: normal">prodbs<\/th>/g' ${install_dir}/nginxd/admin/monitor.html
sed -i -e "s/data:\['taosd', 'system'\],/data:\['prodbs', 'system'\],/g" ${install_dir}/nginxd/admin/monitor.html
sed -i -e "s/name: 'taosd',/name: 'prodbs',/g" ${install_dir}/nginxd/admin/monitor.html
sed -i "s/TDengine/ProDB/g" ${install_dir}/nginxd/admin/*.html
sed -i "s/TDengine/ProDB/g" ${install_dir}/nginxd/admin/js/*.js
sed -i '/dataDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
sed -i '/logDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
sed -i "s/TDengine/ProDB/g" ${install_dir}/cfg/taos.cfg
if [ "$cpuType" == "aarch64" ]; then
cp -f ${install_dir}/nginxd/sbin/arm/64bit/nginx ${install_dir}/nginxd/sbin/
elif [ "$cpuType" == "aarch32" ]; then
cp -f ${install_dir}/nginxd/sbin/arm/32bit/nginx ${install_dir}/nginxd/sbin/
fi
rm -rf ${install_dir}/nginxd/sbin/arm
fi
cd ${install_dir}
exitcode=0
tar -zcv -f prodb.tar.gz * --remove-files || exitcode=$?
if [ "$exitcode" != "0" ]; then
  echo "tar prodb.tar.gz error !!!"
  exit $exitcode
fi
cd ${curr_dir}
cp ${install_files} ${install_dir}
if [ "$verMode" == "cluster" ]; then
sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/install_pro.sh >> install_prodb_temp.sh
mv install_prodb_temp.sh ${install_dir}/install_pro.sh
fi
if [ "$pagMode" == "lite" ]; then
  sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install_pro.sh >> install_prodb_temp.sh
mv install_prodb_temp.sh ${install_dir}/install_pro.sh
fi
chmod a+x ${install_dir}/install_pro.sh
# Copy example code
mkdir -p ${install_dir}/examples
examples_dir="${top_dir}/tests/examples"
cp -r ${examples_dir}/c ${install_dir}/examples
sed -i '/passwd/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
cp -r ${examples_dir}/JDBC ${install_dir}/examples
cp -r ${examples_dir}/matlab ${install_dir}/examples
mv ${install_dir}/examples/matlab/TDengineDemo.m ${install_dir}/examples/matlab/ProDBDemo.m
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/matlab/ProDBDemo.m
cp -r ${examples_dir}/python ${install_dir}/examples
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/python/read_example.py
cp -r ${examples_dir}/R ${install_dir}/examples
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/R/command.txt
cp -r ${examples_dir}/go ${install_dir}/examples
mv ${install_dir}/examples/go/taosdemo.go ${install_dir}/examples/go/prodemo.go
sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/go/prodemo.go
fi
# Copy driver
mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" > ${install_dir}/driver/vercomp.txt
# Copy connector
connector_dir="${code_dir}/connector"
mkdir -p ${install_dir}/connector
if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
cp ${build_dir}/lib/*.jar ${install_dir}/connector ||:
if [ -d "${connector_dir}/grafanaplugin/dist" ]; then
cp -r ${connector_dir}/grafanaplugin/dist ${install_dir}/connector/grafanaplugin
else
echo "WARNING: grafanaplugin bundled dir not found, please check if want to use it!"
fi
if find ${connector_dir}/go -mindepth 1 -maxdepth 1 | read; then
cp -r ${connector_dir}/go ${install_dir}/connector
else
echo "WARNING: go connector not found, please check if want to use it!"
fi
cp -r ${connector_dir}/python ${install_dir}/connector/
mv ${install_dir}/connector/python/taos ${install_dir}/connector/python/prodb
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/cinterface.py
sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/subscription.py
sed -i '/self._password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/connection.py
fi
cd ${release_dir}
if [ "$verMode" == "cluster" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${osType}-${cpuType}
else
echo "unknow verMode, nor cluster or edge"
exit 1
fi
if [ "$pagMode" == "lite" ]; then
pkg_name=${pkg_name}-Lite
fi
if [ "$verType" == "beta" ]; then
pkg_name=${pkg_name}-${verType}
elif [ "$verType" == "stable" ]; then
pkg_name=${pkg_name}
else
echo "unknow verType, nor stabel or beta"
exit 1
fi
tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
exitcode=$?
if [ "$exitcode" != "0" ]; then
echo "tar ${pkg_name}.tar.gz error !!!"
exit $exitcode
fi
cd ${curr_dir}
#!/bin/bash
#
# Script to stop the service and uninstall ProDB's arbitrator
set -e
#set -x
verMode=edge
RED='\033[0;31m'
GREEN='\033[1;32m'
NC='\033[0m'
#install main path
install_main_dir="/usr/local/tarbitrator"
bin_link_dir="/usr/bin"
service_config_dir="/etc/systemd/system"
tarbitrator_service_name="tarbitratord"
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
initd_mod=0
service_mod=2
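# Detect the init system: service_mod 0 = systemd, 1 = sysvinit via service(8)
# (initd_mod selects the chkconfig/insserv/update-rc.d tool), 2 = no service
# manager found, so processes are stopped manually.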
if pidof systemd &> /dev/null; then
service_mod=0
elif $(which service &> /dev/null); then
service_mod=1
service_config_dir="/etc/init.d"
if $(which chkconfig &> /dev/null); then
initd_mod=1
elif $(which insserv &> /dev/null); then
initd_mod=2
elif $(which update-rc.d &> /dev/null); then
initd_mod=3
else
service_mod=2
fi
else
service_mod=2
fi
function kill_tarbitrator() {
pid=$(ps -ef | grep "tarbitrator" | grep -v "grep" | awk '{print $2}')
if [ -n "$pid" ]; then
${csudo} kill -9 $pid || :
fi
}
function clean_bin() {
# Remove link
${csudo} rm -f ${bin_link_dir}/tarbitrator || :
}
function clean_header() {
# Remove link
${csudo} rm -f ${inc_link_dir}/taos.h || :
${csudo} rm -f ${inc_link_dir}/taoserror.h || :
}
function clean_log() {
# Remove link
${csudo} rm -rf /arbitrator.log || :
}
function clean_service_on_systemd() {
tarbitratord_service_config="${service_config_dir}/${tarbitrator_service_name}.service"
if systemctl is-active --quiet ${tarbitrator_service_name}; then
echo "ProDB tarbitrator is running, stopping it..."
${csudo} systemctl stop ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
fi
${csudo} systemctl disable ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
${csudo} rm -f ${tarbitratord_service_config}
}
function clean_service_on_sysvinit() {
if pidof tarbitrator &> /dev/null; then
echo "ProDB's tarbitrator is running, stopping it..."
${csudo} service tarbitratord stop || :
fi
if ((${initd_mod}==1)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} chkconfig --del tarbitratord || :
fi
elif ((${initd_mod}==2)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} insserv -r tarbitratord || :
fi
elif ((${initd_mod}==3)); then
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo} update-rc.d -f tarbitratord remove || :
fi
fi
${csudo} rm -f ${service_config_dir}/tarbitratord || :
if $(which init &> /dev/null); then
${csudo} init q || :
fi
}
function clean_service() {
if ((${service_mod}==0)); then
clean_service_on_systemd
elif ((${service_mod}==1)); then
clean_service_on_sysvinit
else
    # no service manager detected; stop the process manually
kill_tarbitrator
fi
}
# Stop service and disable booting start.
clean_service
# Remove binary file and links
clean_bin
# Remove header file.
##clean_header
# Remove log file
clean_log
${csudo} rm -rf ${install_main_dir}
echo -e "${GREEN}ProDB's arbitrator is removed successfully!${NC}"
echo
#!/bin/bash
#
# Script to stop and uninstall the ProDB client, but retain the config and log files.
set -e
# set -x
RED='\033[0;31m'
GREEN='\033[1;32m'
NC='\033[0m'
#install main path
install_main_dir="/usr/local/ProDB"
log_link_dir="/usr/local/ProDB/log"
cfg_link_dir="/usr/local/ProDB/cfg"
bin_link_dir="/usr/bin"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64"
inc_link_dir="/usr/include"
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
function kill_client() {
  pid=$(pidof prodbc)
  if [ -n "$pid" ]; then
    ${csudo} kill -9 $pid || :
  fi
}
function clean_bin() {
# Remove link
${csudo} rm -f ${bin_link_dir}/prodbc || :
${csudo} rm -f ${bin_link_dir}/prodemo || :
${csudo} rm -f ${bin_link_dir}/prodump || :
${csudo} rm -f ${bin_link_dir}/rmprodb || :
${csudo} rm -f ${bin_link_dir}/set_core || :
}
function clean_lib() {
# Remove link
${csudo} rm -f ${lib_link_dir}/libtaos.* || :
${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
}
function clean_header() {
# Remove link
${csudo} rm -f ${inc_link_dir}/taos.h || :
${csudo} rm -f ${inc_link_dir}/taoserror.h || :
}
function clean_config() {
# Remove link
${csudo} rm -f ${cfg_link_dir}/* || :
}
function clean_log() {
# Remove link
${csudo} rm -rf ${log_link_dir} || :
}
# Stop client.
kill_client
# Remove binary file and links
clean_bin
# Remove header file.
clean_header
# Remove lib file
clean_lib
# Remove link log directory
clean_log
# Remove link configuration file
clean_config
${csudo} rm -rf ${install_main_dir}
echo -e "${GREEN}ProDB client is removed successfully!${NC}"
echo
@@ -101,7 +101,7 @@ typedef struct SMergeTsCtx {
 } SMergeTsCtx;
 
 typedef struct SVgroupTableInfo {
-  SVgroupInfo vgInfo;
+  SVgroupMsg  vgInfo;
   SArray     *itemList;  // SArray<STableIdInfo>
 } SVgroupTableInfo;
@@ -184,7 +184,9 @@ void tscClearInterpInfo(SQueryInfo* pQueryInfo);
 bool tscIsInsertData(char* sqlstr);
 
-int tscAllocPayload(SSqlCmd* pCmd, int size);
+// the memory is not reset in case of fast allocate payload function
+int32_t tscAllocPayloadFast(SSqlCmd *pCmd, size_t size);
+int32_t tscAllocPayload(SSqlCmd* pCmd, int size);
 
 TAOS_FIELD tscCreateField(int8_t type, const char* name, int16_t bytes);
@@ -298,7 +300,11 @@ void doExecuteQuery(SSqlObj* pSql, SQueryInfo* pQueryInfo);
 SVgroupsInfo* tscVgroupInfoClone(SVgroupsInfo *pInfo);
 void* tscVgroupInfoClear(SVgroupsInfo *pInfo);
 
+#if 0
 void tscSVgroupInfoCopy(SVgroupInfo* dst, const SVgroupInfo* src);
+#endif
+
 /*
  * The create object function must be successful expect for the out of memory issue.
  *
@@ -328,6 +334,7 @@ void doAddGroupColumnForSubquery(SQueryInfo* pQueryInfo, int32_t tagIndex, SSqlC
 int16_t tscGetJoinTagColIdByUid(STagCond* pTagCond, uint64_t uid);
 int16_t tscGetTagColIndexById(STableMeta* pTableMeta, int16_t colId);
+int32_t doInitSubState(SSqlObj* pSql, int32_t numOfSubqueries);
 
 void tscPrintSelNodeList(SSqlObj* pSql, int32_t subClauseIndex);
@@ -41,6 +41,14 @@ JNIEXPORT void JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_initImp
 JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setOptions
   (JNIEnv *, jclass, jint, jstring);
 
+/*
+ * Class:     com_taosdata_jdbc_TSDBJNIConnector
+ * Method:    setConfigImp
+ * Signature: (Ljava/lang/String;)Lcom/taosdata/jdbc/TSDBException;
+ */
+JNIEXPORT jobject JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setConfigImp
+  (JNIEnv *, jclass, jstring);
+
 /*
  * Class:     com_taosdata_jdbc_TSDBJNIConnector
  * Method:    getTsCharset
@@ -2,6 +2,7 @@ EXPORTS
 taos_init
 taos_cleanup
 taos_options
+taos_set_config
 taos_connect
 taos_connect_auth
 taos_close
@@ -1491,7 +1491,6 @@ TAOS_STMT* taos_stmt_init(TAOS* taos) {
   pSql->signature = pSql;
   pSql->pTscObj   = pObj;
   pSql->maxRetry  = TSDB_MAX_REPLICA;
-  pSql->isBind    = true;
 
   pStmt->pSql  = pSql;
   pStmt->last  = STMT_INIT;
@@ -17,5 +17,5 @@ IF (HEADER_GTEST_INCLUDE_DIR AND (LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
   AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
   ADD_EXECUTABLE(cliTest ${SOURCE_LIST})
-  TARGET_LINK_LIBRARIES(cliTest taos tutil common gtest pthread)
+  TARGET_LINK_LIBRARIES(cliTest taos cJson tutil common gtest pthread)
 ENDIF()
-Subproject commit 4a4d79099b076b8ff12d5b4fdbcba54049a6866d
+Subproject commit 016d8e82a24d72779be0ab0090580a372b4fffca