Commit c7b5d918, authored by Ganlin Zhao

[TD-11220]<feature>(query): time related functions

......@@ -4,18 +4,19 @@
[submodule "src/connector/hivemq-tdengine-extension"]
path = src/connector/hivemq-tdengine-extension
url = https://github.com/taosdata/hivemq-tdengine-extension.git
[submodule "tests/examples/rust"]
path = tests/examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git
[submodule "deps/jemalloc"]
path = deps/jemalloc
url = https://github.com/jemalloc/jemalloc
[submodule "deps/TSZ"]
path = deps/TSZ
url = https://github.com/taosdata/TSZ.git
branch = master
[submodule "src/kit/taos-tools"]
path = src/kit/taos-tools
url = https://github.com/taosdata/taos-tools
[submodule "src/plugins/taosadapter"]
path = src/plugins/taosadapter
url = https://github.com/taosdata/taosadapter
[submodule "examples/rust"]
path = examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git
\ No newline at end of file
......@@ -70,5 +70,6 @@ INCLUDE(cmake/install.inc)
ADD_SUBDIRECTORY(deps)
ADD_SUBDIRECTORY(src)
ADD_SUBDIRECTORY(tests)
ADD_SUBDIRECTORY(examples/c)
INCLUDE(CPack)
......@@ -11,7 +11,7 @@ def sync_source() {
sh '''
cd ${WKC}
[ -f src/connector/grafanaplugin/README.md ] && rm -f src/connector/grafanaplugin/README.md > /dev/null || echo "failed to remove grafanaplugin README.md"
git reset --hard HEAD~10 >/dev/null
git reset --hard >/dev/null
'''
script {
if (env.CHANGE_TARGET == 'master') {
......@@ -36,47 +36,65 @@ def sync_source() {
'''
}
}
sh'''
sh '''
cd ${WKC}
git reset --hard
git remote prune origin
[ -f src/connector/grafanaplugin/README.md ] && rm -f src/connector/grafanaplugin/README.md > /dev/null || echo "failed to remove grafanaplugin README.md"
git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
git reset --hard
git clean -dfx
git submodule update --init --recursive --remote
git submodule update --init --recursive
cd ${WK}
git reset --hard HEAD~10
git reset --hard
'''
sh '''
cd ${WKCT}
git reset --hard
'''
script {
if (env.CHANGE_TARGET == 'master') {
sh '''
cd ${WK}
git checkout master
cd ${WKCT}
git checkout master
'''
} else if (env.CHANGE_TARGET == '2.0') {
sh '''
cd ${WK}
git checkout 2.0
cd ${WKCT}
git checkout 2.0
'''
} else if (env.CHANGE_TARGET == '2.4') {
sh '''
cd ${WK}
git checkout 2.4
cd ${WKCT}
git checkout 2.4
'''
} else {
sh '''
cd ${WK}
git checkout develop
cd ${WKCT}
git checkout develop
'''
}
}
sh '''
export TZ=Asia/Harbin
cd ${WK}
git pull >/dev/null
export TZ=Asia/Harbin
date
git clean -dfx
cd ${WKCT}
git pull >/dev/null
git clean -dfx
date
'''
}
def pre_test() {
......@@ -111,6 +129,7 @@ pipeline {
environment{
WK = '/var/data/jenkins/workspace/TDinternal'
WKC = '/var/data/jenkins/workspace/TDinternal/community'
WKCT = '/var/data/jenkins/workspace/TDinternal/community/tests'
LOGDIR = '/var/data/jenkins/workspace/log'
}
stages {
......@@ -220,6 +239,13 @@ pipeline {
}
}
stage('run test') {
options { skipDefaultCheckout() }
when {
allOf {
changeRequest()
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
}
}
parallel {
stage ('build worker06_arm64') {
agent {label " worker06_arm64 "}
......@@ -269,14 +295,16 @@ pipeline {
date
hostname
'''
timeout(time: 60, unit: 'MINUTES') {
sh '''
date
cd ${WKC}/tests/parallel_test
time ./run.sh -m m.json -t cases.task -l ${LOGDIR} -b ${BRANCH_NAME}
date
hostname
'''
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
timeout(time: 20, unit: 'MINUTES') {
sh '''
date
cd ${WKC}/tests/parallel_test
time ./run.sh -m m.json -t cases.task -l ${LOGDIR} -b ${BRANCH_NAME}
date
hostname
'''
}
}
}
}
......
......@@ -6,8 +6,8 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
简体中文 | [English](./README.md)
很多职位正在热招中,请看[这里](https://www.taosdata.com/cn/careers/)
简体中文 | [English](./README.md)
很多职位正在热招中,请看[这里](https://www.taosdata.com/cn/careers/)
# TDengine 简介
......@@ -57,6 +57,18 @@ sudo apt-get install -y openjdk-8-jdk
sudo apt-get install -y maven
```
#### 为 taos-tools 安装编译需要的软件
taosTools 是用于 TDengine 的辅助工具软件集合。目前它包含 taosBenchmark(曾命名为 taosdemo)和 taosdump 两个软件。
默认 TDengine 编译不包含 taosTools。您可以在编译 TDengine 时使用`cmake .. -DBUILD_TOOLS=true` 来同时编译 taosTools。
为了在 Ubuntu/Debian 系统上编译 [taos-tools](https://github.com/taosdata/taos-tools) 需要安装如下软件:
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7:
```bash
......@@ -75,7 +87,7 @@ sudo yum install -y java-1.8.0-openjdk
sudo yum install -y maven
```
### CentOS 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
......@@ -93,6 +105,18 @@ sudo dnf install -y java-1.8.0-openjdk
sudo dnf install -y maven
```
#### 在 CentOS 上构建 taosTools 安装依赖软件
为了在 CentOS 上构建 [taosTools](https://github.com/taosdata/taos-tools) 需要安装如下依赖软件
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic libstdc++-static
```
注意:由于 snappy 缺乏 pkg-config 支持
(参考 [链接](https://github.com/google/snappy/pull/86)),会导致
cmake 提示无法发现 libsnappy,实际上工作正常。
## 获取源码
首先,你需要从 GitHub 克隆源码:
......@@ -109,6 +133,7 @@ git submodule update --init --recursive
```
如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub,详细方法请参考 GitHub 官方文档。
```
[url "git@github.com:"]
insteadOf = https://github.com/
......@@ -123,7 +148,8 @@ mkdir debug && cd debug
cmake .. && cmake --build .
```
您可以选择使用 Jemalloc 作为内存分配器,替代默认的 glibc:
您可以选择使用 jemalloc 作为内存分配器,替代默认的 glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
......@@ -292,4 +318,4 @@ TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入
# [谁在使用TDengine](https://github.com/taosdata/TDengine/issues/2432)
欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
......@@ -26,48 +26,58 @@ TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.g
- **Zero Management, No Learning Curve**: It takes only seconds to download, install, and run it successfully; there are no other dependencies. Automatic partitioning on tables or DBs. Standard SQL is used, with C/C++, Python, JDBC, Go and RESTful connectors.
# Documentation
For user manual, system design and architecture, engineering blogs, refer to [TDengine Documentation](https://www.taosdata.com/en/documentation/)(中文版请点击[这里](https://www.taosdata.com/cn/documentation20/))
for details. The documentation from our website can also be downloaded locally from *documentation/tdenginedocs-en* or *documentation/tdenginedocs-cn*.
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in the project directory.
## Install build dependencies
### Ubuntu 16.04 and above & Debian:
### Ubuntu 16.04 and above or Debian
```bash
sudo apt-get install -y gcc cmake build-essential git
```
### Ubuntu 14.04:
### Ubuntu 14.04
```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```
To compile and package the JDBC driver source code, you should have JDK 8 or higher and Apache Maven 2.7 or higher installed.
To install openjdk-8:
```bash
sudo apt-get install -y openjdk-8-jdk
```
To install Apache Maven:
```bash
sudo apt-get install -y maven
sudo apt-get install -y maven
```
#### Install build dependencies for taos-tools
#### Install build dependencies for taosTools
We provide a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They used to be part of TDengine. Starting from TDengine 2.4.0.0, taosBenchmark and taosdump are no longer released together with TDengine.
By default, TDengine compiling does not include taos-tools. You can use 'cmake .. -DBUILD_TOOLS=true' to make them be compiled with TDengine.
By default, building TDengine does not include taosTools. You can use 'cmake .. -DBUILD_TOOLS=true' to compile them together with TDengine.
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
To build the [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
sudo apt install libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
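
With these dependencies in place, a complete out-of-tree build that also compiles taosTools might look like the sketch below; it only combines the `debug` build directory and the `-DBUILD_TOOLS=true` option already mentioned in this README.

```bash
# Build TDengine together with taosTools (taosBenchmark, taosdump)
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true
cmake --build .
```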
### CentOS 7:
### CentOS 7
```bash
sudo yum install epel-release
sudo yum update
......@@ -76,41 +86,51 @@ sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```
To install openjdk-8:
```bash
sudo yum install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo yum install -y maven
```
### CentOS 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
```
To install openjdk-8:
```bash
sudo dnf install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo dnf install -y maven
```
#### Install build dependencies for taos-tools
To build the [taos-tools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
#### Install build dependencies for taosTools on CentOS
To build the [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic libstdc++-static
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake may report that libsnappy cannot be found, but snappy actually works well.
### Setup golang environment
TDengine includes a few components developed in the Go language. Please refer to the official golang.org documentation to set up your Go environment.
Please use Go 1.14 or above. For users in China, we recommend using a proxy to accelerate package downloading.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
......@@ -119,6 +139,7 @@ go env -w GOPROXY=https://goproxy.cn,direct
## Get the source codes
First of all, you may clone the source codes from github:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
......@@ -126,11 +147,13 @@ cd TDengine
The connectors for Go & Grafana and some tools have been moved to separate repositories,
so you should run this command in the TDengine directory to install them:
```bash
git submodule update --init --recursive
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You need to upload your ssh public key to GitHub first. Please refer to the GitHub official documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
......@@ -146,17 +169,20 @@ cmake .. && cmake --build .
```
Note: TDengine 2.3.x.0 and later use a component named 'taosAdapter' to serve as the http daemon by default, instead of the http daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you update an existing codebase to the latest TDengine source, please execute 'git submodule update --init --recursive' to pull the taosAdapter source code. Please install Go 1.14 or above to compile taosAdapter. If you run into problems with 'go mod', especially if you are in China, you can use a proxy to work around them.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
The embedded http daemon is still built from the TDengine source code by default. Alternatively, you can use the following command to build taosAdapter instead:
```
cmake .. -DBUILD_HTTP=false
```
You can use Jemalloc as memory allocator instead of glibc:
```
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
......@@ -166,16 +192,19 @@ TDengine build script can detect the host machine's architecture on X86-64, X86,
You can also specify the CPUTYPE option, such as aarch64 or aarch32, if the detection result is not correct:
aarch64:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
aarch32:
```bash
cmake .. -DCPUTYPE=aarch32 && cmake --build .
```
mips64:
```bash
cmake .. -DCPUTYPE=mips64 && cmake --build .
```
......@@ -184,6 +213,7 @@ cmake .. -DCPUTYPE=mips64 && cmake --build .
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
......@@ -204,6 +234,7 @@ nmake
```
Or, you can simply open a command window by clicking Windows Start -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your Windows architecture, then execute the following commands:
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
......@@ -222,6 +253,7 @@ cmake .. && cmake --build .
# Installing
After a successful build, TDengine can be installed with the following command (on Windows, the command should be `nmake install`):
```bash
sudo make install
```
......@@ -230,11 +262,13 @@ Users can find more information about directories installed on the system in the
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) instead.
To start the service after installation, in a terminal, use:
```bash
sudo systemctl start taosd
```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect to the TDengine server. In a terminal, use:
```bash
taos
```
......@@ -243,7 +277,7 @@ If TDengine shell connects the server successfully, welcome messages and version
## Install TDengine by apt-get
If you use Debian or Ubuntu system, you can use 'apt-get' command to intall TDengine from official repository. Please use following commands to setup:
If you use a Debian or Ubuntu system, you can use the 'apt-get' command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
......@@ -257,11 +291,13 @@ sudo apt-get install tdengine
## Quick Run
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows is `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine shell to connect to the server:
```bash
./build/bin/taos -c test/cfg
```
......@@ -269,7 +305,9 @@ In another terminal, use the TDengine shell to connect the server:
option "-c test/cfg" specifies the system configuration file directory.
# Try TDengine
It is easy to run SQL commands from the TDengine shell, just as in other SQL databases.
```sql
create database db;
use db;
......@@ -281,7 +319,8 @@ drop database db;
```
# Developing with TDengine
### Official Connectors
## Official Connectors
TDengine provides a rich set of development tools for users to develop on TDengine. Follow the links below to find the connector you need and its documentation.
......@@ -293,7 +332,7 @@ TDengine provides abundant developing tools for users to develop on TDengine. Fo
- [Node.js](https://www.taosdata.com/en/documentation/connector#nodejs)
- [Rust](https://www.taosdata.com/en/documentation/connector/rust)
### Third Party Connectors
## Third Party Connectors
The TDengine community has also kindly built some of their own connectors! Follow the links below to find the source code for them.
......@@ -301,11 +340,13 @@ The TDengine community has also kindly built some of their own connectors! Follo
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
# How to run the test cases and how to add a new test case?
# How to run the test cases and how to add a new test case
TDengine's test framework and all test cases are fully open source.
Please refer to [this document](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run tests and develop new test cases.
# TDengine Roadmap
- Support event-driven stream computing
- Support user defined functions
- Support MQTT connection
......
......@@ -96,11 +96,12 @@ IF (${VERBOSE} MATCHES "true")
SET(TD_BUILD_VERBOSE TRUE)
ENDIF ()
IF (${TSZ_ENABLED} MATCHES "true")
# define add
# build TSZ by default
IF ("${TSZ_ENABLED}" MATCHES "false")
set(VAR_TSZ "" CACHE INTERNAL "global variant empty" )
ELSE()
# define add
MESSAGE(STATUS "build with TSZ enabled")
ADD_DEFINITIONS(-DTD_TSZ)
set(VAR_TSZ "TSZ" CACHE INTERNAL "global variant tsz" )
ELSE()
set(VAR_TSZ "" CACHE INTERNAL "global variant empty" )
ENDIF()
ENDIF()
\ No newline at end of file
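
With the revised logic above, TSZ is built unless `TSZ_ENABLED` is explicitly set to "false". A minimal sketch of opting out, assuming the variable is passed on the CMake command line like the other build options in this repository:

```bash
# Skip building the TSZ compression library
cmake .. -DTSZ_ENABLED=false && cmake --build .
```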
......@@ -45,6 +45,6 @@ IF (TD_LINUX_64 AND JEMALLOC_ENABLED)
INCLUDE_DIRECTORIES(${CMAKE_BINARY_DIR}/build/include)
ENDIF ()
IF (${TSZ_ENABLED} MATCHES "true")
IF (NOT "${TSZ_ENABLED}" MATCHES "false")
ADD_SUBDIRECTORY(TSZ)
ENDIF()
Subproject commit 11c1060d4f917dd799ae628b131db5d6a5ef6954
......@@ -87,14 +87,15 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [taosAdapter](/tools/adapter): TDengine 集群和应用之间的 RESTful 接口适配服务。
* [TDinsight](/tools/insight): 监控 TDengine 集群的 Grafana 面板集合。
* [taosdump](/tools/taosdump): TDengine 数据备份工具。使用 taosdump 请安装 taosTools。
* [taosBenchmark](/tools/taosbenchmark): TDengine 压力测试工具。使用 taosBenchmark 请安装 taosTools。
* [taosBenchmark](/tools/taosbenchmark): TDengine 压力测试工具。
* [taosTools](/tools/taos-tools): taosTools 是用于 TDengine 的辅助工具软件集合。
## [与其他工具的连接](/connections)
* [Grafana](/connections#grafana):获取并可视化保存在TDengine的数据
* [MATLAB](/connections#matlab):通过配置MATLAB的JDBC数据源访问保存在TDengine的数据
* [R](/connections#r):通过配置R的JDBC数据源访问保存在TDengine的数据
* [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html):通过IDEA 数据库管理工具可视化使用 TDengine
* [TDengineGUI](https://github.com/skye0207/TDengineGUI):基于Electron开发的跨平台TDengine图形化管理工具
* [DataX](https://www.taosdata.com/blog/2021/10/26/3156.html):支持 TDengine 和其他数据库之间进行数据迁移的工具
## [TDengine集群的安装、管理](/cluster)
......@@ -134,14 +135,6 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [devops](/devops/collectd):使用 TDengine + collectd_statsd + Grafana 快速搭建 IT 运维系统
* [最佳实践](/devops/immigrate):OpenTSDB 应用迁移到 TDengine 的最佳实践
## 常用工具
* [TDengine样例导入工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [TDengine写入性能测试工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [IDEA数据库管理工具可视化使用TDengine](https://www.taosdata.com/blog/2020/08/27/1767.html)
* [基于Electron开发的跨平台TDengine图形化管理工具](https://github.com/skye0207/TDengineGUI)
* [基于DataX的TDeninge数据迁移工具](https://www.taosdata.com/blog/2021/10/26/3156.html)
## TDengine与其他数据库的对比测试
* [用InfluxDB开源的性能测试工具对比InfluxDB和TDengine](https://www.taosdata.com/blog/2020/01/13/1105.html)
......
......@@ -35,7 +35,7 @@ $ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdeng
进一步,还可以使用 docker run 命令启动运行 TDengine server 的 docker 容器,并使用 `--name` 命令行参数将容器命名为 `tdengine`,使用 `--hostname` 指定 hostname 为 `tdengine-server`,通过 `-v` 挂载本地目录到容器,实现宿主机与容器内部的数据同步,防止容器删除后,数据丢失。
```bash
$ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
- **--name tdengine**:设置容器名称,我们可以通过容器名称来访问对应的容器
......@@ -45,7 +45,12 @@ $ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:
### 使用 docker ps 命令确认容器是否已经正确运行
```bash
$ docker ps
docker ps
```
输出示例如下:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
......@@ -85,7 +90,6 @@ TDengine 终端成功连接服务端,打印出了欢迎消息和版本信息
在 TDengine 终端中,可以通过 SQL 命令来创建/删除数据库、表、超级表等,并可以进行插入和查询操作。具体可以参考 [TAOS SQL 说明文档](https://www.taosdata.com/cn/documentation/taos-sql)
### 在宿主机访问 Docker 容器中的 TDengine server
在使用了 -p 命令行参数映射了正确的端口启动了 TDengine Docker 容器后,就在宿主机使用 taos shell 命令即可访问运行在 Docker 容器中的 TDengine。
......@@ -102,7 +106,12 @@ taos>
也可以在宿主机使用 curl 通过 RESTful 端口访问 Docker 容器内的 TDengine server。
```
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
输出示例如下:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
......@@ -119,195 +128,36 @@ TDengine RESTful 接口详情请参考[官方文档](https://www.taosdata.com/cn
使用 docker 运行 TDengine 2.4.0.4 版本镜像(taosd + taosAdapter):
```bash
$ docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```
使用 docker 运行 TDengine 2.4.0.4 版本镜像(仅 taosAdapter,需要设置 firstEp 配置项 或 TAOS_FIRST_EP 环境变量):
```bash
$ docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
```
使用 docker 运行 TDengine 2.4.0.4 版本镜像(仅 taosd):
```bash
$ docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
```
使用 curl 命令验证 RESTful 接口可以正常工作:
```bash
$ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
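
The token in the `Authorization: Basic` header above is just the base64 encoding of the default credentials `root:taosdata`. As a quick sanity check on the host (assuming the coreutils `base64` tool is available), you can reproduce it:

```bash
# Prints cm9vdDp0YW9zZGF0YQ==, the token used in the curl command above
printf 'root:taosdata' | base64
```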
taosAdapter 支持多个数据收集代理软件(如 Telegraf、StatsD、collectd 等),这里仅模拟 StatsD 写入数据,在宿主机执行命令如下:
```bash
$ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
然后可以使用 taos shell 查询 taosAdapter 自动创建的数据库 statsd 和 超级表 foo 中的内容:
```bash
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
可以看到模拟数据已经被写入到 TDengine 中。
### 应用示例:在宿主机使用 taosBenchmark 写入数据到 Docker 容器中的 TDengine server
1,在宿主机命令行界面执行 taosBenchmark (曾命名为 taosdemo)写入数据到 Docker 容器中的 TDengine server
```
$ taosBenchmark
taosBenchmark is simulating data generated by power equipments monitoring...
host: 127.0.0.1:6030
user: root
password: taosdata
configDir:
resultFile: ./output.txt
thread num of insert data: 10
thread num of create table: 10
top insert interval: 0
```
使用 curl 命令验证 RESTful 接口可以正常工作:
```
$ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
taosAdapter 支持多个数据收集代理软件(如 Telegraf、StatsD、collectd 等),这里仅模拟 StasD 写入数据,在宿主机执行命令如下:
```
$ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
然后可以使用 taos shell 查询 taosAdapter 自动创建的数据库 statsd 和 超级表 foo 中的内容:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
可以看到模拟数据已经被写入到 TDengine 中。
### 应用示例:在宿主机使用 taosBenchmark 写入数据到 Docker 容器中的 TDengine server
1,在宿主机命令行界面执行 taosBenchmark 写入数据到 Docker 容器中的 TDengine server
```bash
$ taosBenchmark
taosBenchmark is simulating data generated by power equipments monitoring...
host: 127.0.0.1:6030
user: root
password: taosdata
configDir:
resultFile: ./output.txt
thread num of insert data: 10
thread num of create table: 10
top insert interval: 0
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
```
使用 curl 命令验证 RESTful 接口可以正常工作
输出示例如下
```
$ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
taosAdapter 支持多个数据收集代理软件(如 Telegraf、StatsD、collectd 等),这里仅模拟 StasD 写入数据,在宿主机执行命令如下:
```
$ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
然后可以使用 taos shell 查询 taosAdapter 自动创建的数据库 statsd 和 超级表 foo 中的内容:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
可以看到模拟数据已经被写入到 TDengine 中。
### 应用示例:在宿主机使用 taosBenchmark 写入数据到 Docker 容器中的 TDengine server
1,在宿主机命令行界面执行 taosBenchmark 写入数据到 Docker 容器中的 TDengine server
1,在宿主机命令行界面执行 taosBenchmark (曾命名为 taosdemo)写入数据到 Docker 容器中的 TDengine server
```bash
$ taosBenchmark
......@@ -361,7 +211,7 @@ column[0]:FLOAT column[1]:INT column[2]:FLOAT
最后共插入 1 亿条记录。
2进入 TDengine 终端,查看 taosBenchmark 生成的数据。
2.进入 TDengine 终端,查看 taosBenchmark 生成的数据。
- **进入命令行。**
......@@ -380,7 +230,7 @@ taos>
$ taos> show databases;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
```
......@@ -429,16 +279,51 @@ $ taos> select groupid, location from test.d0;
=================================
0 | shanghai |
Query OK, 1 row(s) in set (0.003490s)
```
### 应用示例:使用数据收集代理软件写入 TDengine
taosAdapter 支持多个数据收集代理软件(如 Telegraf、StatsD、collectd 等),这里仅模拟 StatsD 写入数据,在宿主机执行命令如下:
```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
然后可以使用 taos shell 查询 taosAdapter 自动创建的数据库 statsd 和 超级表 foo 中的内容:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
可以看到模拟数据已经被写入到 TDengine 中。
## 停止正在 Docker 中运行的 TDengine 服务
```bash
$ docker stop tdengine
tdengine
docker stop tdengine
```
- **docker stop**:通过 docker stop 停止指定的正在运行中的 docker 镜像。
- **tdengine**:容器名称。
# TDengine 安装包的安装和卸载
TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 等系统,rpm 支持 CentOS/RHEL/SUSE 等系统。同时我们也为企业用户提供 tar.gz 格式安装包。
## deb 包的安装和卸载
### 安装 deb
1、从官网下载获得deb安装包,比如TDengine-server-2.0.0.0-Linux-x64.deb;
2、进入到TDengine-server-2.0.0.0-Linux-x64.deb安装包所在目录,执行如下的安装命令:
```
plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb
Selecting previously unselected package tdengine.
(Reading database ... 233181 files and directories currently installed.)
Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ...
Failed to stop taosd.service: Unit taosd.service not loaded.
Stop taosd service success!
Unpacking tdengine (2.0.0.0) ...
Setting up tdengine (2.0.0.0) ...
Start to install TDEngine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
注:当安装第一个节点时,出现 Enter FQDN:提示的时候,不需要输入任何内容。只有当安装第二个或以后更多的节点时,才需要输入已有集群中任何一个可用节点的 FQDN,支持该新节点加入集群。当然也可以不输入,而是在新节点启动前,配置到新节点的配置文件中。
后续两种安装包也是同样的操作。
### 卸载 deb
卸载命令如下:
```
plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine
(Reading database ... 233482 files and directories currently installed.)
Removing tdengine (2.0.0.0) ...
TDEngine is removed successfully!
```
## rpm包的安装和卸载
### 安装 rpm
1、从官网下载获得rpm安装包,比如TDengine-server-2.0.0.0-Linux-x64.rpm;
2、进入到TDengine-server-2.0.0.0-Linux-x64.rpm安装包所在目录,执行如下的安装命令:
```
[root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm
Preparing packages...
TDengine-2.0.0.0-3.x86_64
Start to install TDEngine...
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
### 卸载 rpm
卸载命令如下:
```
[root@bogon x86_64]# rpm -e tdengine
TDEngine is removed successfully!
```
## tar.gz 格式安装包的安装和卸载
### 安装 tar.gz 安装包
1、从官网下载获得tar.gz安装包,比如TDengine-server-2.0.0.0-Linux-x64.tar.gz;
2、进入到TDengine-server-2.0.0.0-Linux-x64.tar.gz安装包所在目录,先解压文件后,进入子目录,执行其中的install.sh安装脚本:
```
plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ ll
total 3796
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./
drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/
-rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/
plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll
total 2640
drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/
drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/
drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/
-rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh*
-rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz
plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh
This is ubuntu system
verType=server interactiveFqdn=yes
Start to install TDengine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter
TDengine is installed successfully!
```
说明:install.sh 安装脚本在执行过程中,会通过命令行交互界面询问一些配置信息。如果希望采取无交互安装方式,那么可以用 -e no 参数来执行 install.sh 脚本。运行 ./install.sh -h 指令可以查看所有参数的详细说明信息。
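
A minimal sketch of the non-interactive installation described in the note above (run from the unpacked TDengine-server directory; `-e no` and `-h` are the options mentioned in the note):

```bash
cd TDengine-server          # directory unpacked from the tar.gz package
sudo ./install.sh -e no     # non-interactive install, skips the FQDN prompt
# sudo ./install.sh -h      # (optional) list all supported options
```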
### tar.gz 安装后的卸载
卸载命令如下:
```
plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos
TDEngine is removed successfully!
```
## 安装目录说明
TDengine成功安装后,主安装目录是/usr/local/taos,目录内容如下:
```
plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
plum@ubuntu:/usr/local/taos$ ll
total 36
drwxr-xr-x 9 root root 4096 7月 30 19:20 ./
drwxr-xr-x 13 root root 4096 7月 30 19:20 ../
drwxr-xr-x 2 root root 4096 7月 30 19:20 bin/
drwxr-xr-x 2 root root 4096 7月 30 19:20 cfg/
lrwxrwxrwx 1 root root 13 7月 30 19:20 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 7月 30 19:20 driver/
drwxr-xr-x 8 root root 4096 7月 30 19:20 examples/
drwxr-xr-x 2 root root 4096 7月 30 19:20 include/
drwxr-xr-x 2 root root 4096 7月 30 19:20 init.d/
lrwxrwxrwx 1 root root 13 7月 30 19:20 log -> /var/log/taos/
```
- 自动生成配置文件目录、数据库目录、日志目录。
- 配置文件缺省目录:/etc/taos/taos.cfg, 软链接到/usr/local/taos/cfg/taos.cfg;
- 数据库缺省目录:/var/lib/taos, 软链接到/usr/local/taos/data;
- 日志缺省目录:/var/log/taos, 软链接到/usr/local/taos/log;
- /usr/local/taos/bin目录下的可执行文件,会软链接到/usr/bin目录下;
- /usr/local/taos/driver目录下的动态库文件,会软链接到/usr/lib目录下;
- /usr/local/taos/include目录下的头文件,会软链接到到/usr/include目录下;
## 卸载和更新文件说明
卸载安装包的时候,将保留配置文件、数据库文件和日志文件,即 /etc/taos/taos.cfg 、 /var/lib/taos 、 /var/log/taos 。如果用户确认后不需保留,可以手工删除,但一定要慎重,因为删除后,数据将永久丢失,不可以恢复!
如果是更新安装,当缺省配置文件( /etc/taos/taos.cfg )存在时,仍然使用已有的配置文件,安装包中携带的配置文件修改为taos.cfg.org保存在 /usr/local/taos/cfg/ 目录,可以作为设置配置参数的参考样例;如果不存在配置文件,就使用安装包中自带的配置文件。
## 注意事项
- TDengine提供了多种安装包,但最好不要在一个系统上同时使用 tar.gz 安装包和 deb 或 rpm 安装包。否则会相互影响,导致在使用时出现问题。
- 对于deb包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除 tdengine 包的安装信息,执行如下命令:
```
plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
然后再重新进行安装就可以了。
- 对于rpm包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除tdengine包的安装信息,执行如下命令:
```
[root@bogon x86_64]# rpm -e --noscripts tdengine
```
然后再重新进行安装就可以了。
......@@ -2,85 +2,101 @@
## <a class="anchor" id="install"></a>快捷安装
TDengine 软件分为服务器、客户端和报警模块三部分,目前 2.0 版服务器仅能在 Linux 系统上安装和运行,后续会支持 Windows、Mac OS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。任何 OS 的应用也可以选择 RESTful 接口连接服务器 taosd,其中 2.4 之后版本默认使用单独运行的独立组件 taosAdapter 提供 http 服务,之前版本使用内置 http 服务。CPU 支持 X64/ARM64/MIPS64/Alpha64,后续会支持 ARM32、RISC-V 等 CPU 架构。用户可根据需求选择通过 [源码](https://www.taosdata.com/cn/getting-started/#通过源码安装) 或者 [安装包](https://www.taosdata.com/cn/getting-started/#通过安装包安装) 来安装
TDengine 包括服务端、客户端和周边生态工具软件,目前 2.0 版服务端仅在 Linux 系统上安装和运行,后续将支持 Windows、Mac OS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。在任何操作系统上的应用都可以使用 RESTful 接口连接服务端程序 taosd,其中 2.4 之后版本默认使用单独运行的独立组件 taosAdapter 提供 http 服务和更多数据写入方式。taosAdapter 需要手动启动。而之前版本 TDengine 使用内置 http 服务
### <a class="anchor" id="source-install"></a>通过源码安装
请参考我们的 [TDengine github 主页](https://github.com/taosdata/TDengine) 下载源码并安装.
TDengine 支持 X64/ARM64/MIPS64/Alpha64 硬件平台,后续将支持 ARM32、RISC-V 等 CPU 架构。
### 通过 Docker 容器运行
### 通过 Docker 容器安装
暂时不建议生产环境采用 Docker 来部署 TDengine 的客户端或服务端,但在开发环境下或初次尝试时,使用 Docker 方式部署是十分方便的。特别是,利用 Docker,可以方便地在 Mac OS X 和 Windows 环境下尝试 TDengine。
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
详细操作方法请参照 [通过 Docker 快速体验 TDengine](https://www.taosdata.com/cn/documentation/getting-started/docker)
注:暂时不建议生产环境采用 Docker 来部署 TDengine 的客户端或服务端,但在开发环境下或初次尝试时,使用 Docker 方式部署是十分方便的。特别是,利用 Docker,可以方便地在 Mac OS X 和 Windows 环境下尝试 TDengine。
### <a class="anchor" id="package-install"></a>通过安装包安装
TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。服务端安装包包含客户端和连接器,我们提供三种安装包,您可以根据需要选择:
TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。为方便使用,标准的服务端安装包包含了客户端程序和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。在安装包格式上,我们提供 rpm 和 deb 格式,也为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。发布版本包括稳定版和 Beta 版,Beta版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载:
<ul id="server-packageList" class="package-list"></ul>
安装包下载在 [这里](https://www.taosdata.com/cn/getting-started/#通过安装包安装)
具体的安装方法,请参见 [TDengine 多种安装包的安装和卸载](https://www.taosdata.com/cn/getting-started/install) 以及 [视频教程](https://www.taosdata.com/blog/2020/11/11/1941.html)
具体的安装过程,请参见 [TDengine 多种安装包的安装和卸载](https://www.taosdata.com/blog/2019/08/09/566.html) 以及 [视频教程](https://www.taosdata.com/blog/2020/11/11/1941.html)
**请点击[这里](https://github.com/taosdata/TDengine/releases)查看 release notes。**
### 使用 apt-get 安装
如果使用 Debian 或 Ubuntu 系统,也可以使用 apt-get 从官方仓库安装,设置方法为:
如果使用 Debian 或 Ubuntu 系统,也可以使用 apt-get 工具从官方仓库安装,设置方法为:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[ beta 版安装包仓库为可选安装项 ] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
[ 如果安装 Beta 版需要安装包仓库 ] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
<a class="anchor" id="start"></a>
## 轻松启动
### 仅安装客户端
如果客户端和服务端运行在不同的电脑上,可以单独安装客户端。下载时请注意,所选择的客户端版本号应该和在上面下载的服务端版本号严格匹配。Linux 和 Windows 安装包如下(其中 lite 版本的安装包仅带有 C/C++ 语言的连接支持,而标准版本的安装包还包含和示例代码):
<ul id="client-packagelist" class="package-list"></ul>
### <a class="anchor" id="source-install"></a>通过源码安装
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine github 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
**下载其他组件、最新 Beta 版及之前版本的安装包,请点击[这里](https://www.taosdata.com/cn/all-downloads/)**
## <a class="anchor" id="start"></a>轻松启动
安装成功后,用户可使用 `systemctl` 命令来启动 TDengine 的服务进程。
```bash
$ systemctl start taosd
systemctl start taosd
```
检查服务是否正常工作:
```bash
$ systemctl status taosd
systemctl status taosd
```
如果 TDengine 服务正常工作,那么您可以通过 TDengine 的命令行程序 `taos` 来访问并体验 TDengine。
如果 TDengine 服务正常工作,那么您可以通过 TDengine 的命令行程序 `taos` 来访问并体验 TDengine。
**注意:**
### 注意:
- systemctl 命令需要 _root_ 权限来运行,如果您非 _root_ 用户,请在命令前添加 sudo 。
- 为更好的获得产品反馈,改善产品,TDengine 会采集基本的使用信息,但您可以修改系统配置文件 taos.cfg 里的配置参数 telemetryReporting,将其设为 0,就可将其关闭。
- TDengine 采用 FQDN (一般就是 hostname )作为节点的 ID,为保证正常运行,需要给运行 taosd 的服务器配置好 hostname,在客户端应用运行的机器配置好 DNS 服务或 hosts 文件,保证 FQDN 能够解析。
- `systemctl stop taosd` 指令在执行后并不会马上停止 TDengine 服务,而是会等待系统中必要的落盘工作正常完成。在数据量很大的情况下,这可能会消耗较长时间。
* TDengine 支持在使用 [`systemd`](https://en.wikipedia.org/wiki/Systemd) 做进程服务管理的 Linux 系统上安装,用 `which systemctl` 命令来检测系统中是否存在 `systemd` 包:
TDengine 支持在使用 [`systemd`](https://en.wikipedia.org/wiki/Systemd) 做进程服务管理的 Linux 系统上安装,用 `which systemctl` 命令来检测系统中是否存在 `systemd` 包:
```bash
$ which systemctl
which systemctl
```
如果系统中不支持 `systemd`,也可以用手动运行 /usr/local/taos/bin/taosd 方式启动 TDengine 服务。
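
A minimal sketch of that manual start (the binary path comes from the note above; `-c` selects the configuration directory, which defaults to `/etc/taos` as described elsewhere in this document):

```bash
# Start the TDengine server in the foreground without systemd
sudo /usr/local/taos/bin/taosd -c /etc/taos
```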
<a class="anchor" id="console"></a>
## TDengine 命令行程序
## <a class="anchor" id="console"></a>使用 TDengine 客户端程序
执行 TDengine 命令行程序,您只要在 Linux 终端执行 `taos` 即可。
执行 TDengine 客户端程序,您只要在 Linux 终端执行 `taos` 即可。
```bash
$ taos
taos
```
如果 TDengine 终端连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](https://www.taosdata.com/cn/documentation/faq/) 来解决终端连接服务端失败的问题)。TDengine 终端的提示符号如下:
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](https://www.taosdata.com/cn/documentation/faq/) 来解决终端连接服务端失败的问题)。客户端的提示符号如下:
```cmd
taos>
```
在 TDengine 端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。在终端中运行的 SQL 语句需要以分号结束来运行。示例:
在 TDengine 客户端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。在终端中运行的 SQL 语句需要以分号结束来运行。示例:
```mysql
create database demo;
......@@ -96,26 +112,26 @@ select * from t;
Query OK, 2 row(s) in set (0.003128s)
```
除执行 SQL 语句外,系统管理员还可以从 TDengine 端进行检查系统运行状态、添加删除用户账号等操作。
除执行 SQL 语句外,系统管理员还可以从 TDengine 客户端进行检查系统运行状态、添加删除用户账号等操作。
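
For example, a couple of the administrative checks mentioned above can be run without entering the interactive shell, using the `-s` option documented in the next subsection (a sketch; `show dnodes` and `show users` are standard TDengine statements):

```bash
# List cluster nodes and user accounts in one shot
taos -s "show dnodes; show users;"
```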
**命令行参数**
### 命令行参数
您可通过配置命令行参数来改变 TDengine 端的行为。以下为常用的几个命令行参数:
您可通过配置命令行参数来改变 TDengine 客户端的行为。以下为常用的几个命令行参数:
- -c, --config-dir: 指定配置文件目录,默认为 `/etc/taos`
- -h, --host: 指定服务的 FQDN 地址或 IP 地址,默认为连接本地服务
- -s, --commands: 在不进入终端的情况下运行 TDengine 命令
- -u, --user: 连接 TDengine 服务的用户名,缺省为 root
- -p, --password: 连接TDengine服务器的密码,缺省为 taosdata
- -u, --user: 连接 TDengine 服务的用户名,缺省为 root
- -p, --password: 连接 TDengine 服务端的密码,缺省为 taosdata
- -?, --help: 打印出所有命令行参数
示例:
```bash
$ taos -h h1.taos.com -s "use db; show tables;"
taos -h h1.taos.com -s "use db; show tables;"
```
**运行 SQL 命令脚本**
### 运行 SQL 命令脚本
TDengine 终端可以通过 `source` 命令来运行 SQL 命令脚本。
......@@ -123,7 +139,7 @@ TDengine 终端可以通过 `source` 命令来运行 SQL 命令脚本。
taos> source <filename>;
```
**Shell 小技巧**
### taos shell 小技巧
- 可以使用上下光标键查看历史输入的指令
- 修改用户密码:在 shell 中使用 `alter user` 命令,缺省密码为 taosdata
......@@ -134,15 +150,25 @@ taos> source <filename>;
## <a class="anchor" id="demo"></a>TDengine 极速体验
启动 TDengine 的服务,在 Linux 终端执行 taosBenchmark (曾命名为 taosdemo,在 2.4 之后的版本请按照独立的 taosTools 软件包):
### <a class="anchor" id="taosBenchmark"></a> 使用 taosBenchmark 体验写入速度
启动 TDengine 的服务,在 Linux 终端执行 `taosBenchmark` (曾命名为 taosdemo)。taosBenchmark 在 TDengine 2.4.0.7 和之前发布版本在 taosTools 安装包中发布提供,在后续版本中 taosBenchmark 将在 TDengine 标准安装包中发布。
```bash
$ taosBenchmark
taosBenchmark
```
该命令将在数据库 test 下面自动创建一张超级表 meters,该超级表下有 1 万张表,表名为 "d0" 到 "d9999",每张表有 1 万条记录,每条记录有 (ts, current, voltage, phase) 四个字段,时间戳从 "2017-07-14 10:40:00 000" 到 "2017-07-14 10:40:09 999",每张表带有标签 location 和 groupId,groupId 被设置为 1 到 10, location 被设置为 "beijing" 或者 "shanghai"。
执行这条命令大概需要几分钟,最后共插入 1 亿条记录。
这条命令很快完成 1 亿条记录的插入。具体时间取决于硬件性能,即使在一台普通的 PC 服务器往往也仅需十几秒。
#### taosBenchmark 详细功能列表
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)
### <a class="anchor" id="taosshell"></a> 使用 taos shell 体验查询速度
在 TDengine 客户端输入查询命令,体验查询速度。
......@@ -175,18 +201,10 @@ taos> select avg(current), max(voltage), min(phase) from test.meters where group
```mysql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
## <a class="anchor" id="taosBenchmark"></a> taosBenchmark 详细功能列表
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)
## 客户端
如果客户端和服务端运行在不同的电脑上,可以单独安装客户端。Linux 和 Windows 安装包可以在 [这里](https://www.taosdata.com/cn/getting-started/#客户端) 下载。
## <a class="anchor" id="platforms"></a>支持平台列表
### TDengine 服务支持的平台列表
### TDengine 服务支持的平台列表
| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
| -------------- | --------------------- | ------------------------ | --------------- | --------------- | ------------------------- | --------------------- | --------------------- |
......@@ -224,3 +242,4 @@ taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine
请跳转到 [连接器](https://www.taosdata.com/cn/documentation/connector) 查看更详细的信息。
<script src="/wp-includes/js/quick-start.js?v=1"></script>
......@@ -5,15 +5,19 @@ TDengine支持多种接口写入数据,包括SQL,Prometheus,Telegraf,col
## <a class="anchor" id="sql"></a>SQL 写入
应用通过C/C++, Java, Go, C#, Python, Node.js 连接器执行SQL insert语句来插入数据,用户还可以通过TAOS Shell,手动输入SQL insert语句插入数据。比如下面这条insert 就将一条记录写入到表d1001中:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
```
TDengine支持一次写入多条记录,比如下面这条命令就将两条记录写入到表d1001中:
```mysql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
```
TDengine也支持一次向多个表写入数据,比如下面这条命令就向d1001写入两条记录,向d1002写入一条记录:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
```
......@@ -28,6 +32,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
- 写入的数据的时间戳必须大于当前时间减去配置参数keep的时间。如果keep配置为3650天,那么无法写入比3650天还早的数据。写入数据的时间戳也不能大于当前时间加配置参数days。如果days为2,那么无法写入比当前时间还晚2天的数据。
## <a class="anchor" id="schemaless"></a>无模式(Schemaless)写入
**前言**
<br/>在物联网应用中,常会采集比较多的数据项,用于实现智能控制、业务分析、设备监控等。由于应用逻辑的版本升级,或者设备自身的硬件调整等原因,数据采集项就有可能比较频繁地出现变动。为了在这种情况下方便地完成数据记录工作,TDengine 从 2.2.0.0 版本开始,提供调用 Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构。并且在必要时,Schemaless 将自动增加必要的数据列,保证用户写入的数据可以被正确存储。
<br/>目前,TDengine 的 C/C++ Connector 提供支持 Schemaless 的操作接口,详情请参见 [Schemaless 方式写入接口](https://www.taosdata.com/cn/documentation/connector#schemaless)章节。这里对 Schemaless 的数据表达格式进行了描述。
......@@ -39,11 +44,13 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
对于InfluxDB、OpenTSDB的标准写入协议请参考各自的文档。下面首先以 InfluxDB 的行协议为基础,介绍 TDengine 扩展的协议内容,允许用户采用更加精细的方式控制(超级表)模式。
Schemaless 采用一个字符串来表达一个数据行(可以向写入 API 中一次传入多行字符串来实现多个数据行的批量写入),其格式约定如下:
```json
measurement,tag_set field_set timestamp
```
其中,
其中:
* measurement 将作为数据表名。它与 tag_set 之间使用一个英文逗号来分隔。
* tag_set 将作为标签数据,其格式形如 `<tag_key>=<tag_value>,<tag_key>=<tag_value>`,也即可以使用英文逗号来分隔多个标签数据。它与 field_set 之间使用一个半角空格来分隔。
* field_set 将作为普通列数据,其格式形如 `<field_key>=<field_value>,<field_key>=<field_value>`,同样是使用英文逗号来分隔多个普通列的数据。它与 timestamp 之间使用一个半角空格来分隔。
......@@ -51,6 +58,7 @@ measurement,tag_set field_set timestamp
tag_set 中的所有的数据自动转化为 nchar 数据类型,并不需要使用双引号(")。
<br/>在无模式写入数据行协议中,field_set 中的每个数据项都需要对自身的数据类型进行描述。具体来说:
* 如果两边有英文双引号,表示 BIANRY(32) 类型。例如 `"abc"`
* 如果两边有英文双引号而且带有 L 前缀,表示 NCHAR(32) 类型。例如 `L"报错信息"`
* 对空格、等号(=)、逗号(,)、双引号("),前面需要使用反斜杠(\)进行转义。(都指的是英文半角符号)
......@@ -64,20 +72,26 @@ tag_set 中的所有的数据自动转化为 nchar 数据类型,并不需要
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64或i | Bigint | 8 |
* t, T, true, True, TRUE, f, F, false, False 将直接作为 BOOL 型来处理。
<br/>例如如下数据行表示:向名为 st 的超级表下的 t1 标签为 "3"(NCHAR)、t2 标签为 "4"(NCHAR)、t3 标签为 "t3"(NCHAR)的数据子表,写入 c1 列为 3(BIGINT)、c2 列为 false(BOOL)、c3 列为 "passit"(BINARY)、c4 列为 4(DOUBLE)、主键时间戳为 1626006833639000000 的一行数据。
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
需要注意的是,如果描述数据类型后缀时使用了错误的大小写,或者为数据指定的数据类型有误,均可能引发报错提示而导致数据写入失败。
### 无模式写入的主要处理逻辑
无模式写入按照如下原则来处理行数据:
<br/>1. 将使用如下规则来生成子表名:首先将measurement 的名称和标签的 key 和 value 组合成为如下的字符串
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
需要注意的是,这里的tag_key1, tag_key2并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。
排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t_” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。
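
A minimal sketch of reproducing this mapping for the sample row shown later in this section (`st,t1=3,t2=4,t3=t3 ...`), assuming the hash input is exactly the "measurement,tag_key=tag_value,..." string described above with tag keys already in ascending order:

```bash
# Child table name = "t_" + MD5 of the sorted measurement/tag string
printf '%s' 'st,t1=3,t2=4,t3=t3' | md5sum | awk '{print "t_" $1}'
```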
<br/>2. 如果解析行协议获得的超级表不存在,则会创建这个超级表。
......@@ -120,7 +134,9 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
该行数据映射生成一个超级表: st, 其包含了 3 个类型为 nchar 的标签,分别是:t1, t2, t3。五个数据列,分别是ts(timestamp),c1 (bigint),c3(binary),c2 (bool), c4 (bigint)。映射成为如下 SQL 语句:
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
......@@ -129,22 +145,28 @@ create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) ta
<br/>本节将说明不同行数据写入情况下,对于数据模式的影响。
在使用行协议写入一个明确的标识的字段类型的时候,后续更改该字段的类型定义,会出现明确的数据模式错误,即会触发写入 API 报告错误。如下所示,
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
第一行的数据类型映射将 c4 列定义为 Double, 但是第二行的数据又通过数值后缀方式声明该列为 BigInt, 由此会触发无模式写入的解析错误。
如果列前面的行协议将数据列声明为了 binary, 后续的要求长度更长的binary长度,此时会触发超级表模式的变更。
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
第一行中行协议解析会声明 c5 列是一个 binary(4)的字段,第二次行数据写入会提取列 c5 仍然是 binary 列,但是其宽度为 6,此时需要将binary的宽度增加到能够容纳 新字符串的宽度。
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
第二行数据相对于第一行来说增加了一个列 c6,类型为binary(6)。那么此时会自动增加一个列 c6, 类型为 binary(6)。
**写入完整性**
......@@ -156,98 +178,35 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
**后续升级计划**
<br/>当前版本只提供了 C 版本的 API,后续将提供 其他高级语言的 API,例如 Java/Go/Python/C# 等。此外,在TDengine v2.3及后续版本中,您还可以通过 taosAdapter 采用 REST 的方式直接写入无模式数据。
## <a class="anchor" id="prometheus"></a>Prometheus 直接写入(通过 taosAdapter)
## <a class="anchor" id="prometheus"></a>Prometheus 直接写入
[Prometheus](https://www.prometheus.io/)作为Cloud Native Computing Fundation毕业的项目,在性能监控以及K8S性能监控领域有着非常广泛的应用。TDengine提供一个小工具[Bailongma](https://github.com/taosdata/Bailongma),只需对Prometheus做简单配置,无需任何代码,就可将Prometheus采集的数据直接写入TDengine,并按规则在TDengine自动创建库和相关表项。博文[用Docker容器快速搭建一个Devops监控Demo](https://www.taosdata.com/blog/2020/02/03/1189.html)即是采用Bailongma将Prometheus和Telegraf的数据写入TDengine中的示例,可以参考。
### 从源代码编译 blm_prometheus
用户需要从github下载[Bailongma](https://github.com/taosdata/Bailongma)的源码,使用Golang语言编译器编译生成可执行文件。在开始编译前,需要准备好以下条件:
- Linux操作系统的服务器
- 安装好Golang,1.14版本以上
- 对应的TDengine版本。因为用到了TDengine的客户端动态链接库,因此需要安装好和服务端相同版本的TDengine程序;比如服务端版本是TDengine 2.0.0, 则在Bailongma所在的Linux服务器(可以与TDengine在同一台服务器,或者不同服务器)
Bailongma项目中有一个文件夹blm_prometheus,存放了prometheus的写入API程序。编译过程如下:
```bash
cd blm_prometheus
go build
```
一切正常的情况下,就会在对应的目录下生成一个blm_prometheus的可执行程序。
### 安装 Prometheus
通过Prometheus的官网下载安装。具体请见:[下载地址](https://prometheus.io/download/)
### 配置 Prometheus
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。
只需要将 remote_read 和 remote_write url 指向 taosAdapter 对应的 url 同时设置 Basic 验证即可使用。
参考Prometheus的[配置文档](https://prometheus.io/docs/prometheus/latest/configuration/configuration/),在Prometheus的配置文件中的<remote_write>部分,增加以下配置:
* remote_read url : `http://host_to_taosAdapter:port(default 6041)/prometheus/v1/remote_read/:db`
* remote_write url : `http://host_to_taosAdapter:port(default 6041)/prometheus/v1/remote_write/:db`
```
- url: "bailongma API服务提供的URL"(参考下面的blm_prometheus启动示例章节)
```
启动Prometheus后,可以通过taos客户端查询确认数据是否成功写入。
### 启动 blm_prometheus 程序
blm_prometheus程序有以下选项,在启动blm_prometheus程序时可以通过设定这些选项来设定blm_prometheus的配置。
```bash
--tdengine-name
如果TDengine安装在一台具备域名的服务器上,也可以通过配置TDengine的域名来访问TDengine。在K8S环境下,可以配置成TDengine所运行的service name。
--batch-size
blm_prometheus会将收到的prometheus的数据拼装成TDengine的写入请求,这个参数控制一次发给TDengine的写入请求中携带的数据条数。
--dbname
设置在TDengine中创建的数据库名称,blm_prometheus会自动在TDengine中创建一个以dbname为名称的数据库,缺省值是prometheus。
--dbuser
设置访问TDengine的用户名,缺省值是'root'
Basic验证:
--dbpassword
设置访问TDengine的密码,缺省值是'taosdata'
* username: TDengine 连接用户名
* password: TDengine 连接密码
--port
blm_prometheus对prometheus提供服务的端口号。
```
### 启动示例
示例 prometheus.yml 如下:
通过以下命令启动一个blm_prometheus的API服务
```bash
./blm_prometheus -port 8088
```
假设blm_prometheus所在服务器的IP地址为"10.1.2.3",则在prometheus的配置文件中<remote_write>部分增加url为
```yaml
remote_write:
- url: "http://10.1.2.3:8088/receive"
```
- url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
basic_auth:
username: root
password: taosdata
### 查询 prometheus 写入数据
prometheus产生的数据格式如下:
```json
{
Timestamp: 1576466279341,
Value: 37.000000,
apiserver_request_latencies_bucket {
component="apiserver",
instance="192.168.99.116:8443",
job="kubernetes-apiservers",
le="125000",
resource="persistentvolumes",
scope="cluster",
verb="LIST",
version="v1"
}
}
```
其中,apiserver_request_latencies_bucket为prometheus采集的时序数据的名称,后面{}中的为该时序数据的标签。blm_prometheus会以时序数据的名称在TDengine中自动创建一个超级表,并将{}中的标签转换成TDengine的tag值,Timestamp作为时间戳,value作为该时序数据的值。因此在TDengine的客户端中,可以通过以下指令查到这个数据是否成功写入。
```mysql
use prometheus;
select * from apiserver_request_latencies_bucket;
remote_read:
- url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
basic_auth:
username: root
password: taosdata
remote_timeout: 10s
read_recent: true
```
## <a class="anchor" id="telegraf"></a> Telegraf 直接写入(通过 taosAdapter)
......@@ -257,6 +216,7 @@ select * from apiserver_request_latencies_bucket;
TDengine 新版本(2.3.0.0+)包含一个 taosAdapter 独立程序,负责接收包括 Telegraf 的多种应用的数据写入。
配置方法,在 /etc/telegraf/telegraf.conf 增加如下文字,其中 database name 请填写希望在 TDengine 保存 Telegraf 数据的数据库名,TDengine server/cluster host、username和 password 填写 TDengine 实际值:
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -269,9 +229,11 @@ TDengine 新版本(2.3.0.0+)包含一个 taosAdapter 独立程序,负责
```
然后重启 telegraf:
```
sudo systemctl start telegraf
```
即可在 TDengine 中查询 metrics 数据库中 Telegraf 写入的数据。
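下面给出一个完整的 [[outputs.http]] 配置段示意,其中的主机地址、数据库名和用户名密码均为占位符,具体可用字段请以 Telegraf 与 taosAdapter 的文档为准:
```
[[outputs.http]]
  url = "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf"
  method = "POST"
  timeout = "5s"
  username = "root"
  password = "taosdata"
  data_format = "influx"
```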
taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相关文档。
......@@ -283,16 +245,20 @@ taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相
TDengine 新版本(2.3.0.0+)包含一个 taosAdapter 独立程序,负责接收包括 collectd 的多种应用的数据写入。
在 /etc/collectd/collectd.conf 文件中增加如下内容,其中 host 和 port 请填写 TDengine 和 taosAdapter 配置的实际值:
```
LoadPlugin network
<Plugin network>
Server "<TDengine cluster/server host>" "<port for collectd>"
</Plugin>
```
重启 collectd
```
sudo systemctl start collectd
```
taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相关文档。
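collectd 产生数据后,可以在 taos shell 中确认数据是否已经写入。下面的查询假设 taosAdapter 使用默认配置、将 collectd 数据写入名为 collectd 的数据库,实际数据库名以 taosAdapter 的配置为准:
```
taos> show databases;
taos> use collectd;
taos> show stables;
```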
## <a class="anchor" id="statsd"></a> StatsD 直接写入(通过 taosAdapter)
......@@ -303,12 +269,14 @@ taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相
TDengine 新版本(2.3.0.0+)包含一个 taosAdapter 独立程序,负责接收包括 StatsD 的多种应用的数据写入。
在 config.js 文件中增加如下内容后启动 StatsD,其中 host 和 port 请填写 TDengine 和 taosAdapter 配置的实际值:
```
backends 部分添加 "./backends/repeater"
repeater 部分添加 { host:'<TDengine server/cluster host>', port: <port for StatsD>}
```
示例配置文件:
```
{
port: 8125
......@@ -323,9 +291,10 @@ icinga2 可以收集监控和性能数据并写入 OpenTSDB,taosAdapter 可以
## <a class="anchor" id="icinga2"></a> icinga2 直接写入(通过 taosAdapter)
* 参考链接 https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer 使能 opentsdb-writer
* 参考链接 `https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer` 使能 opentsdb-writer
* 使能 taosAdapter 配置项 opentsdb_telnet.enable
* 修改配置文件 /etc/icinga2/features-enabled/opentsdb.conf
```
object OpenTsdbWriter "opentsdb" {
host = "host to taosAdapter"
......@@ -344,12 +313,6 @@ TCollector 是一个在客户侧收集本地收集器并发送数据到 OpenTSDB
taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相关文档。
## <a class="anchor" id="taosadapter2-telegraf"></a> 使用 Bailongma 2.0 接入 Telegraf 数据写入
**注意:**
TDengine 新版本(2.3.0.0+)提供新版本 Bailongma ,命名为 taosAdapter ,提供更简便的 Telegraf 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
## <a class="anchor" id="emq"></a>EMQ Broker 直接写入
MQTT是流行的物联网数据传输协议,[EMQ](https://github.com/emqx/emqx)是一款开源的MQTT Broker软件,无需任何代码,只需要在EMQ Dashboard里使用“规则”做简单配置,即可将MQTT的数据直接写入TDengine。EMQ X 支持通过发送到 Web 服务的方式保存数据到 TDengine,也在企业版上提供原生的 TDengine 驱动实现直接保存。详细使用方法请参考 [EMQ 官方文档](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)。
......
......@@ -57,10 +57,10 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
| taos-jdbcdriver 版本 | TDengine 2.0.x.x 版本 | TDengine 2.2.x.x 版本 | TDengine 2.4.x.x 版本 | JDK 版本 |
|---------------------| ----------------------| ----------------------| ----------------------| -------- |
| 2.0.37 | X | X | 2.4.0.4 | 1.8.x |
| 2.0.36 | X | 2.2.2.11 以上 | 2.4.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.35 | X | 2.2.2.11 以上 | 2.3.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.33 - 2.0.34 | 2.0.3.0 以上 | 2.2.0.0 以上 | 2.4.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.37 | X | X | 2.4.0.6 以上 | 1.8.x |
| 2.0.36 | X | 2.2.2.11 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.35 | X | 2.2.2.11 以上 | 2.3.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.33 - 2.0.34 | 2.0.3.0 以上 | 2.2.0.0 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 - 2.1.7.7 | X | X | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.1 | X | X | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.4 | X | X | 1.8.x |
......@@ -792,8 +792,8 @@ Query OK, 1 row(s) in set (0.000141s)
## 在框架中使用
* Spring JdbcTemplate 中使用 taos-jdbcdriver,可参考 [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/SpringJdbcTemplate)
* Springboot + Mybatis 中使用,可参考 [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/springbootdemo)
* Spring JdbcTemplate 中使用 taos-jdbcdriver,可参考 [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC/SpringJdbcTemplate)
* Springboot + Mybatis 中使用,可参考 [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC/springbootdemo)
## 示例程序
......@@ -803,7 +803,7 @@ Query OK, 1 row(s) in set (0.000141s)
* Springbootdemo:springboot示例源程序
* SpringJdbcTemplate:SpringJDBC模板
请参考:[JDBC example](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC)
请参考:[JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
## 常见问题
* 使用Statement的addBatch和executeBatch来执行“批量写入/更行”,为什么没有带来性能上的提升?
......
......@@ -164,7 +164,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
### 示例程序
使用C/C++连接器的示例代码请参见 https://github.com/taosdata/TDengine/tree/develop/tests/examples/c 。
使用C/C++连接器的示例代码请参见 https://github.com/taosdata/TDengine/tree/develop/examples/c 。
示例程序源码也可以在安装目录下的 examples/c 路径下找到:
......@@ -330,7 +330,7 @@ TDengine的异步API均采用非阻塞调用模式。应用程序可以用多线
除 C/C++ 语言外,TDengine 的 Java 语言 JNI Connector 也提供参数绑定接口支持,具体请另外参见:[参数绑定接口的 Java 用法](https://www.taosdata.com/cn/documentation/connector/java#stmt-java)
接口相关的具体函数如下(也可以参考 [prepare.c](https://github.com/taosdata/TDengine/blob/develop/tests/examples/c/prepare.c) 文件中使用对应函数的方式):
接口相关的具体函数如下(也可以参考 [prepare.c](https://github.com/taosdata/TDengine/blob/develop/examples/c/prepare.c) 文件中使用对应函数的方式):
- `TAOS_STMT* taos_stmt_init(TAOS *taos)`
......@@ -1080,7 +1080,7 @@ HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC
示例程序源码位于
* {client_install_directory}/examples/C#
* [github C# example source code](https://github.com/taosdata/TDengine/tree/develop/tests/examples/C%2523)
* [github C# example source code](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523)
**注意:** TDengineTest.cs C#示例源程序,包含了数据库连接参数,以及如何执行数据插入、查询等操作。
......@@ -1112,7 +1112,7 @@ dotnet add package TDengine.Connector
```c#
using TDengineDriver;
```
* 用户可以参考[TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/tests/examples/C%2523/TDengineTest)来定义数据库连接参数,以及如何执行数据插入、查询等操作。
* 用户可以参考[TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523/TDengineTest)来定义数据库连接参数,以及如何执行数据插入、查询等操作。
**注意:**
......@@ -1150,7 +1150,7 @@ Go连接器支持的系统有:
### 示例程序
使用 Go 连接器的示例代码请参考 https://github.com/taosdata/TDengine/tree/develop/tests/examples/go 以及[视频教程](https://www.taosdata.com/blog/2020/11/11/1951.html)
使用 Go 连接器的示例代码请参考 https://github.com/taosdata/TDengine/tree/develop/examples/go 以及[视频教程](https://www.taosdata.com/blog/2020/11/11/1951.html)
示例程序源码也位于安装目录下的 examples/go/taosdemo.go 文件中。
......@@ -1285,7 +1285,7 @@ Node-example-raw.js
验证方法:
1. 新建安装验证目录,例如:`~/tdengine-test`,拷贝github上nodejsChecker.js源程序。下载地址:(https://github.com/taosdata/TDengine/tree/develop/tests/examples/nodejs/nodejsChecker.js)。
1. 新建安装验证目录,例如:`~/tdengine-test`,拷贝github上nodejsChecker.js源程序。下载地址:(https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js)。
2. 在命令行中执行以下命令:
......@@ -1389,6 +1389,6 @@ promise2.then(function(result) {
### 示例
[node-example.js](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example.js)提供了一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例。
[node-example.js](https://github.com/taosdata/TDengine/tree/master/examples/nodejs/node-example.js)提供了一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例。
[node-example-raw.js](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example-raw.js)同样是一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例,但和上面不同的是,该示例只使用`cursor`
[node-example-raw.js](https://github.com/taosdata/TDengine/tree/master/examples/nodejs/node-example-raw.js)同样是一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例,但和上面不同的是,该示例只使用`cursor`。
......@@ -52,7 +52,7 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
#### 配置数据源
用户可以直接通过 <http://localhost:3000> 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示:
用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示:
![img](../images/connections/add_datasource1.jpg)
......
......@@ -15,11 +15,13 @@ Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables *
示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,并且一个 DB 中有 10 万张表,标签总长度是 256 字节,则这个 DB 总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
在实际的系统运维中,我们通常会更关心 TDengine 服务进程(taosd)会占用的内存量。
```
taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
```
其中:
1. “vnode 内存”指的是集群中所有的 Database 存储分摊到当前 taosd 节点上所占用的内存资源。可以按上文“Database Memory Size”计算公式估算每个 DB 的内存占用量进行加总,再按集群中总共的 TDengine 节点数做平均(如果设置为多副本,则还需要乘以对应的副本倍数)。
2. “mnode 内存”指的是集群中管理节点所占用的资源。如果一个 taosd 节点上分布有 mnode 管理节点,则内存消耗还需要增加“0.2KB * 集群中数据表总数”。
3. “查询内存”指的是服务端处理查询请求时所需要占用的内存。单条查询语句至少会占用“0.2KB * 查询涉及的数据表总数”的内存量。
......@@ -33,11 +35,13 @@ taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
客户端应用采用 taosc 客户端驱动连接服务端,会有内存需求的开销。
客户端的内存开销主要由写入过程中的 SQL 语句、表的元数据信息缓存、以及结构性开销构成。系统最大容纳的表数量为 N(每个通过超级表创建的表的 meta data 开销约 256 字节),最大并行写入线程数量 T,最大 SQL 语句长度 S(通常是 1 Mbytes)。由此可以进行客户端内存开销的估算(单位 MBytes):
```
M = (T * S * 3 + (N / 4096) + 100)
```
举例如下:用户最大并发写入线程数 100,子表数总数 10,000,000,那么客户端的内存最低要求是:
```
100 * 3 + (10000000 / 4096) + 100 ≈ 2841 (MBytes)
```
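这类估算也可以用一条 shell 命令快速完成,下面沿用上文示例中的数值,仅作演示:
```bash
# T: 最大并发写入线程数; S: 最大 SQL 语句长度(MB); N: 子表总数
awk 'BEGIN { T=100; S=1; N=10000000; printf "client memory >= %.0f MBytes\n", T*S*3 + N/4096 + 100 }'
```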
......@@ -149,7 +153,7 @@ taosd -C
| 31 | streamCompDelayRatio | | **S** | | 连续查询的延迟时间计算系数 | 0.1-0.9 | 0.1 | |
| 32 | maxVgroupsPerDb | | **S** | | 每个DB中 能够使用的最大vnode个数 | 0-8192 | | |
| 33 | maxTablesPerVnode | | **S** | | 每个vnode中能够创建的最大表个数 | | 1000000 | |
| 34 | minTablesPerVnode | YES | **S** | | 每个vnode中必须创建的最小表个数 | | 100 | |
| 34 | minTablesPerVnode | YES | **S** | | 每个vnode中必须创建的最小表个数 | | 1000 | |
| 35 | tableIncStepPerVnode | YES | **S** | | 每个vnode中超过最小表数后递增步长 | | 1000 | |
| 36 | cache | | **S** | MB | 内存块的大小 | | 16 | |
| 37 | blocks | | **S** | | 每个vnode(tsdb)中有多少cache大小的内存块。因此一个vnode的用的内存大小粗略为(cache * blocks) | | 6 | |
......@@ -162,7 +166,7 @@ taosd -C
| 44 | walLevel | | **S** | | WAL级别 | 1:写wal, 但不执行fsync; 2:写wal, 而且执行fsync | 1 | |
| 45 | fsync | | **S** | 毫秒 | 当wal设置为2时,执行fsync的周期 | 最小为0,表示每次写入,立即执行fsync;最大为180000(三分钟) | 3000 | |
| 46 | replica | | **S** | | 副本个数 | 1-3 | 1 | |
| 47 | mqttHostName | YES | **S** | | mqtt uri | | | [mqtt://username:password@hostname:1883/taos/](mqtt://username:password@hostname:1883/taos/) |
| 47 | mqttHostName | YES | **S** | | mqtt uri | | | mqtt://username:password@hostname:1883/taos/ |
| 48 | mqttPort | YES | **S** | | mqtt client name | | | 1883 |
| 49 | mqttTopic | YES | **S** | | | | | /test |
| 50 | compressMsgSize | | **S** | bytes | 客户端与服务器之间进行消息通讯过程中,对通讯的消息进行压缩的阈值。如果要压缩消息,建议设置为64330字节,即大于64330字节的消息体才进行压缩。 | `0 `表示对所有的消息均进行压缩 >0: 超过该值的消息才进行压缩 -1: 不压缩 | -1 | |
......@@ -310,6 +314,7 @@ ALTER DNODE <dnode_id> <config>
> debugFlag < 131 | 135 | 143 > 设置debugFlag为131、135或者143
例如:
```
alter dnode 1 debugFlag 135;
```
......@@ -347,25 +352,33 @@ taos -C 或 taos --dump-config
如果配置文件中不设置charset,在Linux系统中,taos在启动时候,自动读取系统当前的locale信息,并从locale信息中解析提取charset编码格式。如果自动读取locale信息失败,则尝试读取charset配置,如果读取charset配置也失败,则中断启动过程。
在Linux系统中,locale信息包含了字符编码信息,因此正确设置了Linux系统locale以后可以不用再单独设置charset。例如:
```
locale zh_CN.UTF-8
```
在Windows系统中,无法从locale获取系统当前编码。如果无法从配置文件中读取字符串编码信息,taos默认设置为字符编码为CP936。其等效在配置文件中添加如下配置:
```
charset CP936
```
如果需要调整字符编码,请查阅当前操作系统使用的编码,并在配置文件中正确设置。
在Linux系统中,如果用户同时设置了locale和字符集编码charset,并且locale和charset的设置不一致,后设置的值将覆盖前面设置的值。
```
locale zh_CN.UTF-8
charset GBK
```
则charset的有效值是GBK。
```
charset GBK
locale zh_CN.UTF-8
```
charset的有效值是UTF-8。
日志的配置参数,与server 的配置参数完全一样。
......@@ -373,29 +386,37 @@ taos -C 或 taos --dump-config
- timezone
默认值:动态获取当前客户端运行系统所在的时区。
为应对多时区的数据写入和查询问题,TDengine 采用 Unix 时间戳(Unix Timestamp)来记录和存储时间戳。Unix 时间戳的特点决定了任一时刻不论在任何时区,产生的时间戳均一致。需要注意的是,Unix时间戳是在客户端完成转换和记录。为了确保客户端其他形式的时间转换为正确的 Unix 时间戳,需要设置正确的时区。
在Linux系统中,客户端会自动读取系统设置的时区信息。用户也可以采用多种方式在配置文件设置时区。例如:
```
timezone UTC-8
timezone GMT-8
timezone Asia/Shanghai
```
均是合法的设置东八区时区的格式。但需注意,Windows 下并不支持 `timezone Asia/Shanghai` 这样的写法,而必须写成 `timezone UTC-8`。
时区的设置对于查询和写入SQL语句中非Unix时间戳的内容(时间戳字符串、关键词now的解析)产生影响。例如:
```sql
SELECT count(*) FROM table_name WHERE TS<'2019-04-11 12:01:08';
```
在东八区,SQL语句等效于
```sql
SELECT count(*) FROM table_name WHERE TS<1554955268000;
```
在UTC时区,SQL语句等效于
```sql
SELECT count(*) FROM table_name WHERE TS<1554984068000;
```
为了避免使用字符串时间格式带来的不确定性,也可以直接使用Unix时间戳。此外,还可以在SQL语句中使用带有时区的时间戳字符串,例如:RFC3339格式的时间戳字符串,2013-04-12T15:52:01.123+08:00或者ISO-8601格式时间戳字符串2013-04-12T15:52:01.123+0800。上述两个字符串转化为Unix时间戳不受系统所在时区的影响。
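例如,下面的查询在时间戳字符串中显式携带了时区信息(表名沿用上文的 table_name 示例),其结果不受客户端所在时区设置的影响:
```sql
SELECT count(*) FROM table_name WHERE TS < '2013-04-12T15:52:01.123+08:00';
```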
启动taos时,也可以从命令行指定一个taosd实例的end point,否则就从taos.cfg读取。
......@@ -457,6 +478,7 @@ TDengine也支持在shell对已存在的表从CSV文件中进行数据导入。C
```mysql
insert into tb1 file 'path/data.csv';
```
**注意:如果CSV文件首行存在描述信息,请手动删除后再导入。如某列为空,填NULL,无引号。**
例如,现在存在一个子表d1001, 其表结构如下:
......@@ -472,6 +494,7 @@ taos> DESCRIBE d1001
location | BINARY | 64 | TAG |
groupid | INT | 4 | TAG |
```
要导入的data.csv的格式如下:
```csv
......@@ -485,6 +508,7 @@ taos> DESCRIBE d1001
'2018-10-11 06:38:05.000',17.30000,219,0.32000
'2018-10-12 06:38:05.000',18.30000,219,0.31000
```
那么可以用如下命令导入数据:
```mysql
......@@ -494,7 +518,7 @@ Query OK, 9 row(s) affected (0.004763s)
**taosdump工具导入**
TDengine提供了方便的数据库导入导出工具taosdump。用户可以将taosdump从一个系统导出的数据,导入到其他系统中。具体使用方法,请参见博客:[TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html)
TDengine提供了方便的数据库导入导出工具taosdump。用户可以将taosdump从一个系统导出的数据,导入到其他系统中。具体使用方法,请参见[TDengine 数据备份工具: taosdump](/tools/taosdump)
## <a class="anchor" id="export"></a>数据导出
......@@ -578,12 +602,14 @@ chmod +x TDinsight.sh
准备:
1. TDengine Server 信息:
* TDengine RESTful 服务:对本地而言,可以是 http://localhost:6041 ,使用参数 `-a`
* TDengine RESTful 服务:对本地而言,可以是 `http://localhost:6041`,使用参数 `-a`。
* TDengine 用户名和密码,使用 `-u` `-p` 参数设置。
2. Grafana 告警通知
* 使用已经存在的Grafana Notification Channel `uid`,参数 `-E`。该参数可以使用 `curl -u admin:admin localhost:3000/api/alert-notifications |jq` 来获取。
```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
```
......@@ -602,7 +628,7 @@ chmod +x TDinsight.sh
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
运行程序并重启 Grafana 服务,打开面板:<http://localhost:3000/d/tdinsight>
运行程序并重启 Grafana 服务,打开面板:`http://localhost:3000/d/tdinsight`
更多使用场景和限制请参考[TDinsight](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsight.md) 文档。
......@@ -654,7 +680,7 @@ TDengine 使用 Linux 系统的 systemd/systemctl/service 来管理系统的启
以 systemctl 为例,命令如下:
- 启动服务进程:`systemctl start taosd`
- 启动服务进程:`systemctl start taosd`
- 停止服务进程:`systemctl stop taosd`
......@@ -663,15 +689,17 @@ TDengine 使用 Linux 系统的 systemd/systemctl/service 来管理系统的启
- 查看服务状态:`systemctl status taosd`
如果服务进程处于活动状态,则 status 指令会显示如下的相关信息:
```
......
Active: active (running)
Active: active (running)
......
```
如果后台服务进程处于停止状态,则 status 指令会显示如下的相关信息:
```
......
......@@ -681,6 +709,7 @@ Active: inactive (dead)
```
卸载 TDengine,只需要执行如下命令:
```
rmtaos
```
......@@ -771,11 +800,12 @@ rmtaos
| COPY | IF | NOW | STABLES | WHERE |
## 转义字符说明
- 转义字符表(转义符的功能从 2.4.0.4 版本开始)
| 字符序列 | **代表的字符** |
| :--------: | ------- |
| `\'` | 单引号' |
| `\'` | 单引号' |
| `\"` | 双引号" |
| \n | 换行符 |
| \r | 回车符 |
......@@ -791,6 +821,7 @@ rmtaos
2. 数据里有转义字符
1. 遇到上面定义的转义字符会转义(%和_见下面说明),如果没有匹配的转义字符会忽略掉转义符\。
2. 对于%和_,因为在like里这两个字符是通配符,所以在模式匹配like里用`\%`和`\_`表示字符里本身的%和_;如果在like模式匹配上下文之外使用`\%`或`\_`,则它们的计算结果为字符串`\%`和`\_`,而不是%和_。用法示例见下。
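下面以一个假设的表 tb1(含 BINARY 类型列 name)为例,演示用 `\%` 匹配数据中本身的 % 字符:模式 'abc\%%' 中,`\%` 表示字符 % 本身,第二个 % 是通配符,因此匹配所有以字符串 "abc%" 开头的 name:
```mysql
SELECT * FROM tb1 WHERE name LIKE 'abc\%%';
```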
## 诊断及其他
#### 网络连接诊断
......@@ -864,7 +895,7 @@ rmtaos
针对多台服务器组成的集群,当服务启动过程耗时较长时,可通过该命令行来诊断每台服务器的 taosd 实例的启动状态,以准确定位问题。
`taos -n rpc -h <fqdn of server>`
`taos -n rpc -h <fqdn of server>`
该命令用来诊断已经启动的 taosd 实例的端口是否可正常访问。如果 taosd 程序异常或者失去响应,可以通过 `taos -n rpc -h <fqdn of server>` 来发起一个与指定 fqdn 的 rpc 通信,看看 taosd 是否能收到,以此来判定是网络问题还是 taosd 程序异常问题。
......@@ -909,7 +940,9 @@ taosd 服务端日志文件标志位 debugflag 默认为 131,在 debug 时往
- taosdlog 服务器端生成的日志,除记录taosinfo中全部信息外,还根据设置的日志输出级别,记录DEBUG(日志级别135)、TRACE(日志级别143)。
### 客户端日志
每个独立运行的客户端(一个进程)生成一个独立的客户端日志,其命名方式采用 taoslog+<序号> 的方式命名。文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。
- taoslog 客户端(driver)生成的日志,默认记录客户端INFO/ERROR/WARNING 级别日志,还根据设置的日志输出级别,记录DEBUG(日志级别135)、TRACE(日志级别是 143)。
其中,日志文件最大长度由 numOfLogLines 来进行配置,一个 taosd 实例最多保留两个文件。
......
# TDengine Documentation
TDengine is a highly efficient platform to store, query, and analyze time-series data. It is specially designed and optimized for IoT, Internet of Vehicles, Industrial IoT, IT Infrastructure and Application Monitoring, etc. It works like a relational database, such as MySQL, but you are strongly encouraged to read through the following documentation before you experience it, especially the Data Modeling sections. In addition to this document, you should also download and read the technology white paper.
TDengine is a highly efficient platform to store, query, and analyze time-series data. It is specially designed and optimized for IoT, Internet of Vehicles, Industrial IoT, IT Infrastructure and Application Monitoring, etc. It works like a relational database, such as MySQL, but you are strongly encouraged to read through the following documentation before you experience it, especially the Data Modeling sections. In addition to this document, you should also download and read the technology white paper.
## [TDengine Introduction](/evaluation)
* [TDengine Introduction and Features](/evaluation#intro)
......@@ -20,7 +21,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Data Model](/architecture#model): relational database model, but one table for one data collection point with static tags
- [Cluster and Primary Logical Unit](/architecture#cluster): Take advantage of NoSQL architecture, high availability and horizontal scalability
- [Storage Model and Data Partitioning/Sharding](/architecture#sharding): tag data is separated from time-series data, sharded by vnodes and partitioned by time
- [Storage Model and Data Partitioning/Sharding](/architecture#sharding): tag data is separated from time-series data, sharded by vnodes and partitioned by time
- [Data Writing and Replication Process](/architecture#replication): records received are written to WAL, cached, with acknowledgement sent back to client, while supporting data replications
- [Caching and Persistence](/architecture#persistence): latest records are cached in memory, but are written in columnar format with an ultra-high compression ratio
- [Data Query](/architecture#query): support various SQL functions, downsampling, interpolation, and multi-table aggregation
......@@ -79,12 +80,14 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Windows Client](https://www.taosdata.com/blog/2019/07/26/514.html): compile your own Windows client, which is required by various connectors on the Windows environment
- [Rust Connector](/connector/rust): A taosc/RESTful API based TDengine client for Rust
## [Components and Tools]
## Components and Tools
* [taosAdapter](/tools/adapter): a bridge/adapter between TDengine cluster and applications.
* [TDinsight](/tools/insight): monitoring TDengine cluster with Grafana.
* [taosdump](/tools/taosdump): backup tool for TDengine. Please install `taosTools` package for it.
* [taosBenchmark](/tools/taosbenchmark): stress test tool for TDengine. Please install `taosTools` package for it.
* [taosBenchmark](/tools/taosbenchmark): stress test tool for TDengine.
* [taosTools](/tools/taos-tools): taosTools are some useful tool collections for TDengine.
## [Connections with Other Tools](/connections)
......@@ -92,11 +95,13 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [MATLAB](/connections#matlab): access data stored in TDengine server via JDBC configured within MATLAB
- [R](/connections#r): access data stored in TDengine server via JDBC configured within R
- [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html): use TDengine visually through IDEA Database Management Tool
- [TDengineGUI](https://github.com/skye0207/TDengineGUI): a TDengine management tool with Graphical User Interface
- [DataX](https://github.com/taosdata/datax): a data migration tool that supports TDengine
## [Installation and Management of TDengine Cluster](/cluster)
- [Preparation](/cluster#prepare): important steps before deploying TDengine for production usage
- [Create the First Node](/cluster#node-one): just follow the steps in quick start
- [Create the First Node](/cluster#node-one): just follow the steps in quick start
- [Create Subsequent Nodes](/cluster#node-other): configure taos.cfg for new nodes to add more to the existing cluster
- [Node Management](/cluster#management): add, delete, and check nodes in the cluster
- [High-availability of Vnode](/cluster#high-availability): implement high-availability of Vnode through replicas
......@@ -118,6 +123,12 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [File Directory Structure](/administrator#directories): directories where TDengine data files and configuration files located
- [Parameter Limits and Reserved Keywords](/administrator#keywords): TDengine’s list of parameter limits and reserved keywords
## Rapidly build an IT DevOps system with TDengine
* [devops](/devops/telegraf): Rapidly build an IT DevOps system with TDengine + Telegraf + Grafana
* [devops](/devops/collectd): Rapidly build an IT DevOps system with TDengine + collectd/StatsD + Grafana
* [immigration](/devops/immigrate): Best practices for migrating from OpenTSDB to TDengine
## Performance: TDengine vs Others
- [Performance: TDengine vs OpenTSDB](https://www.taosdata.com/blog/2019/09/12/710.html)
......
# Quickly experience TDengine through Docker
# Quickly Taste TDengine with Docker
While it is not recommended to deploy TDengine services via Docker in a production environment, Docker tools do a good job of shielding the environmental differences in the underlying operating system and are well suited for use in development testing or first-time experience with the toolset for installing and running TDengine. In particular, Docker makes it relatively easy to try TDengine on Mac OSX and Windows systems without having to install a virtual machine or rent an additional Linux server. In addition, starting from version 2.0.14.0, TDengine provides images that support both X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run docker, such as NAS, Raspberry Pi, and embedded development boards, can also easily experience TDengine based on this document.
While it is not recommended to deploy TDengine services via Docker in a production environment, Docker tools do a good job of shielding the environmental differences in the underlying operating system and are well suited for development testing or for getting a first taste of the toolset for installing and running TDengine. In particular, Docker makes it relatively easy to try TDengine on Mac OSX and Windows systems without having to install a virtual machine or rent an additional Linux server. In addition, starting from version 2.0.14.0, TDengine provides images that support the X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run Docker, such as NAS devices, Raspberry Pi, and embedded development boards, can also easily try out TDengine based on this document.
The following sections explain, step by step, how to quickly build a single-node TDengine runtime environment with Docker to support development and testing.
......@@ -20,7 +20,7 @@ Docker version 20.10.3, build 48d30b5
### running TDengine server inside Docker
```bash
$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
```
......@@ -35,7 +35,7 @@ This command starts a docker container with TDengine server running and maps the
Further, you can also use the `docker run` command to start the docker container running TDengine server, and use the `--name` command line parameter to name the container tdengine, use `--hostname` to specify the hostname as tdengine-server, and use `-v` to mount the local directory (-v) to synchronize the data inside the host and the container to prevent data loss after the container is deleted.
```
$ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6041:6030-6041 -p 6030-6041:6030-6041/udp tdengine/tdengine
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6041:6030-6041 -p 6030-6041:6030-6041/udp tdengine/tdengine
```
- **--name tdengine**: set the container name, we can access the corresponding container by container name
......@@ -45,7 +45,12 @@ $ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:
### Use the `docker ps` command to verify that the container is running correctly
```bash
$ docker ps
docker ps
```
The output could be:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
......@@ -61,7 +66,7 @@ c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ·
```bash
$ docker exec -it tdengine /bin/bash
root@tdengine-server:~/TDengine-server-2.4.0.4#
root@tdengine-server:~/TDengine-server-2.4.0.4#
```
- **docker exec**: Enter the container by `docker exec` command, if exited, the container will not stop.
......@@ -78,7 +83,7 @@ root@tdengine-server:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
taos>
```
The TDengine shell successfully connects to the server and prints out a welcome message and version information. If it fails, an error message is printed.
......@@ -101,7 +106,12 @@ taos>
You can also access the TDengine server inside the Docker container using `curl` command from the host side through the RESTful port.
```
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
The output could be:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
......@@ -109,7 +119,6 @@ This command accesses the TDengine server through the RESTful interface, which c
TDengine RESTful interface details can be found in the [official documentation](https://www.taosdata.com/en/documentation/connector#restful).
### Running TDengine server and taosAdapter with a Docker container
Docker containers of TDengine version 2.4.0.0 and later include a component named `taosAdapter`, which supports data writing and querying capabilities to the TDengine server through the RESTful interface and provides data ingestion interfaces compatible with InfluxDB/OpenTSDB, allowing InfluxDB/OpenTSDB applications to be migrated seamlessly to TDengine.
......@@ -121,54 +130,25 @@ Running TDengine version 2.4.0.4 image with docker.
Start taosAdapter and taosd by default:
```
$ docker run -d --name tdengine-taosa -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-taosa -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```
Verify that the RESTful interface provided by taosAdapter is working by using the `curl` command.
```
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
```

The output could be:

```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
### Application example: write data to TDengine server in Docker container using taosBenchmark on the host
1, execute `taosBenchmark` (was named taosdemo) in the host command line interface to write data to the TDengine server in the Docker container
1. execute `taosBenchmark` (was named taosdemo) in the host command line interface to write data to the TDengine server in the Docker container
```bash
$ taosBenchmark
......@@ -177,7 +157,7 @@ taosBenchmark is simulating data generated by power equipments monitoring...
host: 127.0.0.1:6030
user: root
password: taosdata
configDir:
configDir:
resultFile: ./output.txt
thread num of insert data: 10
thread num of create table: 10
......@@ -206,13 +186,13 @@ database[0]:
maxSqlLen: 1048576
timeStampStep: 1
startTimestamp: 2017-07-14 10:40:00.000
sampleFormat:
sampleFile:
tagsFile:
sampleFormat:
sampleFile:
tagsFile:
columnCount: 3
column[0]:FLOAT column[1]:INT column[2]:FLOAT
column[0]:FLOAT column[1]:INT column[2]:FLOAT
tagCount: 2
tag[0]:INT tag[1]:BINARY(16)
tag[0]:INT tag[1]:BINARY(16)
Press enter key to continue or Ctrl-C to stop
```
......@@ -221,7 +201,7 @@ After enter, this command will automatically create a super table `meters` under
It takes about a few minutes to execute this command and ends up inserting a total of 100 million records.
3, Go to the TDengine terminal and view the data generated by taosBenchmark.
2. Go to the TDengine terminal and view the data generated by taosBenchmark.
- **Go to the terminal interface.**
......@@ -231,7 +211,7 @@ $ root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
taos>
```
- **View the database.**
......@@ -240,7 +220,7 @@ taos>
$ taos> show databases;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
```
......@@ -292,13 +272,49 @@ Query OK, 1 row(s) in set (0.003490s)
```
### Application Example: use data collection agent to write data into TDengine
taosAdapter supports multiple data collection agents (e.g. Telegraf, StatsD, collectd, etc.); here we only demonstrate writing simulated StatsD data, with the command executed from the host side as follows.
```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
Then you can use the taos shell to query the taosAdapter automatically created database statsd and the contents of the super table foo.
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
You can see that the simulation data has been written to TDengine.
## Stop the TDengine service that is running in Docker
```bash
$ docker stop tdengine
tdengine
docker stop tdengine
```
- **docker stop**: Stop the specified running docker image with docker stop.
- **tdengine**: The name of the container.
Since TDengine was open sourced in July 2019, it has gained a lot of popularity among time-series database developers with its innovative data modeling design, simple installation method, easy programming interface, and powerful data insertion and query performance. The insertion and querying performance is often astonishing to users who are new to TDengine. In order to help users experience TDengine's high performance and functionality in the shortest time, we developed an application called `taosBenchmark` (formerly named `taosdemo`) for insertion and querying performance testing of TDengine. With it, users can easily simulate the scenario of a large number of devices generating a very large amount of data, and control the number of columns, data types, disorder ratio, and number of concurrent threads through taosBenchmark's customizable parameters.
Running taosBenchmark is very simple. Just download the [TDengine installation package](https://www.taosdata.com/cn/all-downloads/) or compiling the [TDengine code](https://github.com/taosdata/TDengine). It can be found and run in the installation directory or in the compiled results directory.
Running taosBenchmark is very simple. Just download the TDengine installation package (https://www.taosdata.com/cn/all-downloads/) or compile the TDengine code yourself (https://github.com/taosdata/TDengine). taosBenchmark can be found and run in the installation directory or in the compiled build directory.
# To run an insertion test with taosBenchmark
To run an insertion test with taosBenchmark
--
Executing taosBenchmark without any parameters results in the following output.
```
$ taosBenchmark
......@@ -70,6 +70,7 @@ Query OK, 6 row(s) in set (0.002972s)
```
After pressing any key, taosBenchmark will create the database test and the super table meters, and generate 10,000 sub-tables representing 10,000 individual meter devices that report data, each created independently using the super table meters as a template, following TDengine data modeling best practices.
```
taos> use test;
Database changed.
......@@ -91,7 +92,9 @@ taos> show stables;
meters | 2021-08-27 11:21:01.209 | 4 | 2 | 10000 |
Query OK, 1 row(s) in set (0.001740s)
```
Then taosBenchmark generates 10,000 records for each meter device.
```
...
====thread[3] completed total inserted rows: 6250000, total affected rows: 6250000. 347626.22 records/second====
......@@ -108,9 +111,11 @@ Spent 18.0863 seconds to insert rows: 100000000, affected rows: 100000000 with 1
insert delay, avg: 28.64ms, max: 112.92ms, min: 9.35ms
```
The above information is the result of a real test on a normal PC server with 8 CPUs and 64 GB RAM. It shows that taosBenchmark inserted 100,000,000 (100 million) records in about 18 seconds, an average of roughly 5.5 million records per second.
TDengine also offers a parameter-bind interface for better performance, and using the parameter-bind interface (taosBenchmark -I stmt) on the same hardware for the same amount of data writes, the results are as follows.
```
...
......@@ -145,12 +150,13 @@ Spent 6.0257 seconds to insert rows: 100000000, affected rows: 100000000 with 16
insert delay, avg: 8.31ms, max: 860.12ms, min: 2.00ms
```
It shows that taosBenchmark inserted 100 million records in about 6 seconds, a much higher insertion performance of roughly 16.6 million records per second.
Because taosBenchmark is so easy to use, we have extended it with more features to support more complex parameter settings for sample data preparation and validation for rapid prototyping.
The complete list of taosBenchmark command-line arguments can be displayed via taosBenchmark --help as follows.
```
$ taosBenchmark --help
......@@ -197,52 +203,70 @@ Report bugs to <support@taosdata.com>.
```
taosBenchmark's parameters are designed to meet the needs of data simulation. A few commonly used parameters are described below.
```
-I, --interface=INTERFACE The interface (taosc, rest, and stmt) taosBenchmark uses. Default is 'taosc'.
```
The performance difference between the different interfaces of taosBenchmark has been mentioned earlier. The -I parameter is used to select the interface; currently taosc, stmt and rest are supported. taosc uses SQL statements to write data, stmt uses the parameter binding interface to write data, and rest uses the RESTful protocol to write data.
```
-T, --threads=NUMBER The number of threads. Default is 8.
```
The -T parameter sets how many threads taosBenchmark uses to write data concurrently, so that multiple threads can squeeze as much processing power out of the hardware as possible.
```
-b, --data-type=DATATYPE The data_type of columns, default: FLOAT, INT, FLOAT.
-w, --binwidth=WIDTH The width of data_type 'BINARY' or 'NCHAR'. Default is 64
-l, --columns=COLUMNS The number of columns per record. Demo mode by default is 3 (float, int, float). Max values is 4095
```
As mentioned earlier, taosBenchmark creates a typical meter data reporting scenario by default, with each device containing three columns: current, voltage and phase. TDengine supports the BOOL, TINYINT, SMALLINT, INT, BIGINT, FLOAT, DOUBLE, BINARY, NCHAR and TIMESTAMP data types. Using -b with a list of types allows you to specify a customized column list, and -w specifies the width of columns of the BINARY and NCHAR data types (default is 64). The -l parameter can be used together with -b to pad the column list with additional INT-type columns up to the given total, which reduces manual input in the case of a particularly large number of columns, up to 4095 columns.
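For example, the following command line is one way to customize the schema; the exact spelling accepted for the type list may vary slightly between versions, so treat it as a sketch. It creates records with an INT column, a DOUBLE column, and a 32-byte BINARY column in addition to the timestamp:
```
taosBenchmark -b INT,DOUBLE,BINARY -w 32 -y
```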
```
-r, --rec-per-req=NUMBER The number of records per request. Default is 30000.
```
To reach TDengine's performance limits, data can be inserted using multiple clients, multiple threads, and multiple records batched per request. The -r parameter sets the number of records that are batched together in a single write request; the default is 30,000. The effective number of batched records is also limited by the client buffer size, which is currently 1 MB: if the record column width is large, the maximum number of batched records can be calculated by dividing 1 MB by the column width (in bytes).
```
-t, --tables=NUMBER The number of tables. Default is 10000.
-n, --records=NUMBER The number of records per table. Default is 10000.
-M, --random The value of records generated are totally random. The default is to simulate power equipment scenario.
```
As mentioned earlier, taosBenchmark creates 10,000 tables by default, and each table is filled with 10,000 records. The number of tables and the number of records per table can be set with -t and -n. By default (without -M), the generated data simulates a realistic scenario: current, voltage and phase values with a certain jitter, which shows TDengine's efficient data compression more realistically. To generate completely random data instead, pass the -M parameter, as in the sketch below.
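As a sketch (the numbers are arbitrary), the following command writes 1,000 records into each of 1,000 tables with 16 threads and completely random values:
```
taosBenchmark -t 1000 -n 1000 -T 16 -M -y
```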
```
-y, --answer-yes Default input yes for prompt.
```
As we can see above, taosBenchmark outputs a list of parameters for the upcoming operation by default before creating a database or inserting data, so that the user can know what data is about to be written before inserting. To facilitate automatic testing, the -y parameter allows taosBenchmark to write data immediately after outputting the parameters.
```
-O, --disorder=NUMBER Insert order mode--0: In order, 1 ~ 50: disorder ratio. Default is in order.
-R, --disorder-range=NUMBER Out of order data's range, ms, default is 1000.
```
In some scenarios, the received data does not arrive in exact order but contains a certain percentage of out-of-order records, which TDengine can also handle well. To simulate writing out-of-order data, taosBenchmark provides the -O and -R parameters. Setting -O to 0 (the default) writes fully ordered data, while a value from 1 to 50 is the percentage of records that are out of order. The -R parameter is the range of the timestamp offset of the out-of-order records; the default is 1000 milliseconds. Also note that time-series data is uniquely identified by its timestamp, so out-of-order records may produce exactly the same timestamp as previously written data; depending on the database's update setting, such records are either discarded (update 0) or overwrite the existing data (update 1 or 2), and the total number of stored records may then not match the expected number. A minimal example follows.
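For example, the following command (illustrative values) makes roughly 10% of the generated records out of order within a 500 ms timestamp range:
```
taosBenchmark -O 10 -R 500 -y
```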
```
-g, --debug Print debug info.
```
If you are interested in the taosBenchmark insertion process or if the data insertion result is not as expected, you can use the -g parameter to make taosBenchmark print the debugging information in the process of the execution to the screen or import it to another file with the Linux redirect command to easily find the cause of the problem. In addition, taosBenchmark will also output the corresponding executed statements and debugging reasons to the screen after the execution fails. You can search the word "reason" to find the error reason information returned by the TDengine server.
```
-x, --aggr-func Test aggregation funtions after insertion.
```
TDengine is not only very powerful in insertion performance but also in query performance, due to its advanced database engine design. taosBenchmark provides a -x option that performs the usual query operations and outputs the query time consumed after the data insertion completes. The following is the result of common queries after inserting 100 million rows on the aforementioned server.
You can see that the `select *` fetch of 100 million rows (not output to the screen) takes only 1.26 seconds. Most normal aggregation functions over 100 million records take only about 20 milliseconds, and even the longest count function takes less than 40 milliseconds.
```
taosBenchmark -I stmt -T 48 -y -x
...
......@@ -264,7 +288,9 @@ select min(current) took 0.025812 second(s)
select first(current) took 0.024105 second(s)
...
```
In addition to the command line approach, taosBenchmark also supports taking a JSON file as an input parameter to provide a richer set of settings. A typical JSON file looks like this.
```
{
"filetype": "insert",
......@@ -273,17 +299,17 @@ In addition to the command line approach, taosBenchmark also supports take a JSO
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 100,
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 100,
"num_of_records_per_req": 100,
"databases": [{
"dbinfo": {
"name": "db",
"drop": "yes",
"drop": "yes",
"replica": 1,
"days": 10,
"cache": 16,
......@@ -301,39 +327,41 @@ In addition to the command line approach, taosBenchmark also supports take a JSO
},
"super_tables": [{
"name": "stb",
"child_table_exists":"no",
"childtable_count": 100,
"childtable_prefix": "stb_",
"auto_create_table": "no",
"batch_create_tbl_num": 5,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 100000,
"childtable_limit": 10,
"childtable_offset":100,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 10,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"child_table_exists":"no",
"childtable_count": 100,
"childtable_prefix": "stb_",
"auto_create_table": "no",
"batch_create_tbl_num": 5,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 100000,
"childtable_limit": 10,
"childtable_offset":100,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 10,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
}]
}]
}
```
For example, we can specify a different number of threads for table creation and for data insertion with "thread_count" and "thread_count_create_tbl". A combination of "child_table_exists", "childtable_limit" and "childtable_offset" allows multiple taosBenchmark processes (even on different computers) to write to different ranges of child tables of the same super table at the same time. You can also import existing data by specifying the data source as a CSV file with "data_source" and "sample_file".
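Such a JSON file is passed to taosBenchmark as a metadata file; assuming the file above is saved as insert.json, the invocation typically looks like this:
```
taosBenchmark -f insert.json
```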
Use taosBenchmark for query and subscription testing
--
# Use taosBenchmark for query and subscription testing
taosBenchmark can not only write data, but also perform query and subscription functions. However, a taosBenchmark instance can only support one of these functions, not all three, and the configuration file is used to specify which function to test.
The following is the content of a typical query JSON example file.
```
{
"filetype": "query",
......@@ -373,7 +401,9 @@ The following is the content of a typical query JSON example file.
}
}
```
The following parameters are specific to the query in the JSON file.
```
"query_times": the number of queries per query type
"query_mode": query data interface, "tosc": call TDengine's c interface; "resetful": use restfule interface. Options are available. Default is "taosc".
......@@ -392,6 +422,7 @@ The following parameters are specific to the query in the JSON file.
```
The following is a typical subscription JSON example file content.
```
{
"filetype":"subscribe",
......@@ -404,34 +435,36 @@ The following is a typical subscription JSON example file content.
"confirm_parameter_prompt": "no",
"specified_table_query":
{
"concurrent":1,
"mode":"sync",
"interval":0,
"restart":"yes",
"concurrent":1,
"mode":"sync",
"interval":0,
"restart":"yes",
"keepProgress":"yes",
"sqls": [
{
"sql": "select * from stb00_0 ;",
"sql": "select * from stb00_0 ;",
"result": "./subscribe_res0.txt"
}]
},
"super_table_query":
"super_table_query":
{
"stblname": "stb0",
"threads":1,
"mode":"sync",
"interval":10000,
"restart":"yes",
"threads":1,
"mode":"sync",
"interval":10000,
"restart":"yes",
"keepProgress":"yes",
"sqls": [
{
"sql": "select * from xxxx where ts > '2021-02-25 11:35:00.000' ;",
"sql": "select * from xxxx where ts > '2021-02-25 11:35:00.000' ;",
"result": "./subscribe_res1.txt"
}]
}
}
```
The following are the meanings of the parameters specific to the subscription function.
```
"interval": interval for executing subscriptions, in seconds. Optional, default is 0.
"restart": subscription restart." yes": restart the subscription if it already exists, "no": continue the previous subscription. (Please note that the executing user needs to have read/write access to the dataDir directory)
......@@ -439,11 +472,12 @@ The following are the meanings of the parameters specific to the subscription fu
"resubAfterConsume": Used in conjunction with keepProgress to call unsubscribe after the subscription has been consumed the appropriate number of times and to subscribe again.
"result": the name of the file to which the query result is written. Optional, default is null, means the query result will not be written to the file. Note: The file to save the result after each sql statement cannot be renamed, and the file name will be appended with the thread number when generating the result file.
```
Conclusion
--
# Conclusion
TDengine is a big data platform designed and optimized for IoT, Telematics, Industrial Internet, DevOps, etc. TDengine shows performance that far exceeds similar products due to the innovative data storage and query engine design in its database kernel. With SQL syntax support and connectors for multiple programming languages (currently Java, Python, Go, C#, NodeJS, Rust, etc. are supported), it is extremely easy to use and has essentially zero learning cost. To facilitate operation and maintenance, we also provide data migration and monitoring functions and other related ecological tools and software.
For users who are new to TDengine, we have developed rich features for taosBenchmark to facilitate technical evaluation and stress testing. This article is a brief introduction to taosBenchmark, which will continue to evolve and improve as new features are added to TDengine.
For users who are new to TDengine, we have developed rich features for taosBenchmark to facilitate technical evaluation and stress testing. This article is a brief introduction to taosBenchmark, which will continue to evolve and improve as new features are added to TDengine.
As part of TDengine, taosBenchmark's source code is fully open on GitHub. Suggestions or advice about the use or implementation of taosBenchmark or TDengine are welcome on GitHub or in the TAOS Data user group.
# How to install/uninstall TDengine with an installation package
TDengine open source version provides `deb` and `rpm` format installation packages. Our users can choose the appropriate installation package according to their own running environment. The `deb` supports Debian/Ubuntu etc. and the `rpm` supports CentOS/RHEL/SUSE etc. We also provide `tar.gz` format installers for enterprise users.
## Install and uninstall deb package
### Install deb package
- Download and obtain the deb installation package from the official website, such as TDengine-server-2.0.0.0-Linux-x64.deb.
- Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.deb installation package is located and execute the following installation command.
```
plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb
Selecting previously unselected package tdengine.
(Reading database ... 233181 files and directories currently installed.)
Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ...
Failed to stop taosd.service: Unit taosd.service not loaded.
Stop taosd service success!
Unpacking tdengine (2.0.0.0) ...
Setting up tdengine (2.0.0.0) ...
Start to install TDEngine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
Note: When the Enter FQDN: prompt appears while installing the first node, nothing needs to be entered. Only when installing the second or later nodes do you need to enter the FQDN of any available node in the existing cluster so that the new node can join it. You can also leave it blank and configure it in the new node's configuration file before the new node starts; a configuration sketch is shown below.
The same operation applies to the other installation package formats.
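For reference, joining a node to an existing cluster by editing the configuration file usually means setting entries like the following in /etc/taos/taos.cfg before starting taosd; the host names and port below are placeholders:
```
# FQDN:port of any active node in the existing cluster
firstEp   h1.taosdata.com:6030
# FQDN of this new node
fqdn      h2.taosdata.com
```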
### Uninstall deb
Uninstall command is below:
```
plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine
(Reading database ... 233482 files and directories currently installed.)
Removing tdengine (2.0.0.0) ...
TDEngine is removed successfully!
```
## Install and uninstall rpm package
### Install rpm
- Download and obtain the rpm installation package from the official website, such as TDengine-server-2.0.0.0-Linux-x64.rpm.
- Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.rpm installation package is located and execute the following installation command.
```
[root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm
Preparing packages...
TDengine-2.0.0.0-3.x86_64
Start to install TDEngine...
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
### Uninstall rpm
Uninstall command is following:
```
[root@bogon x86_64]# rpm -e tdengine
TDEngine is removed successfully!
```
## Install and uninstall tar.gz
### Install tar.gz
- Download and obtain the tar.gz installation package from the official website, such as `TDengine-server-2.0.0.0-Linux-x64.tar.gz`.
- Go to the directory where the `TDengine-server-2.0.0.0-Linux-x64.tar.gz` installation package is located, unzip the file first, then enter the subdirectory and execute the install.sh installation script in it as follows
```
plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ ll
total 3796
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./
drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/
-rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/
plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll
total 2640
drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/
drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/
drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/
-rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh*
-rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz
plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh
This is ubuntu system
verType=server interactiveFqdn=yes
Start to install TDengine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter
TDengine is installed successfully!
```
Note: The install.sh install script asks for some configuration information through an interactive command-line interface during execution. If you prefer a non-interactive installation, you can execute the install.sh script with the `-e no` parameter. Run the `./install.sh -h` command to see detailed information about all parameters.
### Uninstall TDengine after tar.gz package installed
The uninstall command is as follows:
```
plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos
TDEngine is removed successfully!
```
## Installation directory description
After TDengine is successfully installed, the main installation directory is /usr/local/taos, and the directory contents are as follows:
```
plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
plum@ubuntu:/usr/local/taos$ ll
total 36
drwxr-xr-x 9 root root 4096 7 30 19:20 ./
drwxr-xr-x 13 root root 4096 7 30 19:20 ../
drwxr-xr-x 2 root root 4096 7 30 19:20 bin/
drwxr-xr-x 2 root root 4096 7 30 19:20 cfg/
lrwxrwxrwx 1 root root 13 7 30 19:20 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 7 30 19:20 driver/
drwxr-xr-x 8 root root 4096 7 30 19:20 examples/
drwxr-xr-x 2 root root 4096 7 30 19:20 include/
drwxr-xr-x 2 root root 4096 7 30 19:20 init.d/
lrwxrwxrwx 1 root root 13 7 30 19:20 log -> /var/log/taos/
```
- The installation automatically generates the configuration file directory, database directory, and log directory.
- Configuration file default location: /etc/taos/taos.cfg, soft-linked to /usr/local/taos/cfg/taos.cfg.
- Database default directory: /var/lib/taos, soft-linked to /usr/local/taos/data.
- Log default directory: /var/log/taos, soft-linked to /usr/local/taos/log.
- Executables are in the /usr/local/taos/bin directory and are soft-linked to the /usr/bin directory.
- Dynamic library files are in the /usr/local/taos/driver directory and are soft-linked to the /usr/lib directory.
- Header files are in the /usr/local/taos/include directory and are soft-linked to the /usr/include directory.
## Uninstall and update file instructions
When uninstalling the installation package, the configuration files, database files, and log files are kept, i.e. /etc/taos/taos.cfg, /var/lib/taos, /var/log/taos. If you are sure you do not need them, you can delete them manually, but be careful: once deleted, the data is permanently lost and cannot be recovered!
If this is an upgrade installation and the default configuration file (/etc/taos/taos.cfg) already exists, the existing configuration file is kept and used; the configuration file shipped in the installation package is renamed to taos.cfg.org and saved in the /usr/local/taos/cfg/ directory, where it can be used as a reference sample for setting configuration parameters. If the configuration file does not exist, the one that comes with the installation package is used.
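For example, after an upgrade you can compare the configuration actually in use with the sample preserved by the installer, or back up the retained files before deleting them manually (a minimal sketch using standard shell tools; paths are the defaults described above):
```bash
# Compare the shipped sample with the configuration in use
diff -u /usr/local/taos/cfg/taos.cfg.org /etc/taos/taos.cfg

# Optionally back up the preserved config, data, and logs before removing them
sudo tar -czf ~/taos-backup.tar.gz /etc/taos/taos.cfg /var/lib/taos /var/log/taos
```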
## Caution
- TDengine provides several installers, but it is best not to use both the tar.gz installer and the deb or rpm installer on one system. Otherwise, they may affect each other and cause problems when using them.
- For a deb package installation, if the installation directory is deleted manually by mistake, neither uninstallation nor reinstallation will succeed. In this case, you need to clear the installation information of the tdengine package by executing the following command:
```
plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
Then just reinstall it.
- For an rpm package installation, if the installation directory is deleted manually by mistake, neither uninstallation nor reinstallation will succeed. In this case, you need to clear the installation information of the tdengine package by executing the following command:
```
[root@bogon x86_64]# rpm -e --noscripts tdengine
```
Then just reinstall it.
# Getting Started
## <a class="anchor" id="install"></a>Quick Install
TDengine includes server, client, and ecological software and peripheral tools. Currently, version 2.0 of the server can only be installed and run on Linux, and will support Windows, macOS, and other OSes in the future. The client can be installed and run on Windows or Linux. Applications on any operating system can use the RESTful interface to connect to the taosd server. After 2.4, TDengine includes taosAdapter to provide an easy-to-use and efficient way to ingest data, including a RESTful service. taosAdapter needs to be started manually as a stand-alone component. Earlier versions use an embedded HTTP component to provide the RESTful interface.
### <a class="anchor" id="source-install"></a>Install from Source
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms and will support ARM32, RISC-V, and other CPU architectures in the future.
Please visit our [TDengine github page](https://github.com/taosdata/TDengine) for instructions on installation from the source code.
### Install from Docker Container
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
For detailed steps, please refer to [Quickly experience TDengine through Docker](https://www.taosdata.com/en/documentation/getting-started/docker).
For the time being, using Docker to deploy the TDengine client or server in production environments is not recommended. However, it is a convenient way to deploy TDengine for development purposes. In particular, it is easy to try TDengine in Mac OS X and Windows environments with Docker.
### <a class="anchor" id="package-install"></a>Install from Package
TDengine is very easy to install: from download to successful installation takes just a few seconds. For ease of use, the standard server installation package includes the client application and sample code; if you only need the server application and C/C++ language support for the client connection, you can also download the lite version of the installation package. The installation packages are available in `rpm` and `deb` formats, as well as a `tar.gz` format for enterprise customers who need to deploy on specific operating systems. Releases include both stable and beta versions. We recommend the stable release for production use or testing; the beta release may contain more new features. You can choose what to download from the following as needed:
<ul id="server-packageList" class="package-list"></ul>
Click [here](https://www.taosdata.com/en/getting-started/#Install-from-Package) to download the install package.
For detailed installation steps, please refer to [How to install/uninstall TDengine with installation package](https://www.taosdata.com/getting-started/install).
**Click [here](https://github.com/taosdata/TDengine/releases) for release notes.**
### Install TDengine by apt-get
If you use a Debian or Ubuntu system, you can use the `apt-get` command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
......@@ -33,18 +39,30 @@ apt-get policy tdengine
sudo apt-get install tdengine
```
### Install client only
If the client and server are running on different computers, you can install the client separately. When downloading, please note that the selected client version number should strictly match the server version number downloaded above. Linux and Windows installation packages are as follows (the lite version of the installer comes with connection support for the C/C++ language only, while the standard version of the installer also contains sample code):
<ul id="client-packagelist" class="package-list"></ul>
### <a class="anchor" id="source-install"></a>Install from Source
If you want to contribute to TDengine, please visit [TDengine GitHub page](https://github.com/taosdata/TDengine) for detailed instructions on build and installation from the source code.
**To download other components, beta version, or early releases, please click [here](https://www.taosdata.com/en/all-downloads/).**
## <a class="anchor" id="start"></a>Quick Launch
After installation, you can start the TDengine service by the `systemctl` command.
```bash
systemctl start taosd
```
Then check if the service is working now.
```bash
systemctl status taosd
```
If the service is running successfully, you can play around through TDengine shell `taos`.
......@@ -52,25 +70,25 @@ If the service is running successfully, you can play around through TDengine she
**Note:**
- The `systemctl` command needs the **root** privilege. Use **sudo** if you are not the **root** user.
- To get better product feedback and improve our solution, TDengine will collect basic usage information, but you can modify the configuration parameter **telemetryReporting** in the system configuration file `taos.cfg`, and set it to 0 to turn it off.
- TDengine uses FQDN (usually hostname) as the node ID. To ensure normal operation, you need to set the hostname for the server running `taosd`, and configure DNS service or hosts file for the machine running the client application, to ensure the FQDN can be resolved.
- TDengine supports installation on Linux systems with [systemd](https://en.wikipedia.org/wiki/Systemd) as the process service management and uses the `which systemctl` command to detect whether `systemd` packages exist in the system:
```bash
which systemctl
```
If `systemd` is not supported in the system, TDengine service can also be launched via `/usr/local/taos/bin/taosd` manually.
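For example, a minimal sketch of this check and fallback (paths as described above; running `taosd` in the foreground like this is only meant for a quick trial):
```bash
if which systemctl >/dev/null 2>&1; then
  sudo systemctl start taosd
else
  # No systemd available: launch the server binary directly (Ctrl+C to stop)
  sudo /usr/local/taos/bin/taosd
fi
```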
## <a class="anchor" id="console"></a>TDengine Shell Command Line
To launch the TDengine shell, the command-line interface, in a Linux terminal, type:
```bash
taos
```
The welcome message is printed if the shell connects to the TDengine server successfully; otherwise, an error message will be printed (refer to our [FAQ](https://www.taosdata.com/en/faq) page for troubleshooting the connection error). The TDengine shell prompt is:
```cmd
taos>
......@@ -106,49 +124,59 @@ Besides the SQL commands, the system administrator can check system status, add
### Shell Command Line Parameters
You can configure command parameters to change how the TDengine shell executes. Some frequently used options are listed below:
- -c, --config-dir: set the configuration directory. It is */etc/taos* by default.
- -h, --host: set the IP address of the server it will connect to. Default is localhost.
- -s, --commands: set the command to run without entering the shell.
- -u, --user: user name to connect to the server/cluster. Default is root.
- -p, --password: password. Default is 'taosdata'.
- -?, --help: get a full list of supported options.
Examples:
```bash
taos -h 192.168.0.1 -s "use db; show tables;"
```
### Run SQL Command Scripts
Inside the TDengine shell, you can run SQL scripts in a file with the `source` command.
```mysql
taos> source <filename>;
```
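For example, a minimal sketch (the file name `demo.sql` and its statements are purely illustrative):

```mysql
-- contents of demo.sql
create database if not exists demo;
create table demo.t (ts timestamp, speed int);
insert into demo.t values (now, 10);

-- run it from the TDengine shell
taos> source demo.sql;
```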
### taos shell tips

- Use the up/down arrow keys to check the command history
- To change the default password, use the `alter user` command
- Use Ctrl+C to interrupt any query
- To clean the schema of locally cached tables, execute the command `RESET QUERY CACHE`
## <a class="anchor" id="demo"></a>Experience TDengine’s Lightning Speed
### <a class="anchor" id="taosBenchmark"></a> Taste insertion speed with taosBenchmark
Once the TDengine server has started, you can execute the command `taosBenchmark` (formerly named `taosdemo`) in the Linux terminal. In release 2.4.0.7 and earlier, taosBenchmark is distributed within the taosTools package; in later releases, taosBenchmark will be included within TDengine again.
```bash
taosBenchmark
```
Using this command, a STable named `meters` will be created in the database `test`. There are 10k tables under this STable, named from `d0` to `d9999`. In each table there are 100k rows of records, each row with columns (`ts`, `current`, `voltage`, and `phase`). The timestamp ranges from "2017-07-14 10:40:00 000" to "2017-07-14 10:41:39 999". Each table also has tags `location` and `groupId`: `groupId` is set from 1 to 10, and `location` is set to "beijing" or "shanghai".
Once execution is finished, 1 billion rows of records will be inserted. It usually takes about a dozen seconds to execute this command on a normal PC server but it may be different depending on the particular hardware platform performance.
### <a class="anchor" id="taosBenchmark"></a> Using taosBenchmark in detail
You can run the command `taosBenchmark` with many options, like the number of tables, rows of records, and so on. To learn more about these options, you can execute `taosBenchmark --help` and then experiment with different options.
For more details on how to use taosBenchmark, please refer to [How to use taosBenchmark to test the performance of TDengine](https://tdengine.com/2021/10/09/3114.html).
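As a quick illustration, a hedged sketch of a smaller run; the `-t` (number of tables) and `-n` (rows per table) options below are our recollection of the flags, so please confirm the exact names against `taosBenchmark --help` for your version:
```bash
# Insert 1,000 tables x 10,000 rows instead of the default data set
taosBenchmark -t 1000 -n 10000
```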
### <a class="anchor" id="taosshell"></a> Taste query speed with taos shell

In the TDengine client, enter SQL query commands and then experience our lightning query speed.
- query total rows of records:
......@@ -156,7 +184,7 @@ In the TDengine client, enter sql query commands and then experience our lightni
taos> select count(*) from test.meters;
```
- query average, max, and min of the total 1 billion records:
```mysql
taos> select avg(current), max(voltage), min(phase) from test.meters;
......@@ -180,23 +208,6 @@ taos> select avg(f1), max(f2), min(f3) from test.meters where areaid=10;
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
## Client and Alarm Module
If your client and server are running on different machines, please install the client separately. Linux and Windows packages are provided:
- TDengine-client-2.0.10.0-Linux-x64.tar.gz(3.0M)
- TDengine-client-2.0.10.0-Windows-x64.exe(2.8M)
- TDengine-client-2.0.10.0-Windows-x86.exe(2.8M)
The Linux package of the Alarm Module is as follows (please refer to [How to Use Alarm Module](https://github.com/taosdata/TDengine/blob/master/alert/README_cn.md)):
- TDengine-alert-2.0.10.0-Linux-x64.tar.gz (8.1M)
## <a class="anchor" id="platforms"></a>List of Supported Platforms
List of platforms supported by TDengine server
......@@ -213,11 +224,11 @@ List of platforms supported by TDengine server
| Allwinner ARM64 | | | ○ | | | |
| Actions ARM64 | | | ○ | | | |
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
List of platforms supported by TDengine client and connectors
At the moment, TDengine connectors can support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and operating systems such as Linux/Win64/Win32.
The comparison matrix is as follows:
......@@ -235,3 +246,5 @@ Comparison matrix as following:
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
Please visit the Connectors section for more detailed information.
<script src="/wp-includes/js/quick-start.js?v=1"></script>
......@@ -138,7 +138,7 @@ TDengine suggests using data collection point ID as the table name (like D1001 i
### STable: A Collection of Data Points in the Same Type
The design of one table for each data collection point will require a huge number of tables, which is difficult to manage. Moreover, applications often need to take aggregation operations between data collection points, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the STable (Super Table) concept is introduced by TDengine.
STable is an abstract set for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable (a set of data collection points of a specific type), in addition to defining the table structure of the collected metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
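For example, a minimal sketch of defining such a STable for the electricity-meter example used earlier in this document (the column and tag names follow the taosBenchmark description above; lengths are illustrative):

```mysql
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
```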
......
......@@ -32,6 +32,7 @@ For the SQL INSERT Grammar, please refer to [Taos SQL insert](https://www.taosd
- The timestamp of written data must be greater than the current time minus the time of configuration parameter keep. If keep is configured for 3650 days, data older than 3650 days cannot be written. The timestamp for writing data cannot be greater than the current time plus configuration parameter days. If days is configured to 2, data 2 days later than the current time cannot be written.
## <a class="anchor" id="schemaless"></a> Data Writing via Schemaless
**Introduction**
<br/> In many IoT applications, data collection is often used in intelligent control, business analysis, device monitoring, etc. As applications upgrade and iterate quickly, or hardware is adjusted, the collected metrics can change rapidly over time. To support such use cases, from version 2.2.0.0, TDengine supports writing data via Schemaless. With Schemaless, pre-creating tables before inserting data is no longer needed. Tables, data columns, and tags can be created automatically, and Schemaless can also add additional data columns to tables when necessary, to make sure data can be properly stored in TDengine.
......@@ -44,6 +45,7 @@ For the SQL INSERT Grammar, please refer to [Taos SQL insert](https://www.taosd
For the InfluxDB and OpenTSDB data writing protocol formats, users can refer to the corresponding official documentation for details. The following introduces TDengine's protocol extensions based on InfluxDB's Line Protocol, allowing users to use Schemaless with more precision.
Schemaless uses one line of string literal to represent one data record (users can also pass multiple lines to the Schemaless API for batch insertion). The format is as follows:
```json
measurement,tag_set field_set timestamp
```
......@@ -55,6 +57,7 @@ measurement,tag_set field_set timestamp
All tag values in tag_set are automatically converted and stored as the NCHAR data type in TDengine, and do not need to be surrounded by double quotes (").
<br/> In the Schemaless Line Protocol, the data format in field_set needs to be self-descriptive in order to convert the data to the corresponding TDengine data types. For example:
* A field value surrounded by double quotes indicates BINARY(32) data. For example, `"abc"`.
* A field value surrounded by double quotes with an L prefix indicates NCHAR(32) data. For example, `L"报错信息"`.
* Space, equal sign (=), comma (,), and double quote (") need to be escaped with a backslash (\).
......@@ -68,6 +71,7 @@ All tag values in tag_set are automatically converted and stored as NCHAR data t
| 4 | i16 | SMALLINT | 2 |
| 5 | i32 | INT | 4 |
| 6 | i64 / i | BIGINT | 8 |
* t, T, true, True, TRUE, f, F, false, and False represent the BOOLEAN type.
### Schemaless processing logic
......@@ -75,9 +79,11 @@ All tag values in tag_set are automatically converted and stored as NCHAR data t
Schemaless protocol parsing follows these rules:
<br/>1. For child table name generation, first create the following string by concatenating the measurement and the tag key/value strings:
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
tag_key1 and tag_key2 do not follow the original order of user input, but are sorted according to tag names.
After the MD5 value "md5_val" is calculated from the above string, the prefix "t_" is prepended to "md5_val" to form the child table name (see the sketch after these rules).
<br/>2. If the super table does not exist, a new super table will be created.
......@@ -89,7 +95,7 @@ After MD5 value "md5_val" calculated using the above string, prefix "t_" is prep
<br/>8. If any error occurs during processing, an error code will be returned.
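As an illustration of rule 1 above, a minimal sketch of how such a child table name could be derived with standard shell tools (purely illustrative; the server performs this internally):
```bash
# Measurement plus tags sorted by tag name, exactly as described in rule 1
key='st,t1=3,t2=4,t3=t3'
md5_val=$(printf '%s' "$key" | md5sum | awk '{print $1}')
echo "t_${md5_val}"   # child table name: "t_" prefix + MD5 of the key string
```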
**Note**
<br/>Schemaless will follow TDengine data structure limitations. For example, each table row cannot exceed 16KB. For detailed TDengine limitations please refer to (https://www.taosdata.com/en/documentation/taos-sql#limitation).
<br/>Schemaless will follow TDengine data structure limitations. For example, each table row cannot exceed 16KB. For detailed TDengine limitations please refer to `https://www.taosdata.com/en/documentation/taos-sql#limitation`.
**Timestamp precisions**
<br/>The following protocols are supported in Schemaless:
......@@ -120,10 +126,13 @@ When SML_TELNET_PROTOCOL or SML_JSON_PROTOCOL used,timestamp precision is dete
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
The above line is mapped to a super table named "st" with 3 NCHAR-type tags ("t1", "t2", "t3") and 5 columns: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). This is identical to creating a super table with the following SQL clause:
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
**Schemaless data alteration rules**
<br/>This section describes several data alteration scenarios:
......@@ -133,19 +142,23 @@ When column with one line has certain type, and following lines attemp to change
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
For the first line of data, the c4 column type is declared as DOUBLE with no suffix. However, the second line declares the column type to be BIGINT with the suffix "i", so a Schemaless parsing error will occur.
When a column has been declared as BINARY type, but a follow-up line requires a longer BINARY length for this column, the max length of this column will be extended:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
In the first line, the c5 column stores the string "pass" (4 characters) as BINARY(4); in the second line, c5 requires 2 more characters to store the string "passit", so the max length of the c5 column is extended from BINARY(4) to BINARY(6) to accommodate it.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
In the above example, the second line has one more column, c6, with value "passit", compared to the first line. A new column c6 will be added with type BINARY(6).
**Data integrity**
......@@ -157,123 +170,46 @@ In above example second line has one more column c6 with value "passit" compared
**Future enhancement**
<br/> Currently TDengine only provides a C language API for Schemaless. In future versions, APIs/connectors for more languages will be supported, e.g., Java/Go/Python/C#. From TDengine v2.3 and later versions, users can also use taosAdapter to write data via Schemaless through the RESTful interface.
## <a class="anchor" id="prometheus"></a> Data Writing via Prometheus
As a graduate project of Cloud Native Computing Foundation, [Prometheus](https://www.prometheus.io/) is widely used in the field of performance monitoring and K8S performance monitoring. TDengine provides a simple tool [Bailongma](https://github.com/taosdata/Bailongma), which only needs to be simply configured in Prometheus without any code, and can directly write the data collected by Prometheus into TDengine, then automatically create databases and related table entries in TDengine according to rules. Blog post [Use Docker Container to Quickly Build a Devops Monitoring Demo](https://www.taosdata.com/blog/2020/02/03/1189.html), which is an example of using bailongma to write Prometheus and Telegraf data into TDengine.
### Compile blm_prometheus From Source
Users need to download the source code of [Bailongma](https://github.com/taosdata/Bailongma) from github, then compile and generate an executable file using Golang language compiler. Before you start compiling, you need to prepare:
- A server running Linux OS
- Golang version 1.10 and higher installed
- Since the client dynamic link library of TDengine is used, it is necessary to install the same version of TDengine as on the server side. For example, if the server version is TDengine 2.0.0, make sure to install the same version on the Linux server where Bailongma is located (it can be on the same server as TDengine, or on a different server)
Bailongma project has a folder, blm_prometheus, which holds the prometheus writing API. The compiling process is as follows:
```bash
cd blm_prometheus
go build
```
If everything goes well, an executable of blm_prometheus will be generated in the corresponding directory.
### Install Prometheus
Download and install as the instruction of Prometheus official website. [Download Address](https://prometheus.io/download/)
### Configure Prometheus
Read the Prometheus [configuration document](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) and add the following configuration to the Prometheus configuration file:
- url: the URL provided by the bailongma API service; refer to the blm_prometheus startup example section below
After Prometheus is launched, you can check whether data is written successfully by querying through the taos client.
### Launch blm_prometheus
blm_prometheus has following options that you can configure when you launch blm_prometheus.
```sh
--tdengine-name
If TDengine is installed on a server with a domain name, you can also access the TDengine by configuring the domain name of it. In K8S environment, it can be configured as the service name that TDengine runs
--batch-size
blm_prometheus assembles the received prometheus data into a TDengine writing request. This parameter controls the number of data pieces carried in a writing request sent to TDengine at a time.
--dbname
Set a name for the database created in TDengine, blm_prometheus will automatically create a database named dbname in TDengine, and the default value is prometheus.
--dbuser
Set the user name used to access TDengine. The default value is 'root'.
--dbpassword
Set the password used to access TDengine. The default value is 'taosdata'.
--port
The port number blm_prometheus uses to serve Prometheus.
```
### Example
Launch an API service for blm_prometheus with the following command:
```bash
./blm_prometheus -port 8088
```
Assuming that the IP address of the server where blm_prometheus is located is "10.1.2.3", add the URL to the Prometheus configuration file. An example prometheus.yml is as follows:
```yaml
remote_write:
- url: "http://10.1.2.3:8088/receive"
```
### Query written data of prometheus
The format of generated data by Prometheus is as follows:
```json
{
Timestamp: 1576466279341,
Value: 37.000000,
apiserver_request_latencies_bucket {
component="apiserver",
instance="192.168.99.116:8443",
job="kubernetes-apiservers",
le="125000",
resource="persistentvolumes",
scope="cluster",
verb="LIST",
version="v1"
}
}
```
Where apiserver_request_latencies_bucket is the name of the time-series data collected by prometheus, and the tag of the time-series data is in the following {}. blm_prometheus automatically creates a STable in TDengine with the name of the time series data, and converts the tag in {} into the tag value of TDengine, with Timestamp as the timestamp and value as the value of the time-series data. Therefore, in the client of TDengine, you can check whether this data was successfully written through the following instruction.
```mysql
use prometheus;
select * from apiserver_request_latencies_bucket;
```

## <a class="anchor" id="prometheus-taosadapter"></a> Data Writing via Prometheus and taosAdapter

remote_read and remote_write are Prometheus's cluster schemes for separating data reads and writes. To use them with TDengine, simply point the remote_read and remote_write URLs at the corresponding taosAdapter URLs and use Basic authentication.

* remote_read url: `http://host_to_taosadapter:port(default 6041)/prometheus/v1/remote_read/:db`
* remote_write url: `http://host_to_taosadapter:port(default 6041)/prometheus/v1/remote_write/:db`

Basic authentication:

* username: TDengine connection username
* password: TDengine connection password

An example prometheus.yml using taosAdapter is as follows:

```yaml
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```
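To verify that Prometheus metrics are arriving, a quick check from the TDengine shell (assuming the database name `prometheus_data` from the example above):

```mysql
taos> show databases;
taos> use prometheus_data;
taos> show stables;
```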
## <a class="anchor" id="telegraf"></a> Data Writing via Telegraf and taosAdapter
Please refer to [Official document](https://portal.influxdata.com/downloads/) for Telegraf installation.
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data inserted from Telegraf.
Configuration:
Please add the following lines to /etc/telegraf/telegraf.conf. Fill 'database name' with the name of the TDengine database in which you want to store Telegraf data, and fill in the TDengine server/cluster host, username, and password fields.
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -286,9 +222,11 @@ Please add following words in /etc/telegraf/telegraf.conf. Fill 'database name'
```
Then restart telegraf:
```
sudo systemctl start telegraf
```
Now you can query the metrics data of Telegraf from TDengine.
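For example, a quick check from the TDengine shell (here `telegraf` stands for whatever database name you put in the URL above):

```mysql
taos> show databases;
taos> use telegraf;
taos> show stables;
```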
Please find taosAdapter configuration and usage from `taosadapter --help` output.
......@@ -301,16 +239,20 @@ TDengine version 2.3.0.0+ includes a stand-alone application taosAdapter in char
Configuration:
Please add the following lines to /etc/collectd/collectd.conf. Fill in 'host' and 'port' with the values that TDengine and taosAdapter are using.
```
LoadPlugin network
<Plugin network>
Server "<TDengine cluster/server host>" "<port for collectd>"
</Plugin>
```
Then restart collectd
```
sudo systemctl start collectd
```
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="statsd"></a> Data Writing via StatsD and taosAdapter
......@@ -320,12 +262,14 @@ Please refer to [official document](https://github.com/statsd/statsd) for StatsD
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data inserted from StatsD.
Please add the following lines to the config.js file. Fill in 'host' and 'port' with the values that TDengine and taosAdapter are using.
```
add "./backends/repeater" to backends section.
add { host:'<TDengine server/cluster host>', port: <port for StatsD>} to repeater section.
```
Example file:
```
{
port: 8125
......@@ -338,9 +282,10 @@ port: 8125
Use icinga2 to collect check result metrics and performance data
* Follow the doc to enable opentsdb-writer https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer
* Follow the doc to enable opentsdb-writer `https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer`
* Enable taosAdapter configuration opentsdb_telnet.enable
* Modify the configuration file /etc/icinga2/features-enabled/opentsdb.conf
```
object OpenTsdbWriter "opentsdb" {
host = "host to taosAdapter"
......@@ -359,12 +304,6 @@ TCollector is a client-side process that gathers data from local collectors and
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="taosadapter2-telegraf"></a> Insert data via Bailongma 2.0 and Telegraf
**Notice:**
TDengine 2.3.0.0+ provides taosAdapter to support Telegraf data writing. Bailongma v2 will be deprecated and is no longer maintained.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
[EMQ](https://github.com/emqx/emqx) is an open-source MQTT broker. Without any coding, you only need to configure "rules" in the EMQ Dashboard, and MQTT data can be written directly into TDengine. EMQ X supports storing data in TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct data storage. Please refer to [EMQ official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
......
......@@ -753,12 +753,12 @@ Query OK, 1 row(s) in set (0.000141s)
## Integrated with framework
- Please refer to [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC/SpringJdbcTemplate) if using taos-jdbcdriver with Spring JdbcTemplate.
- Please refer to [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC/springbootdemo) if using taos-jdbcdriver with Spring Boot.
## Example Codes
You can see sample code here: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
## FAQ
- Why don't addBatch and executeBatch provide a performance benefit for executing "batch writes/updates"?
......
......@@ -21,7 +21,7 @@ Note: ● stands for that has been verified by official tests; ○ stands for th
Note:
- To access the TDengine database through connectors (except RESTful) on a system without the TDengine server software, it is necessary to install the corresponding version of the client installation package, so that the application driver (the file name is libtaos.so on Linux systems and taos.dll on Windows systems) is installed on the system; otherwise, an error that the corresponding library file cannot be found will occur.
- All APIs that execute SQL statements, such as `taos_query`, `taos_query_a`, `taos_subscribe` in the C/C++ Connector, and the APIs corresponding to them in other languages, can only execute one SQL statement at a time. If the actual parameters contain multiple statements, their behavior is undefined.
- Users upgrading to TDengine 2.0.8.0 must update the JDBC connection: taos-jdbcdriver must be upgraded to 2.0.12 or above.
- No matter which programming language connector is selected, TDengine version 2.0 and above recommends that each thread of database application establish an independent connection or establish a connection pool based on threads to avoid mutual interference between threads of "USE statement" state variables in the connection (but query and write operations of the connection are thread-safe).
......@@ -164,10 +164,10 @@ The C/C++ API is similar to MySQL's C API. When application use it, it needs to
Note:
- The TDengine dynamic library needs to be linked at compile time. The library on Linux is *libtaos.so*, installed at /usr/local/taos/driver. On Windows, it is taos.dll, installed at C:\TDengine.
- Unless otherwise specified, when the return value of API is an integer, 0 represents success, others are error codes representing the cause of failure, and when the return value is a pointer, NULL represents failure.
For more sample code using the C/C++ connector, please visit https://github.com/taosdata/TDengine/tree/develop/examples/c.
### Basic API
......@@ -891,7 +891,7 @@ Only some configuration parameters related to RESTful interface are listed below
### Example Source Code
You can find sample code in the following locations:
* {client_install_directory}/examples/C#
* [github C# example source code](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523)

**Tips:** TDengineTest.cs is one of the C# connector's sample programs; it includes basic examples such as connecting, executing SQL, and so on.
......@@ -924,7 +924,7 @@ dotnet add package TDengine.Connector
```C#
using TDengineDriver;
```
* Users can refer to [TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523/TDengineTest) to learn how to define a database connection, query, insert, and other basic data manipulations.
**Note:**
......@@ -951,7 +951,7 @@ https://www.taosdata.com/blog/2020/11/02/1901.html
TDengine provides the Go driver taosSql, which implements the Go language's built-in interface database/sql/driver. Users can access TDengine in an application by simply importing the package as follows; see https://github.com/taosdata/driver-go/blob/develop/taosSql/driver_test.go for details.
Sample code for using the Go connector can be found in https://github.com/taosdata/TDengine/tree/develop/examples/go.
```Go
import (
......@@ -1067,7 +1067,7 @@ After installing the TDengine client, the nodejsChecker.js program can verify wh
Steps:
1. Create a new installation verification directory, for example `~/tdengine-test`, and copy the nodejsChecker.js source program from github. Download address: https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js.
2. Execute the following command:
......@@ -1171,6 +1171,6 @@ promise2.then(function(result) {
### Example
[node-example.js](https://github.com/taosdata/TDengine/tree/master/examples/nodejs/node-example.js) provides a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data.
[node-example-raw.js](https://github.com/taosdata/TDengine/tree/master/examples/nodejs/node-example-raw.js) is also a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data, but unlike the above, this example only uses cursor.
......@@ -63,7 +63,7 @@ Enter the data source configuration page and modify the corresponding configurat
![img](../images/connections/add_datasource3.jpg)
- Host: IP address of any server in the TDengine cluster and the port number of the TDengine RESTful interface (6041); use `http://localhost:6041` to access the interface by default. Note that 2.4 and later versions of TDengine use a stand-alone software, taosAdapter, to provide the RESTful interface. Please refer to its documentation for configuration and deployment.
- User: TDengine username.
- Password: TDengine user password.
......
......@@ -2,7 +2,7 @@
Multiple TDengine servers, that is, multiple running instances of taosd, can form a cluster to ensure the highly reliable operation of TDengine and provide scale-out features. To understand cluster management in TDengine 2.0, it is necessary to understand the basic concepts of clustering. Please refer to the chapter "Overall Architecture of TDengine 2.0". And before installing the cluster, please follow the chapter ["Getting started"](https://www.taosdata.com/en/documentation/getting-started/) to install and experience the single node TDengine.
Each data node of the cluster is uniquely identified by End Point, which is composed of FQDN (Fully Qualified Domain Name) plus Port, such as h1.taosdata.com:6030. The general FQDN is the hostname of the server, which can be obtained through the Linux command `hostname -f` (how to configure FQDN, please refer to: [All about FQDN of TDengine](https://www.taosdata.com/blog/2020/09/11/1824.html)). Port is the external service port number of this data node. The default is 6030, but it can be modified by configuring the parameter serverPort in taos.cfg. A physical node may be configured with multiple hostnames, and TDengine will automatically get the first one, but it can also be specified through the configuration parameter `fqdn` in taos.cfg. If you want to access via direct IP address, you can set the parameter `fqdn` to the IP address of this node.
The cluster management of TDengine is extremely simple. Except for manual intervention in adding and deleting nodes, all other tasks are completed automatically, thus minimizing the workload of operation. This chapter describes the operations of cluster management in detail.
......@@ -27,9 +27,9 @@ Please refer to the [video tutorial](https://www.taosdata.com/blog/2020/11/11/19
1. Execute command `hostname -f` on each physical node, and check and confirm that the hostnames of all nodes are different (the node where the application driver is located does not need to do this check).
2. Execute `ping host` on each physical node, where host is the hostname of another physical node, and check whether the other physical nodes can be reached; if not, you need to check the network settings, the /etc/hosts file (the default path on Windows systems is C:\Windows\system32\drivers\etc\hosts), or the DNS configuration. If the ping fails, the cluster cannot be built.
3. From the physical node where the application runs, ping the data node where taosd runs. If the ping fails, the application cannot connect to taosd. Please check the DNS settings or hosts file of the physical node where the application is located;
4. The End Point of each data node is the output hostname plus the port number, for example, `h1.taosdata.com:6030`
**Step 5:** Modify the TDengine configuration file (the file /etc/taos/taos.cfg for all nodes needs to be modified). Assume that the first data node End Point to be started is h1.taosdata.com:6030, and its parameters related to cluster configuration are as follows:
```
// firstEp is the first data node connected after each data node’s first launch
......@@ -64,7 +64,7 @@ The parameters that must be modified are firstEp and fqdn. At each data node, ev
## <a class="anchor" id="node-one"></a> Launch the First Data Node
Follow the instructions in "[Getting started](https://www.taosdata.com/en/documentation/getting-started/)" to launch the first data node, such as h1.taosdata.com, then execute taos to start the taos shell, and execute the command "show dnodes" from the shell, as follows:
```
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
......
......@@ -92,8 +92,8 @@ Only some important configuration parameters are listed below. For more paramete
- fqdn: FQDN of the data node, which defaults to the first hostname configured by the operating system. If you want to access via IP address directly, you can set it to the IP address of the node.
- serverPort: the port number of the external service after taosd started, the default value is 6030.
- httpPort: the port number used by the RESTful service; all HTTP requests (TCP) for queries/writes are sent to this port. The default value is 6041. Note that 2.4 and later versions use a stand-alone software, taosAdapter, to provide the RESTful interface.
- dataDir: the data file directory to which all data files will be written. Default: /var/lib/taos.
- logDir: the log file directory to which the running log files of the client and server will be written. Default: /var/log/taos.
- arbitrator: the end point of the arbitrator in the system; the default value is null.
- role: optional role for the dnode. 0-any: it can be used as an mnode and to allocate vnodes; 1-mgmt: it can only be an mnode and cannot allocate vnodes; 2-dnode: it cannot be an mnode, only vnodes can be allocated.
- debugFlag: run-time log switch. 131 (output error and warning logs), 135 (output error, warning, and debug logs), 143 (output error, warning, debug, and trace logs). Default value: 131 or 135 (different modules have different default values).
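As a reference, a minimal sketch of the cluster-related part of /etc/taos/taos.cfg (host name and ports are illustrative; adjust them to your own deployment):

```
// first data node to contact when this data node starts
firstEp              h1.taosdata.com:6030
// the FQDN this data node reports about itself
fqdn                 h1.taosdata.com
// port for the external service
serverPort           6030
```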
......@@ -175,39 +175,44 @@ Client configuration parameters:
- secondEp: when taos starts, if unable to connect to firstEp, it will try to connect to secondEp.
- locale
Default value: obtained dynamically from the system. If the automatic acquisition fails, user needs to set it in the configuration file or through API
TDengine provides a special field type nchar for storing non-ASCII encoded wide characters such as Chinese, Japanese and Korean. The data written to the nchar field will be uniformly encoded in UCS4-LE format and sent to the server. It should be noted that the correctness of coding is guaranteed by the client. Therefore, if users want to normally use nchar fields to store non-ASCII characters such as Chinese, Japanese, Korean, etc., it’s needed to set the encoding format of the client correctly.
The characters inputted by the client are all in the current default coding format of the operating system, mostly UTF-8 on Linux systems, and some Chinese system codes may be GB18030 or GBK, etc. The default encoding in the docker environment is POSIX. In the Chinese versions of Windows system, the code is CP936. The client needs to ensure that the character set it uses is correctly set, that is, the current encoded character set of the operating system running by the client, in order to ensure that the data in nchar is correctly converted into UCS4-LE encoding format.
The naming rule of locale in Linux is `<language>_<region>.<charset encoding>`, such as zh_CN.UTF-8, where zh stands for Chinese, CN stands for mainland China, and UTF-8 stands for the character set. The character set encoding tells clients how to correctly parse local strings. Linux and Mac OSX systems can determine the character encoding of the system by setting the locale. Because the locale used by Windows is not in the POSIX standard locale format, another configuration parameter, charset, is needed to specify the character encoding on Windows. You can also use charset to specify the character encoding on Linux systems.
- charset
Default value: obtained dynamically from the system. If automatic acquisition fails, the user needs to set it in the configuration file or through the API.
If charset is not set in the configuration file, then on Linux taos reads the system's current locale at startup and extracts the charset encoding from it. If reading the locale fails, taos falls back to the charset configuration; if that also fails, the startup process is aborted.
On Linux, the locale already contains the character encoding, so once the system locale is set correctly there is no need to set charset separately. For example:
```
locale zh_CN.UTF-8
```
On Windows systems, the current system encoding cannot be obtained from locale. If string encoding information cannot be read from the configuration file, taos defaults to CP936. It is equivalent to adding the following to the configuration file:
```
charset CP936
```
If you need to adjust the character encoding, check the encoding used by the current operating system and set it correctly in the configuration file.
In Linux systems, if the user sets both locale and charset and the two are inconsistent, the value set later overrides the value set earlier.
```
locale zh_CN.UTF-8
charset GBK
```
With the settings above, the effective value of charset is GBK; if the two lines are written in the reverse order, the effective value becomes UTF-8, as shown in the sketch below.
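A minimal sketch of the reversed order, using the same configuration-file syntax as above:
```
charset GBK
locale zh_CN.UTF-8
```
Here the locale line comes later, so the character set it carries (UTF-8) overrides the earlier charset setting.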
The client's log configuration parameters are exactly the same as those of the server.
- timezone
......@@ -217,31 +222,35 @@ Client configuration parameters:
The time zone of the system where the client runs. To handle writing and querying data across multiple time zones, TDengine uses Unix timestamps to record and store time; a Unix timestamp generated at a given moment is, by definition, the same in every time zone. Note that the conversion to Unix timestamps is performed on the client side, so the client needs the correct time zone setting in order to convert other time representations into the correct Unix timestamps.
On Linux, the client automatically reads the time zone configured in the system. Users can also set the time zone in the configuration file in several ways. For example:
```
timezone UTC-8
timezone GMT-8
timezone Asia/Shanghai
```
All of the above are valid ways to set the time zone to UTC+8 (East Eight Zone).
The time zone setting affects how non-Unix-timestamp values (timestamp strings and the keyword now) in queries and INSERT statements are interpreted. For example:
```sql
SELECT count(*) FROM table_name WHERE TS<'2019-04-11 12:01:08';
```
In East Eight Zone, the SQL statement is equivalent to
```sql
SELECT count(*) FROM table_name WHERE TS<1554955268000;
```
In the UTC time zone, the SQL statement is equivalent to
```sql
SELECT count(*) FROM table_name WHERE TS<1554984068000;
```
To avoid the ambiguity of string-formatted times, Unix timestamps can be used directly. In addition, timestamp strings that carry an explicit time zone can be used in SQL statements, such as RFC 3339 strings (2013-04-12T15:52:01.123+08:00) or ISO 8601 strings (2013-04-12T15:52:01.123+0800); converting these strings into Unix timestamps does not depend on the time zone of the system. An example query follows.
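For instance, the earlier count query can be written with such a time-zone-qualified string, reusing the illustrative table_name from above; the predicate then resolves to the same Unix timestamp regardless of the client's time zone setting:
```sql
SELECT count(*) FROM table_name WHERE TS < '2019-04-11T12:01:08.000+08:00';
```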
When starting taos, you can also specify the end point of a taosd instance on the command line; otherwise it is read from taos.cfg. A sketch of such an invocation follows.
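A minimal sketch, assuming the standard -h (host) and -P (port) options of the taos shell and an illustrative host name:
```bash
# connect to the taosd instance at h1.taosdata.com, port 6030 (illustrative values)
taos -h h1.taosdata.com -P 6030
```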
- maxBinaryDisplayWidth
......@@ -340,7 +349,7 @@ Query OK, 9 row(s) affected (0.004763s)
**Import via taosdump tool**
TDengine provides a convenient database import and export tool, taosdump. Users can import data exported by taosdump from one system into other systems. Please refer to [backup tool for TDengine - taosdump](/tools/taosdump).
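As a rough sketch of a backup/restore round trip; the exact options depend on the taosdump version, so treat -D (databases), -o (output directory) and -i (input directory) below as assumptions to verify against `taosdump --help`:
```bash
# dump the database `power` into a local directory (options assumed, verify with --help)
taosdump -D power -o /tmp/taos_dump
# restore the dumped data into the target system
taosdump -i /tmp/taos_dump
```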
## <a class="anchor" id="export"></a> Export Data
......@@ -428,7 +437,7 @@ Some CLI options are needed to use the script:
2. Grafana alerting notifications. There are two ways to set this up:
1. To use an existing Grafana notification channel with `uid`, pass option `-E`. The `uid` can be retrieved with `curl -u admin:admin localhost:3000/api/alert-notifications | jq '.[]| .uid + "," + .name' -r`, then use it like this:
```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
```
......@@ -447,7 +456,7 @@ Some CLI options are needed to use the script:
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
Follow the script usage, then restart the grafana-server service and open http://localhost:3000/d/tdinsight.
Refer to the [TDinsight](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsight.md) README for more scenarios and limitations of the script, and for descriptions of all TDinsight metrics.
......
(The diffs of the remaining changed files are collapsed and not shown.)