Commit 943bd85e authored by Ganlin Zhao

[TD-11222]<feature>(query): Histogram function

......@@ -4,9 +4,6 @@
[submodule "src/connector/hivemq-tdengine-extension"]
path = src/connector/hivemq-tdengine-extension
url = https://github.com/taosdata/hivemq-tdengine-extension.git
[submodule "tests/examples/rust"]
path = tests/examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git
[submodule "deps/jemalloc"]
path = deps/jemalloc
url = https://github.com/jemalloc/jemalloc
......
......@@ -11,7 +11,7 @@ def sync_source() {
sh '''
cd ${WKC}
[ -f src/connector/grafanaplugin/README.md ] && rm -f src/connector/grafanaplugin/README.md > /dev/null || echo "failed to remove grafanaplugin README.md"
git reset --hard HEAD~10 >/dev/null
git reset --hard >/dev/null
'''
script {
if (env.CHANGE_TARGET == 'master') {
......@@ -36,47 +36,65 @@ def sync_source() {
'''
}
}
sh'''
sh '''
cd ${WKC}
git reset --hard
git remote prune origin
[ -f src/connector/grafanaplugin/README.md ] && rm -f src/connector/grafanaplugin/README.md > /dev/null || echo "failed to remove grafanaplugin README.md"
git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
git reset --hard
git clean -dfx
git submodule update --init --recursive --remote
git submodule update --init --recursive
cd ${WK}
git reset --hard HEAD~10
git reset --hard
'''
sh '''
cd ${WKCT}
git reset --hard
'''
script {
if (env.CHANGE_TARGET == 'master') {
sh '''
cd ${WK}
git checkout master
cd ${WKCT}
git checkout master
'''
} else if (env.CHANGE_TARGET == '2.0') {
sh '''
cd ${WK}
git checkout 2.0
cd ${WKCT}
git checkout 2.0
'''
} else if (env.CHANGE_TARGET == '2.4') {
sh '''
cd ${WK}
git checkout 2.4
cd ${WKCT}
git checkout 2.4
'''
} else {
sh '''
cd ${WK}
git checkout develop
cd ${WKCT}
git checkout develop
'''
}
}
sh '''
export TZ=Asia/Harbin
cd ${WK}
git pull >/dev/null
export TZ=Asia/Harbin
date
git clean -dfx
cd ${WKCT}
git pull >/dev/null
git clean -dfx
date
'''
}
def pre_test() {
......@@ -111,6 +129,7 @@ pipeline {
environment{
WK = '/var/data/jenkins/workspace/TDinternal'
WKC = '/var/data/jenkins/workspace/TDinternal/community'
WKCT = '/var/data/jenkins/workspace/TDinternal/community/tests'
LOGDIR = '/var/data/jenkins/workspace/log'
}
stages {
......@@ -220,6 +239,13 @@ pipeline {
}
}
stage('run test') {
options { skipDefaultCheckout() }
when {
allOf {
changeRequest()
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
}
}
parallel {
stage ('build worker06_arm64') {
agent {label " worker06_arm64 "}
......@@ -269,14 +295,16 @@ pipeline {
date
hostname
'''
timeout(time: 60, unit: 'MINUTES') {
sh '''
date
cd ${WKC}/tests/parallel_test
time ./run.sh -m m.json -t cases.task -l ${LOGDIR} -b ${BRANCH_NAME}
date
hostname
'''
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
timeout(time: 20, unit: 'MINUTES') {
sh '''
date
cd ${WKC}/tests/parallel_test
time ./run.sh -m m.json -t cases.task -l ${LOGDIR} -b ${BRANCH_NAME}
date
hostname
'''
}
}
}
}
......
......@@ -6,8 +6,8 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
Simplified Chinese | [English](./README.md)

We are hiring! Many positions are open; see [here](https://www.taosdata.com/cn/careers/)
# TDengine Introduction
......@@ -57,6 +57,18 @@ sudo apt-get install -y openjdk-8-jdk
sudo apt-get install -y maven
```
#### Install build dependencies for taos-tools
taosTools is a collection of auxiliary tools for TDengine. It currently contains two tools, taosBenchmark (formerly named taosdemo) and taosdump.
By default, building TDengine does not include taosTools. You can add `cmake .. -DBUILD_TOOLS=true` when building TDengine to compile taosTools at the same time.
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7:
```bash
......@@ -75,7 +87,7 @@ sudo yum install -y java-1.8.0-openjdk
sudo yum install -y maven
```
### CentOS 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
......@@ -93,6 +105,18 @@ sudo dnf install -y java-1.8.0-openjdk
sudo dnf install -y maven
```
#### Install build dependencies for taosTools on CentOS
To build [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following dependencies need to be installed:
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic libstdc++-static
```
Note: since snappy lacks pkg-config support
(see [link](https://github.com/google/snappy/pull/86)), cmake may
report that libsnappy cannot be found, but it actually works fine.
## Get the source code
First, clone the source code from GitHub:
......@@ -109,6 +133,7 @@ git submodule update --init --recursive
```
If downloading over the https protocol is slow, you can add the following two lines to the ~/.gitconfig file to download over the ssh protocol instead. You need to upload your ssh key to GitHub first; see the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
......@@ -123,7 +148,8 @@ mkdir debug && cd debug
cmake .. && cmake --build .
```
You can optionally use Jemalloc as the memory allocator instead of the default glibc:

You can optionally use jemalloc as the memory allocator instead of the default glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
......@@ -292,4 +318,4 @@ TDengine's official community, the "IoT Big Data Group", is open to everyone; you are welcome to join
# [Who is using TDengine](https://github.com/taosdata/TDengine/issues/2432)
All TDengine users and contributors are welcome to share your stories of developing or using TDengine in your work [here](https://github.com/taosdata/TDengine/issues/2432).
......@@ -26,48 +26,58 @@ TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.g
- **Zero Management, No Learning Curve**: It takes only seconds to download, install, and run it successfully; there are no other dependencies. Automatic partitioning on tables or DBs. Standard SQL is used, with C/C++, Python, JDBC, Go and RESTful connectors.
# Documentation
For the user manual, system design and architecture, and engineering blogs, refer to [TDengine Documentation](https://www.taosdata.com/en/documentation/) (for the Chinese version, click [here](https://www.taosdata.com/cn/documentation20/))
for details. The documentation from our website can also be downloaded locally from *documentation/tdenginedocs-en* or *documentation/tdenginedocs-cn*.
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in the project directory.
## Install build dependencies
### Ubuntu 16.04 and above & Debian:
### Ubuntu 16.04 and above or Debian
```bash
sudo apt-get install -y gcc cmake build-essential git
```
### Ubuntu 14.04:
### Ubuntu 14.04
```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```
To compile and package the JDBC driver source code, you should have Java JDK 8 or higher and Apache Maven 2.7 or higher installed.
To install openjdk-8:
```bash
sudo apt-get install -y openjdk-8-jdk
```
To install Apache Maven:
```bash
sudo apt-get install -y maven
sudo apt-get install -y maven
```
#### Install build dependencies for taos-tools
#### Install build dependencies for taosTools
We provide a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They used to be part of TDengine; starting with TDengine 2.4.0.0, taosBenchmark and taosdump are no longer released together with TDengine.

By default, compiling TDengine does not include taos-tools. You can use 'cmake .. -DBUILD_TOOLS=true' to have them compiled together with TDengine.
By default, compiling TDengine does not include taosTools. You can use 'cmake .. -DBUILD_TOOLS=true' to have them compiled together with TDengine.

To build [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
sudo apt install libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7:
### CentOS 7
```bash
sudo yum install epel-release
sudo yum update
......@@ -76,41 +86,51 @@ sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```
To install openjdk-8:
```bash
sudo yum install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo yum install -y maven
```
### CentOS 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
```
To install openjdk-8:
```bash
sudo dnf install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo dnf install -y maven
```
#### Install build dependencies for taos-tools
To build the [taos-tools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
#### Install build dependencies for taosTools on CentOS
To build the [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic libstdc++-static
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake may report that libsnappy cannot be found, but snappy actually works well.
### Setup golang environment
TDengine includes a few components developed in the Go language. Please refer to the official documentation at golang.org to set up the Go environment.
Please use Go 1.14 or above. For users in China, we recommend using a proxy to accelerate package downloading.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
......@@ -119,6 +139,7 @@ go env -w GOPROXY=https://goproxy.cn,direct
## Get the source codes
First of all, you may clone the source code from GitHub:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
......@@ -126,11 +147,13 @@ cd TDengine
The connectors for go & grafana and some tools have been moved to separate repositories,
so you should run this command in the TDengine directory to install them:
```bash
git submodule update --init --recursive
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You need to upload your ssh public key to GitHub first. Please refer to the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
......@@ -146,17 +169,20 @@ cmake .. && cmake --build .
```
Note that TDengine 2.3.x.0 and later use a component named 'taosAdapter' to play the HTTP daemon role by default, instead of the HTTP daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you pull the latest TDengine source code into an existing codebase, please execute 'git submodule update --init --recursive' to pull the taosAdapter source code. Please install Go 1.14 or above to compile taosAdapter. If you meet difficulties regarding 'go mod', especially inside China, you can use a proxy to solve the problem.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
The embedded HTTP daemon is still built from the TDengine source code by default. Alternatively, you can use the following command to choose to build taosAdapter instead.
```
cmake .. -DBUILD_HTTP=false
```
You can use Jemalloc as the memory allocator instead of glibc:
```
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
......@@ -166,16 +192,19 @@ TDengine build script can detect the host machine's architecture on X86-64, X86,
You can also specify the CPUTYPE option, such as aarch64 or aarch32, if the detection result is not correct:
aarch64:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
aarch32:
```bash
cmake .. -DCPUTYPE=aarch32 && cmake --build .
```
mips64:
```bash
cmake .. -DCPUTYPE=mips64 && cmake --build .
```
......@@ -184,6 +213,7 @@ cmake .. -DCPUTYPE=mips64 && cmake --build .
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
......@@ -204,6 +234,7 @@ nmake
```
Or, you can simply open a command window by clicking Windows Start -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on what architecture your Windows is, then execute commands as follows:
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
......@@ -222,6 +253,7 @@ cmake .. && cmake --build .
# Installing
After building successfully, TDengine can be installed by: (On Windows platform, the following command should be `nmake install`)
```bash
sudo make install
```
......@@ -230,11 +262,13 @@ Users can find more information about directories installed on the system in the
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) for it.
To start the service after installation, in a terminal, use:
```bash
sudo systemctl start taosd
```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect to the TDengine server. In a terminal, use:
```bash
taos
```
......@@ -257,11 +291,13 @@ sudo apt-get install tdengine
## Quick Run
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal: (We take Linux as an example; the command on Windows will be `taosd.exe`)
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine shell to connect the server:
```bash
./build/bin/taos -c test/cfg
```
......@@ -269,7 +305,9 @@ In another terminal, use the TDengine shell to connect the server:
option "-c test/cfg" specifies the system configuration file directory.
# Try TDengine
It is easy to run SQL commands from the TDengine shell, which works the same as in other SQL databases.
```sql
create database db;
use db;
......@@ -281,7 +319,8 @@ drop database db;
```
# Developing with TDengine
### Official Connectors
## Official Connectors
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
......@@ -293,7 +332,7 @@ TDengine provides abundant developing tools for users to develop on TDengine. Fo
- [Node.js](https://www.taosdata.com/en/documentation/connector#nodejs)
- [Rust](https://www.taosdata.com/en/documentation/connector/rust)
### Third Party Connectors
## Third Party Connectors
The TDengine community has also kindly built some of their own connectors! Follow the links below to find the source code for them.
......@@ -301,11 +340,13 @@ The TDengine community has also kindly built some of their own connectors! Follo
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
# How to run the test cases and how to add a new test case?
# How to run the test cases and how to add a new test case
TDengine's test framework and all test cases are fully open source.
Please refer to [this document](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run tests and develop new test cases.
# TDengine Roadmap
- Support event-driven stream computing
- Support user defined functions
- Support MQTT connection
......
......@@ -35,7 +35,7 @@ $ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdeng
Further, you can use the docker run command to start a docker container running TDengine server, use the `--name` command line parameter to name the container `tdengine`, use `--hostname` to specify the hostname as `tdengine-server`, and use `-v` to mount local directories into the container, synchronizing data between the host and the container and preventing data loss after the container is deleted.
```bash
$ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
- **--name tdengine**: set the container name; we can access the corresponding container by its name
......@@ -45,7 +45,12 @@ $ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:
### Use the docker ps command to confirm that the container is running correctly
```bash
$ docker ps
docker ps
```
Example output:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
......@@ -85,7 +90,6 @@ The TDengine shell connects to the server successfully and prints a welcome message and version information
In the TDengine shell, you can create/drop databases, tables, and super tables with SQL commands, and run insert and query operations. For details, see the [TAOS SQL documentation](https://www.taosdata.com/cn/documentation/taos-sql)
### Access the TDengine server inside the Docker container from the host
After starting the TDengine Docker container with the correct ports mapped via the -p command line parameter, you can access the TDengine running in the Docker container using the taos shell command on the host.
......@@ -102,7 +106,12 @@ taos>
You can also use curl on the host to access the TDengine server inside the Docker container through the RESTful port.
```
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
Example output:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
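
The same request can be issued programmatically. Below is a minimal Python sketch, assuming the third-party `requests` package is installed; the endpoint, credentials, and expected fields are the ones shown above.

```python
import requests

# POST the SQL statement to the RESTful endpoint; HTTP basic auth with
# root:taosdata produces the same Authorization header that curl sends.
resp = requests.post(
    "http://127.0.0.1:6041/rest/sql",
    data="show databases",
    auth=("root", "taosdata"),
)
result = resp.json()
print(result["status"], result["rows"])  # e.g. "succ" and the row count
```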
......@@ -119,195 +128,36 @@ For details of the TDengine RESTful interface, refer to the [official documentation](https://www.taosdata.com/cn
Run the TDengine 2.4.0.4 image with docker (taosd + taosAdapter):
```bash
$ docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```
Run the TDengine 2.4.0.4 image with docker (taosAdapter only; the firstEp configuration item or the TAOS_FIRST_EP environment variable must be set):
```bash
$ docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
```
Run the TDengine 2.4.0.4 image with docker (taosd only):
```bash
$ docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
```
Use the curl command to verify that the RESTful interface works correctly:
```bash
$ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
taosAdapter supports multiple data collection agents (e.g. Telegraf, StatsD, collectd); here we only simulate StatsD writing data. Execute the following command on the host:
```bash
$ echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
Then you can use the taos shell to query the database statsd and the super table foo that taosAdapter created automatically:
```bash
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
You can see that the simulated data has been written to TDengine.
### Application example: use taosBenchmark on the host to write data to the TDengine server in the Docker container
1. Execute taosBenchmark (formerly named taosdemo) in the host command line interface to write data to the TDengine server in the Docker container
```bash
$ taosBenchmark
......@@ -361,7 +211,7 @@ column[0]:FLOAT column[1]:INT column[2]:FLOAT
In the end, a total of 100 million records are inserted.

2. Enter the TDengine terminal and view the data generated by taosBenchmark.
- **Enter the command line.**
......@@ -380,7 +230,7 @@ taos>
$ taos> show databases;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
```
......@@ -429,16 +279,51 @@ $ taos> select groupid, location from test.d0;
=================================
0 | shanghai |
Query OK, 1 row(s) in set (0.003490s)
```
### Application example: write data into TDengine with data collection agents
taosAdapter supports multiple data collection agents (e.g. Telegraf, StatsD, collectd); here we only simulate StatsD writing data. Execute the following command on the host:
```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
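
The same sample can also be sent without nc. A minimal Python sketch, assuming taosAdapter is listening for StatsD on UDP port 6044 as above:

```python
import socket

# One StatsD counter sample, identical to the payload piped to nc above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"foo:1|c", ("127.0.0.1", 6044))
sock.close()
```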
Then you can use the taos shell to query the database statsd and the super table foo that taosAdapter created automatically:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
You can see that the simulated data has been written to TDengine.
## Stop the TDengine service running in Docker
```bash
$ docker stop tdengine
tdengine
docker stop tdengine
```
- **docker stop**: stop the specified running docker container with docker stop.
- **tdengine**: the container name.
......@@ -12,6 +12,10 @@ TDengine software consists of three parts: server, client, and alarm module; currently 2.0
For the time being, deploying the TDengine client or server with Docker in production environments is not recommended, but Docker deployment is very convenient in development environments or for a first try. In particular, with Docker you can easily try TDengine in Mac OS X and Windows environments.
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
Please refer to [Quickly experience TDengine with Docker](https://www.taosdata.com/cn/documentation/getting-started/docker) for detailed instructions
### <a class="anchor" id="package-install"></a>Install from a package
......@@ -25,6 +29,7 @@ Installing TDengine is very simple: from download to successful installation takes only a few seconds
### Install with apt-get
If you use a Debian or Ubuntu system, you can also install from the official repository with apt-get. Set it up as follows:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
......@@ -44,13 +49,14 @@ $ systemctl start taosd
```
Check whether the service is working properly:
```bash
$ systemctl status taosd
```
If the TDengine service is working properly, you can access and experience TDengine through its command line program `taos`.

**Note:**
### Note:
- The systemctl command requires _root_ privileges to run; if you are not the _root_ user, please add sudo before the command.
- To gather better product feedback and improve the product, TDengine collects basic usage information, but you can turn this off by setting the configuration parameter telemetryReporting in the system configuration file taos.cfg to 0.
......@@ -98,7 +104,7 @@ Query OK, 2 row(s) in set (0.003128s)
Besides executing SQL statements, system administrators can also check the system's running status and add or delete user accounts from the TDengine terminal.
**Command line parameters**
### Command line parameters
You can change the behavior of the TDengine terminal by configuring command line parameters. The following are some commonly used ones:
......@@ -115,7 +121,7 @@ Query OK, 2 row(s) in set (0.003128s)
$ taos -h h1.taos.com -s "use db; show tables;"
```
**Run SQL command scripts**
### Run SQL command scripts
The TDengine terminal can run SQL command scripts through the `source` command.
......@@ -123,7 +129,7 @@ The TDengine terminal can run SQL command scripts through the `source` command.
taos> source <filename>;
```
**Shell tips**
### taos shell tips
- Use the up and down arrow keys to browse previously entered commands
- Change a user's password: use the `alter user` command in the shell; the default password is taosdata
......@@ -175,9 +181,10 @@ taos> select avg(current), max(voltage), min(phase) from test.meters where group
```mysql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
## <a class="anchor" id="taosBenchmark"></a> taosBenchmark 详细功能列表
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark (曾命名 taosdemo)命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)
## Client
......
......@@ -5,15 +5,19 @@ TDengine supports writing data through multiple interfaces, including SQL, Prometheus, Telegraf, col
## <a class="anchor" id="sql"></a>Writing via SQL
Applications insert data by executing SQL insert statements through the C/C++, Java, Go, C#, Python, or Node.js connectors; users can also manually enter SQL insert statements through the TAOS shell. For example, the following insert writes one record into table d1001:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
```
TDengine supports writing multiple records at once. For example, the following command writes two records into table d1001:
```mysql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
```
TDengine also supports writing to multiple tables at once. For example, the following command writes two records into d1001 and one record into d1002:
```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
```
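
As a rough illustration of executing these statements through a connector rather than the shell, here is a minimal Python sketch. It assumes the TDengine Python connector (the `taos` package) is installed, a local server is reachable with the default credentials, and tables d1001 and d1002 already exist in a database named db (a hypothetical name):

```python
import taos  # TDengine Python connector

conn = taos.connect(host="127.0.0.1", user="root",
                    password="taosdata", database="db")
cursor = conn.cursor()
# The same multi-table insert as the statement above.
cursor.execute(
    "INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) "
    "(1538548695000, 12.6, 218, 0.33) "
    "d1002 VALUES (1538548696800, 12.3, 221, 0.31)"
)
conn.close()
```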
......@@ -28,6 +32,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
- The timestamp of written data must be greater than the current time minus the configuration parameter keep. If keep is configured as 3650 days, data older than 3650 days cannot be written. The timestamp also cannot be greater than the current time plus the configuration parameter days. If days is 2, data more than 2 days later than the current time cannot be written. A sketch of this acceptance window follows below.
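
A small sketch of that acceptance window in Python, with keep and days as example values; the actual check is performed by the server:

```python
from datetime import datetime, timedelta

def in_write_window(ts_ms, keep=3650, days=2):
    """Return True if a row timestamp (ms) falls within [now - keep, now + days]."""
    now = datetime.now()
    ts = datetime.fromtimestamp(ts_ms / 1000)
    return now - timedelta(days=keep) <= ts <= now + timedelta(days=days)

print(in_write_window(1538548685000))  # False once the record is older than keep
```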
## <a class="anchor" id="schemaless"></a>无模式(Schemaless)写入
**前言**
<br/>在物联网应用中,常会采集比较多的数据项,用于实现智能控制、业务分析、设备监控等。由于应用逻辑的版本升级,或者设备自身的硬件调整等原因,数据采集项就有可能比较频繁地出现变动。为了在这种情况下方便地完成数据记录工作,TDengine 从 2.2.0.0 版本开始,提供调用 Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构。并且在必要时,Schemaless 将自动增加必要的数据列,保证用户写入的数据可以被正确存储。
<br/>目前,TDengine 的 C/C++ Connector 提供支持 Schemaless 的操作接口,详情请参见 [Schemaless 方式写入接口](https://www.taosdata.com/cn/documentation/connector#schemaless)章节。这里对 Schemaless 的数据表达格式进行了描述。
......@@ -39,11 +44,13 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,
For the standard write protocols of InfluxDB and OpenTSDB, please refer to their respective documentation. Below, based on InfluxDB's line protocol, we first introduce the protocol extensions made by TDengine, which allow users to control the (super table) schema in a more fine-grained way.
Schemaless writing uses a single string to express one data row (multiple rows can be written in a batch by passing multiple row strings to the write API at once). The format is as follows:
```json
measurement,tag_set field_set timestamp
```
Where:
* measurement is used as the data table name and is separated from tag_set by a comma.
* tag_set carries the tag data, in the form `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags are separated by commas. It is separated from field_set by a half-width space.
* field_set carries the regular column data, in the form `<field_key>=<field_value>,<field_key>=<field_value>`, again with commas separating multiple regular columns. It is separated from timestamp by a half-width space. A rough sketch of splitting such a line appears after this list.
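
A rough Python sketch of splitting one such line into its three space-separated sections, assuming no escaped spaces or commas:

```python
def split_line(line):
    # measurement,tag_set <space> field_set <space> timestamp
    head, field_set, timestamp = line.split(" ")
    measurement, *tag_set = head.split(",")
    return measurement, tag_set, field_set.split(","), timestamp

print(split_line("st,t1=3,t2=4 c1=3i64,c2=false 1626006833639000000"))
# ('st', ['t1=3', 't2=4'], ['c1=3i64', 'c2=false'], '1626006833639000000')
```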
......@@ -51,6 +58,7 @@ measurement,tag_set field_set timestamp
All data in tag_set is automatically converted to the nchar data type and does not need double quotes (").
<br/>In the schemaless line protocol, each data item in field_set must describe its own data type. Specifically:
* If surrounded by double quotes, it is of type BINARY(32), e.g. `"abc"`.
* If surrounded by double quotes and prefixed with L, it is of type NCHAR(32), e.g. `L"报错信息"`.
* Spaces, equals signs (=), commas (,), and double quotes (") must be escaped with a preceding backslash (\). (All of these refer to half-width ASCII characters.)
......@@ -64,20 +72,26 @@ All data in tag_set is automatically converted to the nchar data type and does not need
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64或i | Bigint | 8 |
* t, T, true, True, TRUE, f, F, false, and False are treated directly as BOOL values.
<br/>For example, the following data row writes, into the data subtable under the super table named st whose t1 tag is "3" (NCHAR), t2 tag is "4" (NCHAR), and t3 tag is "t3" (NCHAR), one row with column c1 as 3 (BIGINT), c2 as false (BOOL), c3 as "passit" (BINARY), c4 as 4 (DOUBLE), and a primary key timestamp of 1626006833639000000.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
Note that using the wrong letter case in a data type suffix, or specifying an incorrect data type for the data, may trigger an error message and cause the write to fail.
### Main processing logic of schemaless writing
Schemaless writing processes row data according to the following principles:
<br/>1. The subtable name is generated with the following rule: first combine the measurement name with the tag keys and values into a string of the form
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
Note that tag_key1, tag_key2 here are not in the original order in which the user entered the tags, but sorted in ascending string order of the tag names. So tag_key1 is not necessarily the first tag entered in the line protocol.
After sorting, the MD5 hash value "md5_val" of the string is computed, and the table name is generated by combining it with a fixed prefix: "t_md5_val". The "t_" prefix is fixed; every table generated automatically through this mapping carries it. A small sketch of this rule appears below.
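
A sketch of this naming rule in Python; the exact byte encoding that the server hashes is an assumption here:

```python
import hashlib

def child_table_name(measurement, tags):
    # Sort tags by key in ascending string order, join them with the
    # measurement, hash with MD5, and add the fixed "t_" prefix.
    parts = [measurement] + [f"{k}={v}" for k, v in sorted(tags.items())]
    md5_val = hashlib.md5(",".join(parts).encode("utf-8")).hexdigest()
    return "t_" + md5_val

print(child_table_name("st", {"t2": "4", "t1": "3", "t3": "t3"}))
```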
<br/>2. If the super table obtained by parsing the line protocol does not exist, it is created.
......@@ -120,7 +134,9 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
This data row maps to a super table st with three nchar tags, t1, t2, and t3, and five data columns: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), and c4 (bigint). It maps to the following SQL statement:
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
......@@ -129,22 +145,28 @@ create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) ta
<br/>This section explains how different row-writing cases affect the data schema.
When the line protocol writes a field with an explicitly identified type, later changing that field's type definition results in a clear schema error, i.e. the write API reports an error. As shown below,
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
The type mapping of the first row defines column c4 as Double, but the second row declares the column as BigInt through a numeric suffix, which triggers a schemaless-write parsing error.
If an earlier line declares a data column as binary and a later line requires a longer binary length, this triggers a change of the super table schema.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
Parsing the first line declares column c5 as a binary(4) field; the second write still resolves c5 as a binary column, but with width 6, so the binary width must be increased to accommodate the new string.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
Compared with the first line, the second line adds a column c6 of type binary(6), so a column c6 of type binary(6) is added automatically.

**Write integrity**
......@@ -156,98 +178,35 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
**Future upgrade plans**
<br/>The current version only provides a C API; APIs for other high-level languages such as Java/Go/Python/C# will follow. In addition, in TDengine v2.3 and later, you can also write schemaless data directly via REST through taosAdapter.
## <a class="anchor" id="prometheus"></a>Prometheus 直接写入(通过 taosAdapter)
## <a class="anchor" id="prometheus"></a>Prometheus 直接写入
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。
只需要将 remote_read 和 remote_write url 指向 taosAdapter 对应的 url 同时设置 Basic 验证即可使用。
[Prometheus](https://www.prometheus.io/), a graduated Cloud Native Computing Foundation project, is widely used for performance monitoring and K8S monitoring. TDengine provides a small tool, [Bailongma](https://github.com/taosdata/Bailongma): with simple Prometheus configuration and no code, data collected by Prometheus can be written directly into TDengine, and the database and related tables are created automatically according to rules. The blog post [Quickly build a DevOps monitoring demo with Docker containers](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine.
* remote_read url : `http://host_to_taosAdapter:port(default 6041)/prometheus/v1/remote_read/:db`
* remote_write url : `http://host_to_taosAdapter:port(default 6041)/prometheus/v1/remote_write/:db`
### Build blm_prometheus from source
Basic authentication:
Users need to download the [Bailongma](https://github.com/taosdata/Bailongma) source code from GitHub and build the executable with the Go compiler. Before building, prepare the following:
- A server running a Linux operating system
- Golang installed, version 1.14 or above
- A matching TDengine version. Because the TDengine client dynamic library is used, a TDengine installation of the same version as the server is required; for example, if the server version is TDengine 2.0.0, install it on the Linux server where Bailongma runs (which can be the same server as TDengine, or a different one)
The Bailongma project contains a folder blm_prometheus holding the Prometheus write API program. The build process is as follows:
```bash
cd blm_prometheus
go build
```
* username: TDengine connection username
* password: TDengine connection password
If everything goes well, a blm_prometheus executable is generated in the corresponding directory.
### Install Prometheus
Download and install from the Prometheus official site; see the [download page](https://prometheus.io/download/)
### Configure Prometheus
Referring to the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/), add the following to the <remote_write> section of the Prometheus configuration file:
```
- url: "URL provided by the bailongma API service" (see the blm_prometheus startup example section below)
```
An example prometheus.yml follows:
After starting Prometheus, you can confirm whether data is written successfully by querying with the taos client.
### Start the blm_prometheus program
The blm_prometheus program has the following options, which can be set at startup to configure blm_prometheus.
```bash
--tdengine-name
If TDengine is installed on a server with a domain name, TDengine can also be reached by configuring its domain name. In a K8S environment, this can be set to the service name under which TDengine runs.
--batch-size
blm_prometheus assembles received Prometheus data into TDengine write requests; this parameter controls how many records one write request sent to TDengine carries.
--dbname
Sets the name of the database created in TDengine; blm_prometheus automatically creates a database named dbname in TDengine. The default is prometheus.
--dbuser
Sets the username for accessing TDengine. The default is 'root'.
--dbpassword
Sets the password for accessing TDengine. The default is 'taosdata'.
--port
The port on which blm_prometheus serves Prometheus.
```
### Startup example
Start a blm_prometheus API service with the following command
```bash
./blm_prometheus -port 8088
```
Assuming the IP address of the server where blm_prometheus runs is "10.1.2.3", add the following url to the <remote_write> section of the Prometheus configuration file
```yaml
remote_write:
- url: "http://10.1.2.3:8088/receive"
- url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
  basic_auth:
    username: root
    password: taosdata
```
### Query data written by Prometheus
The data produced by Prometheus has the following format:
```json
{
Timestamp: 1576466279341,
Value: 37.000000,
apiserver_request_latencies_bucket {
component="apiserver",
instance="192.168.99.116:8443",
job="kubernetes-apiservers",
le="125000",
resource="persistentvolumes",
scope="cluster",
verb="LIST",
version="v1"
}
}
```
Here, apiserver_request_latencies_bucket is the name of the time series collected by Prometheus, and the labels of the time series follow in {}. blm_prometheus automatically creates a super table in TDengine named after the time series, converts the labels in {} into TDengine tag values, and uses Timestamp as the timestamp and value as the value of the time series. You can therefore check whether this data was written successfully with the following statements in the TDengine client.
```mysql
use prometheus;
select * from apiserver_request_latencies_bucket;
```

```yaml
remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true
```
## <a class="anchor" id="telegraf"></a> Telegraf 直接写入(通过 taosAdapter)
......@@ -257,6 +216,7 @@ select * from apiserver_request_latencies_bucket;
Newer versions of TDengine (2.3.0.0+) include a standalone taosAdapter program responsible for receiving data writes from multiple applications, including Telegraf.
To configure it, add the following text to /etc/telegraf/telegraf.conf, filling in the database name where you want Telegraf data stored in TDengine, and the actual TDengine values for server/cluster host, username, and password:
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -269,9 +229,11 @@ TDengine 新版本(2.3.0.0+)包含一个 taosAdapter 独立程序,负责
```
Then restart telegraf:
```
sudo systemctl start telegraf
```
You can then query the data written by Telegraf in the metrics database in TDengine.
For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the related documentation.
......@@ -283,16 +245,20 @@ For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the related
Newer versions of TDengine (2.3.0.0+) include a standalone taosAdapter program responsible for receiving data writes from multiple applications, including collectd.
Add the following content to the /etc/collectd/collectd.conf file, filling in the actual host and port values configured for TDengine and taosAdapter:
```
LoadPlugin network
<Plugin network>
Server "<TDengine cluster/server host>" "<port for collectd>"
</Plugin>
```
Restart collectd
```
sudo systemctl start collectd
```
For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the related documentation.
## <a class="anchor" id="statsd"></a> Direct writing from StatsD (via taosAdapter)
......@@ -303,12 +269,14 @@ For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the related
Newer versions of TDengine (2.3.0.0+) include a standalone taosAdapter program responsible for receiving data writes from multiple applications, including StatsD.
Add the following content to the config.js file and then start StatsD, filling in the actual host and port values configured for TDengine and taosAdapter:
```
add "./backends/repeater" to the backends section
add { host:'<TDengine server/cluster host>', port: <port for StatsD>} to the repeater section
```
Example configuration file:
```
{
port: 8125
......@@ -323,9 +291,10 @@ icinga2 can collect monitoring and performance data and write it to OpenTSDB, and taosAdapter can
## <a class="anchor" id="icinga2"></a> Direct writing from icinga2 (via taosAdapter)
* See the link https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer to enable opentsdb-writer
* See the link `https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer` to enable opentsdb-writer
* Enable the taosAdapter configuration item opentsdb_telnet.enable
* Modify the configuration file /etc/icinga2/features-enabled/opentsdb.conf
```
object OpenTsdbWriter "opentsdb" {
host = "host to taosAdapter"
......@@ -344,11 +313,15 @@ TCollector is a client-side process that gathers data from local collectors and sends it to OpenTSDB
For taosAdapter configuration parameters, please refer to the output of the taosadapter --help command and the related documentation.
## <a class="anchor" id="taosadapter2-telegraf"></a> 使用 Bailongma 2.0 接入 Telegraf 数据写入
## <a class="anchor" id="bailongma2-prometheus"></a> 使用 Bailongma 2.0 接入 Prometheus 数据写入
**注意:**
TDengine 新版本(2.3.0.0+)提供新版本 Bailongma ,命名为 taosAdapter ,提供更简便的 Telegraf 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
TDengine 新版本(2.4.0.4+)包含 taosAdapter 组件,提供更简便的 Prometheus 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
## <a class="anchor" id="bailongma2-telegraf"></a> 使用 Bailongma 2.0 接入 Telegraf 数据写入
**注意:**
TDengine 新版本(2.3.0.0+)包含 taosAdapter 组件,提供更简便的 Telegraf 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
## <a class="anchor" id="emq"></a>EMQ Broker 直接写入
......
......@@ -57,10 +57,10 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
| taos-jdbcdriver version | TDengine 2.0.x.x | TDengine 2.2.x.x | TDengine 2.4.x.x | JDK version |
|---------------------| ----------------------| ----------------------| ----------------------| -------- |
| 2.0.37 | X | X | 2.4.0.4 | 1.8.x |
| 2.0.36 | X | 2.2.2.11 and above | 2.4.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.35 | X | 2.2.2.11 and above | 2.3.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 2.2.0.0 and above | 2.4.0.0 - 2.4.0.3 | 1.8.x |
| 2.0.37 | X | X | 2.4.0.6 and above | 1.8.x |
| 2.0.36 | X | 2.2.2.11 and above | 2.4.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.35 | X | 2.2.2.11 and above | 2.3.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 2.2.0.0 and above | 2.4.0.0 - 2.4.0.5 | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 - 2.1.7.7 | X | X | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.1 | X | X | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.4 | X | X | 1.8.x |
......
......@@ -79,7 +79,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [Windows Client](https://www.taosdata.com/blog/2019/07/26/514.html): compile your own Windows client, which is required by various connectors on the Windows environment
- [Rust Connector](/connector/rust): A taosc/RESTful API based TDengine client for Rust
## [Components and Tools]
## Components and Tools
* [taosAdapter](/tools/adapter): a bridge/adapter between TDengine cluster and applications.
* [TDinsight](/tools/insight): monitoring TDengine cluster with Grafana.
......
# Quickly experience TDengine through Docker
# Quickly experience TDengine with Docker
While it is not recommended to deploy TDengine services via Docker in a production environment, Docker tools do a good job of shielding the environmental differences in the underlying operating system and are well suited for use in development testing or first-time experience with the toolset for installing and running TDengine. In particular, Docker makes it relatively easy to try TDengine on Mac OSX and Windows systems without having to install a virtual machine or rent an additional Linux server. In addition, starting from version 2.0.14.0, TDengine provides images that support both X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run docker, such as NAS, Raspberry Pi, and embedded development boards, can also easily experience TDengine based on this document.
......@@ -20,7 +20,7 @@ Docker version 20.10.3, build 48d30b5
### running TDengine server inside Docker
```bash
$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
```
......@@ -35,7 +35,7 @@ This command starts a docker container with TDengine server running and maps the
Further, you can also use the `docker run` command to start the docker container running TDengine server, use the `--name` command line parameter to name the container tdengine, use `--hostname` to specify the hostname as tdengine-server, and use `-v` to mount local directories into the container to synchronize data between the host and the container and prevent data loss after the container is deleted.
```
$ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6041:6030-6041 -p 6030-6041:6030-6041/udp tdengine/tdengine
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6041:6030-6041 -p 6030-6041:6030-6041/udp tdengine/tdengine
```
- **--name tdengine**: set the container name; we can access the corresponding container by its name
......@@ -45,7 +45,12 @@ $ docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:
### Use the `docker ps` command to verify that the container is running correctly
```bash
$ docker ps
docker ps
```
The output could be:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
......@@ -61,7 +66,7 @@ c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ·
```bash
$ docker exec -it tdengine /bin/bash
root@tdengine-server:~/TDengine-server-2.4.0.4#
root@tdengine-server:~/TDengine-server-2.4.0.4#
```
- **docker exec**: Enter the container by `docker exec` command, if exited, the container will not stop.
......@@ -78,7 +83,7 @@ root@tdengine-server:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
taos>
```
The TDengine shell successfully connects to the server and prints out a welcome message and version information. If it fails, an error message is printed.
......@@ -101,7 +106,12 @@ taos>
You can also access the TDengine server inside the Docker container using the `curl` command from the host side through the RESTful port.
```
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
The output could be:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
......@@ -109,7 +119,6 @@ This command accesses the TDengine server through the RESTful interface, which c
TDengine RESTful interface details can be found in the [official documentation](https://www.taosdata.com/en/documentation/connector#restful).
### Running TDengine server and taosAdapter with a Docker container
Docker containers of TDengine version 2.4.0.0 and later include a component named `taosAdapter`, which supports writing and querying data on the TDengine server through the RESTful interface and provides data ingestion interfaces compatible with InfluxDB/OpenTSDB. This allows InfluxDB/OpenTSDB applications to be migrated seamlessly to TDengine.
......@@ -121,54 +130,25 @@ Running TDengine version 2.4.0.4 image with docker.
Start taosAdapter and taosd by default:
```
$ docker run -d --name tdengine-taosa -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
docker run -d --name tdengine-taosa -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```
Verify that the RESTful interface provided by taosAdapter is working, using the `curl` command.

```
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
```

The output could be:

```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
### Application example: write data to TDengine server in Docker container using taosBenchmark on the host
1, execute `taosBenchmark` (was named taosdemo) in the host command line interface to write data to the TDengine server in the Docker container
1. execute `taosBenchmark` (was named taosdemo) in the host command line interface to write data to the TDengine server in the Docker container
```bash
$ taosBenchmark
......@@ -177,7 +157,7 @@ taosBenchmark is simulating data generated by power equipments monitoring...
host: 127.0.0.1:6030
user: root
password: taosdata
configDir:
configDir:
resultFile: ./output.txt
thread num of insert data: 10
thread num of create table: 10
......@@ -206,13 +186,13 @@ database[0]:
maxSqlLen: 1048576
timeStampStep: 1
startTimestamp: 2017-07-14 10:40:00.000
sampleFormat:
sampleFile:
tagsFile:
sampleFormat:
sampleFile:
tagsFile:
columnCount: 3
column[0]:FLOAT column[1]:INT column[2]:FLOAT
column[0]:FLOAT column[1]:INT column[2]:FLOAT
tagCount: 2
tag[0]:INT tag[1]:BINARY(16)
tag[0]:INT tag[1]:BINARY(16)
Press enter key to continue or Ctrl-C to stop
```
......@@ -221,7 +201,7 @@ After enter, this command will automatically create a super table `meters` under
It takes about a few minutes to execute this command and ends up inserting a total of 100 million records.
3, Go to the TDengine terminal and view the data generated by taosBenchmark.
2. Go to the TDengine terminal and view the data generated by taosBenchmark.
- **Go to the terminal interface.**
......@@ -231,7 +211,7 @@ $ root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
taos>
```
- **View the database.**
......@@ -240,7 +220,7 @@ taos>
$ taos> show databases;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
```
......@@ -292,13 +272,49 @@ Query OK, 1 row(s) in set (0.003490s)
```
### Application Example: use data collection agent to write data into TDengine
taosAdapter supports multiple data collection agents (e.g. Telegraf, StatsD, collectd); here we only demonstrate writing data with simulated StatsD, executing the following command from the host side.
```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
Then you can use the taos shell to query the database statsd that taosAdapter created automatically and the contents of the super table foo.
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
You can see that the simulated data has been written to TDengine.
## Stop the TDengine service that is running in Docker
```bash
$ docker stop tdengine
tdengine
docker stop tdengine
```
- **docker stop**: Stop the specified running Docker container with docker stop.
- **tdengine**: The name of the container.
......@@ -12,7 +12,11 @@ Please visit our [TDengine github page](https://github.com/taosdata/TDengine) fo
For the time being, it is not recommended to use Docker to deploy the client or server side of TDengine in production environments, but it is convenient to use Docker to deploy in development environments or when trying it for the first time. In particular, with Docker, it is easy to try TDengine in Mac OS X and Windows environments.
Please refer to the detailed operation in [Quickly experience TDengine through Docker](https://www.taosdata.com/en/documentation/getting-started/docker).
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
Please refer to [Quickly experience TDengine with Docker](https://www.taosdata.com/en/documentation/getting-started/docker) for the details.
### <a class="anchor" id="package-install"></a>Install from Package
......@@ -54,8 +58,8 @@ If the service is running successfully, you can play around through TDengine she
- The `systemctl` command needs the **root** privilege. Use **sudo** if you are not the **root** user.
- To get better product feedback and improve our solution, TDengine collects basic usage information; you can turn this off by setting the configuration parameter **telemetryReporting** in the system configuration file taos.cfg to 0.
- TDengine uses the FQDN (usually the hostname) as the node ID. To ensure normal operation, you need to set the hostname on the server running taosd, and configure DNS or the hosts file on the machine running the client application so that the FQDN can be resolved (see the illustrative /etc/hosts entry below).
- TDengine supports installation on Linux systems with[ systemd ](https://en.wikipedia.org/wiki/Systemd)as the process service management, and uses `which systemctl` command to detect whether `systemd` packages exist in the system:
- TDengine supports installation on Linux systems with [systemd](https://en.wikipedia.org/wiki/Systemd) as the process service management, and uses `which systemctl` command to detect whether `systemd` packages exist in the system:
```bash
$ which systemctl
```
......@@ -140,7 +144,7 @@ taos> source <filename>;
After starting the TDengine server, you can execute the command `taosBenchmark` (formerly named `taosdemo`; please install the taosTools package if you use TDengine 2.4 or a later version) in the Linux terminal.
```bash
```bash
$ taosBenchmark
```
......@@ -213,7 +217,7 @@ List of platforms supported by TDengine server
| Allwinner ARM64 | | | ○ | | | |
| Actions ARM64 | | | ○ | | | |
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
List of platforms supported by TDengine client and connectors
......
......@@ -138,7 +138,7 @@ TDengine suggests using data collection point ID as the table name (like D1001 i
### STable: A Collection of Data Points of the Same Type
The design of one table for each data collection point will require a huge number of tables, which is difficult to manage. Moreover, applications often need to take aggregation operations between data collection points, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the [STable(Super Table)](https://www.taosdata.com/en/documentation/super-table) concept is introduced by TDengine.
The design of one table for each data collection point will require a huge number of tables, which is difficult to manage. Moreover, applications often need to take aggregation operations between data collection points, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the STable(Super Table) concept is introduced by TDengine.
STable is an abstract set for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable (a set of data collection points of a specific type), in addition to defining the table structure of the collected metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
......
......@@ -32,6 +32,7 @@ For the SQL INSERT Grammar, please refer to [Taos SQL insert](https://www.taosd
- The timestamp of written data must be greater than the current time minus the value of the configuration parameter keep. If keep is configured as 3650 days, data older than 3650 days cannot be written. The timestamp of written data also cannot be greater than the current time plus the configuration parameter days. If days is configured as 2, data more than 2 days later than the current time cannot be written.
## <a class="anchor" id="schemaless"></a> Data Writing via Schemaless
**Introduction**
<br/> In many IoT applications, data collection is often used in intelligent control, business analysis, device monitoring, etc. With fast application upgrades and iterations, or hardware adjustments, data collection metrics can change rapidly over time. To support such use cases, from version 2.2.0.0 TDengine supports writing data via Schemaless. With Schemaless, pre-creating tables before inserting data is no longer needed. Tables, data columns and tags can be created automatically. Schemaless can also add additional data columns to tables if necessary, to make sure data can be properly stored into TDengine.
......@@ -44,6 +45,7 @@ For the SQL INSERT Grammar, please refer to [Taos SQL insert](https://www.taosd
For the InfluxDB and OpenTSDB data writing protocol formats, users can refer to the corresponding official documentation for details. The following introduces TDengine's protocol extensions based on InfluxDB's Line Protocol, with examples, allowing users to use Schemaless with more precision.
Schemaless uses one line of string literal to represent one data record (users can also pass multiple lines to the Schemaless API for batch insertion). The format is as follows:
```json
measurement,tag_set field_set timestamp
```
......@@ -55,6 +57,7 @@ measurement,tag_set field_set timestamp
All tag values in tag_set are automatically converted and stored as the NCHAR data type in TDengine, and do not need to be surrounded by double quotes (").
<br/> In the Schemaless Line Protocol, the data format in field_set needs to be self-descriptive in order to convert data to the corresponding TDengine data types. For example:
* A field value surrounded by double quotes indicates BINARY(32) data. For example, `"abc"`.
* A field value surrounded by double quotes with an L prefix indicates NCHAR(32) data. For example, `L"报错信息"`.
* Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\).
......@@ -68,6 +71,7 @@ All tag values in tag_set are automatically converted and stored as NCHAR data t
| 4 | i16 | SMALLINT | 2 |
| 5 | i32 | INT | 4 |
| 6 | i64 / i | BIGINT | 8 |
* t, T, true, True, TRUE, f, F, false, False represent the BOOLEAN type.
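For instance, a hypothetical line combining the integer suffixes above with a BOOLEAN field might look like this:
```json
st,t1=3 c1=9i16,c2=42i32,c3=7i64,c4=true 1626006833639000000
```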
### Schemaless processing logic
......@@ -75,9 +79,11 @@ All tag values in tag_set are automatically converted and stored as NCHAR data t
Schemaless protocol parsing follows these rules:
<br/>1. For child table name generation, first create the following string by concatenating the measurement and the tag key/value strings:
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
tag_key1 and tag_key2 do not follow the original order of user input, but are sorted by tag name.
After the MD5 value "md5_val" is calculated from the above string, the prefix "t_" is prepended to "md5_val" to form the child table name (see the sketch after this rule list).
<br/>2. If super table does not exist, a new super table will be created.
......@@ -89,7 +95,7 @@ After MD5 value "md5_val" calculated using the above string, prefix "t_" is prep
<br/>8. If any error occurs during processing, an error code will be returned.
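As a minimal sketch of the child-table naming rule in step 1 above (the input string and shell tooling are illustrative assumptions; TDengine's exact string normalization may differ):
```bash
# Placeholder input; tag keys already sorted by name
s='st,t1=3,t2=4,t3=t3'
# Prepend "t_" to the MD5 hex digest of the concatenated string
printf 't_%s\n' "$(printf '%s' "$s" | md5sum | awk '{print $1}')"
```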
**Note**
<br/>Schemaless will follow TDengine data structure limitations. For example, each table row cannot exceed 16KB. For detailed TDengine limitations please refer to (https://www.taosdata.com/en/documentation/taos-sql#limitation).
<br/>Schemaless will follow TDengine data structure limitations. For example, each table row cannot exceed 16KB. For detailed TDengine limitations please refer to `https://www.taosdata.com/en/documentation/taos-sql#limitation`.
**Timestamp precisions**
<br/>Following protocols are supported in Schemaless:
......@@ -120,10 +126,13 @@When SML_TELNET_PROTOCOL or SML_JSON_PROTOCOL is used, timestamp precision is dete
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
The above line is mapped to a super table named "st" with 3 NCHAR-type tags ("t1", "t2", "t3") and 5 columns: ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). This is identical to creating a super table with the following SQL statement:
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
**Schemaless data alteration rules**
<br/>This section describes several data alteration scenarios:
......@@ -133,19 +142,23 @@When a column in one line has a certain type, and following lines attempt to change
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
In the first line of data, the c4 column type is declared as DOUBLE with no suffix. However, the second line declares the column type to be BIGINT with the suffix "i". A Schemaless parsing error will occur.
When a column is declared as BINARY type, but a follow-up line requires a longer BINARY length for this column, the max length of this column will be extended:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
In the first line, column c5 stores the 4-character string "pass" as BINARY(4); in the second line, c5 requires 2 more characters to store the binary string "passit", so the max length of column c5 will be extended from BINARY(4) to BINARY(6) to accommodate more characters.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
In the above example, the second line has one more column, c6, with value "passit" compared to the first line. A new column c6 will be added with type BINARY(6).
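The widened schema can then be inspected from the shell (a sketch; `db_name` is a placeholder for the database that received the schemaless writes):
```bash
# c5 should now show BINARY(6); c6 is the newly added BINARY(6) column
taos -s "use db_name; describe st;"
```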
**Data integrity**
......@@ -157,123 +170,46 @@ In above example second line has one more column c6 with value "passit" compared
**Future enhancement**
<br/> Currently TDengine only provides C language API support for Schemaless. In future versions, APIs/connectors for more languages will be supported, e.g., Java/Go/Python/C#. From TDengine v2.3 and later, users can also use taosAdapter to write data via Schemaless through the RESTful interface.
## <a class="anchor" id="prometheus"></a> Data Writing via Prometheus
As a graduated project of the Cloud Native Computing Foundation, [Prometheus](https://www.prometheus.io/) is widely used in the field of performance monitoring and K8S performance monitoring. TDengine provides a simple tool, [Bailongma](https://github.com/taosdata/Bailongma), which only needs a simple configuration in Prometheus, without any code, to write the data collected by Prometheus directly into TDengine and automatically create databases and related table entries in TDengine according to rules. The blog post [Use Docker Container to Quickly Build a Devops Monitoring Demo](https://www.taosdata.com/blog/2020/02/03/1189.html) is an example of using Bailongma to write Prometheus and Telegraf data into TDengine.
### Compile blm_prometheus From Source
Users need to download the source code of [Bailongma](https://github.com/taosdata/Bailongma) from GitHub, then compile it into an executable using the Golang compiler. Before you start compiling, you need to prepare:
- A server running Linux OS
- Golang version 1.10 and higher installed
- Since the TDengine client dynamic link library is used, it is necessary to install the same version of TDengine as the server side. For example, if the server version is TDengine 2.0.0, ensure the same version is installed on the Linux server where Bailongma is located (it can be on the same server as TDengine, or on a different one)
The Bailongma project has a folder, blm_prometheus, which holds the Prometheus writing API. The compilation process is as follows:
```bash
cd blm_prometheus
go build
```
If everything goes well, an executable of blm_prometheus will be generated in the corresponding directory.
### Install Prometheus
Download and install Prometheus following the instructions on its official website. [Download Address](https://prometheus.io/download/)
### Configure Prometheus
Read the Prometheus [configuration document](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) and add the following configuration to the appropriate section of the Prometheus configuration file:
- url: the URL provided by the Bailongma API service; refer to the blm_prometheus startup example section below
After Prometheus is launched, you can check whether data is written successfully by querying with the taos client.
### Launch blm_prometheus
blm_prometheus has the following options, which you can configure when launching it.
```sh
--tdengine-name
If TDengine is installed on a server with a domain name, you can also access TDengine by configuring its domain name. In a K8S environment, it can be configured as the service name under which TDengine runs
--batch-size
blm_prometheus assembles the received prometheus data into a TDengine writing request. This parameter controls the number of data pieces carried in a writing request sent to TDengine at a time.
--dbname
Set a name for the database created in TDengine; blm_prometheus will automatically create a database with this name in TDengine. The default value is prometheus.
## <a class="anchor" id="prometheus"></a> Data Writing via Prometheus and taosAdapter
--dbuser
remote_read and remote_write are Prometheus's cluster schemes for data read-write separation.
Simply point the remote_read and remote_write URLs to the corresponding taosAdapter URLs, using Basic authentication.
Set the user name to access TDengine; the default value is 'root'
* Remote_read url: `http://host_to_taosadapter:port (default 6041) /prometheus/v1/remote_read/:db`
* Remote_write url: `http://host_to_taosadapter:port (default 6041) /prometheus/v1/remote_write/:db`
--dbpassword
Basic verification:
Set the password to access TDengine; the default value is 'taosdata'
* Username: TDengine connection username
* Password: TDengine connection password
--port
The port number blm_prometheus uses to serve Prometheus.
```
### Example
Launch an API service for blm_prometheus with the following command:
```bash
./blm_prometheus -port 8088
```
Assuming that the IP address of the server where blm_prometheus is located is "10.1.2.3", the URL should be added to the Prometheus configuration file as follows.
An example prometheus.yml:
```yaml
remote_write:
- url: "http://10.1.2.3:8088/receive"
```
### Query data written by Prometheus
The format of the data generated by Prometheus is as follows:
```json
{
Timestamp: 1576466279341,
Value: 37.000000,
apiserver_request_latencies_bucket {
component="apiserver",
instance="192.168.99.116:8443",
job="kubernetes-apiservers",
le="125000",
resource="persistentvolumes",
scope="cluster",
verb="LIST",
version="v1"
}
}
```
Here apiserver_request_latencies_bucket is the name of the time-series data collected by Prometheus, and the tags of the time-series data are in the {} that follows. blm_prometheus automatically creates a STable in TDengine named after the time-series data, converts the tags in {} into TDengine tag values, and uses Timestamp as the timestamp and value as the value of the time series. Therefore, in the TDengine client, you can check whether this data was successfully written with the following statements.
```mysql
use prometheus;
select * from apiserver_request_latencies_bucket;
- url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
basic_auth:
username: root
password: taosdata
remote_read:
- url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
basic_auth:
username: root
password: taosdata
remote_timeout: 10s
read_recent: true
```
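Once Prometheus is remote-writing, you can confirm the data path from the taos shell (a sketch; `prometheus_data` is the database name used in the URLs above):
```bash
# taosAdapter should have auto-created the database and super tables
taos -s "use prometheus_data; show stables;"
```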
## <a class="anchor" id="telegraf"></a> Data Writing via Telegraf and taosAdapter
Please refer to [Official document](https://portal.influxdata.com/downloads/) for Telegraf installation.
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data inserted from Telegraf.
Configuration:
Please add the following lines to /etc/telegraf/telegraf.conf. Replace 'database name' with the name of the TDengine database in which you want to store Telegraf data, and fill in the TDengine server/cluster host, username and password fields.
```
[[outputs.http]]
url = "http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>"
......@@ -286,9 +222,11 @@ Please add following words in /etc/telegraf/telegraf.conf. Fill 'database name'
```
Then restart telegraf:
```
sudo systemctl start telegraf
```
Now you can query the metrics data of Telegraf from TDengine.
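For a quick sanity check (a sketch; the database is whichever name you set in telegraf.conf):
```bash
# The Telegraf database should appear once metrics start flowing
taos -s "show databases;"
```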
Please find taosAdapter configuration and usage from `taosadapter --help` output.
......@@ -301,16 +239,20 @@ TDengine version 2.3.0.0+ includes a stand-alone application taosAdapter in char
Configuration:
Please add the following lines to /etc/collectd/collectd.conf. Fill in 'host' and 'port' with the values that TDengine and taosAdapter are using.
```
LoadPlugin network
<Plugin network>
Server "<TDengine cluster/server host>" "<port for collectd>"
</Plugin>
```
Then restart collectd:
```
sudo systemctl start collectd
```
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="statsd"></a> Data Writing via StatsD and taosAdapter
......@@ -320,12 +262,14 @@ Please refer to [official document](https://github.com/statsd/statsd) for StatsD
TDengine version 2.3.0.0+ includes a stand-alone application, taosAdapter, in charge of receiving data inserted from StatsD.
Please add the following to the config.js file. Fill in 'host' and 'port' with the values that TDengine and taosAdapter are using.
```
add "./backends/repeater" to backends section.
add { host:'<TDengine server/cluster host>', port: <port for StatsD>} to repeater section.
```
Example file:
```
{
port: 8125
......@@ -338,9 +282,10 @@ port: 8125
Use icinga2 to collect check result metrics and performance data
* Follow the doc to enable opentsdb-writer https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer
* Follow the doc to enable opentsdb-writer `https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer`
* Enable taosAdapter configuration opentsdb_telnet.enable
* Modify the configuration file /etc/icinga2/features-enabled/opentsdb.conf
```
object OpenTsdbWriter "opentsdb" {
host = "host to taosAdapter"
......@@ -359,11 +304,15 @@ TCollector is a client-side process that gathers data from local collectors and
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="taosadapter2-telegraf"></a> Insert data via Bailongma 2.0 and Telegraf
## <a class="anchor" id="bailongma2-prometheus"></a> Insert Prometheus data via Bailongma 2.0
**Notice:**
TDengine 2.3.0.0+ provides taosAdapter to support Telegraf data writing. Bailongma v2 will be deprecated and no longer maintained.
TDengine 2.4.0.4+ provides taosAdapter to support Prometheus data writing. Bailongma v2 will be deprecated and no longer maintained.
## <a class="anchor" id="bailongma2-telegraf"></a> Insert data via Bailongma 2.0 and Telegraf
**Notice:**
TDengine 2.3.0.0+ provides taosAdapter to support Telegraf data writing. Bailongma v2 will be deprecated and no longer maintained.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
......
......@@ -21,7 +21,7 @@ Note: ● stands for that has been verified by official tests; ○ stands for th
Note:
- To access the TDengine database through connectors (except RESTful) in the system without TDengine server software, it is necessary to install the corresponding version of the client installation package to make the application driver (the file name is [libtaos.so](http://libtaos.so/) in Linux system and taos.dll in Windows system) installed in the system, otherwise, the error that the corresponding library file cannot be found will occur.
- To access the TDengine database through connectors (except RESTful) on a system without the TDengine server software, it is necessary to install the corresponding version of the client installation package so that the application driver (the file is libtaos.so on Linux and taos.dll on Windows) is installed on the system; otherwise, an error that the corresponding library file cannot be found will occur.
- All APIs that execute SQL statements, such as `taos_query`, `taos_query_a`, `taos_subscribe` in the C/C++ Connector, and the APIs corresponding to them in other languages, can only execute one SQL statement at a time. If the actual parameters contain multiple statements, their behavior is undefined.
- Users upgrading to TDengine 2.0.8.0 must update the JDBC connection: taos-jdbcdriver must be upgraded to 2.0.12 or above.
- No matter which programming-language connector is selected, for TDengine version 2.0 and above it is recommended that each thread of a database application establish an independent connection, or establish a thread-based connection pool, to avoid interference between threads over the "USE statement" state variable of a connection (query and write operations on a connection are, however, thread-safe).
......
......@@ -27,7 +27,7 @@ Please refer to the [video tutorial](https://www.taosdata.com/blog/2020/11/11/19
1. Execute command `hostname -f` on each physical node, and check and confirm that the hostnames of all nodes are different (the node where the application driver is located does not need to do this check).
2. Execute `ping host` on each physical node, where host is the hostname of another physical node, to check whether the other physical nodes can be reached; if not, check the network settings, the /etc/hosts file (the default path on Windows is C:\Windows\system32\drivers\etc\hosts), or the DNS configuration. If ping fails, the cluster cannot be built.
3. From the physical node where the application runs, ping the data node where taosd runs. If the ping fails, the application cannot connect to taosd. Please check the DNS settings or hosts file of the physical node where the application is located;
4. The End Point of each data node is the output hostname plus the port number, for example, [h1.taosdata.com](http://h1.taosdata.com/): 6030
4. The End Point of each data node is the output hostname plus the port number, for example, `h1.taosdata.com:6030`
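The checks in steps 1-4 can be run roughly as follows (a sketch; `h2.taosdata.com` stands in for another node's FQDN):
```bash
hostname -f                   # step 1: every node must report a unique FQDN
ping -c 3 h2.taosdata.com     # step 2: each node must reach the others by FQDN
```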
**Step 5:** Modify the TDengine configuration file (the file /etc/taos/taos.cfg needs to be modified on all nodes). Assume that the End Point of the first data node to be started is `h1.taosdata.com:6030`, and its parameters related to cluster configuration are as follows:
......
......@@ -92,8 +92,8 @@ Only some important configuration parameters are listed below. For more paramete
- fqdn: FQDN of the data node, which defaults to the first hostname configured by the operating system. If you want to access via IP address directly, you can set it to the IP address of the node.
- serverPort: the port number of the external service after taosd started, the default value is 6030.
- httpPort: the port number used by the RESTful service, to which all HTTP query/write requests (TCP) are sent. The default value is 6041. Note that versions 2.4 and later use the stand-alone software taosAdapter to provide the RESTful interface.
- dataDir: the data file directory to which all data files will be written. [Default:/var/lib/taos](http://default/var/lib/taos).
- logDir: the log file directory to which the running log files of the client and server will be written. [Default:/var/log/taos](http://default/var/log/taos).
- dataDir: the data file directory to which all data files will be written. `Default:/var/lib/taos`.
- logDir: the log file directory to which the running log files of the client and server will be written. `Default:/var/log/taos`.
- arbitrator: the end point of the arbitrator in the system; the default value is null.
- role: optional role of the dnode. 0-any: it can be used as an mnode and can allocate vnodes; 1-mgmt: it can only be an mnode and cannot allocate vnodes; 2-dnode: it cannot be an mnode and can only allocate vnodes.
- debugFlag: the log level switch. 131 (output error and warning logs), 135 (output error, warning, and debug logs), 143 (output error, warning, debug, and trace logs). Default value: 131 or 135 (different modules have different default values).
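A hypothetical taos.cfg fragment combining the parameters above (all values are illustrative placeholders):
```bash
# /etc/taos/taos.cfg (illustrative values only)
fqdn        h1.taosdata.com
serverPort  6030
dataDir     /var/lib/taos
logDir      /var/log/taos
debugFlag   131
```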
......@@ -447,7 +447,7 @@ Some CLI options are needed to use the script:
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
Follow the usage of the script and then restart grafana-server service, here we go <http://localhost:3000/d/tdinsight>.
Follow the usage of the script, then restart the grafana-server service and open `http://localhost:3000/d/tdinsight`.
Refer to the [TDinsight](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsight.md) README for more scenarios and limitations of the script, and for descriptions of all the TDinsight metrics.
......
This diff is collapsed.
......@@ -25,11 +25,6 @@ apps:
- network
- system-observe
taosdemo:
command: usr/bin/taosdemo
plugs:
- network
parts:
script:
plugin: dump
......@@ -74,8 +69,7 @@ parts:
- etc/taos/taos.cfg
- usr/bin/taosd
- usr/bin/taos
- usr/bin/taosdemo
- usr/lib/libtaos.so.2.3.0.0
- usr/lib/libtaos.so.2.4.0.0
- usr/lib/libtaos.so.1
- usr/lib/libtaos.so
......
......@@ -902,7 +902,7 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
// not belongs to the same group, return the result of current group;
setInputDataBlock(pOperator, pAggInfo->binfo.pCtx, pAggInfo->pExistBlock, TSDB_ORDER_ASC);
updateOutputBuf(&pAggInfo->binfo, &pAggInfo->bufCapacity, pAggInfo->pExistBlock->info.rows, pOperator->pRuntimeEnv);
updateOutputBuf(&pAggInfo->binfo, &pAggInfo->bufCapacity, pAggInfo->pExistBlock->info.rows, pOperator->pRuntimeEnv, true);
{ // reset output buffer
for(int32_t j = 0; j < pOperator->numOfOutput; ++j) {
......@@ -954,7 +954,7 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
// not belongs to the same group, return the result of current group
setInputDataBlock(pOperator, pAggInfo->binfo.pCtx, pBlock, TSDB_ORDER_ASC);
updateOutputBuf(&pAggInfo->binfo, &pAggInfo->bufCapacity, pBlock->info.rows * pAggInfo->resultRowFactor, pOperator->pRuntimeEnv);
updateOutputBuf(&pAggInfo->binfo, &pAggInfo->bufCapacity, pBlock->info.rows * pAggInfo->resultRowFactor, pOperator->pRuntimeEnv, true);
doExecuteFinalMerge(pOperator, pOperator->numOfOutput, pBlock);
savePrevOrderColumns(pAggInfo->currentGroupColData, pAggInfo->groupColumnList, pBlock, 0, &pAggInfo->hasGroupColData);
......@@ -985,6 +985,8 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
}
}
// shrink output memory on end
shrinkOutputBuf(&pAggInfo->binfo, &pAggInfo->bufCapacity);
return (pRes->info.rows != 0)? pRes:NULL;
}
......
......@@ -10009,7 +10009,7 @@ int32_t validateSqlNode(SSqlObj* pSql, SSqlNode* pSqlNode, SQueryInfo* pQueryInf
const char* msg3 = "start(end) time of query range required or time range too large";
const char* msg4 = "interval query not supported, since the result of sub query not include valid timestamp column";
const char* msg5 = "only tag query not compatible with normal column filter";
const char* msg6 = "not support stddev/percentile/interp in the outer query yet";
const char* msg6 = "not support stddev/percentile in the outer query yet";
const char* msg7 = "derivative/twa/rate/irate/diff requires timestamp column exists in subquery";
const char* msg8 = "condition missing for join query";
const char* msg9 = "not support 3 level select";
......
......@@ -45,7 +45,7 @@
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.76</version>
<version>1.2.79</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
......
......@@ -268,7 +268,7 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet {
if (precision == TimestampPrecision.MS) {
// ms timestamp: yyyy-MM-dd HH:mm:ss.SSS
return row.getTimestamp(colIndex);
return (Timestamp) row.getTimestamp(colIndex);
}
if (precision == TimestampPrecision.US) {
// us timestamp: yyyy-MM-dd HH:mm:ss.SSSSSS
......
// const TaosBind = require('../nodetaos/taosBind');
const taos = require('../tdengine');
var conn = taos.connect({ host: "localhost" });
var cursor = conn.cursor();
function executeUpdate(updateSql){
console.log(updateSql);
cursor.execute(updateSql);
}
function executeQuery(querySql){
let query = cursor.query(querySql);
query.execute().then((result=>{
console.log(querySql);
result.pretty();
}));
}
function stmtBindParamSample(){
let db = 'node_test_db';
let table = 'stmt_taos_bind_sample';
let createDB = `create database if not exists ${db} keep 3650;`;
let dropDB = `drop database if exists ${db};`;
let useDB = `use ${db}`;
let createTable = `create table if not exists ${table} `+
`(ts timestamp,`+
`nil int,`+
`bl bool,`+
`i8 tinyint,`+
`i16 smallint,`+
`i32 int,`+
`i64 bigint,`+
`f32 float,`+
`d64 double,`+
`bnr binary(20),`+
`blob nchar(20),`+
`u8 tinyint unsigned,`+
`u16 smallint unsigned,`+
`u32 int unsigned,`+
`u64 bigint unsigned);`;
let querySql = `select * from ${table};`;
let insertSql = `insert into ? values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?);`
executeUpdate(dropDB);
executeUpdate(createDB);
executeUpdate(useDB);
executeUpdate(createTable);
let binds = new taos.TaosBind(15);
binds.bindTimestamp(1642435200000);
binds.bindNil();
binds.bindBool(true);
binds.bindTinyInt(127);
binds.bindSmallInt(32767);
binds.bindInt(1234555);
binds.bindBigInt(-164243520000011111n);
binds.bindFloat(214.02);
binds.bindDouble(2.01);
binds.bindBinary('taosdata涛思数据');
binds.bindNchar('TDengine数据');
binds.bindUTinyInt(254);
binds.bindUSmallInt(65534);
binds.bindUInt(4294967294);
binds.bindUBigInt(164243520000011111n);
cursor.stmtInit();
cursor.stmtPrepare(insertSql);
cursor.stmtSetTbname(table);
cursor.bindParam(binds.getBind());
cursor.addBatch();
cursor.stmtExecute();
cursor.stmtClose();
executeQuery(querySql);
executeUpdate(dropDB);
}
stmtBindParamSample();
setTimeout(()=>{
conn.close();
},2000);
\ No newline at end of file
......@@ -6,8 +6,8 @@
const ref = require('ref-napi');
const os = require('os');
const ffi = require('ffi-napi');
const ArrayType = require('ref-array-napi');
const Struct = require('ref-struct-napi');
const ArrayType = require('ref-array-di')(ref);
const Struct = require('ref-struct-di')(ref);
const FieldTypes = require('./constants');
const errors = require('./error');
const _ = require('lodash')
......@@ -20,6 +20,7 @@ const TAOSFIELD = {
BYTES_OFFSET: 66,
STRUCT_SIZE: 68,
}
function convertTimestamp(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -31,6 +32,7 @@ function convertTimestamp(data, num_of_rows, nbytes = 0, offset = 0, precision =
}
return res;
}
function convertBool(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = new Array(data.length);
......@@ -47,6 +49,7 @@ function convertBool(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
}
return res;
}
function convertTinyint(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -58,6 +61,7 @@ function convertTinyint(data, num_of_rows, nbytes = 0, offset = 0, precision = 0
}
return res;
}
function convertTinyintUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -81,6 +85,7 @@ function convertSmallint(data, num_of_rows, nbytes = 0, offset = 0, precision =
}
return res;
}
function convertSmallintUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -104,6 +109,7 @@ function convertInt(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
}
return res;
}
function convertIntUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -116,7 +122,6 @@ function convertIntUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precision
return res;
}
function convertBigint(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -128,6 +133,7 @@ function convertBigint(data, num_of_rows, nbytes = 0, offset = 0, precision = 0)
}
return res;
}
function convertBigintUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -140,7 +146,6 @@ function convertBigintUnsigned(data, num_of_rows, nbytes = 0, offset = 0, precis
return res;
}
function convertFloat(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -152,6 +157,7 @@ function convertFloat(data, num_of_rows, nbytes = 0, offset = 0, precision = 0)
}
return res;
}
function convertDouble(data, num_of_rows, nbytes = 0, offset = 0, precision = 0) {
data = ref.reinterpret(data.deref(), nbytes * num_of_rows, offset);
let res = [];
......@@ -329,11 +335,54 @@ function CTaosInterface(config = null, pass = false) {
//void taos_close_stream(TAOS_STREAM *tstr);
'taos_close_stream': [ref.types.void, [ref.types.void_ptr]],
//Schemaless insert
//Schemaless insert
//TAOS_RES* taos_schemaless_insert(TAOS* taos, char* lines[], int numLines, int protocol,int precision)
// 'taos_schemaless_insert': [ref.types.void_ptr, [ref.types.void_ptr, ref.types.char_ptr, ref.types.int, ref.types.int, ref.types.int]]
'taos_schemaless_insert': [ref.types.void_ptr, [ref.types.void_ptr, smlLine, 'int', 'int', 'int']]
//stmt APIs
// TAOS_STMT* taos_stmt_init(TAOS *taos)
, 'taos_stmt_init': [ref.types.void_ptr, [ref.types.void_ptr]]
// int taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length)
, 'taos_stmt_prepare': [ref.types.int, [ref.types.void_ptr, ref.types.char_ptr, ref.types.ulong]]
// int taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name)
, 'taos_stmt_set_tbname': [ref.types.int, [ref.types.void_ptr, ref.types.char_ptr]]
// int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)
, 'taos_stmt_set_tbname_tags': [ref.types.int, [ref.types.void_ptr, ref.types.char_ptr, ref.types.void_ptr]]
// int taos_stmt_set_sub_tbname(TAOS_STMT* stmt, const char* name)
, 'taos_stmt_set_sub_tbname': [ref.types.int, [ref.types.void_ptr, ref.types.char_ptr]]
// int taos_stmt_bind_param(TAOS_STMT *stmt, TAOS_BIND *bind)
// , 'taos_stmt_bind_param': [ref.types.int, [ref.types.void_ptr, ref.refType(TAOS_BIND)]]
, 'taos_stmt_bind_param': [ref.types.int, [ref.types.void_ptr, ref.types.void_ptr]]
// int taos_stmt_bind_single_param_batch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind, int colIdx)
, 'taos_stmt_bind_single_param_batch': [ref.types.int, [ref.types.void_ptr, ref.types.void_ptr, ref.types.int]]
// int taos_stmt_bind_param_batch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind)
, 'taos_stmt_bind_param_batch': [ref.types.int, [ref.types.void_ptr, ref.types.void_ptr]]
// int taos_stmt_add_batch(TAOS_STMT *stmt)
, 'taos_stmt_add_batch': [ref.types.int, [ref.types.void_ptr]]
// int taos_stmt_execute(TAOS_STMT *stmt)
, 'taos_stmt_execute': [ref.types.int, [ref.types.void_ptr]]
// TAOS_RES* taos_stmt_use_result(TAOS_STMT *stmt)
, 'taos_stmt_use_result': [ref.types.int, [ref.types.void_ptr]]
// int taos_stmt_close(TAOS_STMT *stmt)
, 'taos_stmt_close': [ref.types.int, [ref.types.void_ptr]]
// char * taos_stmt_errstr(TAOS_STMT *stmt)
, 'taos_stmt_errstr': [ref.types.char_ptr, [ref.types.void_ptr]]
// int taos_load_table_info(TAOS *taos, const char* tableNameList)
, 'taos_load_table_info': [ref.types.int, [ref.types.void_ptr, ref.types.char_ptr]]
});
if (pass == false) {
......@@ -355,9 +404,11 @@ function CTaosInterface(config = null, pass = false) {
}
return this;
}
CTaosInterface.prototype.config = function config() {
return this._config;
}
CTaosInterface.prototype.connect = function connect(host = null, user = "root", password = "taosdata", db = null, port = 0) {
let _host, _user, _password, _db, _port;
try {
......@@ -399,10 +450,12 @@ CTaosInterface.prototype.connect = function connect(host = null, user = "root",
}
return connection;
}
CTaosInterface.prototype.close = function close(connection) {
this.libtaos.taos_close(connection);
console.log("Connection is closed");
}
CTaosInterface.prototype.query = function query(connection, sql) {
return this.libtaos.taos_query(connection, ref.allocCString(sql));
}
......@@ -410,6 +463,7 @@ CTaosInterface.prototype.query = function query(connection, sql) {
CTaosInterface.prototype.affectedRows = function affectedRows(result) {
return this.libtaos.taos_affected_rows(result);
}
CTaosInterface.prototype.useResult = function useResult(result) {
let fields = [];
......@@ -427,6 +481,7 @@ CTaosInterface.prototype.useResult = function useResult(result) {
}
return fields;
}
CTaosInterface.prototype.fetchBlock = function fetchBlock(result, fields) {
let pblock = ref.NULL_POINTER;
let num_of_rows = this.libtaos.taos_fetch_block(result, pblock);
......@@ -467,31 +522,39 @@ CTaosInterface.prototype.fetchBlock = function fetchBlock(result, fields) {
}
return { blocks: blocks, num_of_rows }
}
CTaosInterface.prototype.fetchRow = function fetchRow(result, fields) {
let row = this.libtaos.taos_fetch_row(result);
return row;
}
CTaosInterface.prototype.freeResult = function freeResult(result) {
this.libtaos.taos_free_result(result);
result = null;
}
/** Number of fields returned in this result handle, must use with async */
CTaosInterface.prototype.numFields = function numFields(result) {
return this.libtaos.taos_num_fields(result);
}
// Fetch fields count by connection, the latest query
CTaosInterface.prototype.fieldsCount = function fieldsCount(result) {
return this.libtaos.taos_field_count(result);
}
CTaosInterface.prototype.fetchFields = function fetchFields(result) {
return this.libtaos.taos_fetch_fields(result);
}
CTaosInterface.prototype.errno = function errno(result) {
return this.libtaos.taos_errno(result);
}
CTaosInterface.prototype.errStr = function errStr(result) {
return ref.readCString(this.libtaos.taos_errstr(result));
}
// Async
CTaosInterface.prototype.query_a = function query_a(connection, sql, callback, param = ref.ref(ref.NULL)) {
// void taos_query_a(TAOS *taos, char *sqlstr, void (*fp)(void *param, TAOS_RES *, int), void *param)
......@@ -550,6 +613,7 @@ CTaosInterface.prototype.fetch_rows_a = function fetch_rows_a(result, callback,
this.libtaos.taos_fetch_rows_a(result, asyncCallbackWrapper, param);
return param;
}
// Fetch field meta data by result handle
CTaosInterface.prototype.fetchFields_a = function fetchFields_a(result) {
let pfields = this.fetchFields(result);
......@@ -567,6 +631,7 @@ CTaosInterface.prototype.fetchFields_a = function fetchFields_a(result) {
}
return fields;
}
// Stop a query by result handle
CTaosInterface.prototype.stopQuery = function stopQuery(result) {
if (result != null) {
......@@ -576,9 +641,11 @@ CTaosInterface.prototype.stopQuery = function stopQuery(result) {
throw new errors.ProgrammingError("No result handle passed to stop query");
}
}
CTaosInterface.prototype.getServerInfo = function getServerInfo(connection) {
return ref.readCString(this.libtaos.taos_get_server_info(connection));
}
CTaosInterface.prototype.getClientInfo = function getClientInfo() {
return ref.readCString(this.libtaos.taos_get_client_info());
}
......@@ -644,6 +711,7 @@ CTaosInterface.prototype.consume = function consume(subscription) {
}
return { data: data, fields: fields, result: result };
}
CTaosInterface.prototype.unsubscribe = function unsubscribe(subscription) {
//void taos_unsubscribe(TAOS_SUB *tsub);
this.libtaos.taos_unsubscribe(subscription);
......@@ -688,23 +756,25 @@ CTaosInterface.prototype.openStream = function openStream(connection, sql, callb
return streamHandle;
}
}
CTaosInterface.prototype.closeStream = function closeStream(stream) {
this.libtaos.taos_close_stream(stream);
console.log("Closed stream");
}
//Schemaless insert API
//Schemaless insert API
/**
* TAOS* taos, char* lines[], int numLines, int protocol,int precision)
* using taos_errstr get error info, taos_errno get error code. Remmember
* to release taos_res, otherwile will lead memory leak.
* using taos_errstr get error info, taos_errno get error code. Remember
* to release taos_res, otherwise will lead memory leak.
* TAOS schemaless insert api
* @param {*} connection a valid database connection
* @param {*} lines string data, which statisfied with line proctocol
* @param {*} lines string data, which satisfies the line protocol
* @param {*} numLines number of rows in param lines.
* @param {*} protocal Line protocol, enum type (0,1,2,3),indicate different line protocol
* @param {*} protocol Line protocol, enum type (0,1,2,3), indicating different line protocols
* @param {*} precision timestamp precision in lines, enum type (0,1,2,3,4,5,6)
* @returns TAOS_RES
*
* @returns TAOS_RES
*
*/
CTaosInterface.prototype.schemalessInsert = function schemalessInsert(connection, lines, protocal, precision) {
let _numLines = null;
......@@ -727,3 +797,161 @@ CTaosInterface.prototype.schemalessInsert = function schemalessInsert(connection
}
return this.libtaos.taos_schemaless_insert(connection, _lines, _numLines, protocal, precision);
}
//stmt APIs
/**
* init a TAOS_STMT object for later use. It should be freed with stmtClose.
* @param {*} connection valid taos connection
* @returns Not NULL returned for success, and NULL for failure.
*
*/
CTaosInterface.prototype.stmtInit = function stmtInit(connection) {
return this.libtaos.taos_stmt_init(connection)
}
/**
* prepare a sql statement; 'sql' should be a valid INSERT/SELECT statement, 'length' is not used.
* @param {*} stmt
* @param {string} sql a valid INSERT/SELECT statement
* @param {ulong} length not used
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtPrepare = function stmtPrepare(stmt, sql, length) {
return this.libtaos.taos_stmt_prepare(stmt, ref.allocCString(sql), 0);
}
/**
* For INSERT only. Used to bind table name as a parameter for the input stmt object.
* @param {*} stmt could be the value returned by 'stmtInit',
* that may be a valid object or NULL.
* @param {TaosBind} tableName target table name you want to bind
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtSetTbname = function stmtSetTbname(stmt, tableName) {
return this.libtaos.taos_stmt_set_tbname(stmt, ref.allocCString(tableName));
}
/**
* For INSERT only.
* Set a table name for binding table name as parameter and tag values for all tag parameters.
* @param {*} stmt could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @param {*} tableName use to set target table name
* @param {TaosMultiBind} tags use to set tag value for target table.
* @returns
*/
CTaosInterface.prototype.stmtSetTbnameTags = function stmtSetTbnameTags(stmt, tableName, tags) {
return this.libtaos.taos_stmt_set_tbname_tags(stmt, ref.allocCString(tableName), tags);
}
/**
* For INSERT only.
* Set a table name for binding table name as parameter. Only used for binding all tables
* in one stable, user application must call 'loadTableInfo' API to load all table
* meta before calling this API. If the table meta is not cached locally, it will return error.
* @param {*} stmt could be the value returned by 'StmtInit', that may be a valid object or NULL.
* @param {*} subTableName table name which belongs to a stable
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtSetSubTbname = function stmtSetSubTbname(stmt, subTableName) {
return this.libtaos.taos_stmt_set_sub_tbname(stmt, subTableName);
}
/**
* bind a whole line of data, for both INSERT and SELECT. The parameter 'bind' points to an array
* that contains the whole line of data. Each item in the array represents a column's value; the item
* number and sequence should be consistent with the columns in the sql statement. The usage of
* the structure TAOS_BIND is the same as MYSQL_BIND in MySQL.
* @param {*} stmt could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @param {*} binds points to an array contains the whole line data.
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.bindParam = function bindParam(stmt, binds) {
return this.libtaos.taos_stmt_bind_param(stmt, binds);
}
/**
* Bind a single column's data, INTERNAL used and for INSERT only.
* @param {*} stmt could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @param {TaosMultiBind} mbind points to a column's data, which could be one or more lines.
* @param {*} colIndex the column's index in prepared sql statement, it starts from 0.
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtBindSingleParamBatch = function stmtBindSingleParamBatch(stmt, mbind, colIndex) {
return this.libtaos.taos_stmt_bind_single_param_batch(stmt, mbind.ref(), colIndex);
}
/**
* For INSERT only.
* Bind one or multiple lines of data.
* @param {*} stmt could be the value returned by 'stmtInit',
* that may be a valid object or NULL.
* @param {*} mbinds points to an array that contains one or more lines of data. The item
* number and sequence should be consistent with the columns
* in the sql statement.
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtBindParamBatch = function stmtBindParamBatch(stmt, mbinds) {
return this.libtaos.taos_stmt_bind_param_batch(stmt, mbinds);
}
/**
* add all current bound parameters to batch process, for INSERT only.
* Must be called after each call to bindParam/bindSingleParamBatch,
* or after binding all columns for one or more lines with bindSingleParamBatch. The user
* application can call any bind-parameter API again to bind more data lines after calling
* this API.
* @param {*} stmt
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.addBatch = function addBatch(stmt) {
return this.libtaos.taos_stmt_add_batch(stmt);
}
/**
* actually execute the INSERT/SELECT sql statement. The user application can continue
* to bind new data after calling this API.
* @param {*} stmt
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.stmtExecute = function stmtExecute(stmt) {
return this.libtaos.taos_stmt_execute(stmt);
}
/**
* For SELECT only, getting the query result.
* User application should free it with API 'FreeResult' at the end.
* @param {*} stmt could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @returns Not NULL for success, NULL for failure.
*/
CTaosInterface.prototype.stmtUseResult = function stmtUseResult(stmt) {
return this.libtaos.taos_stmt_use_result(stmt);
}
/**
* user application calls this API to load the meta info of all tables.
* This method must be called before stmtSetSubTbname(IntPtr stmt, string name);
* @param {*} taos taos connection
* @param {*} tableList an array of the table names whose meta info needs to be loaded
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.loadTableInfo = function loadTableInfo(taos, tableList) {
return this.libtaos.taos_load_table_info(taos, tableList)
}
/**
* Close STMT object and free resources.
* @param {*} stmt could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.closeStmt = function closeStmt(stmt) {
return this.libtaos.taos_stmt_close(stmt);
}
/**
* Get detail error message when got failure for any stmt API call.
* If not failure, the result returned by this API is unknown.
* @param {*} stmt Could be the value returned by 'stmtInit', that may be a valid object or NULL.
* @returns error string
*/
CTaosInterface.prototype.stmtErrStr = function stmtErrStr(stmt) {
return ref.readCString(this.libtaos.taos_stmt_errstr(stmt));
}
\ No newline at end of file
......@@ -492,3 +492,255 @@ TDengineCursor.prototype.schemalessInsert = function schemalessInsert(lines, pro
}
this._chandle.freeResult(this._result);
}
//STMT
/**
* init a TAOS_STMT object for later use. It should be freed with stmtClose.
* @returns Not NULL returned for success, and NULL for failure.
*
*/
TDengineCursor.prototype.stmtInit = function stmtInit() {
let stmt = null
stmt = this._chandle.stmtInit(this._connection._conn);
if (stmt == null || stmt == undefined) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(stmt));
} else {
this._stmt = stmt;
}
}
/**
* prepare a sql statement; 'sql' should be a valid INSERT/SELECT statement
* @param {string} sql a valid INSERT/SELECT statement
* @returns {int} 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtPrepare = function stmtPrepare(sql) {
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtPrepare(this._stmt, sql, null);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtPrepare success.");
}
}
}
/**
* For INSERT only. Used to bind table name as a parameter for the input stmt object.
* @param {TaosBind} tableName target table name you want to bind
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtSetTbname = function stmtSetTbname(tableName){
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtSetTbname(this._stmt, tableName);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtSetTbname success.");
}
}
}
/**
* For INSERT only.
* Set a table name for binding table name as parameter and tag values for all tag parameters.
* @param {*} tableName use to set target table name
* @param {TaosMultiBind} tags use to set tag value for target table.
* @returns
*/
TDengineCursor.prototype.stmtSetTbnameTags = function stmtSetTbnameTags(tableName,tags){
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtSetTbnameTags(this._stmt, tableName,tags);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtSetTbnameTags success.");
}
}
}
/**
* For INSERT only.
* Set a table name for binding table name as parameter. Only used for binding all tables
* in one stable, user application must call 'loadTableInfo' API to load all table
* meta before calling this API. If the table meta is not cached locally, it will return error.
* @param {*} subTableName table name which belongs to a stable
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtSetSubTbname = function stmtSetSubTbname(subTableName){
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtSetSubTbname(this._stmt, subTableName);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtSetSubTbname success.");
}
}
}
/**
* bind a whole line of data, for both INSERT and SELECT. The parameter 'bind' points to an array
* that contains the whole line of data. Each item in the array represents a column's value; the item
* number and sequence should be consistent with the columns in the sql statement. The usage of
* the structure TAOS_BIND is the same as MYSQL_BIND in MySQL.
* @param {*} binds points to an array contains the whole line data.
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.bindParam = function bindParam(binds) {
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.bindParam(this._stmt, binds);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("bindParam success.");
}
}
}
/**
* Bind a single column's data, INTERNAL used and for INSERT only.
* @param {TaosMultiBind} mbind points to a column's data, which could be one or more lines.
* @param {*} colIndex the column's index in prepared sql statement, it starts from 0.
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtBindSingleParamBatch = function stmtBindSingleParamBatch(mbind,colIndex){
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtBindSingleParamBatch(this._stmt, mbind,colIndex);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtBindSingleParamBatch success.");
}
}
}
/**
* For INSERT only.
* Bind one or multiple lines of data.
* @param {*} mbinds points to an array that contains one or more lines of data. The item
* number and sequence should be consistent with the columns
* in the sql statement.
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtBindParamBatch = function stmtBindParamBatch(mbinds){
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let stmtPrepare = this._chandle.stmtBindParamBatch(this._stmt, mbinds);
if (stmtPrepare != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtBindParamBatch success.");
}
}
}
/**
* add all current bound parameters to batch process, for INSERT only.
* Must be called after each call to bindParam/bindSingleParamBatch,
* or after binding all columns for one or more lines with bindSingleParamBatch. The user
* application can call any bind-parameter API again to bind more data lines after calling
* this API.
* @param {*} stmt
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.addBatch = function addBatch() {
if (this._stmt == null) {
throw new errors.DatabaseError("stmt is null,init stmt first");
} else {
let addBatchRes = this._chandle.addBatch(this._stmt);
if (addBatchRes != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
}
else {
console.log("addBatch success.");
}
}
}
/**
* actually execute the INSERT/SELECT sql statement. The user application can continue
* to bind new data after calling this API.
* @param {*} stmt
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtExecute = function stmtExecute() {
if (this._stmt != null) {
let stmtExecRes = this._chandle.stmtExecute(this._stmt);
if (stmtExecRes != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtExecute success.")
}
} else {
throw new errors.DatabaseError("stmt is null,init stmt first");
}
}
/**
* For SELECT only, getting the query result.
* User application should free it with API 'FreeResult' at the end.
* @returns Not NULL for success, NULL for failure.
*/
TDengineCursor.prototype.stmtUseResult = function stmtUseResult(){
if (this._stmt != null) {
let stmtExecRes = this._chandle.stmtUseResult(this._stmt);
if (stmtExecRes != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("stmtUseResult success.")
}
} else {
throw new errors.DatabaseError("stmt is null, init stmt first");
}
}
/**
* The application calls this API to load the meta info of all tables.
* This method must be called before stmtSetSubTbname(IntPtr stmt, string name);
* @param {*} tableList an array of table names whose meta info needs to be loaded.
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.loadTableInfo = function loadTableInfo(tableList){
if (this._connection._conn != null) {
let stmtExecRes = this._chandle.loadTableInfo(this._connection._conn,tableList);
if (stmtExecRes != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
} else {
console.log("loadTableInfo success.")
}
} else {
throw new errors.DatabaseError("taos connection is null.");
}
}
/**
* Close the STMT object and free its resources.
* @returns 0 for success, non-zero for failure.
*/
TDengineCursor.prototype.stmtClose = function stmtClose() {
if (this._stmt == null) {
throw new DatabaseError("stmt is null,init stmt first");
} else {
let closeStmtRes = this._chandle.closeStmt(this._stmt);
if (closeStmtRes != 0) {
throw new errors.DatabaseError(this._chandle.stmtErrStr(this._stmt));
}
else {
console.log("closeStmt success.");
}
}
}
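/*
 * Usage sketch for the STMT APIs above (INSERT path). This is only a sketch:
 * stmtInit(), stmtPrepare() and stmtSetTbname() are assumed to exist on the
 * cursor (they are not shown in this excerpt), and `mbinds` stands for an
 * array of TaosMultiBind column buffers.
 *
 *   cursor.stmtInit();                    // create cursor._stmt (assumed)
 *   cursor.stmtPrepare('insert into ? values (?, ?)');
 *   cursor.loadTableInfo(['d0']);         // must precede sub-table binding
 *   cursor.stmtSetTbname('d0');           // assumed sub-table binder
 *   cursor.stmtBindParamBatch(mbinds);    // bind one or more rows
 *   cursor.addBatch();                    // move bound rows into the batch
 *   cursor.stmtExecute();                 // run the prepared statement
 *   cursor.stmtClose();                   // free resources
 */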
\ No newline at end of file
const ref = require('ref-napi');
const StructType = require('ref-struct-di')(ref);
const taosConst = require('./constants');
const { TDError } = require('./error');
var bufferType = ref.types.int32;
var buffer = ref.refType(ref.types.void);
var bufferLength = ref.types.uint64;
var length = ref.refType(ref.types.uint64);
var isNull = ref.refType(ref.types.int32);
var is_unsigned = ref.types.int;
var error = ref.refType(ref.types.void);
var u = ref.types.int64;
var allocated = ref.types.uint32;
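// TAOS_BIND mirrors the parameter-bind struct of the native TDengine C
// client (TAOS_BIND in taos.h); the field order and sizes below are assumed
// to match the C layout, since the filled buffer is handed to the C library
// through ffi.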
var TAOS_BIND = StructType({
buffer_type : bufferType,
buffer : buffer,
buffer_length : bufferLength,
length : length,
is_null : isNull,
is_unsigned : is_unsigned,
error : error,
u : u,
allocated: allocated,
});
class TaosBind {
constructor(num) {
this.buf = Buffer.alloc(TAOS_BIND.size * num);
this.num = num;
this.index = 0;
}
/**
* Used to bind null value for all data types that tdengine supports.
*/
bindNil() {
if(!this._isOutOfBound()){
let nil = new TAOS_BIND({
buffer_type : taosConst.C_NULL,
is_null : ref.alloc(ref.types.int32, 1),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, nil);
this.index++
}else{
throw new TDError(`bindNil() failed,since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {bool} val a non-null bool value, true or false.
*/
bindBool(val) {
if(!this._isOutOfBound()){
let bl = new TAOS_BIND({
buffer_type : taosConst.C_BOOL,
buffer : ref.alloc(ref.types.bool, val),
buffer_length : ref.types.bool.size,
length : ref.alloc(ref.types.uint64, ref.types.bool.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, bl);
this.index++
}else{
throw new TDError(`bindBool() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {int8} val a non-null tinyint value.
*/
bindTinyInt(val){
if(!this._isOutOfBound()){
let tinyInt = new TAOS_BIND({
buffer_type : taosConst.C_TINYINT,
buffer : ref.alloc(ref.types.int8, val),
buffer_length : ref.types.int8.size,
length : ref.alloc(ref.types.uint64, ref.types.int8.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, tinyInt);
this.index++
}else{
throw new TDError(`bindTinyInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {short} val a non-null smallint value.
*/
bindSmallInt(val){
if(!this._isOutOfBound()){
let smallInt = new TAOS_BIND({
buffer_type : taosConst.C_SMALLINT,
buffer : ref.alloc(ref.types.int16, val),
buffer_length : ref.types.int16.size,
length : ref.alloc(ref.types.uint64, ref.types.int16.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, smallInt);
this.index++
}else{
throw new TDError(`bindSmallInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {int} val a non-null int value.
*/
bindInt(val){
if(!this._isOutOfBound()){
let int = new TAOS_BIND({
buffer_type : taosConst.C_INT,
buffer : ref.alloc(ref.types.int32, val),
buffer_length : ref.types.int32.size,
length : ref.alloc(ref.types.uint64, ref.types.int32.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, int);
this.index++
}else{
throw new TDError(`bindInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {long} val a non-null bigint value.
*/
bindBigInt(val) {
if(!this._isOutOfBound()){
let bigint = new TAOS_BIND({
buffer_type : taosConst.C_BIGINT,
buffer : ref.alloc(ref.types.int64, val.toString()),
buffer_length : ref.types.int64.size,
length : ref.alloc(ref.types.uint64, ref.types.int64.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, bigint);
this.index++
}else{
throw new TDError(`bindBigInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {float} val a non-null float value.
*/
bindFloat(val) {
if(!this._isOutOfBound()){
let float = new TAOS_BIND({
buffer_type : taosConst.C_FLOAT,
buffer : ref.alloc(ref.types.float, val),
buffer_length : ref.types.float.size,
length : ref.alloc(ref.types.uint64, ref.types.float.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, float);
this.index++
}else{
throw new TDError(`bindFloat() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {double} val a non-null double value.
*/
bindDouble(val){
if(!this._isOutOfBound()){
let double = new TAOS_BIND({
buffer_type : taosConst.C_DOUBLE,
buffer : ref.alloc(ref.types.double, val),
buffer_length : ref.types.double.size,
length : ref.alloc(ref.types.uint64, ref.types.double.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, double);
this.index++
}else{
throw new TDError(`bindDouble() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {string} val a string bound as a BINARY value.
*/
bindBinary(val){
let cstringBuf = ref.allocCString(val,'utf-8');
if(!this._isOutOfBound()){
let binary = new TAOS_BIND({
buffer_type : taosConst.C_BINARY,
buffer : cstringBuf,
buffer_length : cstringBuf.length,
length : ref.alloc(ref.types.uint64, cstringBuf.length),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, binary);
this.index++
}else{
throw new TDError(`bindBinary() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {long} val a non-null timestamp (long) value.
*/
bindTimestamp(val) {
if(!this._isOutOfBound()){
let ts = new TAOS_BIND({
buffer_type : taosConst.C_TIMESTAMP,
buffer : ref.alloc(ref.types.int64, val),
buffer_length : ref.types.int64.size,
length : ref.alloc(ref.types.uint64, ref.types.int64.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, ts);
this.index++
}else{
throw new TDError(`bindTimestamp() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {string} val a string bound as an NCHAR value.
*/
bindNchar(val){
let cstringBuf = ref.allocCString(val,'utf-8');
if(!this._isOutOfBound()){
let nchar = new TAOS_BIND({
buffer_type : taosConst.C_NCHAR,
buffer : cstringBuf,
buffer_length : cstringBuf.length,
length : ref.alloc(ref.types.uint64, cstringBuf.length),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, nchar);
this.index++
}else{
throw new TDError(`bindNchar() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {uint8} val a non-null unsigned tinyint value.
*/
bindUTinyInt(val){
if(!this._isOutOfBound()){
let uTinyInt = new TAOS_BIND({
buffer_type : taosConst.C_TINYINT_UNSIGNED,
buffer : ref.alloc(ref.types.uint8, val),
buffer_length : ref.types.uint8.size,
length : ref.alloc(ref.types.uint64, ref.types.uint8.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, uTinyInt);
this.index++
}else{
throw new TDError(`bindUTinyInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {uint16} val a non-null unsigned smallint value.
*/
bindUSmallInt(val){
if(!this._isOutOfBound()){
let uSmallInt = new TAOS_BIND({
buffer_type : taosConst.C_SMALLINT_UNSIGNED,
buffer : ref.alloc(ref.types.uint16, val),
buffer_length : ref.types.uint16.size,
length : ref.alloc(ref.types.uint64, ref.types.uint16.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, uSmallInt);
this.index++
}else{
throw new TDError(`bindUSmallInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {uint32} val a non-null unsigned int value.
*/
bindUInt(val){
if(!this._isOutOfBound()){
let uInt = new TAOS_BIND({
buffer_type : taosConst.C_INT_UNSIGNED,
buffer : ref.alloc(ref.types.uint32, val),
buffer_length : ref.types.uint32.size,
length : ref.alloc(ref.types.uint64, ref.types.uint32.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, uInt);
this.index++
}else{
throw new TDError(`bindUInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @param {uint64} val a non-null unsigned bigint value.
*/
bindUBigInt(val){
if(!this._isOutOfBound()){
let uBigInt = new TAOS_BIND({
buffer_type : taosConst.C_BIGINT_UNSIGNED,
buffer : ref.alloc(ref.types.uint64, val.toString()),
buffer_length : ref.types.uint64.size,
length : ref.alloc(ref.types.uint64, ref.types.uint64.size),
is_null : ref.alloc(ref.types.int32, 0),
});
TAOS_BIND.set(this.buf, this.index * TAOS_BIND.size, uBigInt);
this.index++
}else{
throw new TDError(`bindUBigInt() failed with ${val},since index:${this.index} is out of Buffer bound ${this.num}.`);
}
}
/**
*
* @returns the bound buffer.
*/
getBind() {
return this.buf;
}
_isOutOfBound(){
return this.index >= this.num;
}
}
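// Usage sketch (hypothetical values; the bind order must match the parameter
// order of the prepared SQL statement, and cursor/stmt setup is assumed):
//   const TaosBind = require('./taosBind');
//   let binds = new TaosBind(3);
//   binds.bindTimestamp(1642435200000);
//   binds.bindInt(42);
//   binds.bindBinary('hello');
//   cursor.bindParam(binds.getBind());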
module.exports = TaosBind;
......@@ -4,6 +4,7 @@
"description": "A Node.js connector for TDengine.",
"main": "tdengine.js",
"directories": {
"example": "examples",
"test": "test"
},
"scripts": {
......@@ -29,9 +30,9 @@
"dependencies": {
"ffi-napi": "^3.1.0",
"lodash": "^4.17.21",
"ref-array-napi": "^1.2.1",
"ref-napi": "^1.5.2",
"ref-struct-napi": "^1.1.1"
"ref-array-di": "^1.2.1",
"ref-napi": "^3.0.2",
"ref-struct-di": "^1.1.1"
},
"devDependencies": {
"jest": "^27.4.7"
......
# TDengine Node.js connector
[![minzip](https://img.shields.io/bundlephobia/minzip/td2.0-connector.svg)](https://github.com/taosdata/TDengine/tree/master/src/connector/nodejs) [![NPM](https://img.shields.io/npm/l/td2.0-connector.svg)](https://github.com/taosdata/TDengine/#what-is-tdengine)
This is the Node.js library that lets you connect to [TDengine](https://www.github.com/taosdata/tdengine) 2.0. It provides an extensive API so that you can use as much or as little of it as you want: if you want the raw row data retrieved from a table as an array of arrays, you can get that; if you want that data wrapped in objects that let you easily manipulate and display it, for example through a prettifier function, you can do that too!
......@@ -106,6 +107,7 @@ promise.then(function(result) {
```
You can also bind parameters to a query by filling in the question marks in the string, as shown below. The query will automatically parse the bound values and convert them to the proper format for use with TDengine.
```javascript
var query = cursor.query('select * from meterinfo.meters where ts <= ? and areaid = ?;').bind(new Date(), 5);
query.execute().then(function(result) {
......@@ -114,6 +116,7 @@ query.execute().then(function(result) {
```
The TaosQuery object can also be immediately executed upon creation by passing true as the second argument, returning a promise instead of a TaosQuery.
```javascript
var promise = cursor.query('select * from meterinfo.meters where v1 = 30;', true)
promise.then(function(result) {
......@@ -121,7 +124,8 @@ promise.then(function(result) {
})
```
If you want to execute queries without objects being wrapped around the data, use `cursor.execute()` directly and `cursor.fetchall()` to retrieve data if there is any.
```javascript
cursor.execute('select count(*), avg(v1), min(v2) from meterinfo.meters where ts >= \"2019-07-20 00:00:00.000\";');
var data = cursor.fetchall();
......
var TDengineConnection = require('./nodetaos/connection.js')
const TDengineConstant = require('./nodetaos/constants.js')
const TaosBind = require('./nodetaos/taosBind')
module.exports = {
connect: function (connection = {}) {
return new TDengineConnection(connection);
},
SCHEMALESS_PROTOCOL: TDengineConstant.SCHEMALESS_PROTOCOL,
SCHEMALESS_PRECISION: TDengineConstant.SCHEMALESS_PRECISION,
TaosBind,
}
\ No newline at end of file
......@@ -392,14 +392,14 @@ static char* formatTimestamp(char* buf, int64_t val, int precision) {
FILETIME b; // unit is 100ns
ULARGE_INTEGER c;
SystemTimeToFileTime(&a,&b);
c.LowPart = b.dwLowDateTime;
c.HighPart = b.dwHighDateTime;
c.QuadPart+=tt*10000000;
b.dwLowDateTime=c.LowPart;
b.dwHighDateTime=c.HighPart;
FileTimeToLocalFileTime(&b,&b);
FileTimeToSystemTime(&b,&a);
int pos = sprintf(buf,"%02d-%02d-%02d %02d:%02d:%02d", a.wYear, a.wMonth,a.wDay, a.wHour, a.wMinute, a.wSecond);
if (precision == TSDB_TIME_PRECISION_NANO) {
sprintf(buf + pos, ".%09d", ms);
} else if (precision == TSDB_TIME_PRECISION_MICRO) {
......@@ -442,6 +442,7 @@ static void dumpFieldToFile(FILE* fp, const char* val, TAOS_FIELD* field, int32_
return;
}
int n;
char buf[TSDB_MAX_BYTES_PER_ROW];
switch (field->type) {
case TSDB_DATA_TYPE_BOOL:
......@@ -475,7 +476,12 @@ static void dumpFieldToFile(FILE* fp, const char* val, TAOS_FIELD* field, int32_
fprintf(fp, "%.5f", GET_FLOAT_VAL(val));
break;
case TSDB_DATA_TYPE_DOUBLE:
fprintf(fp, "%.9f", GET_DOUBLE_VAL(val));
n = snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%*.9f", length, GET_DOUBLE_VAL(val));
if (n > MAX(25, length)) {
fprintf(fp, "%*.15e", length, GET_DOUBLE_VAL(val));
} else {
fprintf(fp, "%s", buf);
}
break;
case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_NCHAR:
......@@ -631,6 +637,7 @@ static void printField(const char* val, TAOS_FIELD* field, int width, int32_t le
return;
}
int n;
char buf[TSDB_MAX_BYTES_PER_ROW];
switch (field->type) {
case TSDB_DATA_TYPE_BOOL:
......@@ -664,7 +671,12 @@ static void printField(const char* val, TAOS_FIELD* field, int width, int32_t le
printf("%*.5f", width, GET_FLOAT_VAL(val));
break;
case TSDB_DATA_TYPE_DOUBLE:
printf("%*.9f", width, GET_DOUBLE_VAL(val));
n = snprintf(buf, TSDB_MAX_BYTES_PER_ROW, "%*.9f", width, GET_DOUBLE_VAL(val));
if (n > MAX(25, width)) {
printf("%*.15e", width, GET_DOUBLE_VAL(val));
} else {
printf("%s", buf);
}
break;
case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_NCHAR:
......
Subproject commit d6baa48620fcbff857642c4ec10e3c48226ca97c
Subproject commit edcecea26114a4ba2da625876a263e195ae56fea
......@@ -664,7 +664,8 @@ void* doDestroyFilterInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFil
void setInputDataBlock(SOperatorInfo* pOperator, SQLFunctionCtx* pCtx, SSDataBlock* pBlock, int32_t order);
int32_t getNumOfResult(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx* pCtx, int32_t numOfOutput);
void finalizeQueryResult(SOperatorInfo* pOperator, SQLFunctionCtx* pCtx, SResultRowInfo* pResultRowInfo, int32_t* rowCellInfoOffset);
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows, SQueryRuntimeEnv* runtimeEnv);
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows, SQueryRuntimeEnv* runtimeEnv, bool extendLarge);
void shrinkOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity);
void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity);
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput);
......
......@@ -2192,6 +2192,10 @@ static void copyTopBotRes(SQLFunctionCtx *pCtx, int32_t type) {
// set the corresponding tag data for each record
// todo check malloc failure
if (pCtx->tagInfo.numOfTagCols == 0) {
return ;
}
char **pData = calloc(pCtx->tagInfo.numOfTagCols, POINTER_BYTES);
for (int32_t i = 0; i < pCtx->tagInfo.numOfTagCols; ++i) {
pData[i] = pCtx->tagInfo.pTagCtxList[i]->pOutput;
......@@ -4736,6 +4740,10 @@ static void copySampleFuncRes(SQLFunctionCtx *pCtx, int32_t type) {
pTimestamp++;
}
if (pCtx->tagInfo.numOfTagCols == 0) {
return ;
}
char **tagOutputs = calloc(pCtx->tagInfo.numOfTagCols, POINTER_BYTES);
for (int32_t i = 0; i < pCtx->tagInfo.numOfTagCols; ++i) {
tagOutputs[i] = pCtx->tagInfo.pTagCtxList[i]->pOutput;
......
......@@ -758,7 +758,7 @@ static bool resultRowInterpolated(SResultRow* pResult, SResultTsInterpType type)
}
}
static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_search_fn_t searchFn, TSKEY ekey, int16_t pos,
static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_search_fn_t searchFn, TSKEY ekey, int32_t pos,
int16_t order, int64_t *pData) {
int32_t forwardStep = 0;
......@@ -3682,31 +3682,56 @@ void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i
initCtxOutputBuffer(pCtx, pDataBlock->info.numOfCols);
}
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows, SQueryRuntimeEnv* runtimeEnv) {
// extend the column's capacity to newSize rows (optionally doubling it first)
bool extendColCapacity(SColumnInfoData* pColInfo, int32_t newSize, SQLFunctionCtx* pCtx, int32_t *bufCapacity, bool extendLarge) {
char* p = NULL;
int32_t newCapacity = 0;
if (extendLarge) {
// try double newSize first
newCapacity = newSize * 2;
p = realloc(pColInfo->pData, (size_t)newCapacity * pColInfo->info.bytes);
}
if (p == NULL) {
// fall back to exactly newSize
newCapacity = newSize;
p = realloc(pColInfo->pData, (size_t)newCapacity * pColInfo->info.bytes);
if(p == NULL) {
taosMsleep(1000);
p = realloc(pColInfo->pData, (size_t)newCapacity * pColInfo->info.bytes);
qInfo("MEM realloc memory size %d failed, sleep 1s to try, p=%p", newSize * pColInfo->info.bytes, p);
}
}
if (p != NULL) {
// save new pointer
pColInfo->pData = p;
pCtx->pOutput = p;
(*bufCapacity) = newCapacity;
return true;
}
return false;
}
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows, SQueryRuntimeEnv* runtimeEnv, bool extendLarge) {
SSDataBlock* pDataBlock = pBInfo->pRes;
int32_t newSize = pDataBlock->info.rows + numOfInputRows + 5; // extra output buffer
if ((*bufCapacity) < newSize) {
for(int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
char* p = realloc(pColInfo->pData, ((size_t)newSize) * pColInfo->info.bytes);
if (p != NULL) {
pColInfo->pData = p;
// it starts from the tail of the previously generated results.
pBInfo->pCtx[i].pOutput = pColInfo->pData;
(*bufCapacity) = newSize;
} else {
if (!extendColCapacity(pColInfo, newSize, &pBInfo->pCtx[i], bufCapacity, extendLarge)) {
// allocation failed: abort the query via longjmp
size_t allocateSize = ((size_t)(newSize)) * pColInfo->info.bytes;
qError("can not allocate %zu bytes for output. Rows: %d, colBytes %d",
qError("can not allocate %zu bytes for output. Rows: %d, colBytes %d",
allocateSize, newSize, pColInfo->info.bytes);
longjmp(runtimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
return ;
}
}
}
for (int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
pBInfo->pCtx[i].pOutput = pColInfo->pData + (size_t)pColInfo->info.bytes * pDataBlock->info.rows;
......@@ -3726,6 +3751,26 @@ void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOf
}
}
// shrink pBInfo->pRes memory
void shrinkOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity) {
SSDataBlock* pDataBlock = pBInfo->pRes;
int32_t rows = pDataBlock->info.rows + 5; // keep 5 rows of headroom
// shrink only if the unused space is large
if (*bufCapacity - rows <= 200) {
return ; // no need to shrink
}
// shrink bufCapacity to rows
for(int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
void* pNew = realloc(pColInfo->pData, rows * pColInfo->info.bytes);
if (pNew)
pColInfo->pData = pNew;
}
*bufCapacity = rows;
}
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput) {
bool interpQuery = false;
int32_t tsNum = 0;
......@@ -6019,7 +6064,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
// the pDataBlock are always the same one, no need to call this again
setInputDataBlock(pOperator, pInfo->pCtx, pBlock, order);
updateOutputBuf(&pProjectInfo->binfo, &pProjectInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv);
updateOutputBuf(&pProjectInfo->binfo, &pProjectInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv,false);
projectApplyFunctions(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
if (pTableQueryInfo != NULL) {
......@@ -6088,7 +6133,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
// the pDataBlock are always the same one, no need to call this again
setInputDataBlock(pOperator, pInfo->pCtx, pBlock, order);
updateOutputBuf(&pProjectInfo->binfo, &pProjectInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv);
updateOutputBuf(&pProjectInfo->binfo, &pProjectInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv,false);
projectApplyFunctions(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
if (pTableQueryInfo != NULL) {
......@@ -6603,7 +6648,7 @@ static void doTimeEveryImpl(SOperatorInfo* pOperator, SQLFunctionCtx *pCtx, SSDa
break;
}
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv,false);
}
}
}
......@@ -6623,7 +6668,7 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) {
pRes->info.rows = 0;
if (!pEveryInfo->groupDone) {
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv,false);
doTimeEveryImpl(pOperator, pInfo->pCtx, pEveryInfo->lastBlock, false);
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
......@@ -6659,7 +6704,7 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) {
// the pDataBlock are always the same one, no need to call this again
setInputDataBlock(pOperator, pInfo->pCtx, pBlock, order);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv, false);
doTimeEveryImpl(pOperator, pInfo->pCtx, pBlock, *newgroup);
if (pEveryInfo->groupDone && pOperator->upstream[0]->notify) {
......@@ -6685,7 +6730,7 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) {
if (!pEveryInfo->groupDone) {
pEveryInfo->allDone = true;
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv,false);
doTimeEveryImpl(pOperator, pInfo->pCtx, NULL, false);
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
break;
......@@ -6706,7 +6751,7 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) {
// Return result of the previous group in the firstly.
if (*newgroup) {
if (!pEveryInfo->groupDone) {
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, 0, pOperator->pRuntimeEnv,false);
doTimeEveryImpl(pOperator, pInfo->pCtx, NULL, false);
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
pEveryInfo->existDataBlock = pBlock;
......@@ -6742,7 +6787,7 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) {
// the pDataBlock are always the same one, no need to call this again
setInputDataBlock(pOperator, pInfo->pCtx, pBlock, order);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv);
updateOutputBuf(&pEveryInfo->binfo, &pEveryInfo->bufCapacity, pBlock->info.rows, pOperator->pRuntimeEnv, false);
pEveryInfo->groupDone = false;
......@@ -9520,7 +9565,6 @@ SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, S
}
for (int16_t col = 0; col < numOfOutput; ++col) {
assert(pExprs[col].base.resBytes > 0);
pQueryAttr->resultRowSize += pExprs[col].base.resBytes;
// keep the tag length
......@@ -10106,4 +10150,4 @@ bool queryReadOverCB(void* param) {
return true;
}
return false;
}
\ No newline at end of file
}
......@@ -128,7 +128,7 @@ _err:
int tsdbApplyRtnOnFSet(STsdbRepo *pRepo, SDFileSet *pSet, SRtn *pRtn) {
SDiskID did;
SDFileSet nSet;
SDFileSet nSet = {0};
STsdbFS * pfs = REPO_FS(pRepo);
int level;
......
......@@ -91,10 +91,8 @@ static int tsdbEncodeDFileSetArray(void **buf, SArray *pArray) {
}
static int tsdbDecodeDFileSetArray(void **originBuf, void *buf, SArray *pArray, SFSHeader *pSFSHeader) {
uint64_t nset;
SDFileSet dset;
dset.ver = TSDB_FSET_VER_0; // default value
uint64_t nset = 0;
taosArrayClear(pArray);
buf = taosDecodeFixedU64(buf, &nset);
......@@ -113,6 +111,7 @@ static int tsdbDecodeDFileSetArray(void **originBuf, void *buf, SArray *pArray,
}
for (size_t i = 0; i < nset; i++) {
SDFileSet dset = {0}; // ver is TSDB_FSET_VER_0(0) at default
buf = tsdbDecodeDFileSet(buf, &dset, pSFSHeader->version);
taosArrayPush(pArray, (void *)(&dset));
}
......
......@@ -1171,6 +1171,8 @@ static int32_t offsetSkipBlock(STsdbQueryHandle* q, SBlockInfo* pBlockInfo, int6
q->frows += pBlock->numOfRows; // maybe some rows are still in memory
}
} else {
// rows already read count toward the forbid-skip rows (frows)
q->frows += pBlock->numOfRows;
// put the remainder into pArray
if(pArray == NULL)
pArray = taosArrayInit(1, sizeof(SRange));
......@@ -1237,22 +1239,24 @@ static int32_t offsetSkipBlock(STsdbQueryHandle* q, SBlockInfo* pBlockInfo, int6
q->frows += pBlock->numOfRows; // maybe some rows are still in memory
}
} else {
// put the remainder into pArray
if(pArray == NULL)
pArray = taosArrayInit(1, sizeof(SRange));
if(range.from == -1) {
// rows already read count toward the forbid-skip rows (frows)
q->frows += pBlock->numOfRows;
// put the remainder into pArray
if(pArray == NULL)
pArray = taosArrayInit(1, sizeof(SRange));
if(range.from == -1) {
range.from = i;
} else {
if(range.to - 1 != i) {
// push the previous range
taosArrayPush(pArray, &range);
range.from = i;
} else {
if(range.to - 1 != i) {
// push the previous range
taosArrayPush(pArray, &range);
range.from = i;
}
}
range.to = 0;
taosArrayPush(pArray, &range);
range.from = -1;
break;
}
range.to = 0;
taosArrayPush(pArray, &range);
range.from = -1;
break;
}
}
......
......@@ -457,7 +457,7 @@ static int32_t tsdbSyncRecvDFileSetArray(SSyncH *pSynch) {
// Create local files and copy from remote
SDiskID did;
SDFileSet fset;
SDFileSet fset = {0};
tfsAllocDisk(fidLevel, &(did.level), &(did.id));
if (did.level == TFS_UNDECIDED_LEVEL) {
......
# generate debug version:
# mkdir debug; cd debug; cmake -DCMAKE_BUILD_TYPE=Debug ..
# generate release version:
# mkdir release; cd release; cmake -DCMAKE_BUILD_TYPE=Release ..
CMAKE_MINIMUM_REQUIRED(VERSION 3.0...3.20)
PROJECT(TDengine)
SET(CMAKE_C_STANDARD 11)
ADD_SUBDIRECTORY(examples/c)
ADD_SUBDIRECTORY(tsim)
ADD_SUBDIRECTORY(test/c)
ADD_SUBDIRECTORY(comparisonTest/tdengine)
### Prepare development environment
1. sudo apt install
build-essential cmake net-tools python-pip python-setuptools python3-pip
python3-setuptools valgrind psmisc curl
2. git clone <https://github.com/taosdata/TDengine>; cd TDengine
3. mkdir debug; cd debug; cmake ..; make; sudo make install
4. cd ../tests && pip3 install -r requirements.txt
> Note: Both Python2 and Python3 are currently supported by the Python test
> framework. Since Python2 has not been officially supported by the Python
> Software Foundation since January 1, 2020, new test cases should be
> guaranteed to run correctly on Python3.
> For Python2, please consider staying compatible where possible without
> additional burden.
>
> If you use a newer Linux distribution such as Ubuntu 20.04, which no longer
> includes Python2, please do not install Python2-related packages.
>
> <https://nakedsecurity.sophos.com/2020/01/03/python-is-dead-long-live-python/>
### How to run Python test suite
1. cd \<TDengine\>/tests/pytest
2. ./smoketest.sh \# for smoke test
3. ./smoketest.sh -g \# for memory leak detection test with valgrind
4. ./fulltest.sh \# for full test
> Note1: TDengine daemon's configuration and data files are stored in the
> \<TDengine\>/sim directory. As a historical design, it is the same location
> used by the TSIM scripts, so once a TSIM script has run with sudo privileges,
> the directory is owned by TSIM and a Python script run by a normal user can
> no longer write to it. You need to remove the directory completely before
> running the Python test cases. We should consider using two different
> locations for TSIM and the Python scripts.
> Note2: if you need to debug a crash with a core dump, manually edit
> smoketest.sh or fulltest.sh to add "ulimit -c unlimited" before the script
> line. You can then look for the core file in \<TDengine\>/tests/pytest after
> the program crashes.
### How to add a new test case
**1. TSIM test cases:**
TSIM is the testing framework that has been used internally. It is still used, as a legacy system, to run the test cases we developed in the past. We are turning to Python for new test cases and gradually abandoning TSIM.
**2. Python test cases:**
**2.1 Please refer to \<TDengine\>/tests/pytest/insert/basic.py to add a new
test case.** The new test case must implement 3 functions: self.init() and
self.stop() can simply copy the contents of insert/basic.py, while the test
logic is implemented in self.run(). You can refer to the code in the util
directory for more information; a minimal skeleton is sketched below.
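A minimal skeleton looks like this (a sketch based on the insert/basic.py
pattern; exact imports and helper names may differ slightly between branches):

from util.log import *
from util.cases import *
from util.sql import *

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.prepare()
        tdSql.execute('create table t (ts timestamp, v int)')
        tdSql.execute('insert into t values (now, 1)')
        tdSql.query('select * from t')
        tdSql.checkRows(1)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addLinux(__file__, TDTestCase())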
**2.2 Edit smoketest.sh to add the path and filename of the new test case**
Note: The Python test framework may continue to be improved in the future,
hopefully to provide more functionality and make test cases easier to write.
The way of writing test cases described above may also be affected by such
changes.
**2.3 What test.py does in detail:**
test.py is the entry program for test case execution and monitoring.
test.py supports the following options:
\-f --file, specifies the test case file name to be executed
\-p --path, specifies the deployment path
\-m --master, specifies the master server IP for cluster deployment
\-c --cluster, tests the cluster function
\-s --stop, terminates all running nodes
\-g --valgrind, loads valgrind for the memory leak detection test
\-h --help, displays help
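For example, a typical single-case run from \<TDengine\>/tests/pytest looks like:

python3 test.py -f insert/basic.py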
**2.4 What util/log.py does in detail:**
log.py is quite simple; the main thing is that it can print output in
different colors as needed. success() should be called for successful test
case execution and prints green text. exit() should be called on test
failure; it prints red text and exits the program.
**util/log.py**
...
    def info(self, info):
        print("%s %s" % (datetime.datetime.now(), info))

    def sleep(self, sec):
        print("%s sleep %d seconds" % (datetime.datetime.now(), sec))
        time.sleep(sec)

    def debug(self, err):
        print("\\033[1;36m%s %s\\033[0m" % (datetime.datetime.now(), err))

    def success(self, info):
        print("\\033[1;32m%s %s\\033[0m" % (datetime.datetime.now(), info))

    def notice(self, err):
        print("\\033[1;33m%s %s\\033[0m" % (datetime.datetime.now(), err))

    def exit(self, err):
        print("\\033[1;31m%s %s\\033[0m" % (datetime.datetime.now(), err))
        sys.exit(1)

    def printNoPrefix(self, info):
        print("\\033[1;36m%s\\033[0m" % (info))
...
**2.5 What util/sql.py does in detail:**
sql.py is mainly used to execute SQL statements that manipulate the database;
the code is excerpted and commented as follows:
**util/sql.py**
\# prepare() is mainly used to set up the environment of test tables and
data; it sets up the database db for testing. Do not call prepare() if you
need to test the database operation commands themselves.
def prepare(self):
tdLog.info("prepare database:db")
self.cursor.execute('reset query cache')
self.cursor.execute('drop database if exists db')
self.cursor.execute('create database db')
self.cursor.execute('use db')
...
\# query() is mainly used to execute select statements for normal syntax input
def query(self, sql):
...
\# error() is mainly used to execute a select statement with wrong syntax;
the error is expected and will be caught as reasonable behavior. If it is
not caught, the test has failed.
def error()
...
\# checkRows() is used to check the number of rows returned after calling
query(select ...).
def checkRows(self, expectRows):
...
\# checkData() is used to check the returned result data after calling
query(select ...); failure to meet the expectation means the test failed.
def checkData(self, row, col, data):
...
\# getData() returns the result data at the given row and column after
calling query(select ...).
def getData(self, row, col):
...
\# execute() is used to execute sql and returns the number of affected rows
def execute(self, sql):
...
\# executeTimes() executes the same sql statement multiple times
def executeTimes(self, sql, times):
...
\# checkAffectedRows() checks whether the number of affected rows is as expected
def checkAffectedRows(self, expectAffectedRows):
...
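A typical assertion sequence inside a case's run() then looks like this
(a sketch; it assumes a table t exists and holds exactly one row):

tdSql.query('select count(*) from t')
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)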
### CI submission acceptance principles
- Every commit / PR must compile. Currently, warnings are treated as errors,
so warnings must also be resolved.
- Test cases that already exist must pass.
- Because CI is very important for supporting the build and automatic test
procedure, it is necessary to test a test case manually before adding it,
iterating as many times as possible to ensure that the test case provides
stable and reliable results once added.
> Note: In the future, according to requirements and test development
> progress, stress testing, performance testing, code style checking,
> and other features will be added on top of functional testing.
def pre_test(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python
'''
return 1
}
def pre_test_p(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python
'''
return 1
}
pipeline {
agent none
environment{
WK = '/data/lib/jenkins/workspace/TDinternal'
WKC= '/data/lib/jenkins/workspace/TDinternal/community'
}
stages {
stage('Parallel test stage') {
parallel {
stage('pytest') {
agent{label 'slad1'}
steps {
pre_test_p()
sh '''
cd ${WKC}/tests
find pytest -name '*'sql|xargs rm -rf
./test-all.sh pytest
date'''
}
}
stage('test_b1') {
agent{label 'slad2'}
steps {
pre_test()
sh '''
cd ${WKC}/tests
./test-all.sh b1
date'''
}
}
stage('test_crash_gen') {
agent{label "slad3"}
steps {
pre_test()
sh '''
cd ${WKC}/tests/pytest
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
./crash_gen.sh -a -p -t 4 -s 2000
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_crash_gen_val_log.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_taosd_val_log.sh
'''
}
sh '''
nohup taosd >/dev/null &
sleep 10
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/gotest
bash batchtest.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
python3 PythonChecker.py
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
mvn clean package >/dev/null
java -jar target/JdbcRestfulDemo-jar-with-dependencies.jar
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
cd ${JENKINS_HOME}/workspace/nodejs
node nodejsChecker.js host=localhost
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
dotnet run
'''
}
sh '''
pkill -9 taosd || echo 1
cd ${WKC}/tests
./test-all.sh b2
date
'''
sh '''
cd ${WKC}/tests
./test-all.sh full unit
date'''
}
}
stage('test_valgrind') {
agent{label "slad4"}
steps {
pre_test()
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
nohup taosd >/dev/null &
sleep 10
python3 concurrent_inquiry.py -c 1
'''
}
sh '''
cd ${WKC}/tests
./test-all.sh full jdbc
date'''
sh '''
cd ${WKC}/tests/pytest
./valgrind-test.sh 2>&1 > mem-error-out.log
./handle_val_log.sh
date
cd ${WKC}/tests
./test-all.sh b3
date'''
sh '''
date
cd ${WKC}/tests
./test-all.sh full example
date'''
}
}
stage('arm64_build'){
agent{label 'arm64'}
steps{
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
stage('arm32_build'){
agent{label 'arm32'}
steps{
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
}
}
}
}
post {
success {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' SUCCESS",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:green"> Successful </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
failure {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' FAIL",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:red"> Failure </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
}
}
\ No newline at end of file
datastax-java-driver {
basic.request {
timeout = 200000 seconds
}
}
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.cassandra.test</groupId>
<artifactId>cassandratest</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugins</artifactId>
<version>30</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.1.0</version>
<configuration>
<archive>
<manifest>
<mainClass>CassandraTest</mainClass>
</manifest>
</archive>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>8</source>
<target>8</target>
</configuration>
</plugin>
</plugins>
</build>
<name>cassandratest</name>
<!-- FIXME change it to the project's website -->
<url>http://www.example.com</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.datastax.oss</groupId>
<artifactId>java-driver-core</artifactId>
<version>4.1.0</version>
</dependency>
<dependency>
<groupId>com.datastax.oss</groupId>
<artifactId>java-driver-query-builder</artifactId>
<version>4.1.0</version>
</dependency>
<dependency>
<groupId>com.datastax.oss</groupId>
<artifactId>java-driver-mapper-runtime</artifactId>
<version>4.1.0</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.7</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.7</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-1.2-api</artifactId>
<version>2.8.2</version>
</dependency>
</dependencies>
</project>
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.*;
import com.datastax.oss.driver.api.core.session.*;
import com.datastax.oss.driver.api.core.config.*;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
//import com.datastax.driver.core.Cluster;
import java.io.BufferedWriter;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.FileReader;
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.Random;
import java.math.*;
import java.lang.reflect.Method;
public class CassandraTest{
public static void main(String args[]) {
// begin to parse argument
String datadir = "/home/ubuntu/testdata";
String sqlfile = "/home/ubuntu/fang/cassandra/q1.txt";
String cfgfile = "/home/ubuntu/fang/cassandra/application.conf";
boolean q4flag = false;
int numOfRows = 1000000;
int numOfFiles =0;
int numOfClients =0;
int rowsPerRequest =0;
for (int i = 0; i < args.length; ++i) {
if (args[i].equalsIgnoreCase("-dataDir")) {
if (i < args.length - 1) {
datadir = args[++i];
}
} else if (args[i].equalsIgnoreCase("-numofFiles")) {
if (i < args.length - 1) {
numOfFiles = Integer.parseInt(args[++i]);
}
} else if (args[i].equalsIgnoreCase("-rowsPerRequest")) {
if (i < args.length - 1) {
rowsPerRequest = Integer.parseInt(args[++i]);
}
} else if (args[i].equalsIgnoreCase("-writeClients")) {
if (i < args.length - 1) {
numOfClients = Integer.parseInt(args[++i]);
}
} else if (args[i].equalsIgnoreCase("-sql")) {
sqlfile = args[++i];
} else if (args[i].equalsIgnoreCase("-timetest")) {
q4flag = true;
} else if (args[i].equalsIgnoreCase("-conf")) {
cfgfile = args[++i];
}
}
// the configuration file below makes sure there is no timeout error
File confile = new File(cfgfile);
System.out.println("parameters\n");
if (numOfFiles >0) {
// write data
System.out.printf("----dataDir:%s\n", datadir);
System.out.printf("----numOfFiles:%d\n", numOfFiles);
System.out.printf("----numOfClients:%d\n", numOfClients);
System.out.printf("----rowsPerRequest:%d\n", rowsPerRequest);
// connect to cassandra server
System.out.printf("----connecting to cassandra server\n");
try {
CqlSession session = CqlSession.builder()
.withConfigLoader(DriverConfigLoader.fromFile(confile))
.build();
session.execute("drop keyspace if exists cassandra");
session.execute("CREATE KEYSPACE if not exists cassandra WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}");
if (q4flag) {
session.execute("create table if not exists cassandra.test (devid int, devname text, devgroup int, ts bigint, minute bigint, temperature int, humidity float ,primary key (minute,ts,devgroup,devid,devname))");
} else {
session.execute("create table if not exists cassandra.test (devid int, devname text, devgroup int, ts bigint, temperature int, humidity float ,primary key (devgroup,devid,devname,ts))");
}
session.close();
System.out.printf("----created keyspace cassandra and table test\n");
// begin to insert data
System.out.printf("----begin to insert data\n");
long startTime = System.currentTimeMillis();
int a = numOfFiles/numOfClients;
int b = numOfFiles%numOfClients;
int last = 0;
WriteThread[] writethreads = new WriteThread[numOfClients];
int[] wargs = new int[2]; // data file start, end
wargs[0] = numOfRows; //rows to be read from each file
wargs[1] = rowsPerRequest;
int fstart =0;
int fend =0;
for (int i = 0; i<numOfClients; ++i) {
if (i<b) {
fstart = last;
fend = last+a;
last = last+a+1;
writethreads[i] = new WriteThread(fstart,fend,wargs,datadir,q4flag);
System.out.printf("----Thread %d begin to write\n",i);
writethreads[i].start();
} else {
fstart = last;
fend = last+a-1;
last = last+a;
writethreads[i] = new WriteThread(fstart,fend,wargs,datadir,q4flag);
System.out.printf("----Thread %d begin to write\n",i);
writethreads[i].start();
}
}
for (int i =0; i<numOfClients; ++i) {
try {
writethreads[i].join();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
long stopTime = System.currentTimeMillis();
float elapseTime = stopTime - startTime;
elapseTime = elapseTime/1000;
float speeds = numOfRows*numOfFiles/elapseTime;
System.out.printf("---- insertation speed: %f Rows/Second\n",speeds);
} catch (Exception ex) {
ex.printStackTrace();
System.exit(1);
} finally {
System.out.printf("---- insertion end\n");
}
// above:write part; below: read part;
} else {
// query data begin
System.out.printf("----sql command file:%s\n", sqlfile);
// connect to cassandra server
try {
CqlSession session = CqlSession.builder()
.withConfigLoader(DriverConfigLoader.fromFile(confile))
.build();
//session.execute("use cassandra;");
BufferedReader br = null;
String line = "";
try {
br = new BufferedReader(new FileReader(sqlfile));
while ((line = br.readLine()) != null && line.length()>10) {
long startTime = System.currentTimeMillis();
// begin to query one line command //
// end querying one line command
try {
ResultSet results = session.execute(line);
long icounter = 0;
for (Row row : results) {
icounter++;
}
long stopTime = System.currentTimeMillis();
float elapseTime = stopTime - startTime;
elapseTime = elapseTime/1000;
System.out.printf("----spend %f seconds to query: %s\n", elapseTime, line);
} catch (Exception ex) {
ex.printStackTrace();
System.out.printf("---- query failed!\n");
System.exit(1);
}
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (br != null) {
try {
br.close();
} catch (IOException e) {
e.printStackTrace();
}
}
session.close();
}
} catch (Exception ex) {
ex.printStackTrace();
} finally {
System.out.println("query end:----\n");
}
} // end write or query
System.exit(0);
}// end main
}// end class
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.math.*;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.*;
import com.datastax.oss.driver.api.core.session.*;
import com.datastax.oss.driver.api.core.config.*;
public class WriteThread extends Thread {
private int[] wargs; // fstart, fend, rows to be read, rows perrequest
private String fdir;
private int fstart;
private int fend;
private boolean q4flag;
public WriteThread (int fstart, int fend,int[] wargs, String fdir, boolean q4flag) {
this.fstart = fstart;
this.fend = fend;
this.fdir = fdir;
this.wargs = wargs;
this.q4flag = q4flag;
}
// begin to insert in this thread
public void run() {
/*
// this configuration file makes sure no timeout error
File confile = new File("/home/ubuntu/fang/cassandra/application.conf");
*/
// connect to server
try {
CqlSession session = CqlSession.builder()
//.withConfigLoader(DriverConfigLoader.fromFile(confile))
.build();
//session.execute("use cassandra");
int tominute = 6000;
for (int i=fstart; i<=fend; i++) {
String csvfile;
csvfile = fdir + "/testdata"+ Integer.toString(i)+".csv";
BufferedReader br = null;
String line = "";
String cvsSplitBy = " ";
try {
br = new BufferedReader(new FileReader(csvfile));
System.out.println("---- begin to read file " +csvfile+"\n");
for (int itotalrow =0; itotalrow<wargs[0]; itotalrow=itotalrow+wargs[1]) {
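// each loop iteration assembles one batched CQL statement of the form:
//   BEGIN BATCH insert into cassandra.test (...) values (...); ... APPLY BATCH;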
String cqlstr = "BEGIN BATCH ";
for (int irow =0; irow<wargs[1]; ++irow) {
line = br.readLine();
if (line !=null) {
String[] meter = line.split(cvsSplitBy);
BigInteger tminute = new BigInteger(meter[3]);
tminute = tminute.divide(BigInteger.valueOf(tominute));
if (q4flag) {
cqlstr = cqlstr + "insert into cassandra.test (devid,devname,devgroup,ts, minute,temperature,humidity) values ";
cqlstr = cqlstr +"("+meter[0] +"," +"'" +meter[1] +"'" +"," +meter[2] +"," + meter[3] +",";
cqlstr = cqlstr +tminute.toString() +"," +meter[4] +"," +meter[5] +");";
} else {
cqlstr = cqlstr + "insert into cassandra.test (devid,devname,devgroup,ts,temperature,humidity) values ";
cqlstr = cqlstr +"("+meter[0] +"," +"'" +meter[1] +"'" +"," +meter[2] +"," + meter[3] +",";
cqlstr = cqlstr +meter[4] +"," +meter[5] +");";
}
} // if this line is not null
}//end row iteration in one batch
cqlstr = cqlstr+" APPLY BATCH;";
try {
//System.out.println(cqlstr+"----\n");
session.execute(cqlstr);
} catch (Exception ex) {
ex.printStackTrace();
}
}// end one file reading
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (br != null) {
try {
br.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}//end file iteration
session.close();
} catch (Exception ex) {
ex.printStackTrace();
}
}//end run
}//end class
select * from cassandra.test where devgroup=0 allow filtering;
select * from cassandra.test where devgroup=10 allow filtering;
select * from cassandra.test where devgroup=20 allow filtering;
select * from cassandra.test where devgroup=30 allow filtering;
select * from cassandra.test where devgroup=40 allow filtering;
select * from cassandra.test where devgroup=50 allow filtering;
select * from cassandra.test where devgroup=60 allow filtering;
select * from cassandra.test where devgroup=70 allow filtering;
select * from cassandra.test where devgroup=80 allow filtering;
select * from cassandra.test where devgroup=90 allow filtering;
select count(*) from cassandra.test where devgroup<10 allow filtering;
select count(*) from cassandra.test where devgroup<20 allow filtering;
select count(*) from cassandra.test where devgroup<30 allow filtering;
select count(*) from cassandra.test where devgroup<40 allow filtering;
select count(*) from cassandra.test where devgroup<50 allow filtering;
select count(*) from cassandra.test where devgroup<60 allow filtering;
select count(*) from cassandra.test where devgroup<70 allow filtering;
select count(*) from cassandra.test where devgroup<80 allow filtering;
select count(*) from cassandra.test where devgroup<90 allow filtering;
select count(*) from cassandra.test allow filtering;
select avg(temperature) from cassandra.test where devgroup<10 allow filtering;
select avg(temperature) from cassandra.test where devgroup<20 allow filtering;
select avg(temperature) from cassandra.test where devgroup<30 allow filtering;
select avg(temperature) from cassandra.test where devgroup<40 allow filtering;
select avg(temperature) from cassandra.test where devgroup<50 allow filtering;
select avg(temperature) from cassandra.test where devgroup<60 allow filtering;
select avg(temperature) from cassandra.test where devgroup<70 allow filtering;
select avg(temperature) from cassandra.test where devgroup<80 allow filtering;
select avg(temperature) from cassandra.test where devgroup<90 allow filtering;
select avg(temperature) from cassandra.test allow filtering;
select sum(temperature) from cassandra.test where devgroup<10 allow filtering;
select sum(temperature) from cassandra.test where devgroup<20 allow filtering;
select sum(temperature) from cassandra.test where devgroup<30 allow filtering;
select sum(temperature) from cassandra.test where devgroup<40 allow filtering;
select sum(temperature) from cassandra.test where devgroup<50 allow filtering;
select sum(temperature) from cassandra.test where devgroup<60 allow filtering;
select sum(temperature) from cassandra.test where devgroup<70 allow filtering;
select sum(temperature) from cassandra.test where devgroup<80 allow filtering;
select sum(temperature) from cassandra.test where devgroup<90 allow filtering;
select sum(temperature) from cassandra.test allow filtering;
select max(temperature) from cassandra.test where devgroup<10 allow filtering;
select max(temperature) from cassandra.test where devgroup<20 allow filtering;
select max(temperature) from cassandra.test where devgroup<30 allow filtering;
select max(temperature) from cassandra.test where devgroup<40 allow filtering;
select max(temperature) from cassandra.test where devgroup<50 allow filtering;
select max(temperature) from cassandra.test where devgroup<60 allow filtering;
select max(temperature) from cassandra.test where devgroup<70 allow filtering;
select max(temperature) from cassandra.test where devgroup<80 allow filtering;
select max(temperature) from cassandra.test where devgroup<90 allow filtering;
select max(temperature) from cassandra.test allow filtering;
select min(temperature) from cassandra.test where devgroup<10 allow filtering;
select min(temperature) from cassandra.test where devgroup<20 allow filtering;
select min(temperature) from cassandra.test where devgroup<30 allow filtering;
select min(temperature) from cassandra.test where devgroup<40 allow filtering;
select min(temperature) from cassandra.test where devgroup<50 allow filtering;
select min(temperature) from cassandra.test where devgroup<60 allow filtering;
select min(temperature) from cassandra.test where devgroup<70 allow filtering;
select min(temperature) from cassandra.test where devgroup<80 allow filtering;
select min(temperature) from cassandra.test where devgroup<90 allow filtering;
select min(temperature) from cassandra.test allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<10 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<20 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<30 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<40 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<50 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<60 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<70 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<80 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<90 group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test group by devgroup allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<10 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<20 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<30 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<40 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<50 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<60 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<70 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<80 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test where devgroup<90 group by minute allow filtering;
select count(temperature), sum(temperature), avg(temperature) from cassandra.test group by minute;
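The statements above are the Cassandra half of the query benchmark, run against the `cassandra.test` table with `ALLOW FILTERING` because `devgroup` is filtered without a matching partition key. A minimal way to replay them from the shell, assuming the statements are saved as `cassandra_queries.cql` and a node listens on localhost (both the file name and the contact point are assumptions, not part of the test assets):

```sh
# File name and contact point are assumptions, for illustration only.
cqlsh 127.0.0.1 9042 -f cassandra_queries.cql

# Or time a single aggregate from the list:
time cqlsh 127.0.0.1 9042 -e "select count(*) from cassandra.test allow filtering;"
```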
package com.taosdata.generator;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.Random;

public class DataGenerator {

    /*
     * Simulates the fluctuation of a sensor reading around a central value.
     * Samples are drawn from a Gaussian distribution truncated at three
     * standard deviations, then scaled so the output stays within
     * [center - range, center + range]. For humidity the valid range is [0, 100].
     */
    public static class ValueGen {
        int center;
        int range;
        Random rand;

        public ValueGen(int center, int range) {
            this.center = center;
            this.range = range;
            this.rand = new Random();
        }

        double next() {
            double v = this.rand.nextGaussian();
            if (v < -3) {
                v = -3;
            }
            if (v > 3) {
                v = 3;
            }
            // v lies in [-3, 3]; dividing range by 3 maps it into [-range, range].
            return (this.range / 3.00) * v + center;
        }
    }

    // data scale
    private static int timestep = 1000; // sampling interval in milliseconds
    private static long dataStartTime = 1563249700000L;
    private static int deviceId = 0;
    private static String tagPrefix = "dev_";

    public static void main(String[] args) {
        int numOfDevice = 10000;
        int numOfFiles = 100;
        int rowsPerDevice = 10000;
        String directory = "~/"; // note: Java does not expand "~"; pass an absolute path via -dataDir

        // parse command-line options: -numOfDevices, -numOfFiles, -rowsPerDevice, -dataDir
        for (int i = 0; i < args.length; i++) {
            if (args[i].equalsIgnoreCase("-numOfDevices")) {
                if (i < args.length - 1) {
                    numOfDevice = Integer.parseInt(args[++i]);
                } else {
                    System.out.println("'-numOfDevices' requires a parameter, default is 10000");
                }
            } else if (args[i].equalsIgnoreCase("-numOfFiles")) {
                if (i < args.length - 1) {
                    numOfFiles = Integer.parseInt(args[++i]);
                } else {
                    System.out.println("'-numOfFiles' requires a parameter, default is 100");
                }
            } else if (args[i].equalsIgnoreCase("-rowsPerDevice")) {
                if (i < args.length - 1) {
                    rowsPerDevice = Integer.parseInt(args[++i]);
                } else {
                    System.out.println("'-rowsPerDevice' requires a parameter, default is 10000");
                }
            } else if (args[i].equalsIgnoreCase("-dataDir")) {
                if (i < args.length - 1) {
                    directory = args[++i];
                } else {
                    System.out.println("'-dataDir' requires a parameter, default is ~/");
                }
            }
        }

        System.out.println("parameters");
        System.out.printf("----dataDir:%s\n", directory);
        System.out.printf("----numOfFiles:%d\n", numOfFiles);
        System.out.printf("----numOfDevice:%d\n", numOfDevice);
        System.out.printf("----rowsPerDevice:%d\n", rowsPerDevice);

        int numOfDevPerFile = numOfDevice / numOfFiles;
        long ts = dataStartTime;

        // row format: deviceId, tag (dev_<deviceId>), devgroup, timestamp, humidity(int), temperature(double)
        int humidityDistRadius = 35;
        int tempDistRadius = 17;

        for (int i = 0; i < numOfFiles; ++i) { // prepare one data file per iteration
            dataStartTime = ts;
            String path = directory;
            try {
                path += "/testdata" + String.valueOf(i) + ".csv";
                getDataInOneFile(path, rowsPerDevice, numOfDevPerFile, humidityDistRadius, tempDistRadius);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private static void getDataInOneFile(String path, int rowsPerDevice, int num, int humidityDistRadius, int tempDistRadius) throws IOException {
        DecimalFormat df = new DecimalFormat("0.0000");
        long startTime = dataStartTime;
        FileWriter fw = new FileWriter(new File(path));
        BufferedWriter bw = new BufferedWriter(fw);

        for (int i = 0; i < num; ++i) {
            deviceId += 1;
            Random rand = new Random();

            // pick a humidity center so that [center - radius, center + radius] stays within [0, 100]
            double centralVal = Math.abs(rand.nextInt(100));
            if (centralVal < humidityDistRadius) {
                centralVal = humidityDistRadius;
            }
            if (centralVal + humidityDistRadius > 100) {
                centralVal = 100 - humidityDistRadius;
            }
            DataGenerator.ValueGen humidityDataGen = new DataGenerator.ValueGen((int) centralVal, humidityDistRadius);

            // every device starts from the same timestamp within a file
            dataStartTime = startTime;
            centralVal = Math.abs(rand.nextInt(22));
            DataGenerator.ValueGen tempDataGen = new DataGenerator.ValueGen((int) centralVal, tempDistRadius);

            for (int j = 0; j < rowsPerDevice; ++j) {
                int humidity = (int) humidityDataGen.next();
                double temp = tempDataGen.next();
                int deviceGroup = deviceId % 100;

                StringBuilder sb = new StringBuilder();
                sb.append(deviceId).append(" ").append(tagPrefix).append(deviceId).append(" ").append(deviceGroup)
                        .append(" ").append(dataStartTime).append(" ").append(humidity).append(" ")
                        .append(df.format(temp));
                bw.write(sb.toString());
                bw.write("\n");
                dataStartTime += timestep;
            }
        }

        bw.close();
        fw.close();
        System.out.printf("file:%s generated\n", path);
    }
}
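To produce the CSV inputs, the generator can be compiled and run directly with a JDK. The flags below are exactly the ones parsed in `main`; the output path is an example, and the directory must already exist because `FileWriter` does not create it:

```sh
# Compile from the source root that contains com/taosdata/generator/DataGenerator.java
javac com/taosdata/generator/DataGenerator.java

# Example run with the default scale: 10000 devices spread over 100 files,
# 10000 rows per device, written to /tmp/testdata (example path).
mkdir -p /tmp/testdata
java com.taosdata.generator.DataGenerator -numOfDevices 10000 -numOfFiles 100 \
    -rowsPerDevice 10000 -dataDir /tmp/testdata
```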
select * from devices where devgroup='0';
select * from devices where devgroup='10';
select * from devices where devgroup='20';
select * from devices where devgroup='30';
select * from devices where devgroup='40';
select * from devices where devgroup='50';
select * from devices where devgroup='60';
select * from devices where devgroup='70';
select * from devices where devgroup='80';
select * from devices where devgroup='90';
select count(temperature) from devices where devgroup=~/[1-1][0-9]/;
select count(temperature) from devices where devgroup=~/[1-2][0-9]/;
select count(temperature) from devices where devgroup=~/[1-3][0-9]/;
select count(temperature) from devices where devgroup=~/[1-4][0-9]/;
select count(temperature) from devices where devgroup=~/[1-5][0-9]/;
select count(temperature) from devices where devgroup=~/[1-6][0-9]/;
select count(temperature) from devices where devgroup=~/[1-7][0-9]/;
select count(temperature) from devices where devgroup=~/[1-8][0-9]/;
select count(temperature) from devices where devgroup=~/[1-9][0-9]/;
select count(temperature) from devices;
select mean(temperature) from devices where devgroup=~/[1-1][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-2][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-3][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-4][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-5][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-6][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-7][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-8][0-9]/;
select mean(temperature) from devices where devgroup=~/[1-9][0-9]/;
select mean(temperature) from devices;
select sum(temperature) from devices where devgroup=~/[1-1][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-2][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-3][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-4][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-5][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-6][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-7][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-8][0-9]/;
select sum(temperature) from devices where devgroup=~/[1-9][0-9]/;
select sum(temperature) from devices;
select max(temperature) from devices where devgroup=~/[1-1][0-9]/;
select max(temperature) from devices where devgroup=~/[1-2][0-9]/;
select max(temperature) from devices where devgroup=~/[1-3][0-9]/;
select max(temperature) from devices where devgroup=~/[1-4][0-9]/;
select max(temperature) from devices where devgroup=~/[1-5][0-9]/;
select max(temperature) from devices where devgroup=~/[1-6][0-9]/;
select max(temperature) from devices where devgroup=~/[1-7][0-9]/;
select max(temperature) from devices where devgroup=~/[1-8][0-9]/;
select max(temperature) from devices where devgroup=~/[1-9][0-9]/;
select max(temperature) from devices;
select min(temperature) from devices where devgroup=~/[1-1][0-9]/;
select min(temperature) from devices where devgroup=~/[1-2][0-9]/;
select min(temperature) from devices where devgroup=~/[1-3][0-9]/;
select min(temperature) from devices where devgroup=~/[1-4][0-9]/;
select min(temperature) from devices where devgroup=~/[1-5][0-9]/;
select min(temperature) from devices where devgroup=~/[1-6][0-9]/;
select min(temperature) from devices where devgroup=~/[1-7][0-9]/;
select min(temperature) from devices where devgroup=~/[1-8][0-9]/;
select min(temperature) from devices where devgroup=~/[1-9][0-9]/;
select min(temperature) from devices;
select spread(temperature) from devices where devgroup=~/[1-1][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-2][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-3][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-4][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-5][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-6][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-7][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-8][0-9]/;
select spread(temperature) from devices where devgroup=~/[1-9][0-9]/;
select spread(temperature) from devices;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-1][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-2][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-3][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-4][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-5][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-6][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-7][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-8][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-9][0-9]/ group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices group by devgroup;
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-1][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-2][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-3][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-4][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-5][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-6][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-7][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-8][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices where devgroup=~/[1-9][0-9]/ group by time(1m);
select count(temperature), sum(temperature), mean(temperature) from devices group by time(1m);
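These are the InfluxDB counterparts of the same benchmark, using regular expressions on `devgroup` in place of numeric range predicates. Assuming the data sits in a database named `test` and the statements are saved as `influx_queries.txt` (both names are assumptions), they can be replayed through the 1.x `influx` CLI:

```sh
# Run one statement directly (database name is an assumption):
influx -database 'test' -execute "select count(temperature) from devices"

# Replay the whole file by feeding it to the CLI on stdin:
influx -database 'test' < influx_queries.txt
```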
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<!-- To store logs under Tomcat's logs/mobileLog directory instead (logback
	     creates the directory automatically), use:
	<substitutionProperty name="logbase" value="${catalina.base}/logs/mobileLog/"
	/> -->
	<substitutionProperty name="logbase" value="${user.dir}/logs/" />
	<jmxConfigurator />
	<!-- Console output -->
	<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
		<layout class="ch.qos.logback.classic.PatternLayout">
			<pattern>%date [%thread] %-5level %logger{80} - %msg%n</pattern>
		</layout>
	</appender>
	<!-- File output: rolls the log daily and backs up files that exceed the size limit -->
	<appender name="logfile"
		class="ch.qos.logback.core.rolling.RollingFileAppender">
		<Encoding>UTF-8</Encoding>
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<File>${logbase}%d{yyyy-MM-dd}.log.html</File>
			<FileNamePattern>${logbase}.%d{yyyy-MM-dd}.log.html.zip
			</FileNamePattern>
		</rollingPolicy>
		<triggeringPolicy
			class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
			<MaxFileSize>2MB</MaxFileSize>
		</triggeringPolicy>
		<layout class="ch.qos.logback.classic.html.HTMLLayout">
			<pattern>%date%level%thread%10logger%file%line%msg</pattern>
		</layout>
	</appender>
	<!-- Output by Email -->
	<!--
	<appender name="Email" class="ch.qos.logback.classic.net.SMTPAppender">
		<SMTPHost>stmp host name</SMTPHost>
		<To>Email Address</To>
		<To>Email Address</To>
		<From>Email Address</From>
		<Subject>TESTING Email Function: %logger{20} - %m</Subject>
		<layout class="ch.qos.logback.classic.html.HTMLLayout">
			<pattern>%date%level%thread%10logger%file%line%msg</pattern>
		</layout>
	</appender> -->
	<!-- Output to Database -->
	<!--
	<appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
		<connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
			<driverClass>com.mysql.jdbc.Driver</driverClass>
			<url>jdbc:mysql://localhost:3306/test</url>
			<user>root</user>
			<password>trend_dev</password>
		</connectionSource>
	</appender> -->
	<root>
		<level value="debug" />
		<appender-ref ref="logfile" />
	</root>
</configuration>
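Logback loads this file automatically when it sits on the classpath as `logback.xml`; to point a JVM at it explicitly, the standard `logback.configurationFile` system property works. The paths and main class below are placeholders:

```sh
# Placeholder paths and main class, for illustration only.
java -Dlogback.configurationFile=/path/to/logback.xml -cp app.jar com.example.Main
```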
CMAKE_MINIMUM_REQUIRED(VERSION 3.0...3.20)
PROJECT(TDengine)

IF (TD_LINUX OR TD_DARWIN)
    add_executable(tdengineTest tdengineTest.c)
    target_link_libraries(tdengineTest taos_static tutil common pthread)
ENDIF()
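The test target builds through the regular CMake flow from within the TDengine source tree (the build directory name is arbitrary):

```sh
mkdir -p build && cd build
cmake ..
make tdengineTest
```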