Commit 1f02bee6 authored by D dapan1121

Merge branch 'develop' into feature/TD-2577

@@ -171,6 +171,8 @@ matrix:
- build-essential
- cmake
- binutils-2.26
- unixodbc
- unixodbc-dev
env:
- DESC="trusty/gcc-4.8/bintuils-2.26 build"
@@ -198,6 +200,8 @@ matrix:
packages:
- build-essential
- cmake
- unixodbc
- unixodbc-dev
before_script:
- export TZ=Asia/Harbin
@@ -252,6 +256,8 @@ matrix:
packages:
- build-essential
- cmake
- unixodbc
- unixodbc-dev
env:
- DESC="arm64 xenial build"
@@ -280,6 +286,7 @@ matrix:
addons:
homebrew:
- cmake
- unixodbc
script:
- cd ${TRAVIS_BUILD_DIR}
......
@@ -16,6 +16,8 @@ SET(TD_GRANT FALSE)
SET(TD_MQTT FALSE)
SET(TD_TSDB_PLUGINS FALSE)
SET(TD_STORAGE FALSE)
SET(TD_TOPIC FALSE)
SET(TD_MODULE FALSE)
SET(TD_COVER FALSE)
SET(TD_MEM_CHECK FALSE)
......
@@ -6,6 +6,7 @@ node {
}
def skipstage=0
def abortPreviousBuilds() {
def currentJobName = env.JOB_NAME
def currentBuildNumber = env.BUILD_NUMBER.toInteger()
@@ -24,7 +25,7 @@ def abortPreviousBuilds() {
build.doKill() //doTerm(),doKill(),doTerm()
}
}
-//abort previous build
+// abort previous build
abortPreviousBuilds()
def abort_previous(){
def buildNumber = env.BUILD_NUMBER as int
@@ -32,24 +33,25 @@ def abort_previous(){
milestone(buildNumber)
}
def pre_test(){
-catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
-sh '''
-sudo rmtaos
-'''
-}
+sh '''
+sudo rmtaos || echo "taosd has not installed"
+'''
sh '''
+killall -9 taosd ||echo "no taosd running"
+killall -9 gdb || echo "no gdb running"
cd ${WKC}
git checkout develop
git reset --hard HEAD~10 >/dev/null
-git pull
+git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
-git --no-pager diff --name-only FETCH_HEAD $(git merge-base FETCH_HEAD develop)|grep -v -E '.*md|//src//connector|Jenkinsfile' || exit 0
+find ${WKC}/tests/pytest -name \'*\'.sql -exec rm -rf {} \\;
cd ${WK}
git reset --hard HEAD~10
git checkout develop
-git pull
+git pull >/dev/null
cd ${WK}
export TZ=Asia/Harbin
date
@@ -79,6 +81,10 @@ pipeline {
changeRequest()
}
steps {
script{
abort_previous()
abortPreviousBuilds()
}
sh'''
cp -r ${WORKSPACE} ${WORKSPACE}.tes
cd ${WORKSPACE}.tes
@@ -115,7 +121,6 @@ pipeline {
sh '''
date
cd ${WKC}/tests
-find pytest -name '*'sql|xargs rm -rf
./test-all.sh p1
date'''
}
@@ -131,7 +136,6 @@ pipeline {
sh '''
date
cd ${WKC}/tests
-find pytest -name '*'sql|xargs rm -rf
./test-all.sh p2
date'''
}
......
[![Build Status](https://travis-ci.org/taosdata/TDengine.svg?branch=master)](https://travis-ci.org/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/master?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
[![tdengine](https://snapcraft.io//tdengine/badge.svg)](https://snapcraft.io/tdengine)
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
简体中文 | [English](./README.md)
# TDengine Introduction
TDengine is a big data platform designed and optimized by TAOS Data for IoT, Connected Vehicles, Industrial IoT, and IT operations and monitoring. Besides a time-series database that is more than 10x faster, it provides caching, data subscription, stream computing and other functionalities that minimize the complexity of development and operation, and its core code, including the clustering feature, is fully open source (under the AGPL v3.0 license).
- 10x or better performance. With an innovative data storage structure, a single core can handle at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points per second, over 10 times faster than existing general-purpose databases.
- Hardware or cloud service cost cut to 1/5. Thanks to the high performance, the computing resources needed are less than 1/5 of a general big data solution; with columnar storage and advanced compression, the storage footprint is less than 1/10 of a general-purpose database.
- Full-stack time-series data processing engine. Database, message queue, cache, and stream computing are built in, so applications no longer need to integrate Kafka/Redis/HBase/Spark and similar software, which greatly reduces development and maintenance cost.
- Powerful analysis capabilities. Whether the data arrived ten years ago or one second ago, it can be queried by simply specifying a time range. Data can be aggregated along the time axis or across devices, and ad-hoc queries can be issued at any time via Shell/Python/R/Matlab.
- Seamless integration with third-party tools. Without a single line of code, TDengine integrates with Telegraf, Grafana, EMQ X, Prometheus, Matlab, and R. MQTT, OPC, Hadoop, Spark and more will follow, and BI tools will connect seamlessly as well.
- Zero operation and zero learning cost. Installation and clustering take seconds, there is no need to split databases or tables, and data is backed up in real time. With standard SQL, JDBC and RESTful support, plus Python/Java/C/C++/Go/Node.js connectors, it works much like MySQL, with no learning curve.
# Documentation
TDengine is a highly efficient platform for storing, querying, and analyzing time-series big data, designed and optimized for IoT, Connected Vehicles, Industrial IoT, and IT operations and monitoring. You can use it much like a relational database such as MySQL, but we recommend reading the documentation carefully before use, especially [Data Model](https://www.taosdata.com/cn/documentation/architecture) and [Data Modeling](https://www.taosdata.com/cn/documentation/model). In addition to this documentation, you are welcome to [download the TDengine White Paper](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf).
# Building
At the moment, the TDengine 2.0 server can only be installed and run on Linux; Windows, macOS and other systems will be supported later. The client can be installed and run on Windows or Linux, and applications on any OS can connect to the server taosd through the RESTful interface. Supported CPUs are X64/ARM64/MIPS64/Alpha64, with ARM32, RISC-V and other architectures to follow. You can install either from [source code](https://www.taosdata.com/cn/getting-started/#通过源码安装) or from [installation packages](https://www.taosdata.com/cn/getting-started/#通过安装包安装). This quick guide only covers installation from source.
## Install tools
### Ubuntu 16.04 and above & Debian:
```bash
sudo apt-get install -y gcc cmake build-essential git
```
### Ubuntu 14.04:
```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```
To build or package the JDBC driver from source, Java JDK 8 (or later) and Apache Maven 2.7 (or later) are required.
Install OpenJDK 8:
```bash
sudo apt-get install -y openjdk-8-jdk
```
Install Apache Maven:
```bash
sudo apt-get install -y maven
```
### CentOS 7:
```bash
sudo yum install -y gcc gcc-c++ make cmake git
```
Install OpenJDK 8:
```bash
sudo yum install -y java-1.8.0-openjdk
```
Install Apache Maven:
```bash
sudo yum install -y maven
```
### CentOS 8 & Fedora:
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
```
Install OpenJDK 8:
```bash
sudo dnf install -y java-1.8.0-openjdk
```
Install Apache Maven:
```bash
sudo dnf install -y maven
```
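Before fetching the source, it can help to confirm that the toolchain above is actually on the PATH. This quick check is only a suggestion, not part of the official guide; the JDK and Maven checks matter only if you plan to build the JDBC driver:
```bash
# Optional sanity check: confirm the build toolchain is available.
gcc --version | head -n 1
cmake --version | head -n 1
git --version
# Only needed for building the JDBC driver:
java -version
mvn -version | head -n 1
```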
## Get the source code
First, clone the source code from GitHub:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
The Go connector and the Grafana plugin live in separate repositories. If you want to install them, run the following command in the TDengine directory:
```bash
git submodule update --init --recursive
```
## Build TDengine
### On Linux platform
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```
On X86-64, X86, arm64 and arm32 platforms, the TDengine build script detects the machine architecture automatically. You can also specify the CPU type manually with the CPUTYPE option, for example aarch64 or aarch32.
aarch64:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
aarch32:
```bash
cmake .. -DCPUTYPE=aarch32 && cmake --build .
```
### On Windows platform
If you use Visual Studio 2013:
Open cmd.exe and, when executing vcvarsall.bat, specify "x86_amd64" for a 64-bit operating system or "x86" for a 32-bit operating system.
```bash
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
If you use Visual Studio 2019 or 2017:
Open cmd.exe and, when executing vcvarsall.bat, specify "x64" for a 64-bit operating system or "x86" for a 32-bit operating system.
```bash
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
Alternatively, find the "Visual Studio < 2019 | 2017 >" entry in the Start menu, choose "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your system, open the command window, and run:
```bash
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
### On Mac OS X platform
Install the Xcode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```
# Installing
If you prefer not to install TDengine, you can run it directly from the shell (see "Quick Run" below). After the build completes, install TDengine with:
```bash
make install
```
You can find more information about the directories and files installed on the system in the [directory and files](https://www.taosdata.com/cn/documentation/administrator#directories) section.
After a successful installation, start the TDengine service in a terminal:
```bash
taosd
```
Then use the TDengine shell to connect to the TDengine service. In a terminal, type:
```bash
taos
```
If the TDengine shell connects to the server successfully, a welcome message and version information are printed; otherwise, an error message is shown.
## Quick Run
To start a TDengine server right after building, run the command below in a terminal:
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, connect to the server with the TDengine shell:
```bash
./build/bin/taos -c test/cfg
```
The option "-c test/cfg" specifies the directory that contains the system configuration file.
# Try TDengine
In the TDengine shell, you can create and drop databases and tables, and run insert and query statements, through SQL commands. For example:
```bash
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
```
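The same session can also be scripted. A minimal sketch, assuming the freshly built `taos` client shown above and its `-s` option for one-shot statements (the paths and the `demo` database name simply follow the example session):
```bash
# Run the demo statements non-interactively; adjust -c if you use a different config directory.
./build/bin/taos -c test/cfg -s "create database if not exists demo"
./build/bin/taos -c test/cfg -s "create table if not exists demo.t (ts timestamp, speed int)"
./build/bin/taos -c test/cfg -s "insert into demo.t values ('2019-07-15 00:00:00', 10) ('2019-07-15 01:00:00', 20)"
./build/bin/taos -c test/cfg -s "select * from demo.t"
```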
# Developing with TDengine
## Official Connectors
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help you develop applications quickly:
- Java
- C/C++
- Python
- Go
- RESTful API
- Node.js
## Third-Party Connectors
There are also some very friendly third-party connectors in the TDengine community ecosystem; their source code is available at the links below.
- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
# Running and Adding Test Cases
TDengine's test framework and all of its test cases are fully open source.
See [here](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) to learn how to run the tests and add new test cases.
# Become a Contributor
See [here](https://www.taosdata.com/cn/contributor/) to learn how to become a TDengine contributor.
# Join the Community
The official TDengine user group "IoT Big Data Group" is open to everyone. Add the WeChat account "tdengine" as a friend (little T) to join the discussion.
# [Who is using TDengine](https://github.com/taosdata/TDengine/issues/2432)
All TDengine users and contributors are welcome to share how they develop with or use TDengine in their work [here](https://github.com/taosdata/TDengine/issues/2432).
@@ -6,6 +6,8 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
English | [简体中文](./README-CN.md)
# What is TDengine?
TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html), designed and optimized for the Internet of Things (IoT), Connected Cars, Industrial IoT, and IT Infrastructure and Application Monitoring. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and cost of development and operation.
@@ -29,7 +31,7 @@ For user manual, system design and architecture, engineering blogs, refer to [TD
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
-To build TDengine, use [CMake](https://cmake.org/) 3.5 or higher versions in the project directory.
+To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
## Install tools
@@ -160,39 +162,42 @@ mkdir debug && cd debug
cmake .. && cmake --build .
```
-# Quick Run
-To quickly start a TDengine server after building, run the command below in terminal:
-```bash
-./build/bin/taosd -c test/cfg
-```
-In another terminal, use the TDengine shell to connect the server:
-```bash
-./build/bin/taos -c test/cfg
-```
-option "-c test/cfg" specifies the system configuration file directory.
# Installing
After building successfully, TDengine can be installed by:
```bash
-make install
+sudo make install
```
-Users can find more information about directories installed on the system in the [directory and files](https://www.taosdata.com/en/documentation/administrator/#Directory-and-Files) section. It should be noted that installing from source code does not configure service management for TDengine.
+Users can find more information about directories installed on the system in the [directory and files](https://www.taosdata.com/en/documentation/administrator/#Directory-and-Files) section. Since version 2.0, installing from source code will also configure service management for TDengine.
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) for it.
To start the service after installation, in a terminal, use:
-```cmd
-taosd
-```
+```bash
+sudo systemctl start taosd
+```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect the TDengine server. In a terminal, use:
-```cmd
+```bash
taos
```
If TDengine shell connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## Quick Run
If you don't want to run TDengine as a service, you can run it in current shell. For example, to quickly start a TDengine server after building, run the command below in terminal:
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine shell to connect the server:
```bash
./build/bin/taos -c test/cfg
```
option "-c test/cfg" specifies the system configuration file directory.
# Try TDengine
It is easy to run SQL commands from TDengine shell which is the same as other SQL databases.
```sql
@@ -245,3 +250,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
Add WeChat “tdengine” to join the group,you can communicate with other users.
# [User List](https://github.com/taosdata/TDengine/issues/2432)
If you are using TDengine and feel it helps or you'd like to do some contributions, please add your company to [user list](https://github.com/taosdata/TDengine/issues/2432) and let us know your needs.
@@ -25,6 +25,14 @@ IF (TD_STORAGE)
ADD_DEFINITIONS(-D_STORAGE)
ENDIF ()
IF (TD_TOPIC)
ADD_DEFINITIONS(-D_TOPIC)
ENDIF ()
IF (TD_MODULE)
ADD_DEFINITIONS(-D_MODULE)
ENDIF ()
IF (TD_GODLL)
ADD_DEFINITIONS(-D_TD_GO_DLL_)
ENDIF ()
......
@@ -9,6 +9,22 @@ ELSEIF (${ACCOUNT} MATCHES "false")
MESSAGE(STATUS "Build without account plugins")
ENDIF ()
IF (${TOPIC} MATCHES "true")
SET(TD_TOPIC TRUE)
MESSAGE(STATUS "Build with topic plugins")
ELSEIF (${TOPIC} MATCHES "false")
SET(TD_TOPIC FALSE)
MESSAGE(STATUS "Build without topic plugins")
ENDIF ()
IF (${TD_MODULE} MATCHES "true")
SET(TD_MODULE TRUE)
MESSAGE(STATUS "Build with module plugins")
ELSEIF (${TOPIC} MATCHES "false")
SET(TD_MODULE FALSE)
MESSAGE(STATUS "Build without module plugins")
ENDIF ()
IF (${COVER} MATCHES "true")
SET(TD_COVER TRUE)
MESSAGE(STATUS "Build with test coverage")
......
@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
#INSTALL(TARGETS taos RUNTIME DESTINATION driver)
#INSTALL(TARGETS shell RUNTIME DESTINATION .)
IF (TD_MVN_INSTALLED)
-INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.21-dist.jar DESTINATION connector/jdbc)
+INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.22-dist.jar DESTINATION connector/jdbc)
ENDIF ()
ELSEIF (TD_DARWIN)
SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
......
@@ -4,7 +4,7 @@ PROJECT(TDengine)
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
-SET(TD_VER_NUMBER "2.0.17.0")
+SET(TD_VER_NUMBER "2.0.18.0")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
@@ -120,7 +120,7 @@ TDengine is a highly efficient platform for storing, querying, and analyzing time-series big data, designed and optimized for
* [TDengine performance benchmark tools](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [Using TDengine visually with the IDEA database management tool](https://www.taosdata.com/blog/2020/08/27/1767.html)
* [A cross-platform TDengine GUI management tool built with Electron](https://github.com/skye0207/TDengineGUI)
-* [DataX, an offline data collection/synchronization tool that supports TDengine](https://github.com/alibaba/DataX)
+* [DataX, an offline data collection/synchronization tool that supports TDengine](https://github.com/wgzhao/DataX) (docs: [reader plugin](https://github.com/wgzhao/DataX/blob/master/docs/src/main/sphinx/reader/tdenginereader.md), [writer plugin](https://github.com/wgzhao/DataX/blob/master/docs/src/main/sphinx/writer/tdenginewriter.md))
## Performance comparison between TDengine and other databases
......
@@ -145,7 +145,7 @@ TDengine recommends using the name of the data collection point (such as D1001 in the table above) as the table name.
In TDengine's design, **a table represents a specific data collection point, while a super table (STable) represents a set of data collection points of the same type**. When creating a table for a specific data collection point, the user takes the super table's definition as a template and specifies the tag values of that specific collection point (table). Compared with a traditional relational database, a table (one data collection point) carries static tags, and these tags can be added, deleted, or modified afterwards. **A super table contains multiple tables that share the same time-series data schema but carry different tag values.**
When aggregating over multiple data collection points of the same type, TDengine first finds the tables that satisfy the tag filter conditions from the super table, and then scans only the time-series data of those tables to perform the aggregation. This greatly reduces the data set that has to be scanned and therefore significantly improves the performance of aggregate queries.
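As an illustrative sketch of the super-table mechanism described above (the database name, schema, tag values, and the use of `taos -s` are assumptions made for this example, not part of the original text):
```bash
# Create a super table as the template, one sub-table per collection point, then aggregate by tag.
taos -s "create database if not exists power"
taos -s "create table if not exists power.meters (ts timestamp, current float, voltage int) tags (location binary(64), groupid int)"
taos -s "create table if not exists power.d1001 using power.meters tags ('California.SanFrancisco', 2)"
taos -s "insert into power.d1001 values (now, 10.3, 219)"
# The tag filter prunes sub-tables before any time-series data is scanned.
taos -s "select avg(current), max(voltage) from power.meters where groupid = 2"
```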
## <a class="anchor" id="cluster"></a>Cluster and Basic Logical Units
......
@@ -451,7 +451,8 @@ Query OK, 1 row(s) in set (0.000141s)
| taos-jdbcdriver version | TDengine version | JDK version |
| -------------------- | ----------------- | -------- |
-| 2.0.12 and above | 2.0.8.0 and above | 1.8.x |
+| 2.0.22 | 2.0.18.0 and above | 1.8.x |
+| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.0 | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and above | 1.8.x |
| 1.0.2 | 1.6.1.x and above | 1.8.x |
......
@@ -111,9 +111,10 @@ taos>
**Tips:**
- Any data node that has already joined the cluster and is online can serve as the firstEP for nodes that join later.
- The firstEp parameter only takes effect when a data node joins the cluster for the first time. After joining, the data node keeps the latest End Point list of the mnodes and no longer relies on this parameter.
- After that, the firstEp parameter in the configuration file is mainly used by client connections; for example, if the taos shell is started without arguments, it connects to the node specified by firstEp by default.
- If two data nodes (dnodes) are started without the firstEp parameter configured, they will run independently. In that case it is impossible to join one of them to the other to form a cluster. **It is also impossible to merge two independent clusters into a new cluster.** See the sketch below for how firstEp is typically set.
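A minimal sketch of the firstEp workflow described above; the host name, the config-file path, and the use of systemd are assumptions for illustration only:
```bash
# On the new data node: point firstEp at any node already in the cluster, then start the service.
echo "firstEp  h1.taosdata.com:6030" | sudo tee -a /etc/taos/taos.cfg
sudo systemctl start taosd
# On a client, the same firstEp is used by default when the taos shell is started without arguments.
taos
```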
## <a class="anchor" id="management"></a>Data Node Management
......
@@ -6,17 +6,27 @@
### Memory requirements
Each Database can create a fixed number of vgroups, which defaults to the number of CPU cores and can be configured with maxVgroupsPerDb. Each replica in a vgroup is a vnode, and each vnode occupies a fixed amount of memory (determined by the database parameters blocks and cache). Each table occupies memory proportional to its total tag length, and the system adds some fixed overhead. The memory required by each DB can therefore be estimated with:
```
Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
```
Example: assume a 4-core machine, cache at the default 16 MB, blocks at the default 6, 100,000 tables in one DB, and a total tag length of 256 bytes; the total memory requirement of this DB is: 4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499 MB.
In day-to-day operations, what usually matters more is how much memory the TDengine server process (taosd) will occupy:
```
total taosd memory = vnode memory + mnode memory + query memory
```
Where:
1. "vnode memory" is the share of all Databases in the cluster that lands on the current taosd node. Estimate it by summing the per-DB figures from the "Database Memory Size" formula above, dividing by the total number of TDengine nodes in the cluster, and multiplying by the replica count if replication is enabled.
2. "mnode memory" is the resource used by the management nodes. If an mnode runs on a given taosd node, its memory consumption additionally grows by 0.2 KB * the total number of tables in the cluster.
3. "query memory" is the memory the server needs to process query requests. A single query statement occupies at least 0.2 KB * the number of tables involved in the query.
Note: the estimation above describes the system's minimum memory requirement, not an upper bound on total memory. In a real production environment, because of OS caching and resource scheduling, memory planning should keep some headroom on top of the estimate so that system state and performance stay stable. Production environments are also usually equipped with resource monitoring so that hardware shortages can be detected in time.
Finally, if memory is plentiful, consider increasing the blocks setting so that more data is kept in memory, which speeds up queries.
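A small sketch of the arithmetic above, using the example values from this section (4 vgroups, cache 16 MB, blocks 6, 100,000 tables, 256-byte tags); the numbers are illustrative only:
```bash
# Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
maxVgroupsPerDb=4; blocks=6; cache_mb=16; tables=100000; tag_kb=0.25
echo "$maxVgroupsPerDb * ($blocks * $cache_mb + 10) + $tables * ($tag_kb + 0.5) / 1000" | bc -l
# Prints 499.0... (MB), matching the worked example in the text.
```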
### CPU requirements
......
@@ -125,7 +125,7 @@ TDengine's default timestamp precision is milliseconds, but it can be changed via the configuration parameter enableMic
```mysql
ALTER DATABASE db_name CACHELAST 0;
```
The CACHELAST parameter controls whether the last_row of each sub-table is cached in memory. The default value is 0; the valid range is [0, 1], where 0 means disabled and 1 means enabled. (Supported since version 2.0.11; after changing it, the server must be restarted for the change to take effect.)
**Tips**: after modifying any of the parameters above, you can run show databases to confirm that the change took effect.
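A short sketch of that verification flow, assuming a database named `demo`, the `taos` CLI with its `-s` option, and a systemd-managed taosd (all assumptions for illustration):
```bash
# Enable last_row caching, restart taosd as the note above requires, then verify with "show databases".
taos -s "alter database demo cachelast 1"
sudo systemctl restart taosd
taos -s "show databases"   # the cachelast column for demo should now show 1
```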
@@ -249,7 +249,7 @@ TDengine's default timestamp precision is milliseconds, but it can be changed via the configuration parameter enableMic
3) A TAGS column name cannot be a reserved keyword;
4) A maximum of 128 TAGS is allowed, with at least 1, and their total length cannot exceed 16 KB;
- **Drop a super table**
@@ -331,7 +331,8 @@ TDengine's default timestamp precision is milliseconds, but it can be changed via the configuration parameter enableMic
```mysql
INSERT INTO tb_name VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into table tb_name.
**Note**: when inserting multiple records this way, do not set the timestamp of the first column to now for all of them; otherwise the records in the statement will share the same timestamp, may overwrite each other, and not all of the rows will be saved correctly.
- **Insert multiple records into specified columns**
```mysql
......
@@ -16,13 +16,13 @@
## 1. What should I pay attention to when upgrading from a pre-2.0 version of TDengine to 2.0 or later? ☆☆☆
Version 2.0 is a complete rewrite of earlier versions, and its configuration files and data files are not compatible. Be sure to do the following before upgrading:
1. Delete the configuration file: `sudo rm -rf /etc/taos/taos.cfg`
2. Delete the log files: `sudo rm -rf /var/log/taos/`
3. After making sure the data is no longer needed, delete the data files: `sudo rm -rf /var/lib/taos/`
4. Install the latest stable version of TDengine
5. If you need to migrate data, or a data file is damaged, contact the official TAOS Data technical support team for assistance
## 2. What can I do if the JDBC driver cannot find the dynamic link library on Windows?
......
@@ -213,10 +213,10 @@ fi
if echo $osinfo | grep -qwi "ubuntu" ; then
# echo "this is ubuntu system"
-${csudo} rm -f /var/lib/dpkg/info/tdengine* || :
+${csudo} dpkg --force-all -P tdengine || :
elif echo $osinfo | grep -qwi "debian" ; then
# echo "this is debian system"
-${csudo} rm -f /var/lib/dpkg/info/tdengine* || :
+${csudo} dpkg --force-all -P tdengine || :
elif echo $osinfo | grep -qwi "centos" ; then
# echo "this is centos system"
${csudo} rpm -e --noscripts tdengine || :
......
name: tdengine
base: core18
-version: '2.0.17.0'
+version: '2.0.18.0'
icon: snap/gui/t-dengine.svg
summary: an open-source big data platform designed and optimized for IoT.
description: |
@@ -72,7 +72,7 @@ parts:
- usr/bin/taosd
- usr/bin/taos
- usr/bin/taosdemo
-- usr/lib/libtaos.so.2.0.17.0
+- usr/lib/libtaos.so.2.0.18.0
- usr/lib/libtaos.so.1
- usr/lib/libtaos.so
......
@@ -19,6 +19,6 @@ ADD_SUBDIRECTORY(tsdb)
ADD_SUBDIRECTORY(wal)
ADD_SUBDIRECTORY(cq)
ADD_SUBDIRECTORY(dnode)
-#ADD_SUBDIRECTORY(connector/odbc)
+ADD_SUBDIRECTORY(connector/odbc)
ADD_SUBDIRECTORY(connector/jdbc)
@@ -425,7 +425,7 @@ static bool bnMonitorVgroups() {
while (1) {
pIter = mnodeGetNextVgroup(pIter, &pVgroup);
-if (pVgroup == NULL) break;
+if (pVgroup == NULL || pVgroup->pDb == NULL) break;
int32_t dbReplica = pVgroup->pDb->cfg.replications;
int32_t vgReplica = pVgroup->numOfVnodes;
@@ -721,4 +721,4 @@ int32_t bnAlterDnode(struct SDnodeObj *pSrcDnode, int32_t vnodeId, int32_t dnode
mnodeDecDnodeRef(pDestDnode);
return code;
}
\ No newline at end of file
@@ -83,6 +83,22 @@ typedef struct SJoinSupporter {
SArray* pVgroupTables;
} SJoinSupporter;
typedef struct SMergeCtx {
SJoinSupporter* p;
int32_t idx;
SArray* res;
int8_t compared;
}SMergeCtx;
typedef struct SMergeTsCtx {
SJoinSupporter* p;
STSBuf* res;
int64_t numOfInput;
int8_t compared;
}SMergeTsCtx;
typedef struct SVgroupTableInfo {
SVgroupInfo vgInfo;
SArray* itemList; //SArray<STableIdInfo>
@@ -123,6 +139,7 @@ int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, i
bool tscIsPointInterpQuery(SQueryInfo* pQueryInfo);
bool tscIsTWAQuery(SQueryInfo* pQueryInfo);
bool tscIsSecondStageQuery(SQueryInfo* pQueryInfo);
bool tscGroupbyColumn(SQueryInfo* pQueryInfo);
bool tscNonOrderedProjectionQueryOnSTable(SQueryInfo *pQueryInfo, int32_t tableIndex);
bool tscOrderedProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableIndex);
@@ -133,6 +150,7 @@ bool tscIsProjectionQuery(SQueryInfo* pQueryInfo);
bool tscIsTwoStageSTableQuery(SQueryInfo* pQueryInfo, int32_t tableIndex);
bool tscQueryTags(SQueryInfo* pQueryInfo);
bool tscMultiRoundQuery(SQueryInfo* pQueryInfo, int32_t tableIndex);
bool tscQueryBlockInfo(SQueryInfo* pQueryInfo);
SSqlExpr* tscAddFuncInSelectClause(SQueryInfo* pQueryInfo, int32_t outputColIndex, int16_t functionId,
SColumnIndex* pIndex, SSchema* pColSchema, int16_t colType);
@@ -152,7 +170,6 @@ SInternalField* tscFieldInfoInsert(SFieldInfo* pFieldInfo, int32_t index, TAOS_F
SInternalField* tscFieldInfoGetInternalField(SFieldInfo* pFieldInfo, int32_t index);
TAOS_FIELD* tscFieldInfoGetField(SFieldInfo* pFieldInfo, int32_t index);
-void tscFieldInfoUpdateOffset(SQueryInfo* pQueryInfo);
void tscFieldInfoUpdateOffset(SQueryInfo* pQueryInfo);
int16_t tscFieldInfoGetOffset(SQueryInfo* pQueryInfo, int32_t index);
@@ -182,6 +199,7 @@ int32_t tscSqlExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deep
void tscSqlExprInfoDestroy(SArray* pExprInfo);
SColumn* tscColumnClone(const SColumn* src);
bool tscColumnExists(SArray* pColumnList, SColumnIndex* pColIndex);
SColumn* tscColumnListInsert(SArray* pColList, SColumnIndex* colIndex);
SArray* tscColumnListClone(const SArray* src, int16_t tableIndex);
void tscColumnListDestroy(SArray* pColList);
......
@@ -153,15 +153,15 @@ typedef struct SCond {
} SCond;
typedef struct SJoinNode {
char tableName[TSDB_TABLE_FNAME_LEN];
uint64_t uid;
int16_t tagColId;
SArray* tsJoin;
SArray* tagJoin;
} SJoinNode;
typedef struct SJoinInfo {
bool hasJoin;
-SJoinNode left;
-SJoinNode right;
+SJoinNode* joinTables[TSDB_MAX_JOIN_TABLE_NUM];
} SJoinInfo;
typedef struct STagCond {
@@ -209,9 +209,10 @@ typedef struct STableDataBlocks {
typedef struct SQueryInfo {
int16_t command; // the command may be different for each subclause, so keep it seperately.
uint32_t type; // query/insert type
-STimeWindow window; // query time window
-SInterval interval;
+STimeWindow window; // the whole query time window
+SInterval interval; // tumble time window
+SSessionWindow sessionWindow; // session time window
SSqlGroupbyExpr groupbyExpr; // group by tags info
SArray * colList; // SArray<SColumn*>
@@ -244,6 +245,7 @@ typedef struct SQueryInfo {
typedef struct {
int command;
uint8_t msgType;
char reserve1[3]; // fix bus error on arm32
bool autoCreated; // create table if it is not existed during retrieve table meta in mnode
union {
@@ -256,8 +258,10 @@ typedef struct {
char * curSql; // current sql, resume position of sql after parsing paused
int8_t parseFinished;
char reserve2[3]; // fix bus error on arm32
int16_t numOfCols;
char reserve3[2]; // fix bus error on arm32
uint32_t allocSize;
char * payload;
int32_t payloadLen;
@@ -267,7 +271,9 @@ typedef struct {
int32_t numOfParams;
int8_t dataSourceType; // load data from file or not
char reserve4[3]; // fix bus error on arm32
int8_t submitSchema; // submit block is built with table schema
char reserve5[3]; // fix bus error on arm32
STagData tagData; // NOTE: pTagData->data is used as a variant length array
SName **pTableNameList; // all involved tableMeta list of current insert sql statement.
@@ -290,7 +296,7 @@ typedef struct {
char * pRsp;
int32_t rspType;
int32_t rspLen;
-uint64_t qhandle;
+uint64_t qId;
int64_t useconds;
int64_t offset; // offset value from vnode during projection query of stable
int32_t row;
@@ -373,7 +379,7 @@ typedef struct SSqlObj {
int64_t svgroupRid;
int64_t squeryLock;
int32_t retryReason; // previous error code
struct SSqlObj *prev, *next;
int64_t self;
} SSqlObj;
@@ -409,7 +415,6 @@ typedef struct SSqlStream {
void tscSetStreamDestTable(SSqlStream* pStream, const char* dstTable);
int tscAcquireRpc(const char *key, const char *user, const char *secret,void **pRpcObj);
void tscReleaseRpc(void *param);
void tscInitMsgsFp();
......
@@ -481,15 +481,19 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchRowImp(JNIEn
case TSDB_DATA_TYPE_BOOL:
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetBooleanFp, i, (jboolean)(*((char *)row[i]) == 1));
break;
case TSDB_DATA_TYPE_UTINYINT:
case TSDB_DATA_TYPE_TINYINT:
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetByteFp, i, (jbyte) * ((int8_t *)row[i]));
break;
case TSDB_DATA_TYPE_USMALLINT:
case TSDB_DATA_TYPE_SMALLINT:
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetShortFp, i, (jshort) * ((int16_t *)row[i]));
break;
case TSDB_DATA_TYPE_UINT:
case TSDB_DATA_TYPE_INT:
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetIntFp, i, (jint) * (int32_t *)row[i]);
break;
case TSDB_DATA_TYPE_UBIGINT:
case TSDB_DATA_TYPE_BIGINT:
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetLongFp, i, (jlong) * ((int64_t *)row[i]));
break;
......
@@ -160,8 +160,8 @@ static void tscProcessAsyncRetrieveImpl(void *param, TAOS_RES *tres, int numOfRo
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
-if ((pRes->qhandle == 0 || numOfRows != 0) && pCmd->command < TSDB_SQL_LOCAL) {
-if (pRes->qhandle == 0 && numOfRows != 0) {
+if ((pRes->qId == 0 || numOfRows != 0) && pCmd->command < TSDB_SQL_LOCAL) {
+if (pRes->qId == 0 && numOfRows != 0) {
tscError("qhandle is NULL");
} else {
pRes->code = numOfRows;
@@ -208,7 +208,7 @@ void taos_fetch_rows_a(TAOS_RES *taosa, __async_cb_func_t fp, void *param) {
pSql->fetchFp = fp;
pSql->fp = tscAsyncFetchRowsProxy;
-if (pRes->qhandle == 0) {
+if (pRes->qId == 0) {
tscError("qhandle is NULL");
pRes->code = TSDB_CODE_TSC_INVALID_QHANDLE;
pSql->param = param;
@@ -281,7 +281,7 @@ void tscQueueAsyncError(void(*fp), void *param, int32_t code) {
}
static void tscAsyncResultCallback(SSchedMsg *pMsg) {
-SSqlObj* pSql = pMsg->ahandle;
+SSqlObj* pSql = (SSqlObj*)taosAcquireRef(tscObjRef, (int64_t)pMsg->ahandle);
if (pSql == NULL || pSql->signature != pSql) {
tscDebug("%p SqlObj is freed, not add into queue async res", pSql);
return;
@@ -292,25 +292,69 @@ static void tscAsyncResultCallback(SSchedMsg *pMsg) {
SSqlRes *pRes = &pSql->res;
if (pSql->fp == NULL || pSql->fetchFp == NULL){
taosReleaseRef(tscObjRef, pSql->self);
return;
}
pSql->fp = pSql->fetchFp;
(*pSql->fp)(pSql->param, pSql, pRes->code);
taosReleaseRef(tscObjRef, pSql->self);
}
void tscAsyncResultOnError(SSqlObj* pSql) {
SSchedMsg schedMsg = {0};
schedMsg.fp = tscAsyncResultCallback;
-schedMsg.ahandle = pSql;
+schedMsg.ahandle = (void *)pSql->self;
schedMsg.thandle = (void *)1;
schedMsg.msg = 0;
taosScheduleTask(tscQhandle, &schedMsg);
}
int tscSendMsgToServer(SSqlObj *pSql);
static int32_t updateMetaBeforeRetryQuery(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo, SQueryInfo* pQueryInfo) {
// handle the invalid table error code for super table.
// update the pExpr info, colList info, number of table columns
// TODO Re-parse this sql and issue the corresponding subquery as an alternative for this case.
if (pSql->retryReason == TSDB_CODE_TDB_INVALID_TABLE_ID) {
int32_t numOfExprs = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
int32_t numOfCols = tscGetNumOfColumns(pTableMetaInfo->pTableMeta);
int32_t numOfTags = tscGetNumOfTags(pTableMetaInfo->pTableMeta);
SSchema *pSchema = tscGetTableSchema(pTableMetaInfo->pTableMeta);
for (int32_t i = 0; i < numOfExprs; ++i) {
SSqlExpr *pExpr = tscSqlExprGet(pQueryInfo, i);
pExpr->uid = pTableMetaInfo->pTableMeta->id.uid;
if (pExpr->colInfo.colIndex >= 0) {
int32_t index = pExpr->colInfo.colIndex;
if ((TSDB_COL_IS_NORMAL_COL(pExpr->colInfo.flag) && index >= numOfCols) ||
(TSDB_COL_IS_TAG(pExpr->colInfo.flag) && (index < numOfCols || index >= (numOfCols + numOfTags)))) {
return pSql->retryReason;
}
if ((pSchema[pExpr->colInfo.colIndex].colId != pExpr->colInfo.colId) &&
strcasecmp(pExpr->colInfo.name, pSchema[pExpr->colInfo.colIndex].name) != 0) {
return pSql->retryReason;
}
}
}
// validate the table columns information
for (int32_t i = 0; i < taosArrayGetSize(pQueryInfo->colList); ++i) {
SColumn *pCol = taosArrayGetP(pQueryInfo->colList, i);
if (pCol->colIndex.columnIndex >= numOfCols) {
return pSql->retryReason;
}
}
} else {
// do nothing
}
return TSDB_CODE_SUCCESS;
}
void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
SSqlObj* pSql = (SSqlObj*)taosAcquireRef(tscObjRef, (int64_t)param);
if (pSql == NULL) return;
@@ -336,7 +380,8 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
if (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY|TSDB_QUERY_TYPE_SUBQUERY|TSDB_QUERY_TYPE_TAG_FILTER_QUERY))) {
tscDebug("%p update local table meta, continue to process sql and send the corresponding query", pSql);
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
code = tscGetTableMeta(pSql, pTableMetaInfo);
assert(code == TSDB_CODE_TSC_ACTION_IN_PROGRESS || code == TSDB_CODE_SUCCESS);
@@ -346,6 +391,10 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {
}
assert((tscGetNumOfTags(pTableMetaInfo->pTableMeta) != 0));
code = updateMetaBeforeRetryQuery(pSql, pTableMetaInfo, pQueryInfo);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
// tscProcessSql can add error into async res
tscProcessSql(pSql);
@@ -309,7 +309,7 @@ TAOS_ROW tscFetchRow(void *param) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
-if (pRes->qhandle == 0 ||
+if (pRes->qId == 0 ||
pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
pCmd->command == TSDB_SQL_INSERT) {
return NULL;
@@ -905,7 +905,7 @@ int tscProcessLocalCmd(SSqlObj *pSql) {
* set the qhandle to be 1 in order to pass the qhandle check, and to call partial release function to
* free allocated resources and remove the SqlObj from sql query linked list
*/
-pRes->qhandle = 0x1;
+pRes->qId = 0x1;
pRes->numOfRows = 0;
} else if (pCmd->command == TSDB_SQL_SHOW_CREATE_TABLE) {
pRes->code = tscProcessShowCreateTable(pSql);
......
@@ -101,6 +101,10 @@ static void tscInitSqlContext(SSqlCmd *pCmd, SLocalMerger *pReducer, tOrderDescr
} else if (functionId == TSDB_FUNC_APERCT) {
pCtx->param[0].i64 = pExpr->param[0].i64;
pCtx->param[0].nType = pExpr->param[0].nType;
} else if (functionId == TSDB_FUNC_BLKINFO) {
pCtx->param[0].i64 = pExpr->param[0].i64;
pCtx->param[0].nType = pExpr->param[0].nType;
pCtx->numOfParams = 1;
}
pCtx->interBufBytes = pExpr->interBytes;
@@ -952,10 +956,10 @@ static void doFillResult(SSqlObj *pSql, SLocalMerger *pLocalMerge, bool doneOutp
// todo extract function
int64_t actualETime = (pQueryInfo->order.order == TSDB_ORDER_ASC)? pQueryInfo->window.ekey: pQueryInfo->window.skey;
-tFilePage **pResPages = malloc(POINTER_BYTES * pQueryInfo->fieldsInfo.numOfOutput);
+void** pResPages = malloc(POINTER_BYTES * pQueryInfo->fieldsInfo.numOfOutput);
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
TAOS_FIELD *pField = tscFieldInfoGetField(&pQueryInfo->fieldsInfo, i);
-pResPages[i] = calloc(1, sizeof(tFilePage) + pField->bytes * pLocalMerge->resColModel->capacity);
+pResPages[i] = calloc(1, pField->bytes * pLocalMerge->resColModel->capacity);
}
while (1) {
@@ -967,7 +971,7 @@ static void doFillResult(SSqlObj *pSql, SLocalMerger *pLocalMerge, bool doneOutp
if (pQueryInfo->limit.offset > 0) {
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
TAOS_FIELD *pField = tscFieldInfoGetField(&pQueryInfo->fieldsInfo, i);
-memmove(pResPages[i]->data, pResPages[i]->data + pField->bytes * pQueryInfo->limit.offset,
+memmove(pResPages[i], ((char*)pResPages[i]) + pField->bytes * pQueryInfo->limit.offset,
(size_t)(newRows * pField->bytes));
}
}
@@ -1011,7 +1015,7 @@ static void doFillResult(SSqlObj *pSql, SLocalMerger *pLocalMerge, bool doneOutp
int32_t offset = 0;
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
TAOS_FIELD *pField = tscFieldInfoGetField(&pQueryInfo->fieldsInfo, i);
-memcpy(pRes->data + offset * pRes->numOfRows, pResPages[i]->data, (size_t)(pField->bytes * pRes->numOfRows));
+memcpy(pRes->data + offset * pRes->numOfRows, pResPages[i], (size_t)(pField->bytes * pRes->numOfRows));
offset += pField->bytes;
}
@@ -1690,7 +1694,7 @@ void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen)
tscDestroyLocalMerger(pObj);
}
-pRes->qhandle = 1; // hack to pass the safety check in fetch_row function
+pRes->qId = 1; // hack to pass the safety check in fetch_row function
pRes->numOfRows = 0;
pRes->row = 0;
......
@@ -307,7 +307,8 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SStrToken *pToken, char *payload,
return tscInvalidSQLErrMsg(msg, "illegal float data", pToken->z);
}
-*((float *)payload) = (float)dv;
+// *((float *)payload) = (float)dv;
+SET_FLOAT_VAL(payload, dv);
}
break;
@@ -393,7 +394,7 @@ static int32_t tsCheckTimestamp(STableDataBlocks *pDataBlocks, const char *start
TSKEY k = *(TSKEY *)start;
-if (k == 0) {
+if (k == INT64_MIN) {
if (pDataBlocks->tsSource == TSDB_USE_CLI_TS) {
return -1;
} else if (pDataBlocks->tsSource == -1) {
@@ -1359,7 +1360,7 @@ int tsParseSql(SSqlObj *pSql, bool initial) {
}
}
} else {
-SSqlInfo SQLInfo = qSQLParse(pSql->sqlstr);
+SSqlInfo SQLInfo = qSqlParse(pSql->sqlstr);
ret = tscToSQLCmd(pSql, &SQLInfo);
if (ret == TSDB_CODE_TSC_INVALID_SQL && pSql->parseRetry == 0 && SQLInfo.type == TSDB_SQL_NULL) {
tscResetSqlCmd(pCmd, true);
......
@@ -261,7 +261,7 @@ static int doBindParam(char* data, SParamInfo* param, TAOS_BIND* bind) {
return TSDB_CODE_SUCCESS;
}
-if (1) {
+if (0) {
// allow user bind param data with different type
union {
int8_t v1;
@@ -903,7 +903,7 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
-pRes->qhandle = 0;
+pRes->qId = 0;
pRes->numOfRows = 1;
strtolower(pSql->sqlstr, sql);
@@ -1057,14 +1057,28 @@ int taos_stmt_get_param(TAOS_STMT *stmt, int idx, int *type, int *bytes) {
}
if (pStmt->isInsert) {
-SSqlObj* pSql = pStmt->pSql;
-SSqlCmd *pCmd = &pSql->cmd;
-STableDataBlocks* pBlock = taosArrayGetP(pCmd->pDataBlocks, 0);
-assert(pCmd->numOfParams == pBlock->numOfParams);
-if (idx < 0 || idx >= pBlock->numOfParams) return -1;
-SParamInfo* param = pBlock->params + idx;
+SSqlCmd* pCmd = &pStmt->pSql->cmd;
+STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, 0, 0);
+STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
+if (pCmd->pTableBlockHashList == NULL) {
+pCmd->pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
+}
+STableDataBlocks* pBlock = NULL;
+int32_t ret =
+tscGetDataBlockFromList(pCmd->pTableBlockHashList, pTableMeta->id.uid, TSDB_PAYLOAD_SIZE, sizeof(SSubmitBlk),
+pTableMeta->tableInfo.rowSize, &pTableMetaInfo->name, pTableMeta, &pBlock, NULL);
+if (ret != 0) {
+// todo handle error
+}
+if (idx<0 || idx>=pBlock->numOfParams) {
+tscError("param %d: out of range", idx);
+abort();
+}
+SParamInfo* param = &pBlock->params[idx];
if (type) *type = param->type;
if (bytes) *bytes = param->bytes;
......
@@ -249,8 +249,8 @@ int tscBuildQueryStreamDesc(void *pMsg, STscObj *pObj) {
pQdesc->stime = htobe64(pSql->stime);
pQdesc->queryId = htonl(pSql->queryId);
//pQdesc->useconds = htobe64(pSql->res.useconds);
-pQdesc->useconds = htobe64(now - pSql->stime);
-pQdesc->qHandle = htobe64(pSql->res.qhandle);
+pQdesc->useconds = htobe64(now - pSql->stime); // use local time instead of sever rsp elapsed time
+pQdesc->qHandle = htobe64(pSql->res.qId);
pHeartbeat->numOfQueries++;
pQdesc++;
......
This diff is collapsed.
@@ -350,8 +350,8 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
        taosMsleep(duration);
      }
+     pSql->retryReason = rpcMsg->code;
      rpcMsg->code = tscRenewTableMeta(pSql, 0);
      // if there is an error occurring, proceed to the following error handling procedure.
      if (rpcMsg->code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
        taosReleaseRef(tscObjRef, handle);

@@ -497,8 +497,6 @@ int tscProcessSql(SSqlObj *pSql) {
      return pSql->res.code;
    }
  } else if (pCmd->command >= TSDB_SQL_LOCAL) {
-   //pSql->epSet = tscMgmtEpSet;
-   // } else { // local handler
    return (*tscProcessMsgRsp[pCmd->command])(pSql);
  }

@@ -510,7 +508,7 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex);
  pRetrieveMsg->free = htons(pQueryInfo->type);
- pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
+ pRetrieveMsg->qId = htobe64(pSql->res.qId);
  // todo valid the vgroupId at the client side
  STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);

@@ -522,7 +520,7 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
      assert(pVgroupInfo->vgroups[vgIndex].vgId > 0 && vgIndex < pTableMetaInfo->vgroupList->numOfVgroups);
      pRetrieveMsg->header.vgId = htonl(pVgroupInfo->vgroups[vgIndex].vgId);
-     tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d", pSql, pVgroupInfo->vgroups[vgIndex].vgId, vgIndex);
+     tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qhandle:%" PRIX64, pSql, pVgroupInfo->vgroups[vgIndex].vgId, vgIndex, pSql->res.qId);
    } else {
      int32_t numOfVgroups = (int32_t)taosArrayGetSize(pTableMetaInfo->pVgroupTables);
      assert(vgIndex >= 0 && vgIndex < numOfVgroups);

@@ -530,12 +528,12 @@ int tscBuildFetchMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
      SVgroupTableInfo* pTableIdList = taosArrayGet(pTableMetaInfo->pVgroupTables, vgIndex);
      pRetrieveMsg->header.vgId = htonl(pTableIdList->vgInfo.vgId);
-     tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d", pSql, pTableIdList->vgInfo.vgId, vgIndex);
+     tscDebug("%p build fetch msg from vgId:%d, vgIndex:%d, qId:%" PRIu64, pSql, pTableIdList->vgInfo.vgId, vgIndex, pSql->res.qId);
    }
  } else {
    STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
    pRetrieveMsg->header.vgId = htonl(pTableMeta->vgId);
-   tscDebug("%p build fetch msg from only one vgroup, vgId:%d", pSql, pTableMeta->vgId);
+   tscDebug("%p build fetch msg from only one vgroup, vgId:%d, qId:%" PRIu64, pSql, pTableMeta->vgId, pSql->res.qId);
  }
  pSql->cmd.payloadLen = sizeof(SRetrieveTableMsg);

@@ -615,7 +613,7 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql, int32_t clauseIndex) {
         tableSerialize + sqlLen + 4096 + pQueryInfo->bufLen;
}
- static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char *pMsg) {
+ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char *pMsg, int32_t *succeed) {
  STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, pSql->cmd.clauseIndex, 0);
  TSKEY dfltKey = htobe64(pQueryMsg->window.skey);

@@ -628,9 +626,14 @@ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char
    assert(index >= 0);
    SVgroupInfo* pVgroupInfo = NULL;
-   if (pTableMetaInfo->vgroupList->numOfVgroups > 0) {
+   if (pTableMetaInfo->vgroupList && pTableMetaInfo->vgroupList->numOfVgroups > 0) {
      assert(index < pTableMetaInfo->vgroupList->numOfVgroups);
      pVgroupInfo = &pTableMetaInfo->vgroupList->vgroups[index];
+   } else {
+     tscError("%p No vgroup info found", pSql);
+     *succeed = 0;
+     return pMsg;
    }
    vgId = pVgroupInfo->vgId;

@@ -645,7 +648,6 @@ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char
  }
  pSql->epSet.inUse = rand()%pSql->epSet.numOfEps;
  pQueryMsg->head.vgId = htonl(vgId);
  STableIdInfo *pTableIdInfo = (STableIdInfo *)pMsg;

@@ -660,8 +662,6 @@ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char
    int32_t numOfVgroups = (int32_t)taosArrayGetSize(pTableMetaInfo->pVgroupTables);
    assert(index >= 0 && index < numOfVgroups);
-   tscDebug("%p query on stable, vgIndex:%d, numOfVgroups:%d", pSql, index, numOfVgroups);
    SVgroupTableInfo* pTableIdList = taosArrayGet(pTableMetaInfo->pVgroupTables, index);
    // set the vgroup info

@@ -670,7 +670,10 @@ static char *doSerializeTableInfo(SQueryTableMsg* pQueryMsg, SSqlObj *pSql, char
    int32_t numOfTables = (int32_t)taosArrayGetSize(pTableIdList->itemList);
    pQueryMsg->numOfTables = htonl(numOfTables);  // set the number of tables
+   tscDebug("%p query on stable, vgId:%d, numOfTables:%d, vgIndex:%d, numOfVgroups:%d", pSql,
+            pTableIdList->vgInfo.vgId, numOfTables, index, numOfVgroups);
    // serialize each table id info
    for(int32_t i = 0; i < numOfTables; ++i) {
      STableIdInfo* pItem = taosArrayGet(pTableIdList->itemList, i);

@@ -705,7 +708,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  STableMeta * pTableMeta = pTableMetaInfo->pTableMeta;
  size_t numOfSrcCols = taosArrayGetSize(pQueryInfo->colList);
- if (numOfSrcCols <= 0 && !tscQueryTags(pQueryInfo)) {
+ if (numOfSrcCols <= 0 && !tscQueryTags(pQueryInfo) && !tscQueryBlockInfo(pQueryInfo)) {
    tscError("%p illegal value of numOfCols in query msg: %" PRIu64 ", table cols:%d", pSql, (uint64_t)numOfSrcCols,
        tscGetNumOfColumns(pTableMeta));

@@ -756,6 +759,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  pQueryMsg->vgroupLimit = htobe64(pQueryInfo->vgroupLimit);
  pQueryMsg->sqlstrLen = htonl(sqlLen);
  pQueryMsg->prevResultLen = htonl(pQueryInfo->bufLen);
+ pQueryMsg->sw.gap = htobe64(pQueryInfo->sessionWindow.gap);
+ pQueryMsg->sw.primaryColId = htonl(PRIMARYKEY_TIMESTAMP_COL_INDEX);
  size_t numOfOutput = tscSqlExprNumOfExprs(pQueryInfo);
  pQueryMsg->numOfOutput = htons((int16_t)numOfOutput);  // this is the stage one output column number

@@ -835,6 +840,25 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
    pSqlFuncExpr->colInfo.colIndex = htons(pExpr->colInfo.colIndex);
    pSqlFuncExpr->colInfo.flag = htons(pExpr->colInfo.flag);
+   if (TSDB_COL_IS_UD_COL(pExpr->colInfo.flag)) {
+     pSqlFuncExpr->colType = htons(pExpr->resType);
+     pSqlFuncExpr->colBytes = htons(pExpr->resBytes);
+   } else if (pExpr->colInfo.colId == TSDB_TBNAME_COLUMN_INDEX) {
+     SSchema *s = tGetTbnameColumnSchema();
+     pSqlFuncExpr->colType = htons(s->type);
+     pSqlFuncExpr->colBytes = htons(s->bytes);
+   } else if (pExpr->colInfo.colId == TSDB_BLOCK_DIST_COLUMN_INDEX) {
+     SSchema s = tGetBlockDistColumnSchema();
+     pSqlFuncExpr->colType = htons(s.type);
+     pSqlFuncExpr->colBytes = htons(s.bytes);
+   } else {
+     SSchema* s = tscGetColumnSchemaById(pTableMeta, pExpr->colInfo.colId);
+     pSqlFuncExpr->colType = htons(s->type);
+     pSqlFuncExpr->colBytes = htons(s->bytes);
+   }
    pSqlFuncExpr->functionId = htons(pExpr->functionId);
    pSqlFuncExpr->numOfParams = htons(pExpr->numOfParams);
    pSqlFuncExpr->resColId = htons(pExpr->resColId);

@@ -876,8 +900,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
    }
-   for (int32_t j = 0; j < pExpr->numOfParams; ++j) {
-     // todo add log
+   for (int32_t j = 0; j < pExpr->numOfParams; ++j) { // todo add log
      pSqlFuncExpr->arg[j].argType = htons((uint16_t)pExpr->param[j].nType);
      pSqlFuncExpr->arg[j].argBytes = htons(pExpr->param[j].nLen);

@@ -902,6 +925,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
    for (int32_t i = 0; i < output; ++i) {
      SInternalField* pField = tscFieldInfoGetInternalField(&pQueryInfo->fieldsInfo, i);
      SSqlExpr *pExpr = pField->pSqlExpr;
+     // this should be switched to projection query
      if (pExpr != NULL) {
        // the queried table has been removed and a new table with the same name has already been created already
        // return error msg

@@ -915,33 +940,31 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
          return TSDB_CODE_TSC_INVALID_SQL;
        }
-       pSqlFuncExpr1->colInfo.colId = htons(pExpr->colInfo.colId);
-       pSqlFuncExpr1->colInfo.colIndex = htons(pExpr->colInfo.colIndex);
-       pSqlFuncExpr1->colInfo.flag = htons(pExpr->colInfo.flag);
-       pSqlFuncExpr1->functionId = htons(pExpr->functionId);
-       pSqlFuncExpr1->numOfParams = htons(pExpr->numOfParams);
-       pMsg += sizeof(SSqlFuncMsg);
-       for (int32_t j = 0; j < pExpr->numOfParams; ++j) {
-         // todo add log
-         pSqlFuncExpr1->arg[j].argType = htons((uint16_t)pExpr->param[j].nType);
-         pSqlFuncExpr1->arg[j].argBytes = htons(pExpr->param[j].nLen);
-         if (pExpr->param[j].nType == TSDB_DATA_TYPE_BINARY) {
-           memcpy(pMsg, pExpr->param[j].pz, pExpr->param[j].nLen);
-           pMsg += pExpr->param[j].nLen;
-         } else {
-           pSqlFuncExpr1->arg[j].argValue.i64 = htobe64(pExpr->param[j].i64);
-         }
-       }
+       pSqlFuncExpr1->numOfParams = 0;  // no params for projection query
+       pSqlFuncExpr1->functionId = htons(TSDB_FUNC_PRJ);
+       pSqlFuncExpr1->colInfo.colId = htons(pExpr->resColId);
+       pSqlFuncExpr1->colInfo.flag = htons(TSDB_COL_NORMAL);
+       bool assign = false;
+       for (int32_t f = 0; f < tscSqlExprNumOfExprs(pQueryInfo); ++f) {
+         SSqlExpr *pe = tscSqlExprGet(pQueryInfo, f);
+         if (pe == pExpr) {
+           pSqlFuncExpr1->colInfo.colIndex = htons(f);
+           pSqlFuncExpr1->colType = htons(pe->resType);
+           pSqlFuncExpr1->colBytes = htons(pe->resBytes);
+           assign = true;
+           break;
+         }
+       }
+       assert(assign);
+       pMsg += sizeof(SSqlFuncMsg);
        pSqlFuncExpr1 = (SSqlFuncMsg *)pMsg;
      } else {
        assert(pField->pArithExprInfo != NULL);
        SExprInfo* pExprInfo = pField->pArithExprInfo;
        pSqlFuncExpr1->colInfo.colId = htons(pExprInfo->base.colInfo.colId);
        pSqlFuncExpr1->functionId = htons(pExprInfo->base.functionId);
        pSqlFuncExpr1->numOfParams = htons(pExprInfo->base.numOfParams);
        pMsg += sizeof(SSqlFuncMsg);

@@ -966,8 +989,13 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
    pQueryMsg->secondStageOutput = 0;
  }
+ int32_t succeed = 1;
  // serialize the table info (sid, uid, tags)
- pMsg = doSerializeTableInfo(pQueryMsg, pSql, pMsg);
+ pMsg = doSerializeTableInfo(pQueryMsg, pSql, pMsg, &succeed);
+ if (succeed == 0) {
+   return TSDB_CODE_TSC_APP_ERROR;
+ }
  SSqlGroupbyExpr *pGroupbyExpr = &pQueryInfo->groupbyExpr;
  if (pGroupbyExpr->numOfGroupCols > 0) {

@@ -1091,7 +1119,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
int32_t tscBuildCreateDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  SSqlCmd *pCmd = &pSql->cmd;
  pCmd->payloadLen = sizeof(SCreateDbMsg);
- pCmd->msgType = TSDB_MSG_TYPE_CM_CREATE_DB;
+ pCmd->msgType = (pInfo->pMiscInfo->dbOpt.dbType == TSDB_DB_TYPE_DEFAULT) ? TSDB_MSG_TYPE_CM_CREATE_DB : TSDB_MSG_TYPE_CM_CREATE_TP;
  SCreateDbMsg *pCreateDbMsg = (SCreateDbMsg *)pCmd->payload;

@@ -1223,7 +1252,7 @@ int32_t tscBuildDropDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  pDropDbMsg->ignoreNotExists = pInfo->pMiscInfo->existsCheck ? 1 : 0;
- pCmd->msgType = TSDB_MSG_TYPE_CM_DROP_DB;
+ pCmd->msgType = (pInfo->pMiscInfo->dbType == TSDB_DB_TYPE_DEFAULT) ? TSDB_MSG_TYPE_CM_DROP_DB : TSDB_MSG_TYPE_CM_DROP_TP;
  return TSDB_CODE_SUCCESS;
}

@@ -1301,6 +1330,23 @@ int32_t tscBuildUseDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  return TSDB_CODE_SUCCESS;
}
+int32_t tscBuildSyncDbReplicaMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
+  SSqlCmd *pCmd = &pSql->cmd;
+  pCmd->payloadLen = sizeof(SSyncDbMsg);
+  if (TSDB_CODE_SUCCESS != tscAllocPayload(pCmd, pCmd->payloadLen)) {
+    tscError("%p failed to malloc for query msg", pSql);
+    return TSDB_CODE_TSC_OUT_OF_MEMORY;
+  }
+  SSyncDbMsg *pSyncMsg = (SSyncDbMsg *)pCmd->payload;
+  STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
+  tNameExtractFullName(&pTableMetaInfo->name, pSyncMsg->db);
+  pCmd->msgType = TSDB_MSG_TYPE_CM_SYNC_DB;
+  return TSDB_CODE_SUCCESS;
+}
int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  STscObj *pObj = pSql->pTscObj;
  SSqlCmd *pCmd = &pSql->cmd;

@@ -1367,7 +1413,7 @@ int tscEstimateCreateTableMsgLength(SSqlObj *pSql, SSqlInfo *pInfo) {
  SSqlCmd *pCmd = &(pSql->cmd);
  int32_t size = minMsgSize() + sizeof(SCMCreateTableMsg) + sizeof(SCreateTableMsg);
- SCreateTableSQL *pCreateTableInfo = pInfo->pCreateTableInfo;
+ SCreateTableSql *pCreateTableInfo = pInfo->pCreateTableInfo;
  if (pCreateTableInfo->type == TSQL_CREATE_TABLE_FROM_STABLE) {
    int32_t numOfTables = (int32_t)taosArrayGetSize(pInfo->pCreateTableInfo->childTableInfo);
    size += numOfTables * (sizeof(SCreateTableMsg) + TSDB_MAX_TAGS_LEN);

@@ -1376,7 +1422,7 @@ int tscEstimateCreateTableMsgLength(SSqlObj *pSql, SSqlInfo *pInfo) {
  }
  if (pCreateTableInfo->pSelect != NULL) {
-   size += (pCreateTableInfo->pSelect->selectToken.n + 1);
+   size += (pCreateTableInfo->pSelect->sqlstr.n + 1);
  }
  return size + TSDB_EXTRA_PAYLOAD_SIZE;

@@ -1434,7 +1480,7 @@ int tscBuildCreateTableMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  int32_t code = tNameExtractFullName(&pTableMetaInfo->name, pCreateMsg->tableName);
  assert(code == 0);
- SCreateTableSQL *pCreateTable = pInfo->pCreateTableInfo;
+ SCreateTableSql *pCreateTable = pInfo->pCreateTableInfo;
  pCreateMsg->igExists = pCreateTable->existCheck ? 1 : 0;
  pCreateMsg->numOfColumns = htons(pCmd->numOfCols);

@@ -1457,11 +1503,11 @@ int tscBuildCreateTableMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
    pMsg = (char *)pSchema;
    if (type == TSQL_CREATE_STREAM) {  // check if it is a stream sql
-     SQuerySQL *pQuerySql = pInfo->pCreateTableInfo->pSelect;
-     strncpy(pMsg, pQuerySql->selectToken.z, pQuerySql->selectToken.n + 1);
-     pCreateMsg->sqlLen = htons(pQuerySql->selectToken.n + 1);
-     pMsg += pQuerySql->selectToken.n + 1;
+     SQuerySqlNode *pQuerySql = pInfo->pCreateTableInfo->pSelect;
+     strncpy(pMsg, pQuerySql->sqlstr.z, pQuerySql->sqlstr.n + 1);
+     pCreateMsg->sqlLen = htons(pQuerySql->sqlstr.n + 1);
+     pMsg += pQuerySql->sqlstr.n + 1;
    }
  }

@@ -1550,9 +1596,11 @@ int tscBuildUpdateTagMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
int tscAlterDbMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  SSqlCmd *pCmd = &pSql->cmd;
  pCmd->payloadLen = sizeof(SAlterDbMsg);
- pCmd->msgType = TSDB_MSG_TYPE_CM_ALTER_DB;
+ pCmd->msgType = (pInfo->pMiscInfo->dbOpt.dbType == TSDB_DB_TYPE_DEFAULT) ? TSDB_MSG_TYPE_CM_ALTER_DB : TSDB_MSG_TYPE_CM_ALTER_TP;
  SAlterDbMsg *pAlterDbMsg = (SAlterDbMsg* )pCmd->payload;
+ pAlterDbMsg->dbType = -1;
  STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
  tNameExtractFullName(&pTableMetaInfo->name, pAlterDbMsg->db);

@@ -1571,7 +1619,7 @@ int tscBuildRetrieveFromMgmtMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(pCmd, 0);
  SRetrieveTableMsg *pRetrieveMsg = (SRetrieveTableMsg*)pCmd->payload;
- pRetrieveMsg->qhandle = htobe64(pSql->res.qhandle);
+ pRetrieveMsg->qId = htobe64(pSql->res.qId);
  pRetrieveMsg->free = htons(pQueryInfo->type);
  return TSDB_CODE_SUCCESS;

@@ -2079,19 +2127,24 @@ int tscProcessSTableVgroupRsp(SSqlObj *pSql) {
    assert(pInfo->vgroupList != NULL);
    pInfo->vgroupList->numOfVgroups = pVgroupMsg->numOfVgroups;
-   for (int32_t j = 0; j < pInfo->vgroupList->numOfVgroups; ++j) {
-     //just init, no need to lock
-     SVgroupInfo *pVgroups = &pInfo->vgroupList->vgroups[j];
+   if (pInfo->vgroupList->numOfVgroups <= 0) {
+     //tfree(pInfo->vgroupList);
+     tscError("%p empty vgroup info", pSql);
+   } else {
+     for (int32_t j = 0; j < pInfo->vgroupList->numOfVgroups; ++j) {
+       //just init, no need to lock
+       SVgroupInfo *pVgroups = &pInfo->vgroupList->vgroups[j];
        SVgroupMsg *vmsg = &pVgroupMsg->vgroups[j];
        pVgroups->vgId = htonl(vmsg->vgId);
        pVgroups->numOfEps = vmsg->numOfEps;
        assert(pVgroups->numOfEps >= 1 && pVgroups->vgId >= 1);
        for (int32_t k = 0; k < pVgroups->numOfEps; ++k) {
          pVgroups->epAddr[k].port = htons(vmsg->epAddr[k].port);
          pVgroups->epAddr[k].fqdn = strndup(vmsg->epAddr[k].fqdn, tListLen(vmsg->epAddr[k].fqdn));
        }
      }
+   }
    }

@@ -2117,7 +2170,7 @@ int tscProcessShowRsp(SSqlObj *pSql) {
  pShow = (SShowRsp *)pRes->pRsp;
  pShow->qhandle = htobe64(pShow->qhandle);
- pRes->qhandle = pShow->qhandle;
+ pRes->qId = pShow->qhandle;
  tscResetForNextRetrieve(pRes);
  pMetaMsg = &(pShow->tableMeta);

@@ -2299,11 +2352,12 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
  SSqlRes *pRes = &pSql->res;
  SQueryTableRsp *pQuery = (SQueryTableRsp *)pRes->pRsp;
- pQuery->qhandle = htobe64(pQuery->qhandle);
- pRes->qhandle = pQuery->qhandle;
+ pQuery->qId = htobe64(pQuery->qId);
+ pRes->qId = pQuery->qId;
  pRes->data = NULL;
  tscResetForNextRetrieve(pRes);
+ tscDebug("%p query rsp received, qId:%"PRIu64, pSql, pRes->qId);
  return 0;
}

@@ -2361,7 +2415,8 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
  }
  pRes->row = 0;
- tscDebug("%p numOfRows:%d, offset:%" PRId64 ", complete:%d", pSql, pRes->numOfRows, pRes->offset, pRes->completed);
+ tscDebug("%p numOfRows:%d, offset:%" PRId64 ", complete:%d, qId:%"PRIu64, pSql, pRes->numOfRows, pRes->offset,
+          pRes->completed, pRes->qId);
  return 0;
}

@@ -2574,6 +2629,7 @@ void tscInitMsgsFp() {
  tscBuildMsg[TSDB_SQL_DROP_USER] = tscBuildDropUserAcctMsg;
  tscBuildMsg[TSDB_SQL_DROP_ACCT] = tscBuildDropUserAcctMsg;
  tscBuildMsg[TSDB_SQL_DROP_DB] = tscBuildDropDbMsg;
+ tscBuildMsg[TSDB_SQL_SYNC_DB_REPLICA] = tscBuildSyncDbReplicaMsg;
  tscBuildMsg[TSDB_SQL_DROP_TABLE] = tscBuildDropTableMsg;
  tscBuildMsg[TSDB_SQL_ALTER_USER] = tscBuildUserMsg;
  tscBuildMsg[TSDB_SQL_CREATE_DNODE] = tscBuildCreateDnodeMsg;
...
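Throughout this file the client result set now carries the query id in pRes->qId, replacing the old pRes->qhandle, and every multi-byte field is still converted to network byte order (htobe64 / htonl / htons) before the request is sent. A minimal standalone sketch of that serialization pattern, assuming a glibc platform; the RetrieveReq struct and its field names are invented for illustration and are not TDengine message definitions:

    #define _DEFAULT_SOURCE
    #include <endian.h>       /* htobe64 */
    #include <arpa/inet.h>    /* htonl, htons */
    #include <stdint.h>

    /* hypothetical wire struct, not the real SRetrieveTableMsg */
    typedef struct RetrieveReq {
      uint64_t qId;     /* 64-bit query id, big-endian on the wire */
      uint32_t vgId;    /* 32-bit vgroup id */
      uint16_t free;    /* 16-bit flags */
    } RetrieveReq;

    static void encodeRetrieveReq(RetrieveReq *req, uint64_t qId, uint32_t vgId, uint16_t flags) {
      /* convert each field from host order to network (big-endian) order */
      req->qId  = htobe64(qId);
      req->vgId = htonl(vgId);
      req->free = htons(flags);
    }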
@@ -476,7 +476,7 @@ TAOS_ROW taos_fetch_row(TAOS_RES *res) {
  SSqlCmd *pCmd = &pSql->cmd;
  SSqlRes *pRes = &pSql->res;
- if (pRes->qhandle == 0 ||
+ if (pRes->qId == 0 ||
      pRes->code == TSDB_CODE_TSC_QUERY_CANCELLED ||
      pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
      pCmd->command == TSDB_SQL_INSERT) {

@@ -508,7 +508,7 @@ int taos_fetch_block(TAOS_RES *res, TAOS_ROW *rows) {
  SSqlCmd *pCmd = &pSql->cmd;
  SSqlRes *pRes = &pSql->res;
- if (pRes->qhandle == 0 ||
+ if (pRes->qId == 0 ||
      pRes->code == TSDB_CODE_TSC_QUERY_CANCELLED ||
      pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
      pCmd->command == TSDB_SQL_INSERT) {

@@ -554,7 +554,7 @@ static bool tscKillQueryInDnode(SSqlObj* pSql) {
  SSqlCmd* pCmd = &pSql->cmd;
  SSqlRes* pRes = &pSql->res;
- if (pRes == NULL || pRes->qhandle == 0) {
+ if (pRes == NULL || pRes->qId == 0) {
    return true;
  }

@@ -1050,7 +1050,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
   * If qhandle is NOT set 0, the function of taos_free_result() will send message to server by calling tscProcessSql()
   * to free connection, which may cause segment fault, when the parse phrase is not even successfully executed.
   */
- pRes->qhandle = 0;
+ pRes->qId = 0;
  free(str);
  if (code != TSDB_CODE_SUCCESS) {
...
@@ -149,7 +149,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
  }
  strtolower(pSql->sqlstr, pSql->sqlstr);
- pRes->qhandle = 0;
+ pRes->qId = 0;
  pRes->numOfRows = 1;
  code = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);

@@ -448,7 +448,7 @@ SSqlObj* recreateSqlObj(SSub* pSub) {
    return NULL;
  }
- pRes->qhandle = 0;
+ pRes->qId = 0;
  pRes->numOfRows = 1;
  int code = tscAllocPayload(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);

@@ -503,9 +503,19 @@ TAOS_RES *taos_consume(TAOS_SUB *tsub) {
  SSqlCmd *pCmd = &pSql->cmd;
  STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(pCmd, pCmd->clauseIndex, 0);
  SQueryInfo *pQueryInfo = tscGetQueryInfoDetail(pCmd, 0);
- if (taosArrayGetSize(pSub->progress) > 0) { // fix crash in single tabel subscription
-   pQueryInfo->window.skey = ((SSubscriptionProgress*)taosArrayGet(pSub->progress, 0))->key;
-   tscDebug("subscribe:%s set subscribe skey:%"PRId64, pSub->topic, pQueryInfo->window.skey);
+ if (taosArrayGetSize(pSub->progress) > 0) { // fix crash in single table subscription
+   size_t size = taosArrayGetSize(pSub->progress);
+   TSKEY s = INT64_MAX;
+   for(int32_t i = 0; i < size; ++i) {
+     TSKEY k = ((SSubscriptionProgress*)taosArrayGet(pSub->progress, i))->key;
+     if (s > k) {
+       s = k;
+     }
+   }
+   pQueryInfo->window.skey = s;
+   tscDebug("subscribe:%s set next round subscribe skey:%"PRId64, pSub->topic, pQueryInfo->window.skey);
  }
  if (pSub->pTimer == NULL) {

@@ -536,7 +546,7 @@ TAOS_RES *taos_consume(TAOS_SUB *tsub) {
  uint32_t type = pQueryInfo->type;
  tscFreeSqlResult(pSql);
  pRes->numOfRows = 1;
- pRes->qhandle = 0;
+ pRes->qId = 0;
  pSql->cmd.command = TSDB_SQL_SELECT;
  pQueryInfo->type = type;
...
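In the taos_consume() hunk above, the next subscription window no longer starts from the first table's saved key; it starts from the smallest progress key across all subscribed tables, so no table's newly written rows can be skipped. A minimal sketch of that "earliest key" computation with a simplified stand-in record (Progress and earliestProgressKey are invented names, not the SSubscriptionProgress API):

    #include <stdint.h>
    #include <stddef.h>

    /* simplified stand-in for the per-table subscription progress record */
    typedef struct Progress { uint64_t uid; int64_t key; } Progress;

    /* the earliest saved key across all tables is the safe start point for
     * the next poll: every table has consumed at least up to its own key */
    static int64_t earliestProgressKey(const Progress *progress, size_t n) {
      int64_t s = INT64_MAX;
      for (size_t i = 0; i < n; ++i) {
        if (progress[i].key < s) {
          s = progress[i].key;
        }
      }
      return s;
    }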
@@ -97,6 +97,22 @@ bool tscQueryTags(SQueryInfo* pQueryInfo) {
  return true;
}

+bool tscQueryBlockInfo(SQueryInfo* pQueryInfo) {
+  int32_t numOfCols = (int32_t) tscSqlExprNumOfExprs(pQueryInfo);
+  for (int32_t i = 0; i < numOfCols; ++i) {
+    SSqlExpr* pExpr = tscSqlExprGet(pQueryInfo, i);
+    int32_t functId = pExpr->functionId;
+    // "select count(tbname)" query
+    if (functId == TSDB_FUNC_BLKINFO) {
+      return true;
+    }
+  }
+  return false;
+}

bool tscIsTwoStageSTableQuery(SQueryInfo* pQueryInfo, int32_t tableIndex) {
  if (pQueryInfo == NULL) {
    return false;

@@ -223,6 +239,21 @@ bool tscIsSecondStageQuery(SQueryInfo* pQueryInfo) {
  return false;
}

+bool tscGroupbyColumn(SQueryInfo* pQueryInfo) {
+  STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
+  int32_t numOfCols = tscGetNumOfColumns(pTableMetaInfo->pTableMeta);
+  SSqlGroupbyExpr* pGroupbyExpr = &pQueryInfo->groupbyExpr;
+  for (int32_t k = 0; k < pGroupbyExpr->numOfGroupCols; ++k) {
+    SColIndex* pIndex = taosArrayGet(pGroupbyExpr->columnInfo, k);
+    if (!TSDB_COL_IS_TAG(pIndex->flag) && pIndex->colIndex < numOfCols) { // group by normal columns
+      return true;
+    }
+  }
+  return false;
+}

bool tscIsTWAQuery(SQueryInfo* pQueryInfo) {
  size_t numOfExprs = tscSqlExprNumOfExprs(pQueryInfo);
  for (int32_t i = 0; i < numOfExprs; ++i) {

@@ -1151,7 +1182,7 @@ SSqlExpr* tscSqlExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t functi
}

SSqlExpr* tscSqlExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type,
                           int16_t size, int16_t resColId, int16_t interSize, bool isTagCol) {
  SSqlExpr* pExpr = doCreateSqlExpr(pQueryInfo, functionId, pColIndex, type, size, resColId, interSize, isTagCol);
  taosArrayPush(pQueryInfo->exprList, &pExpr);
  return pExpr;

@@ -1271,6 +1302,34 @@ int32_t tscSqlExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepco
  return 0;
}

+bool tscColumnExists(SArray* pColumnList, SColumnIndex* pColIndex) {
+  // ignore the tbname columnIndex to be inserted into source list
+  if (pColIndex->columnIndex < 0) {
+    return false;
+  }
+  size_t numOfCols = taosArrayGetSize(pColumnList);
+  int16_t col = pColIndex->columnIndex;
+  int32_t i = 0;
+  while (i < numOfCols) {
+    SColumn* pCol = taosArrayGetP(pColumnList, i);
+    if ((pCol->colIndex.columnIndex != col) || (pCol->colIndex.tableIndex != pColIndex->tableIndex)) {
+      ++i;
+      continue;
+    } else {
+      break;
+    }
+  }
+  if (i >= numOfCols || numOfCols == 0) {
+    return false;
+  }
+  return true;
+}

SColumn* tscColumnListInsert(SArray* pColumnList, SColumnIndex* pColIndex) {
  // ignore the tbname columnIndex to be inserted into source list
  if (pColIndex->columnIndex < 0) {

@@ -1563,7 +1622,25 @@ int32_t tscTagCondCopy(STagCond* dest, const STagCond* src) {
  dest->tbnameCond.uid = src->tbnameCond.uid;
  dest->tbnameCond.len = src->tbnameCond.len;
- memcpy(&dest->joinInfo, &src->joinInfo, sizeof(SJoinInfo));
+ dest->joinInfo.hasJoin = src->joinInfo.hasJoin;
+ for (int32_t i = 0; i < TSDB_MAX_JOIN_TABLE_NUM; ++i) {
+   if (src->joinInfo.joinTables[i]) {
+     dest->joinInfo.joinTables[i] = calloc(1, sizeof(SJoinNode));
+     memcpy(dest->joinInfo.joinTables[i], src->joinInfo.joinTables[i], sizeof(SJoinNode));
+     if (src->joinInfo.joinTables[i]->tsJoin) {
+       dest->joinInfo.joinTables[i]->tsJoin = taosArrayDup(src->joinInfo.joinTables[i]->tsJoin);
+     }
+     if (src->joinInfo.joinTables[i]->tagJoin) {
+       dest->joinInfo.joinTables[i]->tagJoin = taosArrayDup(src->joinInfo.joinTables[i]->tagJoin);
+     }
+   }
+ }
  dest->relType = src->relType;
  if (src->pCond == NULL) {

@@ -1609,6 +1686,23 @@ void tscTagCondRelease(STagCond* pTagCond) {
    taosArrayDestroy(pTagCond->pCond);
  }
+ for (int32_t i = 0; i < TSDB_MAX_JOIN_TABLE_NUM; ++i) {
+   SJoinNode *node = pTagCond->joinInfo.joinTables[i];
+   if (node == NULL) {
+     continue;
+   }
+   if (node->tsJoin != NULL) {
+     taosArrayDestroy(node->tsJoin);
+   }
+   if (node->tagJoin != NULL) {
+     taosArrayDestroy(node->tagJoin);
+   }
+   tfree(node);
+ }
  memset(pTagCond, 0, sizeof(STagCond));
}

@@ -1733,10 +1827,15 @@ void tscInitQueryInfo(SQueryInfo* pQueryInfo) {
  pQueryInfo->fieldsInfo.internalField = taosArrayInit(4, sizeof(SInternalField));
  assert(pQueryInfo->exprList == NULL);
  pQueryInfo->exprList = taosArrayInit(4, POINTER_BYTES);
  pQueryInfo->colList = taosArrayInit(4, POINTER_BYTES);
  pQueryInfo->udColumnId = TSDB_UD_COLUMN_INDEX;
- pQueryInfo->resColumnId= -1000;
+ pQueryInfo->resColumnId = -1000;
+ pQueryInfo->limit.limit = -1;
+ pQueryInfo->limit.offset = 0;
+ pQueryInfo->slimit.limit = -1;
+ pQueryInfo->slimit.offset = 0;
}

int32_t tscAddSubqueryInfo(SSqlCmd* pCmd) {

@@ -2293,16 +2392,21 @@ void tscDoQuery(SSqlObj* pSql) {
}

int16_t tscGetJoinTagColIdByUid(STagCond* pTagCond, uint64_t uid) {
- if (pTagCond->joinInfo.left.uid == uid) {
-   return pTagCond->joinInfo.left.tagColId;
- } else if (pTagCond->joinInfo.right.uid == uid) {
-   return pTagCond->joinInfo.right.tagColId;
- } else {
-   assert(0);
-   return -1;
- }
+ int32_t i = 0;
+ while (i < TSDB_MAX_JOIN_TABLE_NUM) {
+   SJoinNode* node = pTagCond->joinInfo.joinTables[i];
+   if (node && node->uid == uid) {
+     return node->tagColId;
+   }
+   i++;
+ }
+ assert(0);
+ return -1;
}

int16_t tscGetTagColIndexById(STableMeta* pTableMeta, int16_t colId) {
  int32_t numOfTags = tscGetNumOfTags(pTableMeta);
...
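The tscTagCondCopy/tscTagCondRelease hunks above replace a flat memcpy of the join info with a per-node deep copy: each SJoinNode now owns dynamically allocated arrays (tsJoin, tagJoin) that must be duplicated on copy and destroyed on release, otherwise two STagCond instances would share and double-free the same buffers. A minimal generic sketch of that copy/release pairing, using simplified stand-in types (Node, nodeDup, nodeRelease are invented for illustration, not project APIs):

    #include <stdlib.h>
    #include <string.h>

    /* simplified stand-in for SJoinNode: a node owning one heap-allocated array */
    typedef struct Node {
      int   *items;
      size_t count;
    } Node;

    /* deep copy: the clone gets its own items buffer, not a shared pointer */
    static Node *nodeDup(const Node *src) {
      Node *dst = calloc(1, sizeof(Node));
      if (dst == NULL) return NULL;
      dst->count = src->count;
      dst->items = malloc(src->count * sizeof(int));
      if (dst->items != NULL) {
        memcpy(dst->items, src->items, src->count * sizeof(int));
      }
      return dst;
    }

    /* release mirrors the copy: free the owned buffer first, then the node */
    static void nodeRelease(Node *node) {
      if (node == NULL) return;
      free(node->items);
      free(node);
    }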
@@ -51,6 +51,7 @@ enum {
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_ACCT, "alter-acct" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_TABLE, "alter-table" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_ALTER_DB, "alter-db" )
+ TSDB_DEFINE_SQL_TYPE(TSDB_SQL_SYNC_DB_REPLICA, "sync db-replica")
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_MNODE, "create-mnode" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_DROP_MNODE, "drop-mnode" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CREATE_DNODE, "create-dnode" )

@@ -87,13 +88,13 @@
   */
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_RETRIEVE_EMPTY_RESULT, "retrieve-empty-result" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_RESET_CACHE, "reset-cache" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_SERV_STATUS, "serv-status" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CURRENT_DB, "current-db" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_SERV_VERSION, "serv-version" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CLI_VERSION, "cli-version" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CURRENT_USER, "current-user ")
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_CFG_LOCAL, "cfg-local" )
  TSDB_DEFINE_SQL_TYPE( TSDB_SQL_MAX, "max" )
};
...
@@ -283,12 +283,37 @@ typedef struct {
#define keyCol(pCols) (&((pCols)->cols[0]))  // Key column
#define dataColsTKeyAt(pCols, idx) ((TKEY *)(keyCol(pCols)->pData))[(idx)]
#define dataColsKeyAt(pCols, idx) tdGetKey(dataColsTKeyAt(pCols, idx))
- #define dataColsTKeyFirst(pCols) (((pCols)->numOfRows == 0) ? TKEY_INVALID : dataColsTKeyAt(pCols, 0))
- #define dataColsKeyFirst(pCols) (((pCols)->numOfRows == 0) ? TSDB_DATA_TIMESTAMP_NULL : dataColsKeyAt(pCols, 0))
- #define dataColsTKeyLast(pCols) \
-   (((pCols)->numOfRows == 0) ? TKEY_INVALID : dataColsTKeyAt(pCols, (pCols)->numOfRows - 1))
- #define dataColsKeyLast(pCols) \
-   (((pCols)->numOfRows == 0) ? TSDB_DATA_TIMESTAMP_NULL : dataColsKeyAt(pCols, (pCols)->numOfRows - 1))
+ static FORCE_INLINE TKEY dataColsTKeyFirst(SDataCols *pCols) {
+   if (pCols->numOfRows) {
+     return dataColsTKeyAt(pCols, 0);
+   } else {
+     return TKEY_INVALID;
+   }
+ }
+ static FORCE_INLINE TSKEY dataColsKeyFirst(SDataCols *pCols) {
+   if (pCols->numOfRows) {
+     return dataColsKeyAt(pCols, 0);
+   } else {
+     return TSDB_DATA_TIMESTAMP_NULL;
+   }
+ }
+ static FORCE_INLINE TKEY dataColsTKeyLast(SDataCols *pCols) {
+   if (pCols->numOfRows) {
+     return dataColsTKeyAt(pCols, pCols->numOfRows - 1);
+   } else {
+     return TKEY_INVALID;
+   }
+ }
+ static FORCE_INLINE TSKEY dataColsKeyLast(SDataCols *pCols) {
+   if (pCols->numOfRows) {
+     return dataColsKeyAt(pCols, pCols->numOfRows - 1);
+   } else {
+     return TSDB_DATA_TIMESTAMP_NULL;
+   }
+ }

SDataCols *tdNewDataCols(int maxRowSize, int maxCols, int maxRows);
void tdResetDataCols(SDataCols *pCols);
...
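A plausible rationale for the hunk above: turning the dataCols*First/Last macros into static FORCE_INLINE functions gives the compiler a real parameter type to check and guarantees that pCols is evaluated exactly once, while typically compiling to the same inlined code. A generic illustration of the double-evaluation hazard the function form avoids (MAXVAL and maxval_i are toy names, not project code):

    #include <stdio.h>

    /* macro form: the argument expression is pasted twice */
    #define MAXVAL(a, b) ((a) > (b) ? (a) : (b))

    /* inline-function form: each argument is evaluated exactly once and type-checked */
    static inline int maxval_i(int a, int b) { return a > b ? a : b; }

    int main(void) {
      int i = 5, j = 1;
      int m1 = MAXVAL(i++, j);   /* i++ appears twice in the expansion: i ends up at 7, m1 == 6 */
      int k = 5;
      int m2 = maxval_i(k++, j); /* k++ is evaluated once: k ends up at 6, m2 == 5 */
      printf("m1=%d i=%d  m2=%d k=%d\n", m1, i, m2, k);
      return 0;
    }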
@@ -95,6 +95,7 @@ extern int8_t tsCompression;
extern int8_t  tsWAL;
extern int32_t tsFsyncPeriod;
extern int32_t tsReplications;
+extern int16_t tsPartitons;
extern int32_t tsQuorum;
extern int8_t  tsUpdate;
extern int8_t  tsCacheLastRow;

@@ -162,6 +163,7 @@ extern float tsTotalDataDirGB;
extern float tsAvailLogDirGB;
extern float tsAvailTmpDirectorySpace;
extern float tsAvailDataDirGB;
+extern float tsUsedDataDirGB;
extern float tsMinimalLogDirGB;
extern float tsReservedTmpDirectorySpace;
extern float tsMinimalDataDirGB;
...
@@ -33,7 +33,7 @@ typedef struct SDataStatis {
typedef struct SColumnInfoData {
  SColumnInfo info;
- void* pData;    // the corresponding block data in memory
+ char* pData;    // the corresponding block data in memory
} SColumnInfoData;

typedef struct SResPair {
...
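A likely motivation for the void* to char* change above: pointer arithmetic on the column block can then be written directly, since arithmetic on void* is only a GNU extension while char* arithmetic (one byte per step) is standard C. A small sketch of the access pattern this enables; the RowBlock type and rowAt helper are illustrative stand-ins, not the project's structs:

    #include <stdint.h>
    #include <stddef.h>

    /* illustrative fixed-width column block, not TDengine's SColumnInfoData */
    typedef struct RowBlock {
      int32_t bytesPerRow;  /* size of one value in this column */
      char   *pData;        /* contiguous column data */
    } RowBlock;

    /* standard-conforming byte arithmetic: char* advances by exactly one byte */
    static void *rowAt(const RowBlock *blk, int32_t row) {
      return blk->pData + (size_t)row * (size_t)blk->bytesPerRow;
    }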
@@ -41,41 +41,46 @@ static uint8_t UNUSED_FUNC isQueryOnPrimaryKey(const char *primaryColumnName, co
static void reverseCopy(char* dest, const char* src, int16_t type, int32_t numOfRows) {
  switch(type) {
-   case TSDB_DATA_TYPE_TINYINT: {
+   case TSDB_DATA_TYPE_TINYINT:
+   case TSDB_DATA_TYPE_UTINYINT:{
      int8_t* p = (int8_t*) dest;
      int8_t* pSrc = (int8_t*) src;
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
-   case TSDB_DATA_TYPE_SMALLINT: {
+   case TSDB_DATA_TYPE_SMALLINT:
+   case TSDB_DATA_TYPE_USMALLINT:{
      int16_t* p = (int16_t*) dest;
      int16_t* pSrc = (int16_t*) src;
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
-   case TSDB_DATA_TYPE_INT: {
+   case TSDB_DATA_TYPE_INT:
+   case TSDB_DATA_TYPE_UINT: {
      int32_t* p = (int32_t*) dest;
      int32_t* pSrc = (int32_t*) src;
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
-   case TSDB_DATA_TYPE_BIGINT: {
+   case TSDB_DATA_TYPE_BIGINT:
+   case TSDB_DATA_TYPE_UBIGINT: {
      int64_t* p = (int64_t*) dest;
      int64_t* pSrc = (int64_t*) src;
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
    case TSDB_DATA_TYPE_FLOAT: {
      float* p = (float*) dest;

@@ -84,7 +89,7 @@ static void reverseCopy(char* dest, const char* src, int16_t type, int32_t numOf
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
    case TSDB_DATA_TYPE_DOUBLE: {
      double* p = (double*) dest;

@@ -93,7 +98,7 @@ static void reverseCopy(char* dest, const char* src, int16_t type, int32_t numOf
      for(int32_t i = 0; i < numOfRows; ++i) {
        p[i] = pSrc[numOfRows - i - 1];
      }
-     break;
+     return;
    }
    default: assert(0);
  }
...
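In the reverseCopy() hunk above, the new unsigned types simply share the signed case bodies, since reversing a column only depends on the element width, not on signedness. An alternative sketch (not the implementation above) that dispatches purely on byte width, so every fixed-width type shares one loop:

    #include <string.h>
    #include <stdint.h>

    /* Alternative sketch: reverse a column by element width only, so
     * signed/unsigned/float types of the same size share one code path. */
    static void reverseByWidth(char *dest, const char *src, size_t elemSize, int32_t numOfRows) {
      for (int32_t i = 0; i < numOfRows; ++i) {
        const char *from = src + (size_t)(numOfRows - i - 1) * elemSize;
        memcpy(dest + (size_t)i * elemSize, from, elemSize);
      }
    }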
@@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED)
  ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME}
    POST_BUILD
    COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
-   COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.21-dist.jar ${LIBRARY_OUTPUT_PATH}
+   COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.22-dist.jar ${LIBRARY_OUTPUT_PATH}
    COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
    COMMENT "build jdbc driver")
  ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME})
...
(The diffs for the remaining files in this commit are collapsed and not shown.)