Commit 3cc1a3a3 authored by cpwu

Merge branch '3.0' into cpwu/3.0

......@@ -19,3 +19,6 @@
[submodule "tools/taosadapter"]
path = tools/taosadapter
url = https://github.com/taosdata/taosadapter.git
[submodule "tools/taosws-rs"]
path = tools/taosws-rs
url = https://github.com/taosdata/taosws-rs.git
<p>
<p align="center">
<a href="https://tdengine.com" target="_blank">
<img
src="docs/assets/tdengine.svg"
alt="TDengine"
width="500"
/>
</a>
</p>
<p>
[![Build Status](https://travis-ci.org/taosdata/TDengine.svg?branch=master)](https://travis-ci.org/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/master?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
[![tdengine](https://snapcraft.io//tdengine/badge.svg)](https://snapcraft.io/tdengine)
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
简体中文 | [English](./README.md)
简体中文 | [English](README.md) | We are hiring; see the open positions [here](https://www.taosdata.com/cn/careers/)
# Introduction to TDengine
TDengine is a big data platform designed and optimized by TAOS Data for IoT, Connected Vehicles, Industrial IoT, IT operations and similar scenarios. Beyond its core time-series database, which is more than 10 times faster than others, it also provides caching, data subscription, stream computing and other functionality to minimize the complexity of development and operation, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license).
TDengine is a high-performance, distributed time-series database with SQL support. Beyond the time-series database itself, it also provides caching, data subscription, stream computing and other functionality to minimize the complexity of development and operation, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license). Compared with other time-series databases, TDengine has the following features:
- **High Performance**: With an innovative storage engine design, TDengine is more than 10 times faster than general-purpose databases for both data ingestion and queries, far outperforming other time-series databases, while also greatly reducing storage space.
- **Distributed**: With a natively distributed design, TDengine scales horizontally; simply adding nodes yields more data processing power, and a multi-replica mechanism ensures high availability.
- **SQL Support**: TDengine uses SQL as its query language, reducing learning and migration costs, provides SQL extensions for time-series-specific analysis, and supports convenient, flexible schemaless data ingestion.
- **All in One**: By integrating database, message queue, cache and stream computing functionality, applications no longer need to integrate Kafka/Redis/HBase/Spark or other software, greatly reducing development and maintenance costs.
- **Zero Management**: Installation and cluster setup take only seconds, with no dependencies and no need for manual sharding or partitioning; runtime status monitoring integrates seamlessly with Grafana and other DevOps tools.
- **Zero Learning Cost**: With SQL as the query language and support for Python, Java, C/C++, Go, Rust, Node.js and other programming languages, it is similar to MySQL and has essentially zero learning cost.
- **Seamless Integration**: Without a single line of code, TDengine integrates seamlessly with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, Matlab, and R.
- 10x+ performance improvement. With an innovative data storage structure, a single core can process at least 20K requests per second, insert millions of data points, and read more than 10 million data points, more than 10 times faster than existing general-purpose databases.
- Hardware or cloud service costs cut to 1/5. Thanks to its performance, the computing resources required are less than 1/5 of typical big data solutions; with columnar storage and advanced compression algorithms, storage space is less than 1/10 of general-purpose databases.
- Full-stack time-series data processing engine. Database, message queue, cache and stream computing are integrated together, so applications no longer need to integrate Kafka/Redis/HBase/Spark and other software, greatly reducing development and maintenance costs.
- Powerful analysis functions. Whether the data is from ten years or one second ago, it can be queried simply by specifying a time range. Data can be aggregated over time or across multiple devices. Ad hoc queries can be run at any time via the shell, Python, R, or Matlab.
- Seamless connection with third-party tools. Without a single line of code, it integrates with Telegraf, Grafana, EMQ X, Prometheus, Matlab and R. MQTT, OPC, Hadoop, Spark and more will be supported later, and BI tools will also connect seamlessly.
- Zero operation cost, zero learning cost. Installation and clustering are done in seconds, with no manual sharding and with real-time backup. Standard SQL with JDBC and RESTful support, plus Python/Java/C/C++/Go/Node.js; similar to MySQL, with zero learning cost.
- **Interactive Console**: Through the command-line console, you can run ad hoc queries, perform database operations, and manage and maintain the cluster simply by executing SQL statements, without any programming.
TDengine can be widely applied to IoT, Industrial IoT, Connected Vehicles, IT operations, energy, finance and other fields, enabling the TB- or even PB-scale data produced every day by huge numbers of devices and data collectors to be processed efficiently in real time, so that business status can be monitored and alerted on in real time and commercial value can be mined from big data.
# Documentation
TDengine is an efficient platform for storing, querying and analyzing large volumes of time-series data, designed and optimized for IoT, Connected Vehicles, Industrial IoT, and operations monitoring. You can use it just like the relational database MySQL, but we recommend reading the documentation below carefully before use, especially [Data Model](https://www.taosdata.com/cn/documentation/architecture) and [Data Modeling](https://www.taosdata.com/cn/documentation/model). In addition to this documentation, you are welcome to [download the product white paper](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf).
TDengine uses the traditional relational database model, so you can use it just like the relational database MySQL. However, because it introduces the concepts of super tables and one table per data collection point, we recommend reading the documentation below carefully before use, especially [Data Model](https://www.taosdata.com/cn/documentation/architecture) and [Data Modeling](https://www.taosdata.com/cn/documentation/model). In addition to this documentation, you are welcome to [download the product white paper](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf).
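To illustrate the super table and "one table per data collection point" model mentioned above, here is a minimal SQL sketch; the `meters`/`d1001` names, columns and tags are purely illustrative and are not taken from this repository:
```sql
-- a super table describes one category of collection points; TAGS hold static metadata
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location BINARY(64), group_id INT);
-- each physical collection point gets its own table created from the super table
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
-- data is written into the per-device table
INSERT INTO d1001 VALUES (NOW, 10.3, 219);
```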
# Building
At the moment, the TDengine 2.0 server can only be installed and run on Linux; Windows, macOS and other systems will be supported later. The client can be installed and run on Windows or Linux. Applications on any OS can also connect to the server taosd via the RESTful interface. Supported CPUs include X64/ARM64/MIPS64/Alpha64, with ARM32, RISC-V and other architectures to follow. You can choose to install from [source code](https://www.taosdata.com/cn/getting-started/#通过源码安装) or from [packages](https://www.taosdata.com/cn/getting-started/#通过安装包安装) as needed. This quick guide only applies to installing from source.
## Install tools
### Ubuntu 16.04 and above & Debian:
```bash
sudo apt-get install -y gcc cmake build-essential git
sudo apt-get install -y gcc cmake build-essential git libssl-dev
```
### Ubuntu 14.04:
......@@ -56,10 +77,22 @@ sudo apt-get install -y openjdk-8-jdk
sudo apt-get install -y maven
```
#### Install build dependencies for taos-tools
taosTools is a collection of auxiliary tools for TDengine. It currently contains taosBenchmark (formerly named taosdemo) and taosdump.
By default, building TDengine does not include taosTools. You can pass `cmake .. -DBUILD_TOOLS=true` when building TDengine to compile taosTools as well.
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7:
```bash
sudo yum install -y gcc gcc-c++ make cmake git
sudo yum install -y gcc gcc-c++ make cmake git openssl-devel
```
Install OpenJDK 8:
......@@ -74,10 +107,10 @@ sudo yum install -y java-1.8.0-openjdk
sudo yum install -y maven
```
### CentOS 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
```
Install OpenJDK 8:
......@@ -92,6 +125,33 @@ sudo dnf install -y java-1.8.0-openjdk
sudo dnf install -y maven
```
#### Install build dependencies for taosTools on CentOS
To build [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following dependencies need to be installed:
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
Note: Because snappy lacks pkg-config support (see this [link](https://github.com/google/snappy/pull/86)), cmake reports that it cannot find libsnappy; the build actually works fine.
### Set up the Go development environment
TDengine contains several components developed in Go. Please refer to the official documentation at golang.org to set up your Go development environment.
Please use Go 1.14 or above. For users in China, we recommend using a proxy to speed up package downloads.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
### Set up the Rust development environment
TDengine contains several components developed in Rust. Please refer to the official documentation at rust-lang.org to set up your Rust development environment.
## Get the source code
First, clone the source code from GitHub:
......@@ -107,22 +167,41 @@ The Go connector and Grafana plugin are in separate repositories; if you want to install them
git submodule update --init --recursive
```
If downloading over the https protocol is slow, you can add the following two lines to the ~/.gitconfig file to download over the ssh protocol instead. You need to upload your ssh public key to GitHub first; see the GitHub official documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Build TDengine
### Linux
You can run the `build.sh` script in the code repository to build both TDengine and taosTools (including taosBenchmark and taosdump):
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
./build.sh
```
You can choose to use Jemalloc as the memory allocator instead of the default glibc:
This script is equivalent to executing the following commands:
```bash
git submodule update --init --recursive
mkdir debug
cd debug
cmake .. -DBUILD_TOOLS=true
make
```
You can also choose to use jemalloc as the memory allocator instead of the default glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```
On X86-64, X86, arm64, arm32 and mips64 platforms, the TDengine build script can detect the host machine's architecture automatically. You can also set the CPUTYPE option manually to specify the CPU type, such as aarch64 or aarch32.
aarch64:
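For example (matching the aarch64 command shown in the English section of this diff):
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```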
......@@ -157,7 +236,7 @@ nmake
If you are using Visual Studio 2019 or 2017:
Open cmd.exe and, when executing vcvarsall.bat, specify "x64" for a 64-bit operating system or "x86" for a 32-bit operating system.
```bash
mkdir debug && cd debug
......@@ -174,9 +253,7 @@ cmake .. -G "NMake Makefiles"
nmake
```
If you are using Visual Studio 2022, the default installation path of the `vcvarsall.bat` script is `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat`.
### Mac OS X
### macOS
Install the Xcode command line tools and cmake. On Catalina and Big Sur, Xcode 11.4+ is required.
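The build commands themselves are collapsed in this diff; as the context line below indicates, they follow the same CMake flow as on Linux, sketched here:
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```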
......@@ -187,13 +264,17 @@ cmake .. && cmake --build .
# Installation
After the build completes, install TDengine (the commands below use Linux as an example; on Windows, the corresponding command is `nmake install`):
## Linux
After the build completes, install TDengine:
```bash
sudo make install
```
See the [directory structure](https://www.taosdata.com/cn/documentation/administrator#directories) documentation to learn more about the directories and files created on your system.
Starting from version 2.0, installing from source also configures service management for TDengine.
Users can also choose to [install from a package](https://www.taosdata.com/en/getting-started/#Install-from-Package).
After a successful installation, start the TDengine service in a terminal:
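The command is collapsed in this diff; as in the English README, the service is managed through systemd, sketched below:
```bash
sudo systemctl start taosd
```
Then connect with the TDengine shell by running `taos` in a terminal.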
......@@ -209,6 +290,40 @@ taos
If the TDengine shell connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is shown.
## Windows
After the build completes, install TDengine:
```cmd
nmake install
```
## macOS
After the build completes, install TDengine:
```bash
sudo make install
```
After a successful installation, if you want to start TDengine as a service, first configure the `.plist` file by running the following in a terminal:
```bash
sudo cp ../packaging/macOS/com.taosdata.tdengine.plist /Library/LaunchDaemons
```
Start the TDengine service in a terminal:
```bash
sudo launchctl load /Library/LaunchDaemons/com.taosdata.tdengine.plist
```
Stop the TDengine service in a terminal:
```bash
sudo launchctl unload /Library/LaunchDaemons/com.taosdata.tdengine.plist
```
## Quick Run
If you do not want to run TDengine as a service, you can run it directly in a terminal. After the build completes, execute the following command (on Windows, the generated executables carry the .exe suffix, e.g. taosd.exe):
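The commands themselves are collapsed in this diff; as in the English README below, a sketch of running the server and shell directly from the build tree:
```bash
# start the server with the test configuration directory
./build/bin/taosd -c test/cfg
# in another terminal, connect with the TDengine shell
./build/bin/taos -c test/cfg
```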
......@@ -227,15 +342,15 @@ taos
# Try TDengine
In the TDengine shell, you can create and drop databases and tables and run insert and query operations through SQL commands.
```bash
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
......@@ -247,33 +362,35 @@ Query OK, 2 row(s) in set (0.001700s)
## Official Connectors
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, RESTful and more, to help users develop applications quickly:
- [Java](https://www.taosdata.com/cn/documentation/connector/java)
- Java
- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
- C/C++
- [Python](https://www.taosdata.com/cn/documentation/connector#python)
- Python
- [Go](https://www.taosdata.com/cn/documentation/connector#go)
- Go
- [RESTful API](https://www.taosdata.com/cn/documentation/connector#restful)
- RESTful API
- [Node.js](https://www.taosdata.com/cn/documentation/connector#nodejs)
- Node.js
- [Rust](https://www.taosdata.com/cn/documentation/connector/rust)
## Third-Party Connectors
The TDengine community ecosystem also includes some very friendly third-party connectors; their source code can be reached via the links below.
- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/examples/lua)
# Running and Adding Test Cases
TDengine's test framework and all test cases are fully open source.
Click [here](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) to learn how to run the test cases and add new ones.
Click [here](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) to learn how to run the test cases and add new ones.
# Become a Community Contributor
......@@ -281,8 +398,8 @@ TDengine's test framework and all test cases are fully open source.
# Join the Technical Discussion Group
TDengine's official community, the "IoT Big Data Group", is open to everyone. Search for the WeChat ID "tdengine" and add Little T as a friend to join the group.
# [Who Is Using TDengine](https://github.com/taosdata/TDengine/issues/2432)
All TDengine users and contributors are welcome to share [here](https://github.com/taosdata/TDengine/issues/2432) your stories of developing or using TDengine in your current work.
<p>
<p align="center">
<a href="https://tdengine.com" target="_blank">
<img
src="docs/assets/tdengine.svg"
alt="TDengine"
width="500"
/>
</a>
</p>
<p>
[![Build Status](https://cloud.drone.io/api/badges/taosdata/TDengine/status.svg?ref=refs/heads/master)](https://cloud.drone.io/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/master?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
[![tdengine](https://snapcraft.io//tdengine/badge.svg)](https://snapcraft.io/tdengine)
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
English | [简体中文](./README-CN.md)
English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)
# What is TDengine?
TDengine is an open-source big data platform under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html), designed and optimized for the Internet of Things (IoT), Connected Cars, Industrial IoT, and IT Infrastructure and Application Monitoring. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and cost of development and operation.
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including the cluster feature, is open source under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html). Besides the database, it provides caching, stream processing, data subscription and other functionalities to reduce the complexity and cost of development and operation. TDengine differentiates itself from other TSDBs with the following advantages.
- **High Performance**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
- **Scalable**: TDengine provides out-of-the-box scalability and high availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
- **10x Faster on Insert/Query Speeds**: Through the innovative design on storage, on a single-core machine, over 20K requests can be processed, millions of data points can be ingested, and over 10 million data points can be retrieved in a second. It is 10 times faster than other databases.
- **SQL Support**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
- **1/5 Hardware/Cloud Service Costs**: Compared with typical big data solutions, less than 1/5 of computing resources are required. Via column-based storage and tuned compression algorithms for different data types, less than 1/10 of storage space is needed.
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions, so it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler and easier to maintain.
- **Full Stack for Time-Series Data**: By integrating a database with message queuing, caching, and stream computing features together, it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software. It makes the system architecture much simpler and more robust.
- **Seamless Integration**: Without a single line of code, TDengine provides seamless integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More will be integrated.
- **Powerful Data Analysis**: Whether it is 10 years or one minute ago, data can be queried just by specifying the time range. Data can be aggregated over time, multiple time streams or both. Ad Hoc queries or analyses can be executed via TDengine shell, Python, R or Matlab.
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
- **Seamless Integration with Other Tools**: Telegraf, Grafana, Matlab, R, and other tools can be integrated with TDengine without a line of code. MQTT, OPC, Hadoop, Spark, and many others will be integrated soon.
- **Zero Learning Cost**: With SQL as the query language and support for connectors in ubiquitous languages such as Python, Java, C/C++, Go, Rust, and Node.js, there is zero learning cost.
- **Zero Management, No Learning Curve**: It takes only seconds to download, install, and run it successfully; there are no other dependencies. Automatic partitioning on tables or DBs. Standard SQL is used, with C/C++, Python, JDBC, Go and RESTful connectors.
- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
TDengine can be widely applied to Internet of Things (IoT), Connected Vehicles, Industrial IoT, DevOps, energy, finance and many other scenarios.
# Documentation
For the user manual, system design and architecture, and engineering blogs, refer to the [TDengine Documentation](https://www.taosdata.com/en/documentation/) (for the Chinese version, click [here](https://www.taosdata.com/cn/documentation20/))
for details. The documentation from our website can also be downloaded locally from *documentation/tdenginedocs-en* or *documentation/tdenginedocs-cn*.
for details. The documentation from our website can also be downloaded locally from _documentation/tdenginedocs-en_ or _documentation/tdenginedocs-cn_.
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
At the moment, TDengine server only supports running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or build it from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in the project directory.
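As a quick sanity check before configuring (a sketch, not part of the original instructions):
```bash
cmake --version   # should report 3.0.2 or higher
```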
## Install tools
## Install build dependencies
### Ubuntu 16.04 and above or Debian
### Ubuntu 16.04 and above & Debian:
```bash
sudo apt-get install -y gcc cmake build-essential git
sudo apt-get install -y gcc cmake build-essential git libssl-dev
```
### Ubuntu 14.04:
### Ubuntu 14.04
```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```
To compile and package the JDBC driver source code, you should have Java JDK 8 or higher and Apache Maven 2.7 or higher installed.
To install openjdk-8:
```bash
sudo apt-get install -y openjdk-8-jdk
```
To install Apache Maven:
```bash
sudo apt-get install -y maven
```
### Centos 7:
#### Install build dependencies for taosTools
We provide a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were once part of TDengine. Starting with TDengine 2.4.0.0, taosBenchmark and taosdump are no longer released together with TDengine.
By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
sudo yum install -y gcc gcc-c++ make cmake git
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```
To install openjdk-8:
```bash
sudo yum install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo yum install -y maven
```
### Centos 8 & Fedora:
### CentOS 8 & Fedora
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
```
To install openjdk-8:
```bash
sudo dnf install -y java-1.8.0-openjdk
```
To install Apache Maven:
```bash
sudo dnf install -y maven
```
#### Install build dependencies for taosTools on CentOS
To build the [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, but snappy actually works fine.
### Setup golang environment
TDengine includes a few components developed in Go. Please refer to the official golang.org documentation to set up the Go environment.
Please use version 1.14 or above. For users in China, we recommend using a proxy to accelerate package downloads.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
### Setup rust environment
TDengine includes a few components developed in Rust. Please refer to the official rust-lang.org documentation to set up the Rust environment.
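As a sketch of one common way to do this (the rustup installer below is the standard community method, not something this README prescribes; verify the exact command on rust-lang.org):
```bash
# install the Rust toolchain via rustup, then confirm it is on the PATH
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
rustc --version
```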
## Get the source code
First of all, you may clone the source code from GitHub:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
The connectors for Go & Grafana have been moved to separate repositories,
The connectors for Go & Grafana and some tools have been moved to separate repositories,
so you should run this command in the TDengine directory to install them:
```bash
git submodule update --init --recursive
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You need to upload your ssh public key to GitHub first. Please refer to the GitHub official documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Build TDengine
### On Linux platform
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
./build.sh
```
It is equivalent to executing the following commands:
```bash
git submodule update --init --recursive
mkdir debug
cd debug
cmake .. -DBUILD_TOOLS=true
make
```
Note: TDengine 2.3.x.0 and later use a component named taosAdapter as the HTTP daemon by default, instead of the HTTP daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you pull the latest TDengine source code into an existing codebase, please execute `git submodule update --init --recursive` to pull the taosAdapter source code. Please install Go 1.14 or above to compile taosAdapter. If you run into difficulties with `go mod`, especially if you are in China, you can use a proxy to solve the problem.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
The embedded HTTP daemon is still built from the TDengine source code by default. Alternatively, you can use the following command to build taosAdapter instead:
```
cmake .. -DBUILD_HTTP=false
```
You can use jemalloc as the memory allocator instead of glibc:
```
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
......@@ -120,24 +222,28 @@ TDengine build script can detect the host machine's architecture on X86-64, X86,
You can also specify the CPUTYPE option, such as aarch64 or aarch32, if the detection result is not correct:
aarch64:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
aarch32:
```bash
cmake .. -DCPUTYPE=aarch32 && cmake --build .
```
mips64:
```bash
cmake .. -DCPUTYPE=mips64 && cmake --build .
```
### On Windows platform
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
......@@ -145,7 +251,7 @@ cmake .. -G "NMake Makefiles"
nmake
```
If you use Visual Studio 2019 or 2017:
Please open a command window by executing "cmd.exe".
Please specify "x64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
......@@ -158,15 +264,14 @@ nmake
```
Or, you can simply open a command window by clicking Windows Start -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on the architecture of your Windows, and then execute commands as follows:
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
If you use Visual Studio 2022, the only change is the default path of `vcvarsall.bat`, which is `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat`.
### On Mac OS X platform
### On macOS platform
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
......@@ -177,7 +282,10 @@ cmake .. && cmake --build .
# Installing
After building successfully, TDengine can be installed by: (On Windows platform, the following command should be `nmake install`)
## On Linux platform
After building successfully, TDengine can be installed by
```bash
sudo make install
```
......@@ -186,68 +294,129 @@ Users can find more information about directories installed on the system in the
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package).
To start the service after installation, in a terminal, use:
```bash
sudo systemctl start taosd
```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect to the TDengine server. In a terminal, use:
```bash
taos
```
If the TDengine shell connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
### Install TDengine by apt-get
If you use a Debian or Ubuntu system, you can use the apt-get command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[Optional] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
## On Windows platform
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
## On macOS platform
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
To start the service after installation, config `.plist` file first, in a terminal, use:
```bash
sudo cp ../packaging/macOS/com.taosdata.tdengine.plist /Library/LaunchDaemons
```
To start the service, in a terminal, use:
```bash
sudo launchctl load /Library/LaunchDaemons/com.taosdata.tdengine.plist
```
To stop the service, in a terminal, use:
```bash
sudo launchctl unload /Library/LaunchDaemons/com.taosdata.tdengine.plist
```
## Quick Run
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows will be `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine shell to connect to the server:
```bash
./build/bin/taos -c test/cfg
```
option "-c test/cfg" specifies the system configuration file directory.
option "-c test/cfg" specifies the system configuration file directory.
# Try TDengine
It is easy to run SQL commands from the TDengine shell, just as in other SQL databases.
```sql
create database db;
use db;
create table t (ts timestamp, a int);
insert into t values ('2019-07-15 00:00:00', 1);
insert into t values ('2019-07-15 01:00:00', 2);
select * from t;
drop database db;
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
```
# Developing with TDengine
### Official Connectors
## Official Connectors
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://www.taosdata.com/en/documentation/connector/#Java-Connector)
- [C/C++](https://www.taosdata.com/en/documentation/connector/#C/C++-Connector)
- [Python](https://www.taosdata.com/en/documentation/connector/#Python-Connector)
- [Go](https://www.taosdata.com/en/documentation/connector/#Go-Connector)
- [RESTful API](https://www.taosdata.com/en/documentation/connector/#RESTful-Connector)
- [Node.js](https://www.taosdata.com/en/documentation/connector/#Node.js-Connector)
- [Java](https://www.taosdata.com/en/documentation/connector/java)
- [C/C++](https://www.taosdata.com/en/documentation/connector#c-cpp)
- [Python](https://www.taosdata.com/en/documentation/connector#python)
- [Go](https://www.taosdata.com/en/documentation/connector#go)
- [RESTful API](https://www.taosdata.com/en/documentation/connector#restful)
- [Node.js](https://www.taosdata.com/en/documentation/connector#nodejs)
- [Rust](https://www.taosdata.com/en/documentation/connector/rust)
### Third Party Connectors
## Third Party Connectors
The TDengine community has also kindly built some of their own connectors! Follow the links below to find the source code for them.
- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
# How to run the test cases and how to add a new test case?
TDengine's test framework and all test cases are fully open source.
Please refer to [this document](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run tests and develop new test cases.
# How to run the test cases and how to add a new test case
TDengine's test framework and all test cases are fully open source.
Please refer to [this document](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run tests and develop new test cases.
# TDengine Roadmap
- Support event-driven stream computing
- Support user defined functions
- Support MQTT connection
......
......@@ -18,6 +18,14 @@ if (NOT DEFINED TD_GRANT)
SET(TD_GRANT FALSE)
endif()
IF ("${WEBSOCKET}" MATCHES "true")
SET(TD_WEBSOCKET TRUE)
MESSAGE("Enable websocket")
ADD_DEFINITIONS(-DWEBSOCKET)
ELSE ()
SET(TD_WEBSOCKET FALSE)
ENDIF ()
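Based on the option check above, a hedged usage sketch: passing the variable on the CMake command line should set TD_WEBSOCKET and define WEBSOCKET for the build (an assumption drawn from this snippet alone; other build settings may also be required):
```bash
cmake .. -DWEBSOCKET=true
```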
IF ("${BUILD_HTTP}" STREQUAL "")
IF (TD_LINUX)
IF (TD_ARM_32)
......
......@@ -55,7 +55,8 @@ enum {
enum {
STREAM_INPUT__DATA_SUBMIT = 1,
STREAM_INPUT__DATA_BLOCK,
STREAM_INPUT__DATA_SCAN,
STREAM_INPUT__TABLE_SCAN,
STREAM_INPUT__TQ_SCAN,
STREAM_INPUT__DATA_RETRIEVE,
STREAM_INPUT__TRIGGER,
STREAM_INPUT__CHECKPOINT,
......@@ -88,8 +89,6 @@ typedef struct {
#pragma pack(push, 1)
typedef struct SColumnDataAgg {
int16_t colId;
int16_t minIndex;
int16_t maxIndex;
int16_t numOfNull;
int64_t sum;
int64_t max;
......@@ -124,7 +123,8 @@ enum {
};
typedef struct {
int8_t fetchType;
int8_t fetchType;
STqOffsetVal offset;
union {
SSDataBlock data;
void* meta;
......
......@@ -231,7 +231,7 @@ SSDataBlock* createDataBlock();
int32_t blockDataAppendColInfo(SSDataBlock* pBlock, SColumnInfoData* pColInfoData);
SColumnInfoData createColumnInfoData(int16_t type, int32_t bytes, int16_t colId);
SColumnInfoData* bdGetColumnInfoData(SSDataBlock* pBlock, int32_t index);
SColumnInfoData* bdGetColumnInfoData(const SSDataBlock* pBlock, int32_t index);
void blockEncode(const SSDataBlock* pBlock, char* data, int32_t* dataLen, int32_t numOfCols, int8_t needCompress);
const char* blockDecode(SSDataBlock* pBlock, int32_t numOfCols, int32_t numOfRows, const char* pData);
......
......@@ -57,8 +57,8 @@ extern int32_t tMsgDict[];
#define TMSG_SEG_SEQ(TYPE) ((TYPE)&0xff)
#define TMSG_INFO(TYPE) \
((TYPE) >= 0 && \
((TYPE) < TDMT_DND_MAX_MSG | (TYPE) < TDMT_MND_MAX_MSG | (TYPE) < TDMT_VND_MAX_MSG | (TYPE) < TDMT_SCH_MAX_MSG | \
(TYPE) < TDMT_STREAM_MAX_MSG | (TYPE) < TDMT_MON_MAX_MSG | (TYPE) < TDMT_SYNC_MAX_MSG)) \
((TYPE) < TDMT_DND_MAX_MSG || (TYPE) < TDMT_MND_MAX_MSG || (TYPE) < TDMT_VND_MAX_MSG || (TYPE) < TDMT_SCH_MAX_MSG || \
(TYPE) < TDMT_STREAM_MAX_MSG || (TYPE) < TDMT_MON_MAX_MSG || (TYPE) < TDMT_SYNC_MAX_MSG)) \
? tMsgInfo[tMsgDict[TMSG_SEG_CODE(TYPE)] + TMSG_SEG_SEQ(TYPE)] \
: 0
#define TMSG_INDEX(TYPE) (tMsgDict[TMSG_SEG_CODE(TYPE)] + TMSG_SEG_SEQ(TYPE))
......@@ -665,6 +665,7 @@ typedef struct {
char tbFName[TSDB_TABLE_FNAME_LEN];
int32_t sversion;
int32_t tversion;
int64_t affectedRows;
} SQueryTableRsp;
int32_t tSerializeSQueryTableRsp(void* buf, int32_t bufLen, SQueryTableRsp* pRsp);
......@@ -1510,6 +1511,7 @@ typedef struct SSubQueryMsg {
int32_t execId;
int8_t taskType;
int8_t explain;
int8_t needFetch;
uint32_t sqlLen; // the query sql,
uint32_t phyLen;
char msg[];
......
......@@ -79,195 +79,196 @@
#define TK_EXISTS 61
#define TK_BUFFER 62
#define TK_CACHELAST 63
#define TK_COMP 64
#define TK_DURATION 65
#define TK_NK_VARIABLE 66
#define TK_FSYNC 67
#define TK_MAXROWS 68
#define TK_MINROWS 69
#define TK_KEEP 70
#define TK_PAGES 71
#define TK_PAGESIZE 72
#define TK_PRECISION 73
#define TK_REPLICA 74
#define TK_STRICT 75
#define TK_WAL 76
#define TK_VGROUPS 77
#define TK_SINGLE_STABLE 78
#define TK_RETENTIONS 79
#define TK_SCHEMALESS 80
#define TK_NK_COLON 81
#define TK_TABLE 82
#define TK_NK_LP 83
#define TK_NK_RP 84
#define TK_STABLE 85
#define TK_ADD 86
#define TK_COLUMN 87
#define TK_MODIFY 88
#define TK_RENAME 89
#define TK_TAG 90
#define TK_SET 91
#define TK_NK_EQ 92
#define TK_USING 93
#define TK_TAGS 94
#define TK_COMMENT 95
#define TK_BOOL 96
#define TK_TINYINT 97
#define TK_SMALLINT 98
#define TK_INT 99
#define TK_INTEGER 100
#define TK_BIGINT 101
#define TK_FLOAT 102
#define TK_DOUBLE 103
#define TK_BINARY 104
#define TK_TIMESTAMP 105
#define TK_NCHAR 106
#define TK_UNSIGNED 107
#define TK_JSON 108
#define TK_VARCHAR 109
#define TK_MEDIUMBLOB 110
#define TK_BLOB 111
#define TK_VARBINARY 112
#define TK_DECIMAL 113
#define TK_MAX_DELAY 114
#define TK_WATERMARK 115
#define TK_ROLLUP 116
#define TK_TTL 117
#define TK_SMA 118
#define TK_FIRST 119
#define TK_LAST 120
#define TK_SHOW 121
#define TK_DATABASES 122
#define TK_TABLES 123
#define TK_STABLES 124
#define TK_MNODES 125
#define TK_MODULES 126
#define TK_QNODES 127
#define TK_FUNCTIONS 128
#define TK_INDEXES 129
#define TK_ACCOUNTS 130
#define TK_APPS 131
#define TK_CONNECTIONS 132
#define TK_LICENCE 133
#define TK_GRANTS 134
#define TK_QUERIES 135
#define TK_SCORES 136
#define TK_TOPICS 137
#define TK_VARIABLES 138
#define TK_BNODES 139
#define TK_SNODES 140
#define TK_CLUSTER 141
#define TK_TRANSACTIONS 142
#define TK_DISTRIBUTED 143
#define TK_CONSUMERS 144
#define TK_SUBSCRIPTIONS 145
#define TK_LIKE 146
#define TK_INDEX 147
#define TK_FUNCTION 148
#define TK_INTERVAL 149
#define TK_TOPIC 150
#define TK_AS 151
#define TK_WITH 152
#define TK_META 153
#define TK_CONSUMER 154
#define TK_GROUP 155
#define TK_DESC 156
#define TK_DESCRIBE 157
#define TK_RESET 158
#define TK_QUERY 159
#define TK_CACHE 160
#define TK_EXPLAIN 161
#define TK_ANALYZE 162
#define TK_VERBOSE 163
#define TK_NK_BOOL 164
#define TK_RATIO 165
#define TK_NK_FLOAT 166
#define TK_COMPACT 167
#define TK_VNODES 168
#define TK_IN 169
#define TK_OUTPUTTYPE 170
#define TK_AGGREGATE 171
#define TK_BUFSIZE 172
#define TK_STREAM 173
#define TK_INTO 174
#define TK_TRIGGER 175
#define TK_AT_ONCE 176
#define TK_WINDOW_CLOSE 177
#define TK_IGNORE 178
#define TK_EXPIRED 179
#define TK_KILL 180
#define TK_CONNECTION 181
#define TK_TRANSACTION 182
#define TK_BALANCE 183
#define TK_VGROUP 184
#define TK_MERGE 185
#define TK_REDISTRIBUTE 186
#define TK_SPLIT 187
#define TK_SYNCDB 188
#define TK_DELETE 189
#define TK_INSERT 190
#define TK_NULL 191
#define TK_NK_QUESTION 192
#define TK_NK_ARROW 193
#define TK_ROWTS 194
#define TK_TBNAME 195
#define TK_QSTARTTS 196
#define TK_QENDTS 197
#define TK_WSTARTTS 198
#define TK_WENDTS 199
#define TK_WDURATION 200
#define TK_CAST 201
#define TK_NOW 202
#define TK_TODAY 203
#define TK_TIMEZONE 204
#define TK_CLIENT_VERSION 205
#define TK_SERVER_VERSION 206
#define TK_SERVER_STATUS 207
#define TK_CURRENT_USER 208
#define TK_COUNT 209
#define TK_LAST_ROW 210
#define TK_BETWEEN 211
#define TK_IS 212
#define TK_NK_LT 213
#define TK_NK_GT 214
#define TK_NK_LE 215
#define TK_NK_GE 216
#define TK_NK_NE 217
#define TK_MATCH 218
#define TK_NMATCH 219
#define TK_CONTAINS 220
#define TK_JOIN 221
#define TK_INNER 222
#define TK_SELECT 223
#define TK_DISTINCT 224
#define TK_WHERE 225
#define TK_PARTITION 226
#define TK_BY 227
#define TK_SESSION 228
#define TK_STATE_WINDOW 229
#define TK_SLIDING 230
#define TK_FILL 231
#define TK_VALUE 232
#define TK_NONE 233
#define TK_PREV 234
#define TK_LINEAR 235
#define TK_NEXT 236
#define TK_HAVING 237
#define TK_RANGE 238
#define TK_EVERY 239
#define TK_ORDER 240
#define TK_SLIMIT 241
#define TK_SOFFSET 242
#define TK_LIMIT 243
#define TK_OFFSET 244
#define TK_ASC 245
#define TK_NULLS 246
#define TK_ID 247
#define TK_NK_BITNOT 248
#define TK_VALUES 249
#define TK_IMPORT 250
#define TK_NK_SEMI 251
#define TK_FILE 252
#define TK_CACHELASTSIZE 64
#define TK_COMP 65
#define TK_DURATION 66
#define TK_NK_VARIABLE 67
#define TK_FSYNC 68
#define TK_MAXROWS 69
#define TK_MINROWS 70
#define TK_KEEP 71
#define TK_PAGES 72
#define TK_PAGESIZE 73
#define TK_PRECISION 74
#define TK_REPLICA 75
#define TK_STRICT 76
#define TK_WAL 77
#define TK_VGROUPS 78
#define TK_SINGLE_STABLE 79
#define TK_RETENTIONS 80
#define TK_SCHEMALESS 81
#define TK_NK_COLON 82
#define TK_TABLE 83
#define TK_NK_LP 84
#define TK_NK_RP 85
#define TK_STABLE 86
#define TK_ADD 87
#define TK_COLUMN 88
#define TK_MODIFY 89
#define TK_RENAME 90
#define TK_TAG 91
#define TK_SET 92
#define TK_NK_EQ 93
#define TK_USING 94
#define TK_TAGS 95
#define TK_COMMENT 96
#define TK_BOOL 97
#define TK_TINYINT 98
#define TK_SMALLINT 99
#define TK_INT 100
#define TK_INTEGER 101
#define TK_BIGINT 102
#define TK_FLOAT 103
#define TK_DOUBLE 104
#define TK_BINARY 105
#define TK_TIMESTAMP 106
#define TK_NCHAR 107
#define TK_UNSIGNED 108
#define TK_JSON 109
#define TK_VARCHAR 110
#define TK_MEDIUMBLOB 111
#define TK_BLOB 112
#define TK_VARBINARY 113
#define TK_DECIMAL 114
#define TK_MAX_DELAY 115
#define TK_WATERMARK 116
#define TK_ROLLUP 117
#define TK_TTL 118
#define TK_SMA 119
#define TK_FIRST 120
#define TK_LAST 121
#define TK_SHOW 122
#define TK_DATABASES 123
#define TK_TABLES 124
#define TK_STABLES 125
#define TK_MNODES 126
#define TK_MODULES 127
#define TK_QNODES 128
#define TK_FUNCTIONS 129
#define TK_INDEXES 130
#define TK_ACCOUNTS 131
#define TK_APPS 132
#define TK_CONNECTIONS 133
#define TK_LICENCE 134
#define TK_GRANTS 135
#define TK_QUERIES 136
#define TK_SCORES 137
#define TK_TOPICS 138
#define TK_VARIABLES 139
#define TK_BNODES 140
#define TK_SNODES 141
#define TK_CLUSTER 142
#define TK_TRANSACTIONS 143
#define TK_DISTRIBUTED 144
#define TK_CONSUMERS 145
#define TK_SUBSCRIPTIONS 146
#define TK_LIKE 147
#define TK_INDEX 148
#define TK_FUNCTION 149
#define TK_INTERVAL 150
#define TK_TOPIC 151
#define TK_AS 152
#define TK_WITH 153
#define TK_META 154
#define TK_CONSUMER 155
#define TK_GROUP 156
#define TK_DESC 157
#define TK_DESCRIBE 158
#define TK_RESET 159
#define TK_QUERY 160
#define TK_CACHE 161
#define TK_EXPLAIN 162
#define TK_ANALYZE 163
#define TK_VERBOSE 164
#define TK_NK_BOOL 165
#define TK_RATIO 166
#define TK_NK_FLOAT 167
#define TK_COMPACT 168
#define TK_VNODES 169
#define TK_IN 170
#define TK_OUTPUTTYPE 171
#define TK_AGGREGATE 172
#define TK_BUFSIZE 173
#define TK_STREAM 174
#define TK_INTO 175
#define TK_TRIGGER 176
#define TK_AT_ONCE 177
#define TK_WINDOW_CLOSE 178
#define TK_IGNORE 179
#define TK_EXPIRED 180
#define TK_KILL 181
#define TK_CONNECTION 182
#define TK_TRANSACTION 183
#define TK_BALANCE 184
#define TK_VGROUP 185
#define TK_MERGE 186
#define TK_REDISTRIBUTE 187
#define TK_SPLIT 188
#define TK_SYNCDB 189
#define TK_DELETE 190
#define TK_INSERT 191
#define TK_NULL 192
#define TK_NK_QUESTION 193
#define TK_NK_ARROW 194
#define TK_ROWTS 195
#define TK_TBNAME 196
#define TK_QSTARTTS 197
#define TK_QENDTS 198
#define TK_WSTARTTS 199
#define TK_WENDTS 200
#define TK_WDURATION 201
#define TK_CAST 202
#define TK_NOW 203
#define TK_TODAY 204
#define TK_TIMEZONE 205
#define TK_CLIENT_VERSION 206
#define TK_SERVER_VERSION 207
#define TK_SERVER_STATUS 208
#define TK_CURRENT_USER 209
#define TK_COUNT 210
#define TK_LAST_ROW 211
#define TK_BETWEEN 212
#define TK_IS 213
#define TK_NK_LT 214
#define TK_NK_GT 215
#define TK_NK_LE 216
#define TK_NK_GE 217
#define TK_NK_NE 218
#define TK_MATCH 219
#define TK_NMATCH 220
#define TK_CONTAINS 221
#define TK_JOIN 222
#define TK_INNER 223
#define TK_SELECT 224
#define TK_DISTINCT 225
#define TK_WHERE 226
#define TK_PARTITION 227
#define TK_BY 228
#define TK_SESSION 229
#define TK_STATE_WINDOW 230
#define TK_SLIDING 231
#define TK_FILL 232
#define TK_VALUE 233
#define TK_NONE 234
#define TK_PREV 235
#define TK_LINEAR 236
#define TK_NEXT 237
#define TK_HAVING 238
#define TK_RANGE 239
#define TK_EVERY 240
#define TK_ORDER 241
#define TK_SLIMIT 242
#define TK_SOFFSET 243
#define TK_LIMIT 244
#define TK_OFFSET 245
#define TK_ASC 246
#define TK_NULLS 247
#define TK_ID 248
#define TK_NK_BITNOT 249
#define TK_VALUES 250
#define TK_IMPORT 251
#define TK_NK_SEMI 252
#define TK_FILE 253
#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301
......
......@@ -45,6 +45,10 @@ typedef struct SDeleterParam {
SArray* pUidList;
} SDeleterParam;
typedef struct SInserterParam {
SReadHandle* readHandle;
} SInserterParam;
typedef struct SDataSinkStat {
uint64_t cachedSize;
} SDataSinkStat;
......@@ -96,7 +100,7 @@ void dsEndPut(DataSinkHandle handle, uint64_t useconds);
* @param handle
* @param pLen data length
*/
void dsGetDataLength(DataSinkHandle handle, int32_t* pLen, bool* pQueryEnd);
void dsGetDataLength(DataSinkHandle handle, int64_t* pLen, bool* pQueryEnd);
/**
* Get data, the caller needs to allocate data memory.
......
......@@ -157,7 +157,7 @@ int64_t qGetQueriedTableUid(qTaskInfo_t tinfo);
*/
int32_t qGetQualifiedTableIdList(void* pTableList, const char* tagCond, int32_t tagCondLen, SArray* pTableIdList);
void qProcessFetchRsp(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
void qProcessRspMsg(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
int32_t qGetExplainExecInfo(qTaskInfo_t tinfo, int32_t* resNum, SExplainExecInfo** pRes);
......@@ -174,7 +174,13 @@ int32_t qDeserializeTaskStatus(qTaskInfo_t tinfo, const char* pInput, int32_t le
*/
int32_t qGetStreamScanStatus(qTaskInfo_t tinfo, uint64_t* uid, int64_t* ts);
int32_t qStreamPrepareScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
int32_t qStreamPrepareScan1(qTaskInfo_t tinfo, const STqOffsetVal* pOffset);
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset);
void* qStreamExtractMetaMsg(qTaskInfo_t tinfo);
void* qExtractReaderFromStreamScanner(void* scanner);
int32_t qExtractStreamScanner(qTaskInfo_t tinfo, void** scanner);
......
......@@ -172,7 +172,13 @@ typedef struct tExprNode {
void tExprTreeDestroy(tExprNode *pNode, void (*fp)(void *));
typedef enum {
SHOULD_FREE_COLDATA = 0x1, // the newly created column data needs to be destroyed.
DELEGATED_MGMT_COLDATA = 0x2, // input column data should not be released.
} ECOLDATA_MGMT_TYPE_E;
struct SScalarParam {
ECOLDATA_MGMT_TYPE_E type;
SColumnInfoData *columnData;
SHashObj *pHashFilter;
int32_t hashValueType;
......
......@@ -51,7 +51,8 @@ extern "C" {
typedef struct SDatabaseOptions {
ENodeType type;
int32_t buffer;
int8_t cachelast;
int8_t cacheLast;
int32_t cacheLastSize;
int8_t compressionLevel;
int32_t daysPerFile;
SValueNode* pDaysPerFile;
......
......@@ -67,7 +67,7 @@ int32_t qWorkerProcessCQueryMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, in
int32_t qWorkerProcessFetchMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
int32_t qWorkerProcessFetchRsp(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
int32_t qWorkerProcessRspMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
int32_t qWorkerProcessCancelMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
......
......@@ -194,6 +194,7 @@ int32_t walRestoreFromSnapshot(SWal *, int64_t ver);
SWalReader *walOpenReader(SWal *, SWalFilterCond *pCond);
void walCloseReader(SWalReader *pRead);
int32_t walReadVer(SWalReader *pRead, int64_t ver);
int32_t walReadSeekVer(SWalReader *pRead, int64_t ver);
int32_t walNextValidMsg(SWalReader *pRead);
// only for tq usage
......
......@@ -477,22 +477,6 @@ static FORCE_INLINE void* tDecoderMalloc(SDecoder* pCoder, int32_t size) {
return n; \
} while (0)
#define tGetV(p, v) \
do { \
int32_t n = 0; \
if (v) *v = 0; \
for (;;) { \
if (p[n] <= 0x7f) { \
if (v) (*v) |= (p[n] << (7 * n)); \
n++; \
break; \
} \
if (v) (*v) |= ((p[n] & 0x7f) << (7 * n)); \
n++; \
} \
return n; \
} while (0)
// PUT
static FORCE_INLINE int32_t tPutU8(uint8_t* p, uint8_t v) {
if (p) ((uint8_t*)p)[0] = v;
......@@ -607,7 +591,22 @@ static FORCE_INLINE int32_t tGetI64(uint8_t* p, int64_t* v) {
return sizeof(int64_t);
}
static FORCE_INLINE int32_t tGetU16v(uint8_t* p, uint16_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU16v(uint8_t* p, uint16_t* v) {
int32_t n = 0;
if (v) *v = 0;
for (;;) {
if (p[n] <= 0x7f) {
if (v) (*v) |= (((uint16_t)p[n]) << (7 * n));
n++;
break;
}
if (v) (*v) |= (((uint16_t)(p[n] & 0x7f)) << (7 * n));
n++;
}
return n;
}
static FORCE_INLINE int32_t tGetI16v(uint8_t* p, int16_t* v) {
int32_t n;
......@@ -619,7 +618,22 @@ static FORCE_INLINE int32_t tGetI16v(uint8_t* p, int16_t* v) {
return n;
}
static FORCE_INLINE int32_t tGetU32v(uint8_t* p, uint32_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU32v(uint8_t* p, uint32_t* v) {
int32_t n = 0;
if (v) *v = 0;
for (;;) {
if (p[n] <= 0x7f) {
if (v) (*v) |= (((uint32_t)p[n]) << (7 * n));
n++;
break;
}
if (v) (*v) |= (((uint32_t)(p[n] & 0x7f)) << (7 * n));
n++;
}
return n;
}
static FORCE_INLINE int32_t tGetI32v(uint8_t* p, int32_t* v) {
int32_t n;
......@@ -631,7 +645,22 @@ static FORCE_INLINE int32_t tGetI32v(uint8_t* p, int32_t* v) {
return n;
}
static FORCE_INLINE int32_t tGetU64v(uint8_t* p, uint64_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU64v(uint8_t* p, uint64_t* v) {
int32_t n = 0;
if (v) *v = 0;
for (;;) {
if (p[n] <= 0x7f) {
if (v) (*v) |= (((uint64_t)p[n]) << (7 * n));
n++;
break;
}
if (v) (*v) |= (((uint64_t)(p[n] & 0x7f)) << (7 * n));
n++;
}
return n;
}
static FORCE_INLINE int32_t tGetI64v(uint8_t* p, int64_t* v) {
int32_t n;
......
......@@ -37,4 +37,5 @@ if [ -f "${install_main_dir}/taosadapter.service" ]; then
fi
# there can not libtaos.so*, otherwise ln -s error
${csudo}rm -f ${install_main_dir}/driver/libtaos* || :
${csudo}rm -f ${install_main_dir}/driver/libtaos.* || :
${csudo}rm -f ${install_main_dir}/driver/libtaosws.* || :
......@@ -29,8 +29,12 @@ else
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${cfg_link_dir}/* || :
${csudo}rm -f ${inc_link_dir}/taos.h || :
${csudo}rm -f ${inc_link_dir}/taosdef.h || :
${csudo}rm -f ${inc_link_dir}/taoserror.h || :
${csudo}rm -f ${inc_link_dir}/taosudf.h || :
${csudo}rm -f ${inc_link_dir}/taosws.h || :
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
${csudo}rm -f ${log_link_dir} || :
${csudo}rm -f ${data_link_dir} || :
......
......@@ -30,6 +30,7 @@ mkdir -p ${pkg_dir}
cd ${pkg_dir}
libfile="libtaos.so.${tdengine_ver}"
wslibfile="libtaosws.so"
# create install dir
install_home_path="/usr/local/taos"
......@@ -67,10 +68,12 @@ fi
cp ${compile_dir}/build/bin/taos ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/lib/${libfile} ${pkg_dir}${install_home_path}/driver
cp ${compile_dir}/build/lib/${wslibfile} ${pkg_dir}${install_home_path}/driver ||:
cp ${compile_dir}/../include/client/taos.h ${pkg_dir}${install_home_path}/include
cp ${compile_dir}/../include/common/taosdef.h ${pkg_dir}${install_home_path}/include
cp ${compile_dir}/../include/util/taoserror.h ${pkg_dir}${install_home_path}/include
cp ${compile_dir}/../include/libs/function/taosudf.h ${pkg_dir}${install_home_path}/include
cp ${compile_dir}/../src/inc/taosws.h ${pkg_dir}${install_home_path}/include ||:
cp -r ${top_dir}/examples/* ${pkg_dir}${install_home_path}/examples
#cp -r ${top_dir}/src/connector/python ${pkg_dir}${install_home_path}/connector
#cp -r ${top_dir}/src/connector/go ${pkg_dir}${install_home_path}/connector
......
......@@ -42,6 +42,7 @@ echo version: %{_version}
echo buildroot: %{buildroot}
libfile="libtaos.so.%{_version}"
wslibfile="libtaosws.so"
# create install path, and cp file
mkdir -p %{buildroot}%{homepath}/bin
......@@ -74,10 +75,12 @@ if [ -f %{_compiledir}/build/bin/taosadapter ]; then
cp %{_compiledir}/build/bin/taosadapter %{buildroot}%{homepath}/bin ||:
fi
cp %{_compiledir}/build/lib/${libfile} %{buildroot}%{homepath}/driver
cp %{_compiledir}/build/lib/${wslibfile} %{buildroot}%{homepath}/driver ||:
cp %{_compiledir}/../include/client/taos.h %{buildroot}%{homepath}/include
cp %{_compiledir}/../include/common/taosdef.h %{buildroot}%{homepath}/include
cp %{_compiledir}/../include/util/taoserror.h %{buildroot}%{homepath}/include
cp %{_compiledir}/../include/libs/function/taosudf.h %{buildroot}%{homepath}/include
cp %{_compiledir}/../src/inc/taosws.h %{buildroot}%{homepath}/include ||:
#cp -r %{_compiledir}/../src/connector/python %{buildroot}%{homepath}/connector
#cp -r %{_compiledir}/../src/connector/go %{buildroot}%{homepath}/connector
#cp -r %{_compiledir}/../src/connector/nodejs %{buildroot}%{homepath}/connector
......
......@@ -227,9 +227,13 @@ function install_lib() {
${csudo}ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo}ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
${csudo}ln -s ${lib_link_dir}/libtaosws.so ${lib_link_dir}/libtaosws.so || :
if [[ -d ${lib64_link_dir} && ! -e ${lib64_link_dir}/libtaos.so ]]; then
${csudo}ln -s ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
${csudo}ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
${csudo}ln -s ${lib64_link_dir}/libtaosws.so ${lib64_link_dir}/libtaosws.so || :
fi
${csudo}ldconfig
......@@ -313,11 +317,16 @@ function install_jemalloc() {
function install_header() {
${csudo}rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taosdef.h ${inc_link_dir}/taoserror.h ${inc_link_dir}/taosudf.h || :
${csudo}rm -f ${inc_link_dir}/taosws.h || :
${csudo}cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/*
${csudo}ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo}ln -s ${install_main_dir}/include/taosdef.h ${inc_link_dir}/taosdef.h
${csudo}ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
${csudo}ln -s ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h
${csudo}ln -s ${install_main_dir}/include/taosws.h ${inc_link_dir}/taosws.h || :
}
function add_newHostname_to_hosts() {
......
......@@ -294,21 +294,29 @@ function install_avro() {
function install_lib() {
# Remove links
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
if [ "$osType" != "Darwin" ]; then
${csudo}rm -f ${lib64_link_dir}/libtaos.* || :
${csudo}rm -f ${lib64_link_dir}/libtaosws.* || :
fi
if [ "$osType" != "Darwin" ]; then
${csudo}cp ${binary_dir}/build/lib/libtaos.so.${verNumber} \
${install_main_dir}/driver &&
${csudo}chmod 777 ${install_main_dir}/driver/*
${csudo}chmod 777 ${install_main_dir}/driver/libtaos.so.${verNumber}
${csudo}cp ${binary_dir}/build/lib/libtaosws.so \
${install_main_dir}/driver &&
${csudo}chmod 777 ${install_main_dir}/driver/libtaosws.so
${csudo}ln -sf ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo}ln -sf ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
${csudo}ln -sf ${install_main_dir}/driver/libtaosws.so ${lib_link_dir}/libtaosws.so || :
if [ -d "${lib64_link_dir}" ]; then
${csudo}ln -sf ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1
${csudo}ln -sf ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so
${csudo}ln -sf ${lib64_link_dir}/libtaosws.so ${lib64_link_dir}/libtaosws.so || :
fi
else
${csudo}cp -Rf ${binary_dir}/build/lib/libtaos.${verNumber}.dylib \
......@@ -337,8 +345,8 @@ function install_lib() {
fi
install_jemalloc
install_avro lib
install_avro lib64
#install_avro lib
#install_avro lib64
if [ "$osType" != "Darwin" ]; then
${csudo}ldconfig
......@@ -350,11 +358,19 @@ function install_header() {
if [ "$osType" != "Darwin" ]; then
${csudo}rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taosdef.h ${inc_link_dir}/taoserror.h ${inc_link_dir}/taosudf.h || :
${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \
${csudo}rm -f ${inc_link_dir}/taosws.h || :
${csudo}cp -f ${source_dir}/src/inc/taos.h ${source_dir}/src/inc/taosdef.h ${source_dir}/src/inc/taoserror.h \
${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/*
${csudo}cp -f ${binary_dir}/build/include/taosws.h ${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/taosws.h
${csudo}ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo}ln -s ${install_main_dir}/include/taosdef.h ${inc_link_dir}/taosdef.h
${csudo}ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
${csudo}ln -s ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h
${csudo}ln -s ${install_main_dir}/include/taosws.h ${inc_link_dir}/taosws.h || :
else
${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \
${install_main_dir}/include ||
......
......@@ -92,8 +92,11 @@ else
fi
lib_files="${build_dir}/lib/libtaos.so.${version}"
wslib_files="${build_dir}/lib/libtaosws.so."
header_files="${code_dir}/include/client/taos.h ${code_dir}/include/common/taosdef.h ${code_dir}/include/util/taoserror.h ${code_dir}/include/libs/function/taosudf.h"
wsheader_files="${code_dir}/inc/taosws.h"
if [ "$dbName" != "taos" ]; then
cfg_dir="${top_dir}/../enterprise/packaging/cfg"
else
......@@ -109,6 +112,9 @@ init_file_rpm=${script_dir}/../rpm/taosd
# make directories.
mkdir -p ${install_dir}
mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
${wsheader_files} ${install_dir}/inc || :
mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/${configFile} ${install_dir}/cfg/${configFile}
if [ -f "${compile_dir}/test/cfg/taosadapter.toml" ]; then
......@@ -283,6 +289,7 @@ fi
# Copy driver
mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" >${install_dir}/driver/vercomp.txt
cp ${wslib_files} ${install_dir}/driver || :
# Copy connector
if [ "$verMode" == "cluster" ]; then
......
......@@ -102,7 +102,10 @@ function clean_local_bin() {
function clean_lib() {
# Remove link
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
${csudo}rm -f ${lib64_link_dir}/libtaos.* || :
${csudo}rm -f ${lib64_link_dir}/libtaosws.* || :
#${csudo}rm -rf ${v15_java_app_dir} || :
}
......@@ -111,6 +114,8 @@ function clean_header() {
${csudo}rm -f ${inc_link_dir}/taos.h || :
${csudo}rm -f ${inc_link_dir}/taosdef.h || :
${csudo}rm -f ${inc_link_dir}/taoserror.h || :
${csudo}rm -f ${inc_link_dir}/taosws.h || :
}
function clean_config() {
......
......@@ -750,7 +750,6 @@ TEST(testCase, projection_query_stables) {
taos_close(pConn);
}
TEST(testCase, agg_query_tables) {
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(pConn, nullptr);
......@@ -763,7 +762,7 @@ TEST(testCase, agg_query_tables) {
}
taos_free_result(pRes);
pRes = taos_query(pConn, "show table distributed st1");
pRes = taos_query(pConn, "show table distributed tup");
if (taos_errno(pRes) != 0) {
printf("failed to select from table, reason:%s\n", taos_errstr(pRes));
taos_free_result(pRes);
......@@ -822,13 +821,29 @@ TEST(testCase, async_api_test) {
}
#endif
TEST(testCase, update_test) {
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(pConn, nullptr);
taos_query(pConn, "use abc1");
TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1");
if (taos_errno(pRes) != TSDB_CODE_SUCCESS) {
printf("failed to create database, code:%s", taos_errstr(pRes));
taos_free_result(pRes);
return;
}
taos_free_result(pRes);
TAOS_RES* pRes = taos_query(pConn, "create table tup (ts timestamp, k int);");
pRes = taos_query(pConn, "use abc1");
if (taos_errno(pRes) != TSDB_CODE_SUCCESS) {
printf("failed to use db, code:%s", taos_errstr(pRes));
taos_free_result(pRes);
return;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tup (ts timestamp, k int);");
if (taos_errno(pRes) != 0) {
printf("failed to create table, reason:%s", taos_errstr(pRes));
}
......@@ -836,11 +851,10 @@ TEST(testCase, update_test) {
taos_free_result(pRes);
char s[256] = {0};
for(int32_t i = 0; i < 7000; ++i) {
sprintf(s, "insert into tup values('2020-1-1 1:1:1', %d)", i);
for(int32_t i = 0; i < 17000; ++i) {
sprintf(s, "insert into tup values(now+%da, %d)", i, i);
pRes = taos_query(pConn, s);
taos_free_result(pRes);
}
}
#pragma GCC diagnostic pop
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
\ No newline at end of file
......@@ -228,7 +228,7 @@ int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, uint32_t numOfRow1, ui
uint32_t finalNumOfRows = numOfRow1 + numOfRow2;
if (IS_VAR_DATA_TYPE(pColumnInfoData->info.type)) {
// Handle the bitmap
if (finalNumOfRows > *capacity) {
if (finalNumOfRows > *capacity || numOfRow1 == 0) {
char* p = taosMemoryRealloc(pColumnInfoData->varmeta.offset, sizeof(int32_t) * (numOfRow1 + numOfRow2));
if (p == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
......@@ -262,7 +262,7 @@ int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, uint32_t numOfRow1, ui
memcpy(pColumnInfoData->pData + oldLen, pSource->pData, len);
pColumnInfoData->varmeta.length = len + oldLen;
} else {
if (finalNumOfRows > *capacity) {
if (finalNumOfRows > *capacity || numOfRow1 == 0) {
ASSERT(finalNumOfRows * pColumnInfoData->info.bytes);
char* tmp = taosMemoryRealloc(pColumnInfoData->pData, finalNumOfRows * pColumnInfoData->info.bytes);
if (tmp == NULL) {
......@@ -1356,7 +1356,7 @@ SColumnInfoData createColumnInfoData(int16_t type, int32_t bytes, int16_t colId)
return col;
}
SColumnInfoData* bdGetColumnInfoData(SSDataBlock* pBlock, int32_t index) {
SColumnInfoData* bdGetColumnInfoData(const SSDataBlock* pBlock, int32_t index) {
ASSERT(pBlock != NULL);
if (index >= taosArrayGetSize(pBlock->pDataBlock)) {
return NULL;
......@@ -2119,3 +2119,4 @@ const char* blockDecode(SSDataBlock* pBlock, int32_t numOfCols, int32_t numOfRow
ASSERT(pStart - pData == dataLen);
return pStart;
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#define _DEFAULT_SOURCE
#define TSDB_SQL_C
#include "tmsgtype.h"
......@@ -384,6 +384,8 @@ SArray *vmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES_BATCH, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES_REPLY, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SNAPSHOT_SEND, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SNAPSHOT_RSP, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SET_VNODE_STANDBY, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
code = 0;
......
......@@ -89,7 +89,8 @@ static void dmProcessRpcMsg(SDnode *pDnode, SRpcMsg *pRpc, SEpSet *pEpSet) {
case TDMT_DND_SYSTABLE_RETRIEVE_RSP:
case TDMT_SCH_FETCH_RSP:
case TDMT_SCH_MERGE_FETCH_RSP:
qWorkerProcessFetchRsp(NULL, NULL, pRpc, 0);
case TDMT_VND_SUBMIT_RSP:
qWorkerProcessRspMsg(NULL, NULL, pRpc, 0);
return;
case TDMT_MND_STATUS_RSP:
if (pEpSet != NULL) {
......
......@@ -546,7 +546,11 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
char cgroup[TSDB_CGROUP_LEN];
mndSplitSubscribeKey(pRebInfo->key, topic, cgroup, true);
SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
ASSERT(pTopic);
/*ASSERT(pTopic);*/
if (pTopic == NULL) {
mError("rebalance %s failed since topic %s was dropped, abort", pRebInfo->key, topic);
continue;
}
taosRLockLatch(&pTopic->lock);
rebOutput.pSub = mndCreateSub(pMnode, pTopic, pRebInfo->key);
......
......@@ -89,9 +89,6 @@ int32_t qndProcessQueryMsg(SQnode *pQnode, int64_t ts, SRpcMsg *pMsg) {
case TDMT_SCH_MERGE_FETCH:
code = qWorkerProcessFetchMsg(pQnode, pQnode->pQuery, pMsg, ts);
break;
case TDMT_SCH_FETCH_RSP:
code = qWorkerProcessFetchRsp(pQnode, pQnode->pQuery, pMsg, ts);
break;
case TDMT_SCH_CANCEL_TASK:
code = qWorkerProcessCancelMsg(pQnode, pQnode->pQuery, pMsg, ts);
break;
......
......@@ -9,7 +9,6 @@ target_sources(
"src/vnd/vnodeCfg.c"
"src/vnd/vnodeCommit.c"
"src/vnd/vnodeQuery.c"
"src/vnd/vnodeStateMgr.c"
"src/vnd/vnodeModule.c"
"src/vnd/vnodeSvr.c"
"src/vnd/vnodeSync.c"
......@@ -27,12 +26,10 @@ target_sources(
"src/meta/metaSnapshot.c"
# sma
"src/sma/sma.c"
"src/sma/smaEnv.c"
"src/sma/smaUtil.c"
"src/sma/smaOpen.c"
"src/sma/smaCommit.c"
"src/sma/smaSnapshot.c"
"src/sma/smaRollup.c"
"src/sma/smaTimeRange.c"
......@@ -43,7 +40,6 @@ target_sources(
"src/tsdb/tsdbOpen.c"
"src/tsdb/tsdbMemTable.c"
"src/tsdb/tsdbRead.c"
"src/tsdb/tsdbReadImpl.c"
"src/tsdb/tsdbCache.c"
"src/tsdb/tsdbWrite.c"
"src/tsdb/tsdbReaderWriter.c"
......
......@@ -131,7 +131,7 @@ int32_t tsdbReaderOpen(SVnode *pVnode, SQueryTableDataCond *pCond, SArray *pTabl
void tsdbReaderClose(STsdbReader *pReader);
bool tsdbNextDataBlock(STsdbReader *pReader);
void tsdbRetrieveDataBlockInfo(STsdbReader *pReader, SDataBlockInfo *pDataBlockInfo);
int32_t tsdbRetrieveDataBlockStatisInfo(STsdbReader *pReader, SColumnDataAgg ***pBlockStatis, bool *allHave);
int32_t tsdbRetrieveDatablockSMA(STsdbReader *pReader, SColumnDataAgg ***pBlockStatis, bool *allHave);
SArray *tsdbRetrieveDataBlock(STsdbReader *pTsdbReadHandle, SArray *pColumnIdList);
int32_t tsdbReaderReset(STsdbReader *pReader, SQueryTableDataCond *pCond, int32_t tWinIdx);
int32_t tsdbGetFileBlocksDistInfo(STsdbReader *pReader, STableBlockDistInfo *pTableBlockInfo);
......@@ -143,6 +143,7 @@ int32_t tsdbLastRowReaderOpen(void *pVnode, int32_t type, SArray *pTableIdList,
void **pReader);
int32_t tsdbRetrieveLastRow(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds);
int32_t tsdbLastrowReaderClose(void *pReader);
int32_t tsdbGetTableSchema(SVnode* pVnode, int64_t uid, STSchema** pSchema, int64_t* suid);
// tq
......@@ -173,6 +174,9 @@ int32_t tqReaderSetTbUidList(STqReader *pReader, const SArray *tbUidList);
int32_t tqReaderAddTbUidList(STqReader *pReader, const SArray *tbUidList);
int32_t tqReaderRemoveTbUidList(STqReader *pReader, const SArray *tbUidList);
int32_t tqSeekVer(STqReader *pReader, int64_t ver);
int32_t tqNextBlock(STqReader *pReader, SFetchRet *ret);
int32_t tqReaderSetDataMsg(STqReader *pReader, SSubmitReq *pMsg, int64_t ver);
bool tqNextDataBlock(STqReader *pReader);
bool tqNextDataBlockFilterOut(STqReader *pReader, SHashObj *filterOutUids);
......
......@@ -47,7 +47,8 @@ struct SSmaEnv {
};
typedef struct {
int32_t smaRef;
int8_t inited;
int32_t rsetId;
} SSmaMgmt;
#define SMA_ENV_LOCK(env) ((env)->lock)
......@@ -95,6 +96,7 @@ enum {
TASK_TRIGGER_STAT_CANCELLED = 4,
TASK_TRIGGER_STAT_FINISHED = 5,
};
void tdDestroySmaEnv(SSmaEnv *pSmaEnv);
void *tdFreeSmaEnv(SSmaEnv *pSmaEnv);
......@@ -104,6 +106,10 @@ int32_t tdInsertRSmaData(SSma *pSma, char *msg);
int32_t tdRefSmaStat(SSma *pSma, SSmaStat *pStat);
int32_t tdUnRefSmaStat(SSma *pSma, SSmaStat *pStat);
void *tdAcquireSmaRef(int32_t rsetId, int64_t refId, const char *tags, int32_t ln);
int32_t tdReleaseSmaRef(int32_t rsetId, int64_t refId, const char *tags, int32_t ln);
int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType);
int32_t tdLockSma(SSma *pSma);
......
......@@ -129,6 +129,7 @@ typedef struct {
static STqMgmt tqMgmt = {0};
// tqRead
int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal* offset);
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** pHeadWithCkSum);
// tqExec
......
......@@ -163,6 +163,8 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool
const char* stbFullName, int32_t vgId);
// sma
int32_t smaInit();
void smaCleanUp();
int32_t smaOpen(SVnode* pVnode);
int32_t smaClose(SSma* pSma);
int32_t smaBegin(SSma* pSma);
......
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "sma.h"
// functions for external invocation
// TODO: Who is responsible for resource allocate and release?
int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaInsertImpl(pSma, indexUid, msg)) < 0) {
smaWarn("vgId:%d, insert tsma data failed since %s", SMA_VID(pSma), tstrerror(terrno));
}
// TODO: destroy SSDataBlocks(msg)
return code;
}
int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaCreateImpl(pSma, version, msg)) < 0) {
smaWarn("vgId:%d, create tsma failed since %s", SMA_VID(pSma), tstrerror(terrno));
}
// TODO: destroy SSDataBlocks(msg)
return code;
}
int32_t smaGetTSmaDays(SVnodeCfg* pCfg, void* pCont, uint32_t contLen, int32_t* days) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaGetDaysImpl(pCfg, pCont, contLen, days)) < 0) {
smaWarn("vgId:%d, get tsma days failed since %s", pCfg->vgId, tstrerror(terrno));
}
smaDebug("vgId:%d, get tsma days %d", pCfg->vgId, *days);
return code;
}
// functions for internal invocation
#if 0
/**
* @brief TODO: Assume that the final generated result is less than 3M
*
* @param pReq
* @param pDataBlocks
* @param vgId
* @param suid // TODO: check with Liao whether suid response is reasonable
*
* TODO: colId should be set
*/
int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
tb_uid_t suid, const char* stbName, bool isCreateCtb) {
int32_t sz = taosArrayGetSize(pDataBlocks);
int32_t bufSize = sizeof(SSubmitReq);
for (int32_t i = 0; i < sz; ++i) {
SDataBlockInfo* pBlkInfo = &((SSDataBlock*)taosArrayGet(pDataBlocks, i))->info;
bufSize += pBlkInfo->rows * (TD_ROW_HEAD_LEN + pBlkInfo->rowSize + BitmapLen(pBlkInfo->numOfCols));
bufSize += sizeof(SSubmitBlk);
}
*pReq = taosMemoryCalloc(1, bufSize);
if (!(*pReq)) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
void* pDataBuf = *pReq;
SArray* pTagArray = NULL;
int32_t msgLen = sizeof(SSubmitReq);
int32_t numOfBlks = 0;
int32_t schemaLen = 0;
SRowBuilder rb = {0};
tdSRowInit(&rb, pTSchema->version);
for (int32_t i = 0; i < sz; ++i) {
SSDataBlock* pDataBlock = taosArrayGet(pDataBlocks, i);
SDataBlockInfo* pDataBlkInfo = &pDataBlock->info;
int32_t colNum = pDataBlkInfo->numOfCols;
int32_t rows = pDataBlkInfo->rows;
int32_t rowSize = pDataBlkInfo->rowSize;
int64_t groupId = pDataBlkInfo->groupId;
if (rb.nCols != colNum) {
tdSRowSetTpInfo(&rb, colNum, pTSchema->flen);
}
if(isCreateCtb) {
SMetaReader mr = {0};
const char* ctbName = buildCtbNameByGroupId(stbName, pDataBlock->info.groupId);
if (metaGetTableEntryByName(&mr, ctbName) != 0) {
smaDebug("vgId:%d, no tsma ctb %s exists", vgId, ctbName);
}
SVCreateTbReq ctbReq = {0};
ctbReq.name = ctbName;
ctbReq.type = TSDB_CHILD_TABLE;
ctbReq.ctb.suid = suid;
STagVal tagVal = {.cid = colNum + PRIMARYKEY_TIMESTAMP_COL_ID,
.type = TSDB_DATA_TYPE_BIGINT,
.i64 = groupId};
STag* pTag = NULL;
if(!pTagArray) {
pTagArray = taosArrayInit(1, sizeof(STagVal));
if (!pTagArray) goto _err;
}
taosArrayClear(pTagArray);
taosArrayPush(pTagArray, &tagVal);
tTagNew(pTagArray, 1, false, &pTag);
if (pTag == NULL) {
tdDestroySVCreateTbReq(&ctbReq);
goto _err;
}
ctbReq.ctb.pTag = (uint8_t*)pTag;
int32_t code;
tEncodeSize(tEncodeSVCreateTbReq, &ctbReq, schemaLen, code);
tdDestroySVCreateTbReq(&ctbReq);
if (code < 0) {
goto _err;
}
}
SSubmitBlk* pSubmitBlk = POINTER_SHIFT(pDataBuf, msgLen);
pSubmitBlk->suid = suid;
pSubmitBlk->uid = groupId;
pSubmitBlk->numOfRows = rows;
msgLen += sizeof(SSubmitBlk);
int32_t dataLen = 0;
for (int32_t j = 0; j < rows; ++j) { // iterate by row
tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen)); // set row buf
bool isStartKey = false;
int32_t offset = 0;
for (int32_t k = 0; k < colNum; ++k) { // iterate by column
SColumnInfoData* pColInfoData = taosArrayGet(pDataBlock->pDataBlock, k);
STColumn* pCol = &pTSchema->columns[k];
void* var = POINTER_SHIFT(pColInfoData->pData, j * pColInfoData->info.bytes);
switch (pColInfoData->info.type) {
case TSDB_DATA_TYPE_TIMESTAMP:
if (!isStartKey) {
isStartKey = true;
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID, TSDB_DATA_TYPE_TIMESTAMP, TD_VTYPE_NORM, var, true,
offset, k);
} else {
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID + k, TSDB_DATA_TYPE_TIMESTAMP, TD_VTYPE_NORM, var,
true, offset, k);
}
break;
case TSDB_DATA_TYPE_NCHAR: {
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID + k, TSDB_DATA_TYPE_NCHAR, TD_VTYPE_NORM, var, true,
offset, k);
break;
}
case TSDB_DATA_TYPE_VARCHAR: { // TSDB_DATA_TYPE_BINARY
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID + k, TSDB_DATA_TYPE_VARCHAR, TD_VTYPE_NORM, var, true,
offset, k);
break;
}
case TSDB_DATA_TYPE_VARBINARY:
case TSDB_DATA_TYPE_DECIMAL:
case TSDB_DATA_TYPE_BLOB:
case TSDB_DATA_TYPE_JSON:
case TSDB_DATA_TYPE_MEDIUMBLOB:
uError("the column type %" PRIi16 " is defined but not implemented yet", pColInfoData->info.type);
TASSERT(0);
break;
default:
if (pColInfoData->info.type < TSDB_DATA_TYPE_MAX && pColInfoData->info.type > TSDB_DATA_TYPE_NULL) {
if (pCol->type == pColInfoData->info.type) {
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID + k, pCol->type, TD_VTYPE_NORM, var, true, offset,
k);
} else {
char tv[8] = {0};
if (pColInfoData->info.type == TSDB_DATA_TYPE_FLOAT) {
float v = 0;
GET_TYPED_DATA(v, float, pColInfoData->info.type, var);
SET_TYPED_DATA(&tv, pCol->type, v);
} else if (pColInfoData->info.type == TSDB_DATA_TYPE_DOUBLE) {
double v = 0;
GET_TYPED_DATA(v, double, pColInfoData->info.type, var);
SET_TYPED_DATA(&tv, pCol->type, v);
} else if (IS_SIGNED_NUMERIC_TYPE(pColInfoData->info.type)) {
int64_t v = 0;
GET_TYPED_DATA(v, int64_t, pColInfoData->info.type, var);
SET_TYPED_DATA(&tv, pCol->type, v);
} else {
uint64_t v = 0;
GET_TYPED_DATA(v, uint64_t, pColInfoData->info.type, var);
SET_TYPED_DATA(&tv, pCol->type, v);
}
tdAppendColValToRow(&rb, PRIMARYKEY_TIMESTAMP_COL_ID + k, pCol->type, TD_VTYPE_NORM, tv, true, offset,
k);
}
} else {
uError("the column type %" PRIi16 " is undefined\n", pColInfoData->info.type);
TASSERT(0);
}
break;
}
offset += TYPE_BYTES[pCol->type]; // sum/avg would convert to int64_t/uint64_t/double during aggregation
}
dataLen += TD_ROW_LEN(rb.pBuf);
#ifdef TD_DEBUG_PRINT_ROW
tdSRowPrint(rb.pBuf, pTSchema, __func__);
#endif
}
++numOfBlks;
pSubmitBlk->dataLen = dataLen;
msgLen += pSubmitBlk->dataLen;
}
(*pReq)->length = msgLen;
(*pReq)->header.vgId = htonl(vgId);
(*pReq)->header.contLen = htonl(msgLen);
(*pReq)->length = (*pReq)->header.contLen;
(*pReq)->numOfBlocks = htonl(numOfBlks);
SSubmitBlk* blk = (SSubmitBlk*)((*pReq) + 1);
while (numOfBlks--) {
int32_t dataLen = blk->dataLen;
blk->uid = htobe64(blk->uid);
blk->suid = htobe64(blk->suid);
blk->padding = htonl(blk->padding);
blk->sversion = htonl(blk->sversion);
blk->dataLen = htonl(blk->dataLen);
blk->schemaLen = htonl(blk->schemaLen);
blk->numOfRows = htons(blk->numOfRows);
blk = (SSubmitBlk*)(blk->data + dataLen);
}
return TSDB_CODE_SUCCESS;
_err:
taosMemoryFreeClear(*pReq);
taosArrayDestroy(pTagArray);
return TSDB_CODE_FAILED;
}
#endif
......@@ -121,7 +121,7 @@ static int32_t tdProcessRSmaPreCommitImpl(SSma *pSma) {
// step 3: perform persist task for qTaskInfo
tdRSmaPersistExecImpl(pRSmaStat);
smaDebug("vgId:%d, rsma pre commit succeess", SMA_VID(pSma));
smaDebug("vgId:%d, rsma pre commit success", SMA_VID(pSma));
return TSDB_CODE_SUCCESS;
}
......@@ -173,6 +173,7 @@ static int32_t tdProcessRSmaPostCommitImpl(SSma *pSma) {
}
if ((pDir = taosOpenDir(dir)) == NULL) {
regfree(&regex);
terrno = TAOS_SYSTEM_ERROR(errno);
smaWarn("vgId:%d, rsma post commit, open dir %s failed since %s", TD_VID(pVnode), dir, terrstr());
return TSDB_CODE_FAILED;
......
......@@ -18,7 +18,7 @@
typedef struct SSmaStat SSmaStat;
#define RSMA_TASK_INFO_HASH_SLOT 8
#define SMA_MGMT_REF_NUM 1024
#define SMA_MGMT_REF_NUM 10240
extern SSmaMgmt smaMgmt;
......@@ -30,7 +30,62 @@ static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, const char *path, SSmaE
static void *tdFreeTSmaStat(STSmaStat *pStat);
static void tdDestroyRSmaStat(void *pRSmaStat);
/**
* @brief rsma init
*
* @return int32_t
*/
// implementation
int32_t smaInit() {
int8_t old;
int32_t nLoops = 0;
while (1) {
old = atomic_val_compare_exchange_8(&smaMgmt.inited, 0, 2);
if (old != 2) break;
if (++nLoops > 1000) {
sched_yield();
nLoops = 0;
}
}
if (old == 0) {
smaMgmt.rsetId = taosOpenRef(SMA_MGMT_REF_NUM, tdDestroyRSmaStat);
if (smaMgmt.rsetId < 0) {
smaError("failed to init sma rset since %s", terrstr());
atomic_store_8(&smaMgmt.inited, 0);
return TSDB_CODE_FAILED;
}
smaInfo("sma rset is initialized, rsetId:%d", smaMgmt.rsetId);
atomic_store_8(&smaMgmt.inited, 1);
}
return TSDB_CODE_SUCCESS;
}
/**
* @brief rsma cleanup
*
*/
void smaCleanUp() {
int8_t old;
int32_t nLoops = 0;
while (1) {
old = atomic_val_compare_exchange_8(&smaMgmt.inited, 1, 2);
if (old != 2) break;
if (++nLoops > 1000) {
sched_yield();
nLoops = 0;
}
}
if (old == 1) {
smaInfo("sma rset is cleaned up, resetId:%d", smaMgmt.rsetId);
taosCloseRef(smaMgmt.rsetId);
atomic_store_8(&smaMgmt.inited, 0);
}
}
static SSmaEnv *tdNewSmaEnv(const SSma *pSma, int8_t smaType, const char *path) {
SSmaEnv *pEnv = NULL;
......@@ -135,17 +190,16 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_INIT);
// init smaMgmt
smaMgmt.smaRef = taosOpenRef(SMA_MGMT_REF_NUM, tdDestroyRSmaStat);
if (smaMgmt.smaRef < 0) {
smaError("init smaRef failed, num:%d", SMA_MGMT_REF_NUM);
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
smaInit();
int64_t refId = taosAddRef(smaMgmt.smaRef, pRSmaStat);
int64_t refId = taosAddRef(smaMgmt.rsetId, pRSmaStat);
if (refId < 0) {
smaError("taosAddRef smaRef failed, since:%s", tstrerror(terrno));
smaError("vgId:%d, taosAddRef refId:%" PRIi64 " to rsetId rsetId:%d max:%d failed since:%s", SMA_VID(pSma),
refId, smaMgmt.rsetId, SMA_MGMT_REF_NUM, tstrerror(terrno));
return TSDB_CODE_FAILED;
} else {
smaDebug("vgId:%d, taosAddRef refId:%" PRIi64 " to rsetId rsetId:%d max:%d succeed", SMA_VID(pSma), refId,
smaMgmt.rsetId, SMA_MGMT_REF_NUM);
}
pRSmaStat->refId = refId;
......@@ -275,8 +329,13 @@ int32_t tdDestroySmaState(SSmaStat *pSmaStat, int8_t smaType) {
tdDestroyTSmaStat(SMA_TSMA_STAT(pSmaStat));
} else if (smaType == TSDB_SMA_TYPE_ROLLUP) {
SRSmaStat *pRSmaStat = SMA_RSMA_STAT(pSmaStat);
if (taosRemoveRef(smaMgmt.smaRef, RSMA_REF_ID(pRSmaStat)) < 0) {
smaError("remove refId from rsmaRef:0x%" PRIx64 " failed since %s", RSMA_REF_ID(pRSmaStat), terrstr());
if (taosRemoveRef(smaMgmt.rsetId, RSMA_REF_ID(pRSmaStat)) < 0) {
smaError("vgId:%d, remove refId:%" PRIi64 " from rsmaRef:%" PRIi32 " failed since %s", SMA_VID(pRSmaStat->pSma),
RSMA_REF_ID(pRSmaStat), smaMgmt.rsetId, terrstr());
ASSERT(0);
} else {
smaDebug("vgId:%d, remove refId:%" PRIi64 " from rsmaRef:%" PRIi32 " succeed", SMA_VID(pRSmaStat->pSma),
RSMA_REF_ID(pRSmaStat), smaMgmt.rsetId);
}
} else {
ASSERT(0);
......@@ -323,7 +382,7 @@ int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType) {
}
break;
default:
TASSERT(0);
smaError("vgId:%d undefined smaType:%", SMA_VID(pSma), smaType);
return TSDB_CODE_FAILED;
}
......
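The smaInit/smaCleanUp hunk above serializes one-time setup and teardown of the rsma ref set with a three-state flag (0 = not initialized, 1 = initialized, 2 = transition in progress) and an atomic compare-and-swap spin that yields every 1000 iterations. Below is a minimal standalone sketch of the same idiom, assuming C11 atomics; demo_init and demo_open_resource are illustrative names, not part of the patch.

#include <sched.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic int8_t g_inited = 0; /* 0: not inited, 1: inited, 2: transition in progress */

/* illustrative resource hook standing in for taosOpenRef in the patch */
static int demo_open_resource(void) { return 0; }

int demo_init(void) {
  int8_t  old;
  int32_t nLoops = 0;
  for (;;) {
    old = 0;
    /* try to move 0 -> 2; on failure 'old' receives the current state */
    if (atomic_compare_exchange_strong(&g_inited, &old, 2)) break;
    if (old != 2) break;   /* state 1: another caller already finished the init */
    if (++nLoops > 1000) { /* state 2: another caller is mid-transition, yield */
      sched_yield();
      nLoops = 0;
    }
  }
  if (old == 0) { /* we won the 0 -> 2 transition: do the one-time work */
    if (demo_open_resource() != 0) {
      atomic_store(&g_inited, 0); /* roll back so a later call can retry */
      return -1;
    }
    atomic_store(&g_inited, 1);
  }
  return 0;
}

The cleanup path in the patch mirrors this with a 1 -> 2 transition before closing the ref set.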
......@@ -19,7 +19,8 @@
#define RSMA_QTASKINFO_HEAD_LEN (sizeof(int32_t) + sizeof(int8_t) + sizeof(int64_t)) // len + type + suid
SSmaMgmt smaMgmt = {
.smaRef = -1,
.inited = 0,
.rsetId = -1,
};
#define TD_QTASKINFO_FNAME_PREFIX "qtaskinfo.ver"
......@@ -608,9 +609,11 @@ _err:
static void tdRSmaFetchTrigger(void *param, void *tmrId) {
SRSmaInfoItem *pItem = param;
SSma *pSma = NULL;
SRSmaStat *pStat = (SRSmaStat *)taosAcquireRef(smaMgmt.smaRef, pItem->refId);
SRSmaStat *pStat = (SRSmaStat *)tdAcquireSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
if (!pStat) {
smaDebug("rsma fetch task not start since already destroyed");
smaDebug("rsma fetch task not start since already destroyed, rsetId rsetId:%" PRIi64 " refId:%d)", smaMgmt.rsetId,
pItem->refId);
return;
}
......@@ -622,9 +625,10 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
case TASK_TRIGGER_STAT_PAUSED:
case TASK_TRIGGER_STAT_CANCELLED:
case TASK_TRIGGER_STAT_FINISHED: {
taosReleaseRef(smaMgmt.smaRef, pItem->refId);
smaDebug("vgId:%d, not fetch rsma level %" PRIi8 " data for table:%" PRIi64 " since stat is cancelled",
SMA_VID(pSma), pItem->level, pItem->pRsmaInfo->suid);
tdReleaseSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
smaDebug("vgId:%d, not fetch rsma level %" PRIi8 " data for table:%" PRIi64 " since stat is %" PRIi8
", rsetId rsetId:%" PRIi64 " refId:%d",
SMA_VID(pSma), pItem->level, pItem->pRsmaInfo->suid, rsmaTriggerStat, smaMgmt.rsetId, pItem->refId);
return;
}
default:
......@@ -665,7 +669,7 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
}
_end:
taosReleaseRef(smaMgmt.smaRef, pItem->refId);
tdReleaseSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
}
static int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType, SRSmaInfoItem *pItem, tb_uid_t suid,
......@@ -1258,7 +1262,8 @@ _end:
}
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 0);
taosReleaseRef(smaMgmt.smaRef, pRSmaStat->refId);
smaDebug("vgId:%d, release rsetId rsetId:%" PRIi64 " refId:%d", SMA_VID(pSma), smaMgmt.rsetId, pRSmaStat->refId);
tdReleaseSmaRef(smaMgmt.rsetId, pRSmaStat->refId, __func__, __LINE__);
taosThreadExit(NULL);
return NULL;
}
......@@ -1283,7 +1288,9 @@ static void tdRSmaPersistTask(SRSmaStat *pRSmaStat) {
atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat)));
}
atomic_store_8(RSMA_RUNNING_STAT(pRSmaStat), 0);
taosReleaseRef(smaMgmt.smaRef, pRSmaStat->refId);
smaDebug("vgId:%d, release rsetId rsetId:%" PRIi64 " refId:%d)", SMA_VID(pRSmaStat->pSma), smaMgmt.rsetId,
pRSmaStat->refId);
tdReleaseSmaRef(smaMgmt.rsetId, pRSmaStat->refId, __func__, __LINE__);
}
taosThreadAttrDestroy(&thAttr);
......@@ -1297,8 +1304,8 @@ static void tdRSmaPersistTask(SRSmaStat *pRSmaStat) {
*/
static void tdRSmaPersistTrigger(void *param, void *tmrId) {
SRSmaStat *rsmaStat = param;
SRSmaStat *pRSmaStat = (SRSmaStat *)taosAcquireRef(smaMgmt.smaRef, rsmaStat->refId);
SRSmaStat *pRSmaStat = (SRSmaStat *)taosAcquireRef(smaMgmt.rsetId, rsmaStat->refId);
ASSERT(0);
if (!pRSmaStat) {
smaDebug("rsma persistence task not start since already destroyed");
return;
......@@ -1341,5 +1348,5 @@ static void tdRSmaPersistTrigger(void *param, void *tmrId) {
smaWarn("rsma persistence not start since unknown stat %" PRIi8, tmrStat);
} break;
}
taosReleaseRef(smaMgmt.smaRef, rsmaStat->refId);
taosReleaseRef(smaMgmt.rsetId, rsmaStat->refId);
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "sma.h"
\ No newline at end of file
......@@ -20,6 +20,36 @@
#define SMA_STORAGE_MINUTES_DAY 1440
#define SMA_STORAGE_SPLIT_FACTOR 14400 // least records in tsma file
// TODO: Who is responsible for resource allocate and release?
int32_t tdProcessTSmaInsert(SSma *pSma, int64_t indexUid, const char *msg) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaInsertImpl(pSma, indexUid, msg)) < 0) {
smaWarn("vgId:%d, insert tsma data failed since %s", SMA_VID(pSma), tstrerror(terrno));
}
// TODO: destroy SSDataBlocks(msg)
return code;
}
int32_t tdProcessTSmaCreate(SSma *pSma, int64_t version, const char *msg) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaCreateImpl(pSma, version, msg)) < 0) {
smaWarn("vgId:%d, create tsma failed since %s", SMA_VID(pSma), tstrerror(terrno));
}
// TODO: destroy SSDataBlocks(msg)
return code;
}
int32_t smaGetTSmaDays(SVnodeCfg *pCfg, void *pCont, uint32_t contLen, int32_t *days) {
int32_t code = TSDB_CODE_SUCCESS;
if ((code = tdProcessTSmaGetDaysImpl(pCfg, pCont, contLen, days)) < 0) {
smaWarn("vgId:%d, get tsma days failed since %s", pCfg->vgId, tstrerror(terrno));
}
smaDebug("vgId:%d, get tsma days %d", pCfg->vgId, *days);
return code;
}
/**
* @brief Judge the tsma file split days
*
......
......@@ -294,4 +294,23 @@ int32_t tdRemoveTFile(STFile *pTFile) {
}
// smaXXXUtil ================
void *tdAcquireSmaRef(int32_t rsetId, int64_t refId, const char *tags, int32_t ln) {
void *pResult = taosAcquireRef(rsetId, refId);
if (!pResult) {
smaWarn("%s:%d taosAcquireRef for rsetId:%" PRIi64 " refId:%d failed since %s", tags, ln, rsetId, refId, terrstr());
} else {
smaDebug("%s:%d taosAcquireRef for rsetId:%" PRIi64 " refId:%d success", tags, ln, rsetId, refId);
}
return pResult;
}
int32_t tdReleaseSmaRef(int32_t rsetId, int64_t refId, const char *tags, int32_t ln) {
if (taosReleaseRef(rsetId, refId) < 0) {
smaWarn("%s:%d taosReleaseRef for rsetId:%" PRIi64 " refId:%d failed since %s", tags, ln, rsetId, refId, terrstr());
return TSDB_CODE_FAILED;
}
smaDebug("%s:%d taosReleaseRef for rsetId:%" PRIi64 " refId:%d success", tags, ln, rsetId, refId);
return TSDB_CODE_SUCCESS;
}
// ...
\ No newline at end of file
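The smaUtil.c hunk above adds tdAcquireSmaRef/tdReleaseSmaRef as thin wrappers around taosAcquireRef/taosReleaseRef that log the calling function and line, and the smaRollup.c changes route the rsma timer callbacks through them. A minimal sketch of the intended acquire/work/release bracket, assuming the vnode sma headers are available; demo_fetch_trigger and its elided body are illustrative, not part of the patch.

#include "sma.h"

/* sketch of a timer callback bracketing its work with the tagged ref helpers */
static void demo_fetch_trigger(void *param, void *tmrId) {
  SRSmaInfoItem *pItem = param;
  (void)tmrId;

  /* tagged acquire: logs __func__/__LINE__ on both success and failure */
  SRSmaStat *pStat = (SRSmaStat *)tdAcquireSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
  if (pStat == NULL) {
    return; /* the stat object was already dropped; nothing to fetch */
  }

  /* ... perform the rsma fetch against pStat ... */

  tdReleaseSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
}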
......@@ -244,11 +244,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
STqOffsetVal fetchOffsetNew;
// 1.find handle
char buf[80];
tFormatOffset(buf, 80, &reqOffset);
tqDebug("tmq poll: consumer %ld (epoch %d) recv poll req in vg %d, req offset %s", consumerId, pReq->epoch,
TD_VID(pTq->pVnode), buf);
STqHandle* pHandle = taosHashGet(pTq->handles, pReq->subKey, strlen(pReq->subKey));
/*ASSERT(pHandle);*/
if (pHandle == NULL) {
......@@ -270,6 +265,11 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
consumerEpoch = atomic_val_compare_exchange_32(&pHandle->epoch, consumerEpoch, reqEpoch);
}
char buf[80];
tFormatOffset(buf, 80, &reqOffset);
tqDebug("tmq poll: consumer %ld (epoch %d), subkey %s, recv poll req in vg %d, req offset %s", consumerId,
pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), buf);
// 2.reset offset if needed
if (reqOffset.type > 0) {
fetchOffsetNew = reqOffset;
......@@ -279,7 +279,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
fetchOffsetNew = pOffset->val;
char formatBuf[80];
tFormatOffset(formatBuf, 80, &fetchOffsetNew);
tqDebug("tmq poll: consumer %ld, offset reset to %s", consumerId, formatBuf);
tqDebug("tmq poll: consumer %ld, subkey %s, offset reset to %s", consumerId, pHandle->subKey, formatBuf);
} else {
if (reqOffset.type == TMQ_OFFSET__RESET_EARLIEAST) {
if (pReq->useSnapshot && pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
......@@ -294,9 +294,29 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
}
} else if (reqOffset.type == TMQ_OFFSET__RESET_LATEST) {
tqOffsetResetToLog(&fetchOffsetNew, walGetLastVer(pTq->pVnode->pWal));
tqDebug("tmq poll: consumer %ld, subkey %s, offset reset to %ld", consumerId, pHandle->subKey,
fetchOffsetNew.version);
SMqDataRsp dataRsp = {0};
tqInitDataRsp(&dataRsp, pReq, pHandle->execHandle.subType);
dataRsp.rspOffset = fetchOffsetNew;
code = 0;
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
taosArrayDestroy(dataRsp.blockDataLen);
taosArrayDestroyP(dataRsp.blockData, (FDelete)taosMemoryFree);
if (dataRsp.withSchema) {
taosArrayDestroyP(dataRsp.blockSchema, (FDelete)tDeleteSSchemaWrapper);
}
if (dataRsp.withTbName) {
taosArrayDestroyP(dataRsp.blockTbName, (FDelete)taosMemoryFree);
}
return code;
} else if (reqOffset.type == TMQ_OFFSET__RESET_NONE) {
tqError("tmq poll: no offset committed for consumer %ld in vg %d, subkey %s, reset none failed", consumerId,
TD_VID(pTq->pVnode), pReq->subKey);
tqError("tmq poll: subkey %s, no offset committed for consumer %ld in vg %d, subkey %s, reset none failed",
pHandle->subKey, consumerId, TD_VID(pTq->pVnode), pReq->subKey);
terrno = TSDB_CODE_TQ_NO_COMMITTED_OFFSET;
return -1;
}
......@@ -307,7 +327,24 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
SMqDataRsp dataRsp = {0};
tqInitDataRsp(&dataRsp, pReq, pHandle->execHandle.subType);
if (fetchOffsetNew.type == TMQ_OFFSET__LOG) {
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
fetchOffsetNew.version++;
if (tqScanLog(pTq, &pHandle->execHandle, &dataRsp, &fetchOffsetNew) < 0) {
ASSERT(0);
code = -1;
goto OVER;
}
if (dataRsp.blockNum == 0) {
// TODO add to async task
/*dataRsp.rspOffset.version--;*/
}
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
goto OVER;
}
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
int64_t fetchVer = fetchOffsetNew.version + 1;
SWalCkHead* pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
if (pCkHead == NULL) {
......@@ -319,8 +356,10 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
while (1) {
consumerEpoch = atomic_load_32(&pHandle->epoch);
if (consumerEpoch > reqEpoch) {
tqWarn("tmq poll: consumer %ld (epoch %d) vg %d offset %ld, found new consumer epoch %d, discard req epoch %d",
consumerId, pReq->epoch, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
tqWarn(
"tmq poll: consumer %ld (epoch %d), subkey %s, vg %d offset %ld, found new consumer epoch %d, discard req "
"epoch %d",
consumerId, pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
break;
}
......
......@@ -46,7 +46,7 @@ static int32_t tqAddBlockSchemaToRsp(const STqExecHandle* pExec, int32_t workerI
return 0;
}
static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp, int32_t workerId) {
static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
SMetaReader mr = {0};
metaReaderInit(&mr, pTq->pVnode->pMeta, 0);
if (metaGetTableEntryByUid(&mr, uid) < 0) {
......@@ -59,6 +59,53 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp, i
return 0;
}
int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal* pOffset) {
qTaskInfo_t task = pExec->execCol.task[0];
if (qStreamPrepareScan1(task, pOffset) < 0) {
pRsp->rspOffset = *pOffset;
pRsp->rspOffset.version--;
return 0;
}
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
if (qExecTask(task, &pDataBlock, &ts) < 0) {
ASSERT(0);
}
if (pDataBlock != NULL) {
tqAddBlockDataToRsp(pDataBlock, pRsp);
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[0]->msgIter.uid;
tqAddTbNameToRsp(pTq, uid, pRsp);
}
pRsp->blockNum++;
continue;
}
void* meta = qStreamExtractMetaMsg(task);
if (meta != NULL) {
// tq add meta to rsp
}
if (qStreamExtractOffset(task, &pRsp->rspOffset) < 0) {
ASSERT(0);
}
if (pRsp->rspOffset.type == TMQ_OFFSET__LOG) {
ASSERT(pRsp->rspOffset.version + 1 >= pRsp->reqOffset.version);
}
ASSERT(pRsp->rspOffset.type != 0);
break;
}
return 0;
}
int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset, int32_t workerId) {
ASSERT(pExec->subType == TOPIC_SUB_TYPE__COLUMN);
qTaskInfo_t task = pExec->execCol.task[workerId];
......@@ -67,7 +114,7 @@ int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, S
/*ASSERT(0);*/
/*}*/
if (qStreamPrepareScan(task, offset.uid, offset.ts) < 0) {
if (qStreamPrepareTsdbScan(task, offset.uid, offset.ts) < 0) {
ASSERT(0);
}
......@@ -93,7 +140,7 @@ int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, S
if (qGetStreamScanStatus(task, &uid, &ts) < 0) {
ASSERT(0);
}
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
tqAddTbNameToRsp(pTq, uid, pRsp);
#endif
}
pRsp->blockNum++;
......@@ -129,7 +176,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
tqAddBlockDataToRsp(pDataBlock, pRsp);
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
tqAddTbNameToRsp(pTq, uid, pRsp);
}
pRsp->blockNum++;
}
......@@ -146,7 +193,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
tqAddBlockDataToRsp(&block, pRsp);
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
tqAddTbNameToRsp(pTq, uid, pRsp);
}
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
pRsp->blockNum++;
......@@ -164,7 +211,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
tqAddBlockDataToRsp(&block, pRsp);
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
tqAddTbNameToRsp(pTq, uid, pRsp);
}
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
pRsp->blockNum++;
......
......@@ -15,11 +15,6 @@
#include "tq.h"
int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset) {
/*if ()*/
return 0;
}
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** ppCkHead) {
int32_t code = 0;
taosThreadMutexLock(&pHandle->pWalReader->mutex);
......@@ -84,8 +79,10 @@ STqReader* tqOpenReader(SVnode* pVnode) {
return NULL;
}
// TODO open
/*pReader->pWalReader = walOpenReader(pVnode->pWal, NULL);*/
pReader->pWalReader = walOpenReader(pVnode->pWal, NULL);
if (pReader->pWalReader == NULL) {
return NULL;
}
pReader->pVnodeMeta = pVnode->pMeta;
pReader->pMsg = NULL;
......@@ -106,12 +103,19 @@ void tqCloseReader(STqReader* pReader) {
taosMemoryFree(pReader);
}
int32_t tqSeekVer(STqReader* pReader, int64_t ver) {
//
return walReadSeekVer(pReader->pWalReader, ver);
}
int32_t tqNextBlock(STqReader* pReader, SFetchRet* ret) {
bool fromProcessedMsg = pReader->pMsg != NULL;
while (1) {
if (!fromProcessedMsg) {
if (walNextValidMsg(pReader->pWalReader) < 0) {
ret->offset.type = TMQ_OFFSET__LOG;
ret->offset.version = pReader->ver;
ret->fetchType = FETCH_TYPE__NONE;
return -1;
}
......@@ -130,19 +134,25 @@ int32_t tqNextBlock(STqReader* pReader, SFetchRet* ret) {
memset(&ret->data, 0, sizeof(SSDataBlock));
int32_t code = tqRetrieveDataBlock(&ret->data, pReader);
if (code != 0 || ret->data.info.rows == 0) {
ASSERT(0);
continue;
#if 0
if (fromProcessedMsg) {
ret->fetchType = FETCH_TYPE__NONE;
return 0;
} else {
break;
}
#endif
}
ret->fetchType = FETCH_TYPE__DATA;
return 0;
}
if (fromProcessedMsg) {
ret->offset.type = TMQ_OFFSET__LOG;
ret->offset.version = pReader->ver;
ASSERT(pReader->ver != -1);
ret->fetchType = FETCH_TYPE__NONE;
return 0;
}
......
......@@ -155,7 +155,7 @@ int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, ST
/* tsdbCacheInsertLastrow(pCache, uid, row, dup); */
}
}
} else {
} /*else {
if (dup) {
cacheRow = tdRowDup(row);
} else {
......@@ -168,7 +168,7 @@ int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, ST
if (status != TAOS_LRU_STATUS_OK) {
code = -1;
}
}
}*/
return code;
}
......@@ -992,7 +992,7 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
*pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = maxKey});
// if (taosArrayPush(pColArray, pColVal) == NULL) {
if (taosArrayPush(pColArray, &(SLastCol){.ts = TSKEY_MAX, .colVal = *pColVal}) == NULL) {
if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
......@@ -1127,7 +1127,7 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
} else {
STSRow *pRow = NULL;
bool dup = false;
bool dup = false; // which is always false for now
code = mergeLastRow(uid, pTsdb, &dup, &pRow);
// if table's empty or error, return code of -1
if (code < 0 || pRow == NULL) {
......@@ -1139,7 +1139,14 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
return 0;
}
tsdbCacheInsertLastrow(pCache, pTsdb, uid, pRow, dup);
_taos_lru_deleter_t deleter = deleteTableCacheLastrow;
LRUStatus status =
taosLRUCacheInsert(pCache, key, keyLen, pRow, TD_ROW_LEN(pRow), deleter, NULL, TAOS_LRU_PRIORITY_LOW);
if (status != TAOS_LRU_STATUS_OK) {
code = -1;
}
// tsdbCacheInsertLastrow(pCache, pTsdb, uid, pRow, dup);
h = taosLRUCacheLookup(pCache, key, keyLen);
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
}
......@@ -1202,7 +1209,7 @@ int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHand
if (status != TAOS_LRU_STATUS_OK) {
code = -1;
}
/* tsdbCacheInsertLast(pCache, uid, pRow); */
h = taosLRUCacheLookup(pCache, key, keyLen);
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
}
......@@ -1235,9 +1242,23 @@ int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) {
getTableCacheKey(uid, 1, key, &keyLen);
h = taosLRUCacheLookup(pCache, key, keyLen);
if (h) {
// clear last cache anyway, no matter where eKey ends.
taosLRUCacheRelease(pCache, h, true);
SArray *pLast = (SArray *)taosLRUCacheValue(pCache, h);
bool invalidate = false;
int16_t nCol = taosArrayGetSize(pLast);
for (int16_t iCol = 0; iCol < nCol; ++iCol) {
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pLast, iCol);
if (eKey >= tTsVal->ts) {
invalidate = true;
break;
}
}
if (invalidate) {
taosLRUCacheRelease(pCache, h, true);
} else {
taosLRUCacheRelease(pCache, h, false);
}
// void taosLRUCacheErase(SLRUCache * cache, const void *key, size_t keyLen);
}
......
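The tsdbCacheDelete hunk above replaces blanket invalidation of the cached last values with a targeted check: the LRU entry is marked for eviction only when the deleted key range actually reaches one of the cached column timestamps. A self-contained sketch of that decision, with a simplified stand-in for SLastCol (the real code walks the SArray held in the cache entry):

#include <stdbool.h>
#include <stdint.h>

typedef int64_t TSKEY;

/* simplified stand-in for SLastCol: only the cached timestamp matters here */
typedef struct {
  TSKEY ts;
} DemoLastCol;

/* a delete ending at eKey invalidates the cache only if it reaches at least
 * one cached timestamp, mirroring the loop added in tsdbCacheDelete */
static bool demo_should_invalidate(const DemoLastCol *cols, int16_t nCol, TSKEY eKey) {
  for (int16_t i = 0; i < nCol; ++i) {
    if (eKey >= cols[i].ts) {
      return true;
    }
  }
  return false;
}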
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "tsdb.h"
......@@ -424,10 +424,12 @@ int32_t tsdbReadDelIdx(SDelFReader *pReader, SArray *aDelIdx, uint8_t **ppBuf) {
ASSERT(n == size - sizeof(TSCKSUM));
tFree(pBuf);
return code;
_err:
tsdbError("vgId:%d read del idx failed since %s", TD_VID(pReader->pTsdb->pVnode), tstrerror(code));
tFree(pBuf);
return code;
}
......@@ -1156,7 +1158,7 @@ _err:
int32_t tsdbReadBlockSma(SDataFReader *pReader, SBlock *pBlock, SArray *aColumnDataAgg, uint8_t **ppBuf) {
int32_t code = 0;
TdFilePtr pFD = pReader->pSmaFD;
int64_t offset = pBlock->aSubBlock[0].offset;
int64_t offset = pBlock->aSubBlock[0].sOffset;
int64_t size = pBlock->aSubBlock[0].nSma * sizeof(SColumnDataAgg) + sizeof(TSCKSUM);
uint8_t *pBuf = NULL;
int64_t n;
......@@ -1179,10 +1181,13 @@ int32_t tsdbReadBlockSma(SDataFReader *pReader, SBlock *pBlock, SArray *aColumnD
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
} else if (n < size) {
code = TSDB_CODE_FILE_CORRUPTED;
goto _err;
}
// check
if (!taosCheckChecksumWhole(NULL, size)) {
if (!taosCheckChecksumWhole(*ppBuf, size)) {
code = TSDB_CODE_FILE_CORRUPTED;
goto _err;
}
......
......@@ -1016,7 +1016,7 @@ int32_t tBlockDataAppendRow(SBlockData *pBlockData, TSDBROW *pRow, STSchema *pTS
SColData *pColData;
SColVal *pColVal;
ASSERT(nColData > 0);
if (nColData == 0) goto _exit;
tRowIterInit(pIter, pRow, pTSchema);
pColData = tBlockDataGetColDataByIdx(pBlockData, iColData);
......@@ -1046,6 +1046,7 @@ int32_t tBlockDataAppendRow(SBlockData *pBlockData, TSDBROW *pRow, STSchema *pTS
}
}
_exit:
pBlockData->nRow++;
return code;
......@@ -1234,10 +1235,26 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
break;
case TSDB_DATA_TYPE_SMALLINT:
break;
case TSDB_DATA_TYPE_INT:
case TSDB_DATA_TYPE_INT: {
pColAgg->sum += colVal.value.i32;
if (pColAgg->min > colVal.value.i32) {
pColAgg->min = colVal.value.i32;
}
if (pColAgg->max < colVal.value.i32) {
pColAgg->max = colVal.value.i32;
}
break;
case TSDB_DATA_TYPE_BIGINT:
}
case TSDB_DATA_TYPE_BIGINT: {
pColAgg->sum += colVal.value.i64;
if (pColAgg->min > colVal.value.i64) {
pColAgg->min = colVal.value.i64;
}
if (pColAgg->max < colVal.value.i64) {
pColAgg->max = colVal.value.i64;
}
break;
}
case TSDB_DATA_TYPE_FLOAT:
break;
case TSDB_DATA_TYPE_DOUBLE:
......
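The tsdbCalcColDataSMA hunk above fills in block-level sum/min/max accumulation for INT and BIGINT columns (the other numeric types still fall through unchanged). A self-contained sketch of the same accumulation over an int32 column, using a simplified stand-in for SColumnDataAgg:

#include <stddef.h>
#include <stdint.h>

/* simplified stand-in for SColumnDataAgg; field names mirror the patch */
typedef struct {
  int64_t sum;
  int64_t min;
  int64_t max;
} DemoColAgg;

/* accumulate block-level SMA for an int32 column, as the patch does for
 * TSDB_DATA_TYPE_INT inside tsdbCalcColDataSMA */
static void demo_calc_int32_sma(const int32_t *vals, size_t n, DemoColAgg *agg) {
  agg->sum = 0;
  agg->min = INT64_MAX;
  agg->max = INT64_MIN;
  for (size_t i = 0; i < n; ++i) {
    agg->sum += vals[i];
    if (agg->min > vals[i]) agg->min = vals[i];
    if (agg->max < vals[i]) agg->max = vals[i];
  }
}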
......@@ -100,6 +100,7 @@ void vnodeCleanup() {
walCleanUp();
tqCleanUp();
smaCleanUp();
}
int vnodeScheduleTask(int (*execute)(void*), void* arg) {
......
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
\ No newline at end of file
......@@ -296,7 +296,7 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
case TDMT_SCH_MERGE_FETCH:
return qWorkerProcessFetchMsg(pVnode, pVnode->pQuery, pMsg, 0);
case TDMT_SCH_FETCH_RSP:
return qWorkerProcessFetchRsp(pVnode, pVnode->pQuery, pMsg, 0);
return qWorkerProcessRspMsg(pVnode, pVnode->pQuery, pMsg, 0);
case TDMT_SCH_CANCEL_TASK:
return qWorkerProcessCancelMsg(pVnode, pVnode->pQuery, pMsg, 0);
case TDMT_SCH_DROP_TASK:
......
......@@ -338,12 +338,12 @@ int32_t vnodeProcessSyncMsg(SVnode *pVnode, SRpcMsg *pMsg, SRpcMsg **pRsp) {
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE) {
SyncRequestVote *pSyncMsg = syncRequestVoteFromRpcMsg2(pMsg);
ASSERT(pSyncMsg != NULL);
code = syncNodeOnRequestVoteCb(pSyncNode, pSyncMsg);
code = syncNodeOnRequestVoteSnapshotCb(pSyncNode, pSyncMsg);
syncRequestVoteDestroy(pSyncMsg);
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE_REPLY) {
SyncRequestVoteReply *pSyncMsg = syncRequestVoteReplyFromRpcMsg2(pMsg);
ASSERT(pSyncMsg != NULL);
code = syncNodeOnRequestVoteReplyCb(pSyncNode, pSyncMsg);
code = syncNodeOnRequestVoteReplySnapshotCb(pSyncNode, pSyncMsg);
syncRequestVoteReplyDestroy(pSyncMsg);
} else if (pMsg->msgType == TDMT_SYNC_APPEND_ENTRIES_BATCH) {
SyncAppendEntriesBatch *pSyncMsg = syncAppendEntriesBatchFromRpcMsg2(pMsg);
......@@ -355,6 +355,14 @@ int32_t vnodeProcessSyncMsg(SVnode *pVnode, SRpcMsg *pMsg, SRpcMsg **pRsp) {
ASSERT(pSyncMsg != NULL);
code = syncNodeOnAppendEntriesReplySnapshot2Cb(pSyncNode, pSyncMsg);
syncAppendEntriesReplyDestroy(pSyncMsg);
} else if (pMsg->msgType == TDMT_SYNC_SNAPSHOT_SEND) {
SyncSnapshotSend *pSyncMsg = syncSnapshotSendFromRpcMsg2(pMsg);
code = syncNodeOnSnapshotSendCb(pSyncNode, pSyncMsg);
syncSnapshotSendDestroy(pSyncMsg);
} else if (pMsg->msgType == TDMT_SYNC_SNAPSHOT_RSP) {
SyncSnapshotRsp *pSyncMsg = syncSnapshotRspFromRpcMsg2(pMsg);
code = syncNodeOnSnapshotRspCb(pSyncNode, pSyncMsg);
syncSnapshotRspDestroy(pSyncMsg);
} else if (pMsg->msgType == TDMT_SYNC_SET_VNODE_STANDBY) {
code = vnodeSetStandBy(pVnode);
if (code != 0 && terrno != 0) code = terrno;
......
......@@ -34,7 +34,7 @@ typedef struct SDataSinkManager {
typedef int32_t (*FPutDataBlock)(struct SDataSinkHandle* pHandle, const SInputData* pInput, bool* pContinue);
typedef void (*FEndPut)(struct SDataSinkHandle* pHandle, uint64_t useconds);
typedef void (*FGetDataLength)(struct SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryEnd);
typedef void (*FGetDataLength)(struct SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd);
typedef int32_t (*FGetDataBlock)(struct SDataSinkHandle* pHandle, SOutputData* pOutput);
typedef int32_t (*FDestroyDataSinker)(struct SDataSinkHandle* pHandle);
typedef int32_t (*FGetCacheSize)(struct SDataSinkHandle* pHandle, uint64_t* size);
......@@ -50,6 +50,7 @@ typedef struct SDataSinkHandle {
int32_t createDataDispatcher(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle);
int32_t createDataDeleter(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle, void *pParam);
int32_t createDataInserter(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle, void *pParam);
#ifdef __cplusplus
}
......
......@@ -51,13 +51,12 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData ? 1 : 0)
#define START_TS_COLUMN_INDEX 0
#define END_TS_COLUMN_INDEX 1
#define UID_COLUMN_INDEX 2
#define GROUPID_COLUMN_INDEX UID_COLUMN_INDEX
#define START_TS_COLUMN_INDEX 0
#define END_TS_COLUMN_INDEX 1
#define UID_COLUMN_INDEX 2
#define GROUPID_COLUMN_INDEX UID_COLUMN_INDEX
#define DELETE_GROUPID_COLUMN_INDEX 2
enum {
// when this task starts to execute, this status will set
TASK_NOT_COMPLETED = 0x1u,
......@@ -81,8 +80,8 @@ typedef struct SResultInfo { // TODO refactor
} SResultInfo;
typedef struct STableQueryInfo {
TSKEY lastKey; // last check ts, todo remove it later
SResultRowPosition pos; // current active time window
TSKEY lastKey; // last check ts, todo remove it later
SResultRowPosition pos; // current active time window
} STableQueryInfo;
typedef struct SLimit {
......@@ -105,7 +104,7 @@ typedef struct STaskCostInfo {
uint64_t loadDataTime;
SFileBlockLoadRecorder* pRecoder;
uint64_t elapsedTime;
uint64_t elapsedTime;
uint64_t firstStageMergeTime;
uint64_t winInfoSize;
......@@ -118,8 +117,8 @@ typedef struct STaskCostInfo {
} STaskCostInfo;
typedef struct SOperatorCostInfo {
double openCost;
double totalCost;
double openCost;
double totalCost;
} SOperatorCostInfo;
struct SOperatorInfo;
......@@ -139,24 +138,35 @@ typedef struct STaskIdInfo {
char* str;
} STaskIdInfo;
typedef struct {
STqOffsetVal prepareStatus; // for tmq
STqOffsetVal lastStatus; // for tmq
void* metaBlk; // for tmq fetching meta
SSDataBlock* pullOverBlk; // for streaming
SWalFilterCond cond;
} SStreamTaskInfo;
typedef struct SExecTaskInfo {
STaskIdInfo id;
uint32_t status;
STimeWindow window;
STaskCostInfo cost;
int64_t owner; // if it is in execution
int32_t code;
STaskIdInfo id;
uint32_t status;
STimeWindow window;
STaskCostInfo cost;
int64_t owner; // if it is in execution
int32_t code;
SStreamTaskInfo streamInfo;
struct {
char *tablename;
char *dbname;
int32_t tversion;
SSchemaWrapper*sw;
char* tablename;
char* dbname;
int32_t tversion;
SSchemaWrapper* sw;
} schemaVer;
STableListInfo tableqinfoList; // this is a table list
const char* sql; // query sql string
jmp_buf env; // jump to this position when error happens.
EOPTR_EXEC_MODEL execModel; // operator execution model [batch model|stream model]
STableListInfo tableqinfoList; // this is a table list
const char* sql; // query sql string
jmp_buf env; // jump to this position when error happens.
EOPTR_EXEC_MODEL execModel; // operator execution model [batch model|stream model]
struct SOperatorInfo* pRoot;
} SExecTaskInfo;
......@@ -168,36 +178,36 @@ enum {
};
typedef struct SOperatorFpSet {
__optr_open_fn_t _openFn; // DO NOT invoke this function directly
__optr_fn_t getNextFn;
__optr_fn_t getStreamResFn; // execute the aggregate in the stream model, todo remove it
__optr_fn_t cleanupFn; // call this function to release the allocated resources ASAP
__optr_close_fn_t closeFn;
__optr_encode_fn_t encodeResultRow;
__optr_decode_fn_t decodeResultRow;
__optr_explain_fn_t getExplainFn;
__optr_open_fn_t _openFn; // DO NOT invoke this function directly
__optr_fn_t getNextFn;
__optr_fn_t getStreamResFn; // execute the aggregate in the stream model, todo remove it
__optr_fn_t cleanupFn; // call this function to release the allocated resources ASAP
__optr_close_fn_t closeFn;
__optr_encode_fn_t encodeResultRow;
__optr_decode_fn_t decodeResultRow;
__optr_explain_fn_t getExplainFn;
} SOperatorFpSet;
typedef struct SExprSupp {
SExprInfo* pExprInfo;
int32_t numOfExprs; // the number of scalar expression in group operator
int32_t numOfExprs; // the number of scalar expression in group operator
SqlFunctionCtx* pCtx;
int32_t* rowEntryInfoOffset; // offset value for each row result cell info
} SExprSupp;
typedef struct SOperatorInfo {
uint8_t operatorType;
bool blocking; // block operator or not
uint8_t status; // denote if current operator is completed
char* name; // name, for debug purpose
void* info; // extension attribution
SExprSupp exprSupp;
SExecTaskInfo* pTaskInfo;
SOperatorCostInfo cost;
SResultInfo resultInfo;
struct SOperatorInfo** pDownstream; // downstram pointer list
int32_t numOfDownstream; // number of downstream. The value is always ONE expect for join operator
SOperatorFpSet fpSet;
uint8_t operatorType;
bool blocking; // block operator or not
uint8_t status; // denote if current operator is completed
char* name; // name, for debug purpose
void* info; // extension attribution
SExprSupp exprSupp;
SExecTaskInfo* pTaskInfo;
SOperatorCostInfo cost;
SResultInfo resultInfo;
struct SOperatorInfo** pDownstream; // downstram pointer list
int32_t numOfDownstream; // number of downstream. The value is always ONE expect for join operator
SOperatorFpSet fpSet;
} SOperatorInfo;
typedef enum {
......@@ -210,12 +220,12 @@ typedef enum {
#define COL_MATCH_FROM_SLOT_ID 0x2
typedef struct SSourceDataInfo {
int32_t index;
SRetrieveTableRsp* pRsp;
uint64_t totalRows;
int32_t code;
EX_SOURCE_STATUS status;
const char* taskId;
int32_t index;
SRetrieveTableRsp* pRsp;
uint64_t totalRows;
int32_t code;
EX_SOURCE_STATUS status;
const char* taskId;
} SSourceDataInfo;
typedef struct SLoadRemoteDataInfo {
......@@ -325,10 +335,10 @@ typedef enum EStreamScanMode {
} EStreamScanMode;
typedef struct SCatchSupporter {
SHashObj* pWindowHashTable; // quick locate the window object for each window
SDiskbasedBuf* pDataBuf; // buffer based on blocked-wised disk file
int32_t keySize;
int64_t* pKeyBuf;
SHashObj* pWindowHashTable; // quick locate the window object for each window
SDiskbasedBuf* pDataBuf; // buffer based on blocked-wised disk file
int32_t keySize;
int64_t* pKeyBuf;
} SCatchSupporter;
typedef struct SStreamAggSupporter {
......@@ -344,48 +354,48 @@ typedef struct SStreamAggSupporter {
typedef struct SessionWindowSupporter {
SStreamAggSupporter* pStreamAggSup;
int64_t gap;
uint8_t parentType;
int64_t gap;
uint8_t parentType;
} SessionWindowSupporter;
typedef struct SStreamScanInfo {
uint64_t tableUid; // queried super table uid
SExprInfo* pPseudoExpr;
int32_t numOfPseudoExpr;
int32_t primaryTsIndex; // primary time stamp slot id
SReadHandle readHandle;
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
SArray* pColMatchInfo; //
SNode* pCondition;
SArray* pBlockLists; // multiple SSDatablock.
SSDataBlock* pRes; // result SSDataBlock
SSDataBlock* pUpdateRes; // update SSDataBlock
int32_t updateResIndex;
int32_t blockType; // current block type
int32_t validBlockIndex; // Is current data has returned?
uint64_t numOfExec; // execution times
STqReader* tqReader;
int32_t tsArrayIndex;
SArray* tsArray;
uint64_t groupId;
SUpdateInfo* pUpdateInfo;
uint64_t tableUid; // queried super table uid
SExprInfo* pPseudoExpr;
int32_t numOfPseudoExpr;
int32_t primaryTsIndex; // primary time stamp slot id
SReadHandle readHandle;
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
SArray* pColMatchInfo; //
SNode* pCondition;
EStreamScanMode scanMode;
SOperatorInfo* pStreamScanOp;
SOperatorInfo* pTableScanOp;
SArray* childIds;
SArray* pBlockLists; // multiple SSDatablock.
SSDataBlock* pRes; // result SSDataBlock
SSDataBlock* pUpdateRes; // update SSDataBlock
int32_t updateResIndex;
int32_t blockType; // current block type
int32_t validBlockIndex; // Is current data has returned?
uint64_t numOfExec; // execution times
STqReader* tqReader;
int32_t tsArrayIndex;
SArray* tsArray;
uint64_t groupId;
SUpdateInfo* pUpdateInfo;
EStreamScanMode scanMode;
SOperatorInfo* pStreamScanOp;
SOperatorInfo* pTableScanOp;
SArray* childIds;
SessionWindowSupporter sessionSup;
bool assignBlockUid; // assign block uid to groupId, temporarily used for generating rollup SMA.
int32_t scanWinIndex; // for state operator
int32_t pullDataResIndex;
SSDataBlock* pPullDataRes; // pull data SSDataBlock
SSDataBlock* pDeleteDataRes; // delete data SSDataBlock
int32_t deleteDataIndex;
bool assignBlockUid; // assign block uid to groupId, temporarily used for generating rollup SMA.
int32_t scanWinIndex; // for state operator
int32_t pullDataResIndex;
SSDataBlock* pPullDataRes; // pull data SSDataBlock
SSDataBlock* pDeleteDataRes; // delete data SSDataBlock
int32_t deleteDataIndex;
// status for tmq
//SSchemaWrapper schema;
// SSchemaWrapper schema;
STqOffset offset;
} SStreamScanInfo;
......@@ -595,7 +605,7 @@ typedef struct SSessionAggOperatorInfo {
int64_t gap; // session window gap
int32_t tsSlotId; // primary timestamp slot id
STimeWindowAggSupp twAggSup;
SNode *pCondition;
const SNode* pCondition;
} SSessionAggOperatorInfo;
typedef struct SResultWindowInfo {
......@@ -657,7 +667,7 @@ typedef struct SStateWindowOperatorInfo {
int32_t tsSlotId; // primary timestamp column slot id
STimeWindowAggSupp twAggSup;
// bool reptScan;
const SNode *pCondition;
const SNode* pCondition;
} SStateWindowOperatorInfo;
typedef struct SStreamStateAggOperatorInfo {
......@@ -806,7 +816,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprI
SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
SExecTaskInfo* pTaskInfo);
SNode* pCondition, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo, int32_t numOfChild);
......@@ -885,7 +895,7 @@ int32_t decodeOperator(SOperatorInfo* ops, const char* data, int32_t length);
void setTaskStatus(SExecTaskInfo* pTaskInfo, int8_t status);
int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SReadHandle* pHandle, uint64_t taskId,
const char* sql, EOPTR_EXEC_MODEL model);
int32_t createDataSinkParam(SDataSinkNode *pNode, void **pParam, qTaskInfo_t* pTaskInfo, SReadHandle* readHandle);
int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo** pRes, int32_t* capacity,
int32_t* resNum);
......
......@@ -154,7 +154,7 @@ static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
taosThreadMutexUnlock(&pDeleter->mutex);
}
static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
if (taosQueueEmpty(pDeleter->pDataBlocks)) {
*pQueryEnd = pDeleter->queryEnd;
......@@ -168,7 +168,7 @@ static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryE
taosFreeQitem(pBuf);
*pLen = ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->dataLen;
*pQueryEnd = pDeleter->queryEnd;
qDebug("got data len %d, row num %d in sink", *pLen, ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->numOfRows);
qDebug("got data len %" PRId64 ", row num %d in sink", *pLen, ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->numOfRows);
}
static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
......
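The `%d` to `%" PRId64 "` change goes hand in hand with widening `pLen` to `int64_t`: passing a 64-bit value to a `%d` conversion is undefined behavior and typically truncates. A standalone illustration of the portable format macro (plain C, independent of the TDengine logging macros):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  int64_t dataLen = 5000000000LL;                 // larger than INT32_MAX
  printf("got data len %" PRId64 "\n", dataLen);  // expands to the correct conversion per platform
  return 0;
}
```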
......@@ -156,7 +156,7 @@ static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
taosThreadMutexUnlock(&pDispatcher->mutex);
}
static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
SDataDispatchHandle* pDispatcher = (SDataDispatchHandle*)pHandle;
if (taosQueueEmpty(pDispatcher->pDataBlocks)) {
*pQueryEnd = pDispatcher->queryEnd;
......@@ -170,7 +170,7 @@ static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryE
taosFreeQitem(pBuf);
*pLen = ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->dataLen;
*pQueryEnd = pDispatcher->queryEnd;
qDebug("got data len %d, row num %d in sink", *pLen, ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->numOfRows);
qDebug("got data len %" PRId64 ", row num %d in sink", *pLen, ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->numOfRows);
}
static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
......
......@@ -24,195 +24,266 @@
extern SDataSinkStat gDataSinkStat;
typedef struct SSubmitRes {
  int64_t     affectedRows;
  int32_t     code;
  SSubmitRsp* pRsp;
} SSubmitRes;

typedef struct SDataInserterHandle {
  SDataSinkHandle     sink;
  SDataSinkManager*   pManager;
  STSchema*           pSchema;
  SQueryInserterNode* pNode;
  SSubmitRes          submitRes;
  SInserterParam*     pParam;
  SArray*             pDataBlocks;
  SHashObj*           pCols;
  int32_t             status;
  bool                queryEnd;
  uint64_t            useconds;
  uint64_t            cachedSize;
  TdThreadMutex       mutex;
  tsem_t              ready;
} SDataInserterHandle;

typedef struct SSubmitRspParam {
  SDataInserterHandle* pInserter;
} SSubmitRspParam;

int32_t inserterCallback(void* param, SDataBuf* pMsg, int32_t code) {
  SSubmitRspParam*     pParam = (SSubmitRspParam*)param;
  SDataInserterHandle* pInserter = pParam->pInserter;

  pInserter->submitRes.code = code;

  if (code == TSDB_CODE_SUCCESS) {
    pInserter->submitRes.pRsp = taosMemoryCalloc(1, sizeof(SSubmitRsp));
    SDecoder coder = {0};
    tDecoderInit(&coder, pMsg->pData, pMsg->len);
    code = tDecodeSSubmitRsp(&coder, pInserter->submitRes.pRsp);
    if (code) {
      tFreeSSubmitRsp(pInserter->submitRes.pRsp);
      pInserter->submitRes.code = code;
      goto _return;
    }

    if (pInserter->submitRes.pRsp->nBlocks > 0) {
      for (int32_t i = 0; i < pInserter->submitRes.pRsp->nBlocks; ++i) {
        SSubmitBlkRsp* blk = pInserter->submitRes.pRsp->pBlocks + i;
        if (TSDB_CODE_SUCCESS != blk->code) {
          code = blk->code;
          tFreeSSubmitRsp(pInserter->submitRes.pRsp);
          pInserter->submitRes.code = code;
          goto _return;
        }
      }
    }

    pInserter->submitRes.affectedRows += pInserter->submitRes.pRsp->affectedRows;
    qDebug("submit rsp received, affectedRows:%d, total:%d", pInserter->submitRes.pRsp->affectedRows,
           pInserter->submitRes.affectedRows);
    tFreeSSubmitRsp(pInserter->submitRes.pRsp);
  }

_return:
  tsem_post(&pInserter->ready);
  taosMemoryFree(param);
  return TSDB_CODE_SUCCESS;
}

static int32_t sendSubmitRequest(SDataInserterHandle* pInserter, SSubmitReq* pMsg, void* pTransporter, SEpSet* pEpset) {
  // send the fetch remote task result reques
  SMsgSendInfo* pMsgSendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
  if (NULL == pMsgSendInfo) {
    taosMemoryFreeClear(pMsg);
    terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
    return terrno;
  }

  SSubmitRspParam* pParam = taosMemoryCalloc(1, sizeof(SSubmitRspParam));
  pParam->pInserter = pInserter;

  pMsgSendInfo->param = pParam;
  pMsgSendInfo->msgInfo.pData = pMsg;
  pMsgSendInfo->msgInfo.len = ntohl(pMsg->length);
  pMsgSendInfo->msgType = TDMT_VND_SUBMIT;
  pMsgSendInfo->fp = inserterCallback;

  int64_t transporterId = 0;
  return asyncSendMsgToServer(pTransporter, pEpset, &transporterId, pMsgSendInfo);
}

int32_t dataBlockToSubmit(SDataInserterHandle* pInserter, SSubmitReq** pReq) {
  const SArray*   pBlocks = pInserter->pDataBlocks;
  const STSchema* pTSchema = pInserter->pSchema;
  int64_t         uid = pInserter->pNode->tableId;
  int64_t         suid = pInserter->pNode->stableId;
  int32_t         vgId = pInserter->pNode->vgId;
  bool            fullCol = (pInserter->pNode->pCols->length == pTSchema->numOfCols);

  SSubmitReq* ret = NULL;
  int32_t     sz = taosArrayGetSize(pBlocks);

  // cal size
  int32_t cap = sizeof(SSubmitReq);
  for (int32_t i = 0; i < sz; i++) {
    SSDataBlock* pDataBlock = taosArrayGetP(pBlocks, i);
    int32_t      rows = pDataBlock->info.rows;
    // TODO min
    int32_t rowSize = pDataBlock->info.rowSize;
    int32_t maxLen = TD_ROW_MAX_BYTES_FROM_SCHEMA(pTSchema);
    cap += sizeof(SSubmitBlk) + rows * maxLen;
  }

  // assign data
  // TODO
  ret = taosMemoryCalloc(1, cap);
  ret->header.vgId = htonl(vgId);
  ret->version = htonl(pTSchema->version);
  ret->length = sizeof(SSubmitReq);
  ret->numOfBlocks = htonl(sz);

  SSubmitBlk* blkHead = POINTER_SHIFT(ret, sizeof(SSubmitReq));
  for (int32_t i = 0; i < sz; i++) {
    SSDataBlock* pDataBlock = taosArrayGetP(pBlocks, i);

    blkHead->sversion = htonl(pTSchema->version);
    // TODO
    blkHead->suid = htobe64(suid);
    blkHead->uid = htobe64(uid);
    blkHead->schemaLen = htonl(0);

    int32_t rows = 0;
    int32_t dataLen = 0;
    STSRow* rowData = POINTER_SHIFT(blkHead, sizeof(SSubmitBlk));
    int64_t lastTs = TSKEY_MIN;
    bool    ignoreRow = false;
    for (int32_t j = 0; j < pDataBlock->info.rows; j++) {
      SRowBuilder rb = {0};
      tdSRowInit(&rb, pTSchema->version);
      tdSRowSetTpInfo(&rb, pTSchema->numOfCols, pTSchema->flen);
      tdSRowResetBuf(&rb, rowData);

      ignoreRow = false;
      for (int32_t k = 0; k < pTSchema->numOfCols; k++) {
        const STColumn*  pColumn = &pTSchema->columns[k];
        SColumnInfoData* pColData = NULL;
        int16_t          colIdx = k;
        if (!fullCol) {
          int16_t* slotId = taosHashGet(pInserter->pCols, &pColumn->colId, sizeof(pColumn->colId));
          if (NULL == slotId) {
            continue;
          }
          colIdx = *slotId;
        }

        pColData = taosArrayGet(pDataBlock->pDataBlock, colIdx);
        if (pColData->info.type != pColumn->type) {
          qError("col type mis-match, schema type:%d, type in block:%d", pColumn->type, pColData->info.type);
          terrno = TSDB_CODE_APP_ERROR;
          return TSDB_CODE_APP_ERROR;
        }

        if (colDataIsNull_s(pColData, j)) {
          if (0 == k && TSDB_DATA_TYPE_TIMESTAMP == pColumn->type) {
            ignoreRow = true;
            break;
          }

          tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NULL, NULL, false, pColumn->offset, k);
        } else {
          void* data = colDataGetData(pColData, j);
          if (0 == k && TSDB_DATA_TYPE_TIMESTAMP == pColumn->type) {
            if (*(int64_t*)data == lastTs) {
              ignoreRow = true;
              break;
            } else {
              lastTs = *(int64_t*)data;
            }
          }
          tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
        }
      }

      if (ignoreRow) {
        continue;
      }

      rows++;
      int32_t rowLen = TD_ROW_LEN(rowData);
      rowData = POINTER_SHIFT(rowData, rowLen);
      dataLen += rowLen;
    }

    blkHead->dataLen = htonl(dataLen);
    blkHead->numOfRows = htons(rows);

    ret->length += sizeof(SSubmitBlk) + dataLen;
    blkHead = POINTER_SHIFT(blkHead, sizeof(SSubmitBlk) + dataLen);
  }

  ret->length = htonl(ret->length);

  *pReq = ret;

  return TSDB_CODE_SUCCESS;
}

static int32_t putDataBlock(SDataSinkHandle* pHandle, const SInputData* pInput, bool* pContinue) {
  SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
  taosArrayPush(pInserter->pDataBlocks, &pInput->pData);

  SSubmitReq* pMsg = NULL;
  int32_t     code = dataBlockToSubmit(pInserter, &pMsg);
  if (code) {
    return code;
  }

  code = sendSubmitRequest(pInserter, pMsg, pInserter->pParam->readHandle->pMsgCb->clientRpc, &pInserter->pNode->epSet);
  if (code) {
    return code;
  }

  tsem_wait(&pInserter->ready);

  if (pInserter->submitRes.code) {
    return pInserter->submitRes.code;
  }

  *pContinue = true;

  return TSDB_CODE_SUCCESS;
}

static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
  SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
  taosThreadMutexLock(&pInserter->mutex);
  pInserter->queryEnd = true;
  pInserter->useconds = useconds;
  taosThreadMutexUnlock(&pInserter->mutex);
}

static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
  SDataInserterHandle* pDispatcher = (SDataInserterHandle*)pHandle;
  *pLen = pDispatcher->submitRes.affectedRows;
  qDebug("got total affectedRows %" PRId64, *pLen);
}

static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
  SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
  atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pInserter->cachedSize);
  taosArrayDestroy(pInserter->pDataBlocks);
  taosMemoryFree(pInserter->pSchema);
  taosThreadMutexDestroy(&pInserter->mutex);
return TSDB_CODE_SUCCESS;
}
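The inserter turns the asynchronous submit RPC into a blocking call: `putDataBlock` sends the request and then waits on `ready`, which `inserterCallback` posts once the response has been decoded. A self-contained sketch of that handshake with POSIX semaphores and a worker thread (hypothetical names standing in for the RPC layer):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

typedef struct {
  sem_t ready;  // posted by the "callback" once the result is available
  int   code;   // result code filled in asynchronously
} AsyncReq;

// Stands in for the transport thread that eventually invokes the callback.
static void* fakeRpc(void* arg) {
  AsyncReq* req = (AsyncReq*)arg;
  req->code = 0;          // pretend the server accepted the submit
  sem_post(&req->ready);  // wake up the waiting producer
  return NULL;
}

int main(void) {
  AsyncReq req;
  sem_init(&req.ready, 0, 0);

  pthread_t tid;
  pthread_create(&tid, NULL, fakeRpc, &req);  // "send" the request

  sem_wait(&req.ready);                       // block until the callback fires
  printf("submit finished, code=%d\n", req.code);

  pthread_join(tid, NULL);
  sem_destroy(&req.ready);
  return 0;
}
```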
......@@ -230,25 +301,46 @@ int32_t createDataInserter(SDataSinkManager* pManager, const SDataSinkNode* pDat
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
SQueryInserterNode* pInserterNode = (SQueryInserterNode *)pDataSink;
inserter->sink.fPut = putDataBlock;
inserter->sink.fEndPut = endPut;
inserter->sink.fGetLen = getDataLength;
inserter->sink.fGetData = NULL;
inserter->sink.fDestroy = destroyDataSinker;
inserter->sink.fGetCacheSize = getCacheSize;
inserter->pManager = pManager;
inserter->pNode = pInserterNode;
inserter->pParam = pParam;
inserter->status = DS_BUF_EMPTY;
inserter->queryEnd = false;
int64_t suid = 0;
int32_t code = tsdbGetTableSchema(inserter->pParam->readHandle->vnode, pInserterNode->tableId, &inserter->pSchema, &suid);
if (code) {
return code;
}
if (pInserterNode->stableId != suid) {
terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
return terrno;
}
inserter->pDataBlocks = taosArrayInit(1, POINTER_BYTES);
taosThreadMutexInit(&inserter->mutex, NULL);
if (NULL == inserter->pDataBlocks) {
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
inserter->pCols = taosHashInit(pInserterNode->pCols->length, taosGetDefaultHashFunction(TSDB_DATA_TYPE_SMALLINT), false, HASH_NO_LOCK);
SNode* pNode = NULL;
FOREACH(pNode, pInserterNode->pCols) {
SColumnNode* pCol = (SColumnNode*)pNode;
taosHashPut(inserter->pCols, &pCol->colId, sizeof(pCol->colId), &pCol->slotId, sizeof(pCol->slotId));
}
tsem_init(&inserter->ready, 0, 0);
*pHandle = inserter;
return TSDB_CODE_SUCCESS;
}
......@@ -40,6 +40,8 @@ int32_t dsCreateDataSinker(const SDataSinkNode *pDataSink, DataSinkHandle* pHand
return createDataDispatcher(&gDataSinkManager, pDataSink, pHandle);
case QUERY_NODE_PHYSICAL_PLAN_DELETE:
return createDataDeleter(&gDataSinkManager, pDataSink, pHandle, pParam);
case QUERY_NODE_PHYSICAL_PLAN_QUERY_INSERT:
return createDataInserter(&gDataSinkManager, pDataSink, pHandle, pParam);
}
return TSDB_CODE_FAILED;
}
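`dsCreateDataSinker` is a small factory: it inspects the physical-plan node type and fills in a handle whose first member is a table of function pointers (`fPut`, `fEndPut`, `fGetLen`, ...), so callers such as `dsGetDataLength` can dispatch without knowing the concrete sink. A generic sketch of that pattern (hypothetical names, not the TDengine structs):

```c
#include <stdio.h>
#include <stdlib.h>

// Common "vtable" shared by every sink; concrete sinks embed it as the first member.
typedef struct Sink {
  int  (*put)(struct Sink* self, int value);
  void (*destroy)(struct Sink* self);
} Sink;

// One concrete sink: counts the values pushed into it.
typedef struct {
  Sink base;  // must stay the first member so Sink* casts are valid
  int  count;
} CounterSink;

static int counterPut(Sink* self, int value) {
  (void)value;
  ((CounterSink*)self)->count++;
  return 0;
}

static void counterDestroy(Sink* self) { free(self); }

// Factory: picks the implementation from a type tag, much like the switch
// over QUERY_NODE_PHYSICAL_PLAN_* node types above.
static Sink* createSink(int type) {
  if (type == 1) {
    CounterSink* s = calloc(1, sizeof(*s));
    s->base.put = counterPut;
    s->base.destroy = counterDestroy;
    return &s->base;
  }
  return NULL;
}

int main(void) {
  Sink* s = createSink(1);
  s->put(s, 42);  // dispatch through the function table
  printf("rows put: %d\n", ((CounterSink*)s)->count);
  s->destroy(s);
  return 0;
}
```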
......@@ -54,7 +56,7 @@ void dsEndPut(DataSinkHandle handle, uint64_t useconds) {
return pHandleImpl->fEndPut(pHandleImpl, useconds);
}
void dsGetDataLength(DataSinkHandle handle, int64_t* pLen, bool* pQueryEnd) {
SDataSinkHandle* pHandleImpl = (SDataSinkHandle*)handle;
pHandleImpl->fGetLen(pHandleImpl, pLen, pQueryEnd);
}
......
......@@ -748,11 +748,12 @@ SInterval extractIntervalInfo(const STableScanPhysiNode* pTableScanNode) {
SColumn extractColumnFromColumnNode(SColumnNode* pColNode) {
SColumn c = {0};
c.slotId = pColNode->slotId;
c.colId = pColNode->colId;
c.type = pColNode->node.resType.type;
c.bytes = pColNode->node.resType.bytes;
c.scale = pColNode->node.resType.scale;
c.precision = pColNode->node.resType.precision;
return c;
}
......@@ -774,6 +775,8 @@ int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysi
pCond->suid = pTableScanNode->scan.suid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->startVersion = -1;
pCond->endVersion = -1;
// pCond->type = pTableScanNode->scanFlag;
int32_t j = 0;
......
......@@ -60,9 +60,9 @@ static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t nu
taosArrayAddAll(p->pDataBlock, pDataBlock->pDataBlock);
taosArrayPush(pInfo->pBlockLists, &p);
}
} else if (type == STREAM_INPUT__TABLE_SCAN) {
// do nothing
ASSERT(pInfo->blockType == STREAM_INPUT__TABLE_SCAN);
} else {
ASSERT(0);
}
......@@ -76,7 +76,7 @@ int32_t qStreamScanSnapshot(qTaskInfo_t tinfo) {
return TSDB_CODE_QRY_APP_ERROR;
}
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
return doSetStreamBlock(pTaskInfo->pRoot, NULL, 0, STREAM_INPUT__TABLE_SCAN, 0, NULL);
}
int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type, bool assignUid) {
......
......@@ -38,6 +38,8 @@ static void destroyGroupOperatorInfo(void* param, int32_t numOfOutput) {
taosArrayDestroy(pInfo->pGroupCols);
taosArrayDestroy(pInfo->pGroupColVals);
cleanupExprSupp(&pInfo->scalarSup);
taosMemoryFreeClear(param);
}
static int32_t initGroupOptrInfo(SArray** pGroupColVals, int32_t* keyLen, char** keyBuf, const SArray* pGroupColList) {
......@@ -724,6 +726,8 @@ static void destroyPartitionOperatorInfo(void* param, int32_t numOfOutput) {
taosMemoryFree(pInfo->columnOffset);
cleanupExprSupp(&pInfo->scalarSup);
taosMemoryFreeClear(param);
}
SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartitionPhysiNode* pPartNode, SExecTaskInfo* pTaskInfo) {
......@@ -806,4 +810,4 @@ int32_t setGroupResultOutputBuf(SOperatorInfo* pOperator, SOptrBasicInfo* binfo,
setResultRowInitCtx(pResultRow, pCtx, numOfCols, pOperator->exprSupp.rowEntryInfoOffset);
return TSDB_CODE_SUCCESS;
}
......@@ -104,6 +104,8 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) {
void destroyMergeJoinOperator(void* param, int32_t numOfOutput) {
SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param;
nodesDestroyNode(pJoinOperator->pCondAfterMerge);
taosMemoryFreeClear(param);
}
static void doMergeJoinImpl(struct SOperatorInfo* pOperator, SSDataBlock* pRes) {
......
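The destroy callbacks above gain a `taosMemoryFreeClear(param)` so the operator-info block itself is released, not only the members it owns. The free-and-clear idiom in plain C (a sketch; the TDengine macro is assumed to behave similarly):

```c
#include <stdlib.h>

// Free a heap block and null the pointer so double-free or use-after-free bugs
// turn into easy-to-spot NULL dereferences.
#define FREE_CLEAR(p) \
  do {                \
    free(p);          \
    (p) = NULL;       \
  } while (0)

typedef struct {
  int* columnOffset;
} OperatorInfo;  // hypothetical stand-in for the operator-info structs above

static void destroyOperatorInfo(OperatorInfo* info) {
  if (info == NULL) return;
  FREE_CLEAR(info->columnOffset);  // release owned members first
  FREE_CLEAR(info);                // then the info block itself, as the diff now does
}

int main(void) {
  OperatorInfo* info = calloc(1, sizeof(*info));
  info->columnOffset = calloc(4, sizeof(int));
  destroyOperatorInfo(info);
  return 0;
}
```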
......@@ -39,6 +39,7 @@ typedef struct SAstCreateContext {
typedef enum EDatabaseOptionType {
DB_OPTION_BUFFER = 1,
DB_OPTION_CACHELAST,
DB_OPTION_CACHELASTSIZE,
DB_OPTION_COMP,
DB_OPTION_DAYS,
DB_OPTION_FSYNC,
......
(Diffs for 33 more changed files are collapsed and not shown.)