Commit 1c062060 authored by H Haojun Liao

other: merge 3.0

Parent 0f1cf13a
# Contributing Guide

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and submit your code on GitHub. However, we ask developers to follow the guidelines below so that contributions can be handled more effectively.

## Reporting Bugs

- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give a **detailed description** of the problem you encountered, ideally with the exact steps needed to reproduce it.
- Attachments containing the log files generated by the bug are welcome.

## Code Submission Rules to Emphasize

- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop submitting.
- Please solve an issue or add a feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
- If no matching issue or feature request exists in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
- When submitting code to our repository, please create a **PR that references the issue number**.

## Contribution Guidelines

1. Please write in a friendly tone.
2. The **active voice** is generally preferable to the passive voice. A sentence in the active voice emphasizes who performs the action rather than who receives it.
3. Documentation writing suggestions
   - Spell the product name "TDengine" correctly: "TD" in capital letters and no space between "TD" and "engine" **(correct spelling: TDengine)**.
   - Leave only one space after a period or other punctuation mark.
4. Prefer **simple sentences** over complex ones.

## Gifts for Contributors

Every developer who contributes to TDengine, whether through code contributions, bug fixes, feature requests, or documentation changes, will receive a **special contributor souvenir gift**.
<p align="left">
<img
src="docs/assets/contributing-cup.jpg"
alt=""
width="200"
/>
<img
src="docs/assets/contributing-notebook.jpg"
alt=""
width="200"
/>
<img
src="docs/assets/contributing-shirt.jpg"
alt=""
width="200"
/>
The TDengine community is committed to helping more developers understand and use TDengine.
Please fill out the **contributor submission form** to choose the gift you would like to receive.
- [Contributor submission form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)
## Contact Us
If you have an issue to resolve or a question that needs an answer, you can add us on WeChat: TDengineECO.
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption
docs_only=0
node {
}
......@@ -29,6 +30,49 @@ def abort_previous(){
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
def check_docs() {
if (env.CHANGE_URL =~ /\/TDengine\//) {
sh '''
hostname
date
env
'''
sh '''
cd ${WKC}
git reset --hard
git clean -fxd
rm -rf examples/rust/
git remote prune origin
git fetch
'''
script {
sh '''
cd ${WKC}
git checkout ''' + env.CHANGE_TARGET + '''
'''
}
sh '''
cd ${WKC}
git remote prune origin
git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
'''
def file_changed = sh (
script: '''
cd ${WKC}
git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/" || :
''',
returnStdout: true
).trim()
if (file_changed == '') {
echo "docs PR"
docs_only=1
} else {
echo file_changed
}
}
}
def pre_test(){
sh '''
hostname
......@@ -307,10 +351,27 @@ pipeline {
WKPY = '/var/lib/jenkins/workspace/taos-connector-python'
}
stages {
stage('check') {
when {
allOf {
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
not { expression { env.CHANGE_URL =~ /\/TDinternal\// }}
}
}
parallel {
stage('check docs') {
agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
steps {
check_docs()
}
}
}
}
stage('run test') {
when {
allOf {
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
expression { docs_only == 0 }
}
}
parallel {
......
......@@ -210,14 +210,14 @@ cmake .. -G "NMake Makefiles"
nmake
```
### macOS
<!-- ### macOS
Install the Xcode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```
``` -->
# Installation
......
......@@ -15,7 +15,6 @@
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)
# What is TDengine?
......@@ -42,7 +41,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
At the moment, TDengine server supports running on Linux and Windows systems. Applications on any OS can also use the RESTful interface of taosAdapter to connect to the taosd service. TDengine supports the X64/ARM64 CPU architectures, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubenetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They are part of TDengine. By default, compiling TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
......@@ -58,7 +57,6 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
#### Install build dependencies for taosTools
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
......@@ -82,14 +80,13 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
#### CentOS 8/Rocky Linux
#### CentOS 8/Rocky Linux
```
sudo yum install -y epel-release
......@@ -100,14 +97,14 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake may report that libsnappy cannot be found. Snappy still works well despite this.
If the powertools installation fails, you can try to use:
If the PowerTools installation fails, you can try to use:
```
sudo yum config-manager --set-enabled Powertools
sudo yum config-manager --set-enabled powertools
```
### Setup golang environment
TDengine includes a few components, such as taosAdapter, that are developed in Go. Please refer to the official documentation at golang.org to set up the Go environment.
Please use Go 1.14 or later. For users in China, we recommend using a proxy to accelerate package downloading.
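For example, a common way to do this is through Go's standard proxy settings; the mirror URL below (goproxy.cn) is only an illustrative choice, not a requirement:

```bash
# Enable Go modules and point the module proxy at a mirror (URL shown is an example).
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```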
......@@ -125,7 +122,7 @@ cmake .. -DBUILD_HTTP=false
### Setup rust environment
TDengine includes a few compoments developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup.
TDengine includes a few components developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup.
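For example, a typical setup installs the toolchain with rustup; the command below is the standard rustup bootstrap and is shown only as one possible approach:

```bash
# Install the Rust toolchain via rustup (one of several possible approaches).
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```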
## Get the source codes
......@@ -136,7 +133,6 @@ git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You will need to upload your SSH public key to GitHub first. Please refer to the GitHub official documentation for details.
```
......@@ -146,14 +142,12 @@ You can modify the file ~/.gitconfig to use ssh protocol instead of https for be
## Special Note
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
## Build TDengine
### On Linux platform
You can run the bash script `build.sh` to build both TDengine and taosTools, including taosBenchmark and taosdump, as shown below:
```bash
......@@ -169,7 +163,6 @@ cmake .. -DBUILD_TOOLS=true
make
```
You can use Jemalloc as the memory allocator instead of glibc:
```
......@@ -218,14 +211,14 @@ cmake .. -G "NMake Makefiles"
nmake
```
### On macOS platform
<!-- ### On macOS platform
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
``` -->
# Installing
......@@ -237,7 +230,7 @@ After building successfully, TDengine can be installed by
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.taosdata.com/reference/directory/) section.
Users can find more information about directories installed on the system in the [directory and files](https://docs.taosdata.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/) instead.
......@@ -309,7 +302,7 @@ Query OK, 2 row(s) in set (0.001700s)
## Official Connectors
TDengine provides abundant developing tools for users to develop on TDengine. include C/C++、Java、Python、Go、Node.js、C# 、RESTful ,Follow the links below to find your desired connectors and relevant documentation.
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
......
---
sidebar_label: Stream Processing
title: Stream Processing
---
## Create a Stream
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
}
```
Here, subquery is a subset of the regular SELECT query syntax:
```sql
subquery: SELECT select_list
from_clause
[WHERE condition]
[PARTITION BY tag_list]
[window_clause]
```
Session windows, state windows, and sliding windows are supported. When used on a supertable, session windows and state windows must be combined with PARTITION BY tbname.
```sql
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)]
}
```
Here SESSION denotes a session window and tol_val is the maximum gap between records. Records whose timestamps fall within tol_val of one another belong to the same window; if the gap between two consecutive records exceeds tol_val, the next window is opened automatically.
The window definitions are exactly the same as in the time-series-specific queries; see [TDengine Distinguished Queries](../distinguished) for details.
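For instance, the sketch below creates a session-window stream over a hypothetical supertable `st` with timestamp column `ts`; a window closes after a 10-second gap between records, and PARTITION BY tbname is required because a supertable is used:

```sql
CREATE STREAM session_stream INTO session_out AS
  SELECT _wstartts, COUNT(*) FROM st PARTITION BY tbname SESSION(ts, 10s);
```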
For example, the following statement creates a stream and automatically creates a supertable named avg_vol. Using a one-minute time window with a 30-second sliding step, it computes the average voltage of these meters and writes the results derived from the meters table into the avg_vol supertable; data from different partitions is written into separate, automatically created subtables.
```sql
CREATE STREAM avg_vol_s INTO avg_vol AS
SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```
## Stream Partitioning
You can use PARTITION BY TBNAME or PARTITION BY tag to compute a stream over multiple partitions. Each partition has its own timeline and time windows, is aggregated independently, and is written into its own subtable of the destination supertable.
Without the PARTITION BY option, all data is written into a single subtable.
The supertable created by a stream has a single tag column, groupId, and each partition is assigned a unique groupId. Consistent with schemaless writes, the subtable name is computed via MD5 and the subtable is created automatically.
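As a sketch, assuming the meters supertable from the example above also has a tag column named `location`, a stream partitioned by that tag could be written as:

```sql
CREATE STREAM avg_vol_by_loc INTO avg_vol_by_loc AS
  SELECT _wstartts, avg(voltage) FROM meters PARTITION BY location INTERVAL(1m);
```

Each distinct location value then aggregates on its own timeline and is written to its own subtable of avg_vol_by_loc.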
## Drop a Stream
```sql
DROP STREAM [IF NOT EXISTS] stream_name;
```
This only removes the stream processing task; data already written by the stream is not deleted.
## Show Streams
```sql
SHOW STREAMS;
```
To show more detailed information, use:
```sql
SELECT * from performance_schema.`perf_streams`;
```
## Stream Trigger Modes
When creating a stream, the trigger mode can be specified with the TRIGGER clause.
For non-windowed computations, results are produced in real time; for windowed computations, three trigger modes are currently provided:
1. AT_ONCE: triggered immediately on write.
2. WINDOW_CLOSE: triggered when the window closes (window closing is determined by event time and can be combined with a watermark).
3. MAX_DELAY time: if the window closes, computation is triggered. If the window has not closed but has remained open longer than the time specified by MAX_DELAY, computation is also triggered.
Because window closing is determined by event time, the event time cannot advance if the event stream is interrupted or persistently delayed, which may prevent the latest results from ever being produced.
For this reason, stream processing provides the MAX_DELAY trigger mode, which combines event time with processing time.
In MAX_DELAY mode, computation is triggered immediately when a window closes. In addition, after data is written, if the time since computation was last triggered exceeds the time specified by MAX_DELAY, computation is triggered immediately.
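As a sketch of this option, reusing the meters supertable from the earlier example, a stream that pushes results at most 5 seconds after new data arrives might be declared as:

```sql
CREATE STREAM vol_max_delay TRIGGER MAX_DELAY 5s INTO vol_max_delay_out AS
  SELECT _wstartts, max(voltage) FROM meters PARTITION BY tbname INTERVAL(30s);
```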
## Window Closing in Stream Processing
Stream processing uses event time (the primary timestamp in the inserted records), not the TDengine server time, to decide when a window closes. Basing window closing on event time avoids problems caused by clock differences between clients and the server and handles issues such as out-of-order data ingestion. Stream processing also provides a watermark to define the tolerated degree of disorder.
When creating a stream, a watermark can be specified in stream_options; it defines the upper bound of tolerated out-of-order data.
The watermark measures the tolerance for out-of-order data and defaults to 0.
T = latest event time - watermark
Each write updates the window-closing threshold according to the formula above and closes every open window whose end time is earlier than T; if the trigger mode is WINDOW_CLOSE or MAX_DELAY, the aggregated results of the closed windows are pushed.
![Window closing in TDengine stream processing](./watermark.webp)
In the figure, the vertical axis represents different moments in time; for each moment, the horizontal axis shows the data TDengine has received by then.
The points on the horizontal axis are the records already received. The blue point is the record with the latest event time (the primary timestamp in the data); subtracting the defined watermark from it yields T, the upper bound of tolerated disorder.
All windows whose end time is earlier than T are closed (marked with gray boxes in the figure).
At time T2, out-of-order records (the yellow points) arrive at TDengine. Because of the watermark, the windows these records fall into have not yet been closed, so the records are processed correctly.
At time T3, the latest event arrives, T moves past the closing time of the second window, that window is closed, and the out-of-order data has been handled correctly.
In window_close or max_delay mode, window closing directly determines when results are pushed. In at_once mode, window closing only affects memory usage.
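As a sketch combining these options, again reusing the meters supertable, a stream that tolerates up to 15 seconds of out-of-order data and pushes results only when windows close could be declared as:

```sql
CREATE STREAM vol_wc TRIGGER WINDOW_CLOSE WATERMARK 15s INTO vol_wc_out AS
  SELECT _wstartts, avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m);
```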
## Handling Expired Data in Stream Processing
Data that falls into a window that has already been closed is marked as expired data.
TDengine provides two ways of handling expired data, specified by the IGNORE EXPIRED option:
1. Recompute, i.e. IGNORE EXPIRED 0: the default. All data of the affected window is re-read from TSDB and the latest result is recomputed.
2. Discard, i.e. IGNORE EXPIRED 1: expired data is ignored.
In either mode, the watermark should be set appropriately to obtain correct results (in discard mode) or to avoid the performance overhead of frequent recomputation (in recompute mode).
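A minimal sketch of the option, assuming IGNORE EXPIRED is accepted among the stream options of CREATE STREAM (it is not listed in the grammar block above) and reusing the meters supertable:

```sql
CREATE STREAM vol_drop_expired TRIGGER WINDOW_CLOSE WATERMARK 15s IGNORE EXPIRED 1 INTO vol_de_out AS
  SELECT _wstartts, avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m);
```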
......@@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.34</version>
<version>3.0.0</version>
</dependency>
</dependencies>
......
......@@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.18</version>
<version>3.0.0</version>
</dependency>
</dependencies>
......
......@@ -10,7 +10,7 @@
```xml
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.taosdata.jdbc.TSDBDriver"></property>
<property name="url" value="jdbc:TAOS://127.0.0.1:6030/log"></property>
<property name="url" value="jdbc:TAOS://127.0.0.1:6030/test"></property>
<property name="username" value="root"></property>
<property name="password" value="taosdata"></property>
</bean>
......@@ -28,5 +28,5 @@ mvn clean package
```
After packaging succeeds, go to the `target/` directory and run the following command to run the test:
```shell
java -jar SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
java -jar target/SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
```
\ No newline at end of file
......@@ -28,7 +28,7 @@ public class App {
//use database
executor.doExecute("use test");
// create table
executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
WeatherDao weatherDao = ctx.getBean(WeatherDao.class);
Weather weather = new Weather(new Timestamp(new Date().getTime()), random.nextFloat() * 50.0f, random.nextInt(100));
......
......@@ -41,7 +41,7 @@ public class BatcherInsertTest {
//use database
executor.doExecute("use test");
// create table
executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
}
@Test
......
......@@ -13,13 +13,13 @@ ConnectionPoolDemo的程序逻辑:
### How to run this example:
```shell script
mvn clean package assembly:single
java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
mvn clean package
java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar -host 127.0.0.1
```
Use mvn to run the main method of ConnectionPoolDemo; the following parameters can be specified:
```shell script
Usage:
java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar
java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar
-host : hostname
-poolType <c3p0| dbcp| druid| hikari>
-poolSize <poolSize>
......
......@@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.18</version>
<version>3.0.0</version>
</dependency>
<!-- druid -->
<dependency>
......
......@@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.18</version>
<version>3.0.0</version>
</dependency>
<dependency>
......
# Usage
## Create and use the database
```shell
$ taos
> create database mp_test
```
## Run the test cases
```shell
$ mvn clean test
```
\ No newline at end of file
......@@ -2,7 +2,17 @@ package com.taosdata.example.mybatisplusdemo.mapper;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.taosdata.example.mybatisplusdemo.domain.Weather;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Update;
public interface WeatherMapper extends BaseMapper<Weather> {
@Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
int createTable();
@Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
int insertOne(Weather one);
@Update("drop table if exists weather")
void dropTable();
}
......@@ -2,7 +2,7 @@ spring:
datasource:
driver-class-name: com.taosdata.jdbc.TSDBDriver
url: jdbc:TAOS://localhost:6030/mp_test?charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8
user: root
username: root
password: taosdata
druid:
......
......@@ -82,27 +82,15 @@ public class TemperatureMapperTest {
Assert.assertEquals(1, affectRows);
}
/***
* test SelectOne
* **/
@Test
public void testSelectOne() {
QueryWrapper<Temperature> wrapper = new QueryWrapper<>();
wrapper.eq("location", "beijing");
Temperature one = mapper.selectOne(wrapper);
System.out.println(one);
Assert.assertNotNull(one);
}
/***
* test select By map
* ***/
@Test
public void testSelectByMap() {
Map<String, Object> map = new HashMap<>();
map.put("location", "beijing");
map.put("location", "北京");
List<Temperature> temperatures = mapper.selectByMap(map);
Assert.assertEquals(1, temperatures.size());
Assert.assertTrue(temperatures.size() > 1);
}
/***
......@@ -120,7 +108,7 @@ public class TemperatureMapperTest {
@Test
public void testSelectCount() {
int count = mapper.selectCount(null);
Assert.assertEquals(5, count);
Assert.assertEquals(10, count);
}
/****
......
......@@ -6,6 +6,7 @@ import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.taosdata.example.mybatisplusdemo.domain.Weather;
import org.junit.Assert;
import org.junit.Test;
import org.junit.Before;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
......@@ -26,6 +27,18 @@ public class WeatherMapperTest {
@Autowired
private WeatherMapper mapper;
@Before
public void createTable(){
mapper.dropTable();
mapper.createTable();
Weather one = new Weather();
one.setTs(new Timestamp(1605024000000l));
one.setTemperature(12.22f);
one.setLocation("望京");
one.setHumidity(100);
mapper.insertOne(one);
}
@Test
public void testSelectList() {
List<Weather> weathers = mapper.selectList(null);
......@@ -46,20 +59,20 @@ public class WeatherMapperTest {
@Test
public void testSelectOne() {
QueryWrapper<Weather> wrapper = new QueryWrapper<>();
wrapper.eq("location", "beijing");
wrapper.eq("location", "望京");
Weather one = mapper.selectOne(wrapper);
System.out.println(one);
Assert.assertEquals(12.22f, one.getTemperature(), 0.00f);
Assert.assertEquals("beijing", one.getLocation());
Assert.assertEquals("望京", one.getLocation());
}
@Test
public void testSelectByMap() {
Map<String, Object> map = new HashMap<>();
map.put("location", "beijing");
List<Weather> weathers = mapper.selectByMap(map);
Assert.assertEquals(1, weathers.size());
}
// @Test
// public void testSelectByMap() {
// Map<String, Object> map = new HashMap<>();
// map.put("location", "beijing");
// List<Weather> weathers = mapper.selectByMap(map);
// Assert.assertEquals(1, weathers.size());
// }
@Test
public void testSelectObjs() {
......
......@@ -10,4 +10,4 @@
| 6 | taosdemo | This is an internal tool for testing our JDBC-JNI, JDBC-RESTful, RESTful interfaces |
more detail: https://www.taosdata.com/cn//documentation20/connector-java/
\ No newline at end of file
more detail: https://docs.taosdata.com/reference/connector/java/
\ No newline at end of file
......@@ -68,7 +68,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.34</version>
<version>3.0.0</version>
</dependency>
<dependency>
......
## TDengine SpringBoot + Mybatis Demo
## The test database must be created in advance
### Configure application.properties
```properties
# datasource config
spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/log
spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test
spring.datasource.username=root
spring.datasource.password=taosdata
......
......@@ -6,7 +6,6 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.List;
import java.util.Map;
@RequestMapping("/weather")
@RestController
......
......@@ -10,8 +10,7 @@
</resultMap>
<select id="lastOne" resultType="java.util.Map">
select last_row(*), location, groupid
from test.weather
select last_row(ts) as ts, last_row(temperature) as temperature, last_row(humidity) as humidity, last_row(note) as note,last_row(location) as location , last_row(groupid) as groupid from test.weather;
</select>
<update id="dropDB">
......
......@@ -5,7 +5,7 @@
#spring.datasource.password=taosdata
# datasource config - JDBC-RESTful
spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
spring.datasource.url=jdbc:TAOS-RS://localhsot:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
spring.datasource.username=root
spring.datasource.password=taosdata
spring.datasource.druid.initial-size=5
......
......@@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.20</version>
<version>3.0.0</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>
......
......@@ -2,9 +2,9 @@
cd tests/examples/JDBC/taosdemo
mvn clean package -Dmaven.test.skip=true
# Create tables first, then insert
java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
# Insert directly without creating tables first
java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
```
Requirements:
......
......@@ -32,8 +32,10 @@ public class TaosDemoApplication {
System.exit(0);
}
// 初始化
final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user, config.password);
if (config.executeSql != null && !config.executeSql.isEmpty() && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user,
config.password);
if (config.executeSql != null && !config.executeSql.isEmpty()
&& !config.executeSql.replaceAll("\\s", "").isEmpty()) {
Thread task = new Thread(new SqlExecuteTask(dataSource, config.executeSql));
task.start();
try {
......@@ -55,7 +57,7 @@ public class TaosDemoApplication {
databaseParam.put("keep", Integer.toString(config.keep));
databaseParam.put("days", Integer.toString(config.days));
databaseParam.put("replica", Integer.toString(config.replica));
//TODO: other database parameters
// TODO: other database parameters
databaseService.createDatabase(databaseParam);
databaseService.useDatabase(config.database);
long end = System.currentTimeMillis();
......@@ -70,11 +72,13 @@ public class TaosDemoApplication {
if (config.database != null && !config.database.isEmpty())
superTableMeta.setDatabase(config.database);
} else if (config.numOfFields == 0) {
String sql = "create table " + config.database + "." + config.superTable + " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
String sql = "create table " + config.database + "." + config.superTable
+ " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
superTableMeta = SuperTableMetaGenerator.generate(sql);
} else {
// create super table with specified field size and tag size
superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields, config.prefixOfFields, config.numOfTags, config.prefixOfTags);
superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields,
config.prefixOfFields, config.numOfTags, config.prefixOfTags);
}
/**********************************************************************************/
// 建表
......@@ -84,7 +88,8 @@ public class TaosDemoApplication {
superTableService.create(superTableMeta);
if (!config.autoCreateTable) {
// 批量建子表
subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable, config.numOfThreadsForCreate);
subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable,
config.numOfThreadsForCreate);
}
}
end = System.currentTimeMillis();
......@@ -93,7 +98,7 @@ public class TaosDemoApplication {
// 插入
long tableSize = config.numOfTables;
int threadSize = config.numOfThreadsForInsert;
long startTime = getProperStartTime(config.startTime, config.keep);
long startTime = getProperStartTime(config.startTime, config.days);
if (tableSize < threadSize)
threadSize = (int) tableSize;
......@@ -101,13 +106,13 @@ public class TaosDemoApplication {
start = System.currentTimeMillis();
// multi threads to insert
int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap, config);
int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap,
config);
end = System.currentTimeMillis();
logger.info("insert " + affectedRows + " rows, time cost: " + (end - start) + " ms");
/**********************************************************************************/
// 查询
/**********************************************************************************/
// 删除表
if (config.dropTable) {
......
package com.taosdata.taosdemo.service;
import com.taosdata.jdbc.utils.SqlSyntaxValidator;
import javax.sql.DataSource;
import java.sql.*;
import java.util.ArrayList;
......@@ -23,10 +21,6 @@ public class QueryService {
Boolean[] ret = new Boolean[sqls.length];
for (int i = 0; i < sqls.length; i++) {
ret[i] = true;
if (!SqlSyntaxValidator.isValidForExecuteQuery(sqls[i])) {
ret[i] = false;
continue;
}
try (Connection conn = dataSource.getConnection(); Statement stmt = conn.createStatement()) {
stmt.executeQuery(sqls[i]);
} catch (SQLException e) {
......
......@@ -15,9 +15,12 @@ public class SqlSpeller {
StringBuilder sb = new StringBuilder();
sb.append("create database if not exists ").append(map.get("database")).append(" ");
if (map.containsKey("keep"))
sb.append("keep ").append(map.get("keep")).append(" ");
if (map.containsKey("days"))
sb.append("days ").append(map.get("days")).append(" ");
sb.append("keep ");
if (map.containsKey("days")) {
sb.append(map.get("days")).append("d ");
} else {
sb.append(" ");
}
if (map.containsKey("replica"))
sb.append("replica ").append(map.get("replica")).append(" ");
if (map.containsKey("cache"))
......@@ -29,7 +32,7 @@ public class SqlSpeller {
if (map.containsKey("maxrows"))
sb.append("maxrows ").append(map.get("maxrows")).append(" ");
if (map.containsKey("precision"))
sb.append("precision ").append(map.get("precision")).append(" ");
sb.append("precision '").append(map.get("precision")).append("' ");
if (map.containsKey("comp"))
sb.append("comp ").append(map.get("comp")).append(" ");
if (map.containsKey("walLevel"))
......@@ -46,11 +49,13 @@ public class SqlSpeller {
// create table if not exists xx.xx using xx.xx tags(x,x,x)
public static String createTableUsingSuperTable(SubTableMeta subTableMeta) {
StringBuilder sb = new StringBuilder();
sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getName()).append(" ");
sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable()).append(" ");
// String tagStr = subTableMeta.getTags().stream().filter(Objects::nonNull)
// .map(tagValue -> tagValue.getName() + " '" + tagValue.getValue() + "' ")
// .collect(Collectors.joining(",", "(", ")"));
sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".")
.append(subTableMeta.getName()).append(" ");
sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable())
.append(" ");
// String tagStr = subTableMeta.getTags().stream().filter(Objects::nonNull)
// .map(tagValue -> tagValue.getName() + " '" + tagValue.getValue() + "' ")
// .collect(Collectors.joining(",", "(", ")"));
sb.append("tags ").append(tagValues(subTableMeta.getTags()));
return sb.toString();
}
......@@ -63,7 +68,7 @@ public class SqlSpeller {
return sb.toString();
}
//f1, f2, f3
// f1, f2, f3
private static String fieldValues(List<FieldValue> fields) {
return IntStream.range(0, fields.size()).mapToObj(i -> {
if (i == 0) {
......@@ -73,13 +78,13 @@ public class SqlSpeller {
}
}).collect(Collectors.joining(",", "(", ")"));
// return fields.stream()
// .filter(Objects::nonNull)
// .map(fieldValue -> "'" + fieldValue.getValue() + "'")
// .collect(Collectors.joining(",", "(", ")"));
// return fields.stream()
// .filter(Objects::nonNull)
// .map(fieldValue -> "'" + fieldValue.getValue() + "'")
// .collect(Collectors.joining(",", "(", ")"));
}
//(f1, f2, f3),(f1, f2, f3)
// (f1, f2, f3),(f1, f2, f3)
private static String rowValues(List<RowValue> rowValues) {
return rowValues.stream().filter(Objects::nonNull)
.map(rowValue -> fieldValues(rowValue.getFields()))
......@@ -89,8 +94,10 @@ public class SqlSpeller {
// insert into xx.xxx using xx.xx tags(x,x,x) values(x,x,x),(x,x,x)...
public static String insertOneTableMultiValuesUsingSuperTable(SubTableValue subTableValue) {
StringBuilder sb = new StringBuilder();
sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName()).append(" ");
sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable()).append(" ");
sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName())
.append(" ");
sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable())
.append(" ");
sb.append("tags ").append(tagValues(subTableValue.getTags()) + " ");
sb.append("values ").append(rowValues(subTableValue.getValues()));
return sb.toString();
......@@ -126,7 +133,8 @@ public class SqlSpeller {
// create table if not exists xx.xx (f1 xx,f2 xx...) tags(t1 xx, t2 xx...)
public static String createSuperTable(SuperTableMeta tableMetadata) {
StringBuilder sb = new StringBuilder();
sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".").append(tableMetadata.getName());
sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".")
.append(tableMetadata.getName());
String fields = tableMetadata.getFields().stream()
.filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
.collect(Collectors.joining(",", "(", ")"));
......@@ -139,10 +147,10 @@ public class SqlSpeller {
return sb.toString();
}
public static String createTable(TableMeta tableMeta) {
StringBuilder sb = new StringBuilder();
sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName()).append(" ");
sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName())
.append(" ");
String fields = tableMeta.getFields().stream()
.filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
.collect(Collectors.joining(",", "(", ")"));
......@@ -179,16 +187,17 @@ public class SqlSpeller {
public static String insertMultiTableMultiValuesWithColumns(List<TableValue> tables) {
StringBuilder sb = new StringBuilder();
sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
.map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns()) + " values " + rowValues(table.getValues()))
.map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns())
+ " values " + rowValues(table.getValues()))
.collect(Collectors.joining(" ")));
return sb.toString();
}
public static String insertMultiTableMultiValues(List<TableValue> tables) {
StringBuilder sb = new StringBuilder();
sb.append("insert into ").append(tables.stream().filter(Objects::nonNull).map(table ->
table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues())
).collect(Collectors.joining(" ")));
sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
.map(table -> table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues()))
.collect(Collectors.joining(" ")));
return sb.toString();
}
}
jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
#jdbc.driver=com.taosdata.jdbc.TSDBDriver
# jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
jdbc.driver=com.taosdata.jdbc.TSDBDriver
hikari.maximum-pool-size=20
hikari.minimum-idle=20
hikari.max-lifetime=0
\ No newline at end of file
package com.taosdata.taosdemo.service;
import com.taosdata.taosdemo.domain.TableMeta;
import org.junit.Before;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
public class TableServiceTest {
private TableService tableService;
private List<TableMeta> tables;
@Before
public void before() {
tables = new ArrayList<>();
for (int i = 0; i < 1; i++) {
TableMeta tableMeta = new TableMeta();
tableMeta.setDatabase("test");
tableMeta.setName("weather" + (i + 1));
tables.add(tableMeta);
}
}
@Test
public void testCreate() {
tableService.create(tables);
}
}
\ No newline at end of file
......@@ -13,6 +13,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
// clang-format off
#include <assert.h>
#include <stdio.h>
#include <string.h>
......@@ -94,13 +95,8 @@ int32_t create_stream() {
}
taos_free_result(pRes);
/*const char* sql = "select min(k), max(k), sum(k) from tu1";*/
/*const char* sql = "select min(k), max(k), sum(k) as sum_of_k from st1";*/
/*const char* sql = "select sum(k) from tu1 interval(10m)";*/
/*pRes = tmq_create_stream(pConn, "stream1", "out1", sql);*/
pRes = taos_query(pConn,
"create stream stream1 trigger max_delay 10s watermark 10s into outstb as select _wstart start, "
"count(k) from st1 partition by tbname interval(20s) ");
"create stream stream1 trigger at_once watermark 10s into outstb as select _wstart start, avg(k) from st1 partition by tbname interval(10s)");
if (taos_errno(pRes) != 0) {
printf("failed to create stream stream1, reason:%s\n", taos_errstr(pRes));
return -1;
......
......@@ -44,6 +44,30 @@ enum {
)
// clang-format on
typedef struct {
TSKEY ts;
uint64_t groupId;
} SWinKey;
static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
SWinKey* pWin1 = (SWinKey*)pKey1;
SWinKey* pWin2 = (SWinKey*)pKey2;
if (pWin1->groupId > pWin2->groupId) {
return 1;
} else if (pWin1->groupId < pWin2->groupId) {
return -1;
}
if (pWin1->ts > pWin2->ts) {
return 1;
} else if (pWin1->ts < pWin2->ts) {
return -1;
}
return 0;
}
enum {
TMQ_MSG_TYPE__DUMMY = 0,
TMQ_MSG_TYPE__POLL_RSP,
......@@ -181,7 +205,7 @@ typedef struct SColumn {
int16_t slotId;
char name[TSDB_COL_NAME_LEN];
int8_t flag; // column type: normal column, tag, or user-input column (integer/float/string)
int16_t colType; // column type: normal column, tag, or window column
int16_t type;
int32_t bytes;
uint8_t precision;
......
......@@ -66,6 +66,7 @@ extern int32_t tsNumOfVnodeStreamThreads;
extern int32_t tsNumOfVnodeFetchThreads;
extern int32_t tsNumOfVnodeWriteThreads;
extern int32_t tsNumOfVnodeSyncThreads;
extern int32_t tsNumOfVnodeRsmaThreads;
extern int32_t tsNumOfQnodeQueryThreads;
extern int32_t tsNumOfQnodeFetchThreads;
extern int32_t tsNumOfSnodeSharedThreads;
......@@ -130,6 +131,7 @@ extern int32_t tsMqRebalanceInterval;
extern int32_t tsTtlUnit;
extern int32_t tsTtlPushInterval;
extern int32_t tsGrantHBInterval;
extern int32_t tsUptimeInterval;
#define NEEDTO_COMPRESSS_MSG(size) (tsCompressMsgSize != -1 && (size) > tsCompressMsgSize)
......
......@@ -2667,31 +2667,6 @@ typedef struct {
int32_t padding;
} SRSmaExecMsg;
typedef struct {
int64_t suid;
int8_t level;
} SRSmaFetchMsg;
static FORCE_INLINE int32_t tEncodeSRSmaFetchMsg(SEncoder* pCoder, const SRSmaFetchMsg* pReq) {
if (tStartEncode(pCoder) < 0) return -1;
if (tEncodeI64(pCoder, pReq->suid) < 0) return -1;
if (tEncodeI8(pCoder, pReq->level) < 0) return -1;
tEndEncode(pCoder);
return 0;
}
static FORCE_INLINE int32_t tDecodeSRSmaFetchMsg(SDecoder* pCoder, SRSmaFetchMsg* pReq) {
if (tStartDecode(pCoder) < 0) return -1;
if (tDecodeI64(pCoder, &pReq->suid) < 0) return -1;
if (tDecodeI8(pCoder, &pReq->level) < 0) return -1;
tEndDecode(pCoder);
return 0;
}
typedef struct {
int8_t version; // for compatibility(default 0)
int8_t intervalUnit; // MACRO: TIME_UNIT_XXX
......
......@@ -170,6 +170,7 @@ enum {
TD_DEF_MSG_TYPE(TDMT_MND_SPLIT_VGROUP, "split-vgroup", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_SHOW_VARIABLES, "show-variables", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_SERVER_VERSION, "server-version", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_UPTIME_TIMER, "uptime-timer", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MAX_MSG, "mnd-max", NULL, NULL)
TD_NEW_MSG_SEG(TDMT_VND_MSG)
......@@ -201,7 +202,7 @@ enum {
TD_DEF_MSG_TYPE(TDMT_VND_CANCEL_SMA, "vnode-cancel-sma", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_DROP_SMA, "vnode-drop-sma", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_SUBMIT_RSMA, "vnode-submit-rsma", SSubmitReq, SSubmitRsp)
TD_DEF_MSG_TYPE(TDMT_VND_FETCH_RSMA, "vnode-fetch-rsma", SRSmaFetchMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_FETCH_RSMA, "vnode-fetch-rsma", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_EXEC_RSMA, "vnode-exec-rsma", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_DELETE, "delete-data", SVDeleteReq, SVDeleteRsp)
TD_DEF_MSG_TYPE(TDMT_VND_BATCH_DEL, "batch-delete", SBatchDeleteReq, NULL)
......
......@@ -29,7 +29,7 @@ typedef void* DataSinkHandle;
struct SRpcMsg;
struct SSubplan;
typedef struct SReadHandle {
typedef struct {
void* tqReader;
void* meta;
void* config;
......@@ -41,6 +41,7 @@ typedef struct SReadHandle {
bool initTableReader;
bool initTqReader;
int32_t numOfVgroups;
void* pStateBackend;
} SReadHandle;
// in queue mode, data streams are seperated by msg
......@@ -78,8 +79,8 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
/**
* @brief Cleanup SSDataBlock for StreamScanInfo
*
* @param tinfo
*
* @param tinfo
*/
void tdCleanupStreamInputDataBlock(qTaskInfo_t tinfo);
......@@ -163,7 +164,7 @@ int32_t qGetQualifiedTableIdList(void* pTableList, const char* tagCond, int32_t
void qProcessRspMsg(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
int32_t qGetExplainExecInfo(qTaskInfo_t tinfo, SArray* pExecInfoList/*,int32_t* resNum, SExplainExecInfo** pRes*/);
int32_t qGetExplainExecInfo(qTaskInfo_t tinfo, SArray* pExecInfoList /*,int32_t* resNum, SExplainExecInfo** pRes*/);
int32_t qSerializeTaskStatus(qTaskInfo_t tinfo, char** pOutput, int32_t* len);
......
......@@ -142,6 +142,7 @@ typedef struct SqlFunctionCtx {
struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
int32_t curBufPage;
bool increase;
bool isStream;
char udfName[TSDB_FUNC_NAME_LEN];
} SqlFunctionCtx;
......
......@@ -57,7 +57,9 @@ typedef enum EColumnType {
COLUMN_TYPE_COLUMN = 1,
COLUMN_TYPE_TAG,
COLUMN_TYPE_TBNAME,
COLUMN_TYPE_WINDOW_PC,
COLUMN_TYPE_WINDOW_START,
COLUMN_TYPE_WINDOW_END,
COLUMN_TYPE_WINDOW_DURATION,
COLUMN_TYPE_GROUP_KEY
} EColumnType;
......@@ -276,6 +278,7 @@ typedef struct SSelectStmt {
bool hasLastRowFunc;
bool hasTimeLineFunc;
bool hasUdaf;
bool hasStateKey;
bool onlyHasKeepOrderFunc;
bool groupSort;
} SSelectStmt;
......
......@@ -263,6 +263,14 @@ typedef struct {
SArray* checkpointVer;
} SStreamRecoveringState;
// incremental state storage
typedef struct {
SStreamTask* pOwner;
TDB* db;
TTB* pStateDb;
TXN txn;
} SStreamState;
typedef struct SStreamTask {
int64_t streamId;
int32_t taskId;
......@@ -312,6 +320,10 @@ typedef struct SStreamTask {
// msg handle
SMsgCb* pMsgCb;
// state backend
SStreamState* pState;
} SStreamTask;
int32_t tEncodeStreamEpInfo(SEncoder* pEncoder, const SStreamChildEpInfo* pInfo);
......@@ -507,7 +519,7 @@ typedef struct SStreamMeta {
char* path;
TDB* db;
TTB* pTaskDb;
TTB* pStateDb;
TTB* pCheckpointDb;
SHashObj* pTasks;
SHashObj* pRecoverStatus;
void* ahandle;
......@@ -528,6 +540,37 @@ int32_t streamMetaCommit(SStreamMeta* pMeta);
int32_t streamMetaRollBack(SStreamMeta* pMeta);
int32_t streamLoadTasks(SStreamMeta* pMeta);
SStreamState* streamStateOpen(char* path, SStreamTask* pTask);
void streamStateClose(SStreamState* pState);
int32_t streamStateBegin(SStreamState* pState);
int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState);
typedef struct {
TBC* pCur;
} SStreamStateCur;
#if 1
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur);
#endif
#ifdef __cplusplus
}
#endif
......
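The incremental state-storage API added above follows an open/put/commit/get/close pattern. The fragment below is only a usage sketch under stated assumptions: it presumes a valid SStreamTask has been created elsewhere, that the path argument can be any writable directory, and that a return value of 0 means success; it is not taken from the TDengine sources.

```c
// Usage sketch only; assumes the stream-state declarations above are in scope
// and that a return value of 0 indicates success.
static void stream_state_demo(SStreamTask *pTask) {
  SStreamState *pState = streamStateOpen("/tmp/stream-state", pTask);  // path is a hypothetical example
  if (pState == NULL) return;

  SWinKey key = {.ts = 1660000000000LL, .groupId = 1};  // window start time + partition group id
  int64_t windowSum = 42;

  if (streamStatePut(pState, &key, &windowSum, sizeof(windowSum)) == 0) {
    streamStateCommit(pState);                          // persist the pending transaction
  }

  void   *pVal = NULL;
  int32_t vLen = 0;
  if (streamStateGet(pState, &key, &pVal, &vLen) == 0) {  // read the stored value back
    // ... use pVal / vLen ...
    streamFreeVal(pVal);
  }

  streamStateClose(pState);
}
```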
......@@ -49,7 +49,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_RPC_TIMEOUT TAOS_DEF_ERROR_CODE(0, 0x0019)
//common & util
#define TSDB_CODE_TIME_UNSYNCED TAOS_DEF_ERROR_CODE(0, 0x0013)
#define TSDB_CODE_TIME_UNSYNCED TAOS_DEF_ERROR_CODE(0, 0x0013)
#define TSDB_CODE_APP_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0014)
#define TSDB_CODE_OPS_NOT_SUPPORT TAOS_DEF_ERROR_CODE(0, 0x0100)
......@@ -222,7 +222,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_INVALID_DB_OPTION TAOS_DEF_ERROR_CODE(0, 0x0382)
#define TSDB_CODE_MND_INVALID_DB TAOS_DEF_ERROR_CODE(0, 0x0383)
#define TSDB_CODE_MND_TOO_MANY_DATABASES TAOS_DEF_ERROR_CODE(0, 0x0385)
#define TSDB_CODE_MND_DB_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x0388)
#define TSDB_CODE_MND_DB_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x0388)
#define TSDB_CODE_MND_INVALID_DB_ACCT TAOS_DEF_ERROR_CODE(0, 0x0389)
#define TSDB_CODE_MND_DB_OPTION_UNCHANGED TAOS_DEF_ERROR_CODE(0, 0x038A)
#define TSDB_CODE_MND_DB_INDEX_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x038B)
......@@ -433,7 +433,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TQ_NO_DISK_PERMISSIONS TAOS_DEF_ERROR_CODE(0, 0x0A03)
#define TSDB_CODE_TQ_FILE_CORRUPTED TAOS_DEF_ERROR_CODE(0, 0x0A04)
#define TSDB_CODE_TQ_OUT_OF_MEMORY TAOS_DEF_ERROR_CODE(0, 0x0A05)
#define TSDB_CODE_TQ_FILE_ALREADY_EXISTS TAOS_DEF_ERROR_CODE(0, 0x0A06)
#define TSDB_CODE_TQ_FILE_ALREADY_EXISTS TAOS_DEF_ERROR_CODE(0, 0x0A06)
#define TSDB_CODE_TQ_FAILED_TO_CREATE_DIR TAOS_DEF_ERROR_CODE(0, 0x0A07)
#define TSDB_CODE_TQ_META_NO_SUCH_KEY TAOS_DEF_ERROR_CODE(0, 0x0A08)
#define TSDB_CODE_TQ_META_KEY_NOT_IN_TXN TAOS_DEF_ERROR_CODE(0, 0x0A09)
......@@ -490,7 +490,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_PAR_WRONG_NUMBER_OF_SELECT TAOS_DEF_ERROR_CODE(0, 0x2609)
#define TSDB_CODE_PAR_GROUPBY_LACK_EXPRESSION TAOS_DEF_ERROR_CODE(0, 0x260A)
#define TSDB_CODE_PAR_NOT_SELECTED_EXPRESSION TAOS_DEF_ERROR_CODE(0, 0x260B)
#define TSDB_CODE_PAR_NOT_SINGLE_GROUP TAOS_DEF_ERROR_CODE(0, 0x260C)
#define TSDB_CODE_PAR_NOT_SINGLE_GROUP TAOS_DEF_ERROR_CODE(0, 0x260C)
#define TSDB_CODE_PAR_TAGS_NOT_MATCHED TAOS_DEF_ERROR_CODE(0, 0x260D)
#define TSDB_CODE_PAR_INVALID_TAG_NAME TAOS_DEF_ERROR_CODE(0, 0x260E)
#define TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG TAOS_DEF_ERROR_CODE(0, 0x2610)
......
......@@ -386,7 +386,7 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_EXPLAIN_VERBOSE false
#define TSDB_EXPLAIN_RESULT_ROW_SIZE 512
#define TSDB_EXPLAIN_RESULT_ROW_SIZE (16*1024)
#define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN"
#define TSDB_MAX_FIELD_LEN 16384
......
......@@ -76,6 +76,7 @@ void taosFreeQall(STaosQall *qall);
int32_t taosReadAllQitems(STaosQueue *queue, STaosQall *qall);
int32_t taosGetQitem(STaosQall *qall, void **ppItem);
void taosResetQitems(STaosQall *qall);
int32_t taosQallItemSize(STaosQall *qall);
STaosQset *taosOpenQset();
void taosCloseQset(STaosQset *qset);
......
......@@ -40,10 +40,12 @@ if not exist %work_dir%\debug\ver-%2-x86 (
)
cd %work_dir%\debug\ver-%2-x64
call vcvarsall.bat x64
cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x64
cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DBUILD_TEST=false -DVERNUMBER=%2 -DCPUTYPE=x64
cmake --build .
rd /s /Q C:\TDengine
cmake --install .
for /r c:\TDengine %%i in (*.dll) do signtool sign /f D:\\123.pfx /p taosdata %%i
for /r c:\TDengine %%i in (*.exe) do signtool sign /f D:\\123.pfx /p taosdata %%i
if not %errorlevel% == 0 ( call :RUNFAILED build x64 failed & exit /b 1)
cd %package_dir%
iscc /DMyAppInstallName="%packagServerName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release
......@@ -51,19 +53,7 @@ if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x64% faile
iscc /DMyAppInstallName="%packagClientName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release
if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x64% failed & exit /b 1)
cd %work_dir%\debug\ver-%2-x86
call vcvarsall.bat x86
cmake ../../ -G "NMake Makefiles JOM" -DCMAKE_MAKE_PROGRAM=jom -DBUILD_TOOLS=true -DBUILD_HTTP=false -DVERNUMBER=%2 -DCPUTYPE=x86
cmake --build .
rd /s /Q C:\TDengine
cmake --install .
if not %errorlevel% == 0 ( call :RUNFAILED build x86 failed & exit /b 1)
cd %package_dir%
@REM iscc /DMyAppInstallName="%packagServerName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release
@REM if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x86% failed & exit /b 1)
iscc /DMyAppInstallName="%packagClientName_x86%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release
if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x86% failed & exit /b 1)
for /r ..\release %%i in (*.exe) do signtool sign /f d:\\123.pfx /p taosdata %%i
goto EXIT0
:USAGE
......
......@@ -96,7 +96,12 @@ typedef struct {
typedef struct SQueryExecMetric {
int64_t start; // start timestamp, us
int64_t parsed; // start to parse, us
int64_t syntaxStart; // start to parse, us
int64_t syntaxEnd; // end to parse, us
int64_t ctgStart; // start to parse, us
int64_t ctgEnd; // end to parse, us
int64_t semanticEnd;
int64_t execEnd;
int64_t send; // start to send to server, us
int64_t rsp; // receive response from server, us
} SQueryExecMetric;
......
......@@ -29,6 +29,7 @@ extern "C" {
#define tscDebug(...) do { if (cDebugFlag & DEBUG_DEBUG) { taosPrintLog("TSC ", DEBUG_DEBUG, cDebugFlag, __VA_ARGS__); }} while(0)
#define tscTrace(...) do { if (cDebugFlag & DEBUG_TRACE) { taosPrintLog("TSC ", DEBUG_TRACE, cDebugFlag, __VA_ARGS__); }} while(0)
#define tscDebugL(...) do { if (cDebugFlag & DEBUG_DEBUG) { taosPrintLongString("TSC ", DEBUG_DEBUG, cDebugFlag, __VA_ARGS__); }} while(0)
#define tscPerf(...) do { taosPrintLog("TSC ", 0, cDebugFlag, __VA_ARGS__); } while(0)
#ifdef __cplusplus
}
......
......@@ -69,14 +69,25 @@ static void deregisterRequest(SRequestObj *pRequest) {
int32_t currentInst = atomic_sub_fetch_64((int64_t *)&pActivity->currentRequests, 1);
int32_t num = atomic_sub_fetch_32(&pTscObj->numOfReqs, 1);
int64_t duration = taosGetTimestampUs() - pRequest->metric.start;
int64_t nowUs = taosGetTimestampUs();
int64_t duration = nowUs - pRequest->metric.start;
tscDebug("0x%" PRIx64 " free Request from connObj: 0x%" PRIx64 ", reqId:0x%" PRIx64 " elapsed:%" PRIu64
" ms, current:%d, app current:%d",
pRequest->self, pTscObj->id, pRequest->requestId, duration / 1000, num, currentInst);
if (QUERY_NODE_VNODE_MODIF_STMT == pRequest->stmtType) {
tscPerf("insert duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64 "us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart,
pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
pRequest->metric.execEnd - pRequest->metric.semanticEnd);
atomic_add_fetch_64((int64_t *)&pActivity->insertElapsedTime, duration);
} else if (QUERY_NODE_SELECT_STMT == pRequest->stmtType) {
tscPerf("select duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64 "us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart,
pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
pRequest->metric.execEnd - pRequest->metric.semanticEnd);
atomic_add_fetch_64((int64_t *)&pActivity->queryElapsedTime, duration);
}
......@@ -330,7 +341,6 @@ void doDestroyRequest(void *p) {
schedulerFreeJob(&pRequest->body.queryJob, 0);
taosMemoryFreeClear(pRequest->msgBuf);
taosMemoryFreeClear(pRequest->sqlstr);
taosMemoryFreeClear(pRequest->pDb);
doFreeReqResultInfo(&pRequest->body.resInfo);
......@@ -349,6 +359,7 @@ void doDestroyRequest(void *p) {
taosMemoryFree(pRequest->body.param);
}
taosMemoryFreeClear(pRequest->sqlstr);
taosMemoryFree(pRequest);
tscTrace("end to destroy request %" PRIx64 " p:%p", reqId, pRequest);
}
......@@ -393,7 +404,9 @@ void taos_init_imp(void) {
schedulerInit();
tscDebug("starting to initialize TAOS driver");
#ifndef WINDOWS
taosSetCoreDump(true);
#endif
initTaskQueue();
fmFuncMgtInit();
......
......@@ -238,6 +238,9 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC
TSWAP(pRequest->targetTableList, (*pQuery)->pTargetTableList);
}
taosArrayDestroy(cxt.pTableMetaPos);
taosArrayDestroy(cxt.pTableVgroupPos);
return code;
}
......@@ -839,6 +842,8 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
}
schedulerFreeJob(&pRequest->body.queryJob, 0);
pRequest->metric.execEnd = taosGetTimestampUs();
}
taosMemoryFree(pResult);
......
......@@ -674,6 +674,8 @@ static void destorySqlParseWrapper(SqlParseWrapper *pWrapper) {
taosArrayDestroy(pWrapper->catalogReq.pIndex);
taosArrayDestroy(pWrapper->catalogReq.pUser);
taosArrayDestroy(pWrapper->catalogReq.pTableIndex);
taosArrayDestroy(pWrapper->pCtx->pTableMetaPos);
taosArrayDestroy(pWrapper->pCtx->pTableVgroupPos);
taosMemoryFree(pWrapper->pCtx);
taosMemoryFree(pWrapper);
}
......@@ -683,6 +685,8 @@ void retrieveMetaCallback(SMetaData *pResultMeta, void *param, int32_t code) {
SQuery *pQuery = pWrapper->pQuery;
SRequestObj *pRequest = pWrapper->pRequest;
pRequest->metric.ctgEnd = taosGetTimestampUs();
if (code == TSDB_CODE_SUCCESS) {
code = qAnalyseSqlSemantic(pWrapper->pCtx, &pWrapper->catalogReq, pResultMeta, pQuery);
pRequest->stableQuery = pQuery->stableQuery;
......@@ -691,6 +695,8 @@ void retrieveMetaCallback(SMetaData *pResultMeta, void *param, int32_t code) {
}
}
pRequest->metric.semanticEnd = taosGetTimestampUs();
if (code == TSDB_CODE_SUCCESS) {
if (pQuery->haveResultSet) {
setResSchemaInfo(&pRequest->body.resInfo, pQuery->pResSchema, pQuery->numOfResCols);
......@@ -782,12 +788,16 @@ void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) {
SQuery *pQuery = NULL;
pRequest->metric.syntaxStart = taosGetTimestampUs();
SCatalogReq catalogReq = {.forceUpdate = updateMetaForce, .qNodeRequired = qnodeRequired(pRequest)};
code = qParseSqlSyntax(pCxt, &pQuery, &catalogReq);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
pRequest->metric.syntaxEnd = taosGetTimestampUs();
if (!updateMetaForce) {
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
......@@ -814,6 +824,8 @@ void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) {
.requestObjRefId = pCxt->requestRid,
.mgmtEps = pCxt->mgmtEpSet};
pRequest->metric.ctgStart = taosGetTimestampUs();
code = catalogAsyncGetAllMeta(pCxt->pCatalog, &conn, &catalogReq, retrieveMetaCallback, pWrapper,
&pRequest->body.queryJob);
pCxt = NULL;
......
......@@ -66,8 +66,9 @@ static const SSysDbTableSchema bnodesSchema[] = {
};
static const SSysDbTableSchema clusterSchema[] = {
{.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
{.name = "id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT},
{.name = "name", .bytes = TSDB_CLUSTER_ID_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "uptime", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
};
......
......@@ -61,6 +61,7 @@ int32_t tsNumOfVnodeStreamThreads = 2;
int32_t tsNumOfVnodeFetchThreads = 4;
int32_t tsNumOfVnodeWriteThreads = 2;
int32_t tsNumOfVnodeSyncThreads = 2;
int32_t tsNumOfVnodeRsmaThreads = 2;
int32_t tsNumOfQnodeQueryThreads = 4;
int32_t tsNumOfQnodeFetchThreads = 4;
int32_t tsNumOfSnodeSharedThreads = 2;
......@@ -76,7 +77,7 @@ bool tsMonitorComp = false;
// telem
bool tsEnableTelem = true;
int32_t tsTelemInterval = 86400;
int32_t tsTelemInterval = 43200;
char tsTelemServer[TSDB_FQDN_LEN] = "telemetry.taosdata.com";
uint16_t tsTelemPort = 80;
......@@ -164,6 +165,7 @@ int32_t tsMqRebalanceInterval = 2;
int32_t tsTtlUnit = 86400;
int32_t tsTtlPushInterval = 86400;
int32_t tsGrantHBInterval = 60;
int32_t tsUptimeInterval = 300; // seconds
#ifndef _STORAGE
int32_t taosSetTfsCfg(SConfig *pCfg) {
......@@ -377,6 +379,10 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 16);
if (cfgAddInt32(pCfg, "numOfVnodeSyncThreads", tsNumOfVnodeSyncThreads, 1, 1024, 0) != 0) return -1;
tsNumOfVnodeRsmaThreads = tsNumOfCores;
tsNumOfVnodeRsmaThreads = TMAX(tsNumOfVnodeRsmaThreads, 4);
if (cfgAddInt32(pCfg, "numOfVnodeRsmaThreads", tsNumOfVnodeRsmaThreads, 1, 1024, 0) != 0) return -1;
tsNumOfQnodeQueryThreads = tsNumOfCores * 2;
tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 4);
if (cfgAddInt32(pCfg, "numOfQnodeQueryThreads", tsNumOfQnodeQueryThreads, 1, 1024, 0) != 0) return -1;
......@@ -538,6 +544,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32;
tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32;
tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32;
tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32;
tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32;
tsNumOfQnodeFetchThreads = cfgGetItem(pCfg, "numOfQnodeFetchThreads")->i32;
tsNumOfSnodeSharedThreads = cfgGetItem(pCfg, "numOfSnodeSharedThreads")->i32;
......@@ -782,6 +789,8 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32;
} else if (strcasecmp("numOfVnodeSyncThreads", name) == 0) {
tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32;
} else if (strcasecmp("numOfVnodeRsmaThreads", name) == 0) {
tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32;
} else if (strcasecmp("numOfQnodeQueryThreads", name) == 0) {
tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32;
} else if (strcasecmp("numOfQnodeFetchThreads", name) == 0) {
......
......@@ -27,6 +27,7 @@ void mndCleanupCluster(SMnode *pMnode);
int32_t mndGetClusterName(SMnode *pMnode, char *clusterName, int32_t len);
int64_t mndGetClusterId(SMnode *pMnode);
int64_t mndGetClusterCreateTime(SMnode *pMnode);
float mndGetClusterUpTime(SMnode *pMnode);
#ifdef __cplusplus
}
......
......@@ -179,6 +179,7 @@ typedef struct {
char name[TSDB_CLUSTER_ID_LEN];
int64_t createdTime;
int64_t updateTime;
int32_t upTime;
} SClusterObj;
typedef struct {
......
......@@ -19,7 +19,7 @@
#include "mndTrans.h"
#define CLUSTER_VER_NUMBE 1
#define CLUSTER_RESERVE_SIZE 64
#define CLUSTER_RESERVE_SIZE 60
static SSdbRaw *mndClusterActionEncode(SClusterObj *pCluster);
static SSdbRow *mndClusterActionDecode(SSdbRaw *pRaw);
......@@ -29,6 +29,7 @@ static int32_t mndClusterActionUpdate(SSdb *pSdb, SClusterObj *pOldCluster, SCl
static int32_t mndCreateDefaultCluster(SMnode *pMnode);
static int32_t mndRetrieveClusters(SRpcMsg *pMsg, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows);
static void mndCancelGetNextCluster(SMnode *pMnode, void *pIter);
static int32_t mndProcessUptimeTimer(SRpcMsg *pReq);
int32_t mndInitCluster(SMnode *pMnode) {
SSdbTable table = {
......@@ -42,8 +43,10 @@ int32_t mndInitCluster(SMnode *pMnode) {
.deleteFp = (SdbDeleteFp)mndClusterActionDelete,
};
mndSetMsgHandle(pMnode, TDMT_MND_UPTIME_TIMER, mndProcessUptimeTimer);
mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_CLUSTER, mndRetrieveClusters);
mndAddShowFreeIterHandle(pMnode, TSDB_MGMT_TABLE_CLUSTER, mndCancelGetNextCluster);
return sdbSetTable(pMnode->pSdb, table);
}
......@@ -62,40 +65,69 @@ int32_t mndGetClusterName(SMnode *pMnode, char *clusterName, int32_t len) {
return 0;
}
int64_t mndGetClusterId(SMnode *pMnode) {
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
int64_t clusterId = -1;
static SClusterObj *mndAcquireCluster(SMnode *pMnode) {
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
while (1) {
SClusterObj *pCluster = NULL;
pIter = sdbFetch(pSdb, SDB_CLUSTER, pIter, (void **)&pCluster);
if (pIter == NULL) break;
return pCluster;
}
return NULL;
}
static void mndReleaseCluster(SMnode *pMnode, SClusterObj *pCluster) {
SSdb *pSdb = pMnode->pSdb;
sdbRelease(pSdb, pCluster);
}
int64_t mndGetClusterId(SMnode *pMnode) {
int64_t clusterId = 0;
SClusterObj *pCluster = mndAcquireCluster(pMnode);
if (pCluster != NULL) {
clusterId = pCluster->id;
sdbRelease(pSdb, pCluster);
mndReleaseCluster(pMnode, pCluster);
}
return clusterId;
}
int64_t mndGetClusterCreateTime(SMnode *pMnode) {
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
int64_t createTime = INT64_MAX;
while (1) {
SClusterObj *pCluster = NULL;
pIter = sdbFetch(pSdb, SDB_CLUSTER, pIter, (void **)&pCluster);
if (pIter == NULL) break;
int64_t createTime = 0;
SClusterObj *pCluster = mndAcquireCluster(pMnode);
if (pCluster != NULL) {
createTime = pCluster->createdTime;
sdbRelease(pSdb, pCluster);
mndReleaseCluster(pMnode, pCluster);
}
return createTime;
}
static int32_t mndGetClusterUpTimeImp(SClusterObj *pCluster) {
#if 0
int32_t upTime = taosGetTimestampSec() - pCluster->updateTime / 1000;
upTime = upTime + pCluster->upTime;
return upTime;
#else
return pCluster->upTime;
#endif
}
float mndGetClusterUpTime(SMnode *pMnode) {
int64_t upTime = 0;
SClusterObj *pCluster = mndAcquireCluster(pMnode);
if (pCluster != NULL) {
upTime = mndGetClusterUpTimeImp(pCluster);
mndReleaseCluster(pMnode, pCluster);
}
return upTime / 86400.0f;
}
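
A minimal sketch (hypothetical helper and values, not part of the patch) of the unit handling above: the cluster's upTime is accumulated in seconds and only converted to days, via the 86400.0f divisor, when mndGetClusterUpTime reports it:

```c
/* Sketch only: seconds-to-days conversion as used by mndGetClusterUpTime. */
#include <stdint.h>
#include <stdio.h>

static float uptimeSecToDays(int64_t upTimeSec) {
  return upTimeSec / 86400.0f; /* 86400 seconds per day */
}

int main(void) {
  /* e.g. 30 uptime-timer ticks of 300 seconds each -> 9000 s ~ 0.104 days */
  printf("%.3f days\n", uptimeSecToDays(30 * 300));
  return 0;
}
```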
static SSdbRaw *mndClusterActionEncode(SClusterObj *pCluster) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
......@@ -107,6 +139,7 @@ static SSdbRaw *mndClusterActionEncode(SClusterObj *pCluster) {
SDB_SET_INT64(pRaw, dataPos, pCluster->createdTime, _OVER)
SDB_SET_INT64(pRaw, dataPos, pCluster->updateTime, _OVER)
SDB_SET_BINARY(pRaw, dataPos, pCluster->name, TSDB_CLUSTER_ID_LEN, _OVER)
SDB_SET_INT32(pRaw, dataPos, pCluster->upTime, _OVER)
SDB_SET_RESERVE(pRaw, dataPos, CLUSTER_RESERVE_SIZE, _OVER)
terrno = 0;
......@@ -144,6 +177,7 @@ static SSdbRow *mndClusterActionDecode(SSdbRaw *pRaw) {
SDB_GET_INT64(pRaw, dataPos, &pCluster->createdTime, _OVER)
SDB_GET_INT64(pRaw, dataPos, &pCluster->updateTime, _OVER)
SDB_GET_BINARY(pRaw, dataPos, pCluster->name, TSDB_CLUSTER_ID_LEN, _OVER)
SDB_GET_INT32(pRaw, dataPos, &pCluster->upTime, _OVER)
SDB_GET_RESERVE(pRaw, dataPos, CLUSTER_RESERVE_SIZE, _OVER)
terrno = 0;
......@@ -162,6 +196,7 @@ _OVER:
static int32_t mndClusterActionInsert(SSdb *pSdb, SClusterObj *pCluster) {
mTrace("cluster:%" PRId64 ", perform insert action, row:%p", pCluster->id, pCluster);
pSdb->pMnode->clusterId = pCluster->id;
pCluster->updateTime = taosGetTimestampMs();
return 0;
}
......@@ -171,7 +206,10 @@ static int32_t mndClusterActionDelete(SSdb *pSdb, SClusterObj *pCluster) {
}
static int32_t mndClusterActionUpdate(SSdb *pSdb, SClusterObj *pOld, SClusterObj *pNew) {
mTrace("cluster:%" PRId64 ", perform update action, old row:%p new row:%p", pOld->id, pOld, pNew);
mTrace("cluster:%" PRId64 ", perform update action, old row:%p new row:%p, uptime from %d to %d", pOld->id, pOld,
pNew, pOld->upTime, pNew->upTime);
pOld->upTime = pNew->upTime;
pOld->updateTime = taosGetTimestampMs();
return 0;
}
......@@ -242,6 +280,10 @@ static int32_t mndRetrieveClusters(SRpcMsg *pMsg, SShowObj *pShow, SSDataBlock *
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, buf, false);
int32_t upTime = mndGetClusterUpTimeImp(pCluster);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, (const char *)&upTime, false);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, (const char *)&pCluster->createdTime, false);
......@@ -257,3 +299,40 @@ static void mndCancelGetNextCluster(SMnode *pMnode, void *pIter) {
SSdb *pSdb = pMnode->pSdb;
sdbCancelFetch(pSdb, pIter);
}
static int32_t mndProcessUptimeTimer(SRpcMsg *pReq) {
SMnode *pMnode = pReq->info.node;
SClusterObj clusterObj = {0};
SClusterObj *pCluster = mndAcquireCluster(pMnode);
if (pCluster != NULL) {
memcpy(&clusterObj, pCluster, sizeof(SClusterObj));
clusterObj.upTime += tsUptimeInterval;
mndReleaseCluster(pMnode, pCluster);
}
if (clusterObj.id <= 0) {
mError("can't get cluster info while update uptime");
return 0;
}
mTrace("update cluster uptime to %" PRId64, clusterObj.upTime);
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq);
if (pTrans == NULL) return -1;
SSdbRaw *pCommitRaw = mndClusterActionEncode(&clusterObj);
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
mError("trans:%d, failed to append commit log since %s", pTrans->id, terrstr());
mndTransDrop(pTrans);
return -1;
}
sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
if (mndTransPrepare(pMnode, pTrans) != 0) {
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
mndTransDrop(pTrans);
return -1;
}
mndTransDrop(pTrans);
return 0;
}
......@@ -100,6 +100,16 @@ static void mndGrantHeartBeat(SMnode *pMnode) {
}
}
static void mndIncreaseUpTime(SMnode *pMnode) {
int32_t contLen = 0;
void *pReq = mndBuildTimerMsg(&contLen);
if (pReq != NULL) {
SRpcMsg rpcMsg = {
.msgType = TDMT_MND_UPTIME_TIMER, .pCont = pReq, .contLen = contLen, .info.ahandle = (void *)0x9528};
tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg);
}
}
static void *mndThreadFp(void *param) {
SMnode *pMnode = param;
int64_t lastTime = 0;
......@@ -122,13 +132,17 @@ static void *mndThreadFp(void *param) {
mndCalMqRebalance(pMnode);
}
if (lastTime % (tsTelemInterval * 10) == 0) {
if (lastTime % (tsTelemInterval * 10) == 1) {
mndPullupTelem(pMnode);
}
if (lastTime % (tsGrantHBInterval * 10) == 0) {
mndGrantHeartBeat(pMnode);
}
if ((lastTime % (tsUptimeInterval * 10)) == ((tsUptimeInterval - 1) * 10)) {
mndIncreaseUpTime(pMnode);
}
}
return NULL;
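
The *10 factors in mndThreadFp suggest that lastTime advances once per 100 ms tick, so intervals configured in seconds are scaled by 10 ticks; under that assumption, a minimal sketch (hypothetical names) of the new uptime-timer condition, which fires once per tsUptimeInterval, one second before the interval boundary:

```c
/* Sketch only; assumes lastTime increments once per 100 ms tick. */
#include <stdbool.h>
#include <stdio.h>

static bool shouldIncreaseUptime(long lastTime, int uptimeIntervalSec) {
  return (lastTime % (uptimeIntervalSec * 10)) == ((uptimeIntervalSec - 1) * 10);
}

int main(void) {
  int hits = 0;
  for (long t = 0; t < 300 * 10 * 3; ++t) { /* simulate 3 intervals of 300 s */
    if (shouldIncreaseUptime(t, 300)) ++hits;
  }
  printf("uptime timer fired %d times\n", hits); /* expect 3 */
  return 0;
}
```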
......@@ -556,7 +570,8 @@ static int32_t mndCheckMnodeState(SRpcMsg *pMsg) {
}
if (mndAcquireRpcRef(pMsg->info.node) == 0) return 0;
if (pMsg->msgType == TDMT_MND_MQ_TIMER || pMsg->msgType == TDMT_MND_TELEM_TIMER ||
pMsg->msgType == TDMT_MND_TRANS_TIMER || pMsg->msgType == TDMT_MND_TTL_TIMER) {
pMsg->msgType == TDMT_MND_TRANS_TIMER || pMsg->msgType == TDMT_MND_TTL_TIMER ||
pMsg->msgType == TDMT_MND_UPTIME_TIMER) {
return -1;
}
......@@ -705,7 +720,8 @@ int32_t mndGetMonitorInfo(SMnode *pMnode, SMonClusterInfo *pClusterInfo, SMonVgr
if (pObj->id == pMnode->selfDnodeId) {
pClusterInfo->first_ep_dnode_id = pObj->id;
tstrncpy(pClusterInfo->first_ep, pObj->pDnode->ep, sizeof(pClusterInfo->first_ep));
pClusterInfo->master_uptime = (ms - pObj->stateStartTime) / (86400000.0f);
pClusterInfo->master_uptime = mndGetClusterUpTime(pMnode);
// pClusterInfo->master_uptime = (ms - pObj->stateStartTime) / (86400000.0f);
tstrncpy(desc.role, syncStr(TAOS_SYNC_STATE_LEADER), sizeof(desc.role));
} else {
tstrncpy(desc.role, syncStr(pObj->state), sizeof(desc.role));
......
......@@ -442,6 +442,8 @@ static void *mndBuildVCreateStbReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pSt
if (req.rollup) {
req.rsmaParam.maxdelay[0] = pStb->maxdelay[0];
req.rsmaParam.maxdelay[1] = pStb->maxdelay[1];
req.rsmaParam.watermark[0] = pStb->watermark[0];
req.rsmaParam.watermark[1] = pStb->watermark[1];
if (pStb->ast1Len > 0) {
if (mndConvertRsmaTask(&req.rsmaParam.qmsg[0], &req.rsmaParam.qmsgLen[0], pStb->pAst1, pStb->uid,
STREAM_TRIGGER_WINDOW_CLOSE, req.rsmaParam.watermark[0]) < 0) {
......
......@@ -68,7 +68,7 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
if (pMgmt->errCode != 0) {
mError("trans:%d, failed to propose since %s, post sem", transId, tstrerror(pMgmt->errCode));
} else {
mInfo("trans:%d, is proposed and post sem", transId, tstrerror(pMgmt->errCode));
mDebug("trans:%d, is proposed and post sem", transId, tstrerror(pMgmt->errCode));
}
pMgmt->transId = 0;
taosWUnLockLatch(&pMgmt->lock);
......@@ -118,7 +118,7 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
pMgmt->errCode = cbMeta.code;
mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64, pMgmt->transId,
mDebug("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64, pMgmt->transId,
cbMeta.code, cbMeta.index, cbMeta.term);
taosWLockLatch(&pMgmt->lock);
......@@ -126,7 +126,7 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
if (pMgmt->errCode != 0) {
mError("trans:-1, failed to propose sync reconfig since %s, post sem", tstrerror(pMgmt->errCode));
} else {
mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64 " post sem",
mDebug("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64 " post sem",
pMgmt->transId, cbMeta.code, cbMeta.index, cbMeta.term);
}
pMgmt->transId = 0;
......@@ -228,7 +228,7 @@ int32_t mndInitSync(SMnode *pMnode) {
syncInfo.isStandBy = pMgmt->standby;
syncInfo.snapshotStrategy = SYNC_STRATEGY_STANDARD_SNAPSHOT;
mInfo("start to open mnode sync, standby:%d", pMgmt->standby);
mDebug("start to open mnode sync, standby:%d", pMgmt->standby);
if (pMgmt->standby || pMgmt->replica.id > 0) {
SSyncCfg *pCfg = &syncInfo.syncCfg;
pCfg->replicaNum = 1;
......@@ -236,7 +236,7 @@ int32_t mndInitSync(SMnode *pMnode) {
SNodeInfo *pNode = &pCfg->nodeInfo[0];
tstrncpy(pNode->nodeFqdn, pMgmt->replica.fqdn, sizeof(pNode->nodeFqdn));
pNode->nodePort = pMgmt->replica.port;
mInfo("mnode ep:%s:%u", pNode->nodeFqdn, pNode->nodePort);
mDebug("mnode ep:%s:%u", pNode->nodeFqdn, pNode->nodePort);
}
tsem_init(&pMgmt->syncSem, 0, 0);
......
......@@ -131,7 +131,9 @@ static int32_t mndProcessTelemTimer(SRpcMsg* pReq) {
char* pCont = mndBuildTelemetryReport(pMnode);
if (pCont != NULL) {
if (taosSendHttpReport(tsTelemServer, tsTelemPort, pCont, strlen(pCont), HTTP_FLAT) != 0) {
mError("failed to send telemetry msg");
mError("failed to send telemetry report");
} else {
mTrace("succeed to send telemetry report");
}
taosMemoryFree(pCont);
}
......
......@@ -1308,7 +1308,7 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
if (pTrans->policy == TRN_POLICY_ROLLBACK) {
if (pTrans->lastAction != 0) {
STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->lastAction);
if (pAction->retryCode != 0 && pAction->retryCode != pAction->errCode) {
if (pAction->retryCode != 0 && pAction->retryCode == pAction->errCode) {
if (pTrans->failedTimes < 6) {
mError("trans:%d, stage keep on redoAction since action:%d code:0x%x not 0x%x, failedTimes:%d", pTrans->id,
pTrans->lastAction, pTrans->code, pAction->retryCode, pTrans->failedTimes);
......
......@@ -5,7 +5,9 @@ target_link_libraries(
PUBLIC sut
)
add_test(
NAME smaTest
COMMAND smaTest
)
if(NOT ${TD_WINDOWS})
add_test(
NAME smaTest
COMMAND smaTest
)
endif(NOT ${TD_WINDOWS})
......@@ -5,7 +5,9 @@ target_link_libraries(
PUBLIC sut
)
add_test(
NAME stbTest
COMMAND stbTest
)
\ No newline at end of file
if(NOT ${TD_WINDOWS})
add_test(
NAME stbTest
COMMAND stbTest
)
endif(NOT ${TD_WINDOWS})
\ No newline at end of file
......@@ -32,7 +32,7 @@ extern "C" {
#define smaTrace(...) do { if (smaDebugFlag & DEBUG_TRACE) { taosPrintLog("SMA ", DEBUG_TRACE, tsdbDebugFlag, __VA_ARGS__); }} while(0)
// clang-format on
#define RSMA_TASK_INFO_HASH_SLOT 8
#define RSMA_TASK_INFO_HASH_SLOT (8)
typedef struct SSmaEnv SSmaEnv;
typedef struct SSmaStat SSmaStat;
......@@ -48,9 +48,12 @@ typedef struct SQTaskFWriter SQTaskFWriter;
struct SSmaEnv {
SRWLatch lock;
int8_t type;
int8_t flag; // 0x01 inClose
SSmaStat *pStat;
};
#define SMA_ENV_FLG_CLOSE ((int8_t)0x1)
typedef struct {
int8_t inited;
int32_t rsetId;
......@@ -90,14 +93,13 @@ struct SRSmaStat {
SSma *pSma;
int64_t commitAppliedVer; // vnode applied version for async commit
int64_t refId; // shared by fetch tasks
volatile int64_t qBufSize; // queue buffer size
volatile int64_t nBufItems; // number of items in queue buffer
SRWLatch lock; // r/w lock for rsma fs(e.g. qtaskinfo)
int8_t triggerStat; // shared by fetch tasks
int8_t commitStat; // 0 not in committing, 1 in committing
int8_t execStat; // 0 not in exec , 1 in exec
SArray *aTaskFile; // qTaskFiles committed recently(for recovery/snapshot r/w)
SHashObj *infoHash; // key: suid, value: SRSmaInfo
SHashObj *fetchHash; // key: suid, value: L1 or L2 or L1|L2
tsem_t notEmpty; // has items in queue buffer
};
struct SSmaStat {
......@@ -106,31 +108,34 @@ struct SSmaStat {
SRSmaStat rsmaStat; // rollup sma
};
T_REF_DECLARE()
char data[];
};
#define SMA_STAT_TSMA(s) (&(s)->tsmaStat)
#define SMA_STAT_RSMA(s) (&(s)->rsmaStat)
#define RSMA_INFO_HASH(r) ((r)->infoHash)
#define RSMA_FETCH_HASH(r) ((r)->fetchHash)
#define RSMA_TRIGGER_STAT(r) (&(r)->triggerStat)
#define RSMA_COMMIT_STAT(r) (&(r)->commitStat)
#define RSMA_REF_ID(r) ((r)->refId)
#define RSMA_FS_LOCK(r) (&(r)->lock)
struct SRSmaInfoItem {
int8_t level;
int8_t level : 4;
int8_t fetchLevel : 4;
int8_t triggerStat;
uint16_t interval; // second
int32_t maxDelay;
uint16_t nSkipped;
int32_t maxDelay; // ms
tmr_h tmrId;
};
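
As a note on the reworked SRSmaInfoItem layout above, a minimal sketch (hypothetical struct name; int8_t bit-fields are a GCC/Clang extension, as used here) showing that level and fetchLevel now share a single byte:

```c
/* Sketch only: two 4-bit fields packed into one byte. */
#include <stdint.h>
#include <stdio.h>

struct itemSketch {
  int8_t level : 4;
  int8_t fetchLevel : 4;
};

int main(void) {
  struct itemSketch it = {.level = 1, .fetchLevel = 2};
  printf("sizeof=%zu level=%d fetchLevel=%d\n", sizeof(it), it.level, it.fetchLevel);
  return 0;
}
```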
struct SRSmaInfo {
STSchema *pTSchema;
int64_t suid;
int64_t refId; // refId of SRSmaStat
uint64_t delFlag : 1;
uint64_t lastReceived : 63; // second
int64_t refId; // refId of SRSmaStat
int64_t lastRecv; // ms
int8_t assigned; // 0 idle, 1 assigned for exec
int8_t delFlag;
int16_t padding;
T_REF_DECLARE()
SRSmaInfoItem items[TSDB_RETENTION_L2];
void *taskInfo[TSDB_RETENTION_L2]; // qTaskInfo_t
......
......@@ -198,8 +198,6 @@ int32_t smaAsyncPreCommit(SSma* pSma);
int32_t smaAsyncCommit(SSma* pSma);
int32_t smaAsyncPostCommit(SSma* pSma);
int32_t smaDoRetention(SSma* pSma, int64_t now);
int32_t smaProcessFetch(SSma* pSma, void* pMsg);
int32_t smaProcessExec(SSma* pSma, void* pMsg);
int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg);
int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg);
......
......@@ -109,7 +109,7 @@ int32_t smaBegin(SSma *pSma) {
/**
* @brief pre-commit for rollup sma(sync commit).
* 1) set trigger stat of rsma timer TASK_TRIGGER_STAT_PAUSED.
* 2) wait all triggered fetch tasks finished
* 2) wait for all triggered fetch tasks to finish
* 3) perform persist task for qTaskInfo
*
* @param pSma
......@@ -127,14 +127,14 @@ static int32_t tdProcessRSmaSyncPreCommitImpl(SSma *pSma) {
// step 1: set rsma stat paused
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_PAUSED);
// step 2: wait all triggered fetch tasks finished
// step 2: wait for all triggered fetch tasks to finish
int32_t nLoops = 0;
while (1) {
if (T_REF_VAL_GET(pStat) == 0) {
smaDebug("vgId:%d, rsma fetch tasks all finished", SMA_VID(pSma));
smaDebug("vgId:%d, rsma fetch tasks are all finished", SMA_VID(pSma));
break;
} else {
smaDebug("vgId:%d, rsma fetch tasks not all finished yet", SMA_VID(pSma));
smaDebug("vgId:%d, rsma fetch tasks are not all finished yet", SMA_VID(pSma));
}
++nLoops;
if (nLoops > 1000) {
......@@ -316,15 +316,17 @@ static int32_t tdProcessRSmaAsyncPreCommitImpl(SSma *pSma) {
// step 1: set rsma stat
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_PAUSED);
atomic_store_8(RSMA_COMMIT_STAT(pRSmaStat), 1);
pRSmaStat->commitAppliedVer = pSma->pVnode->state.applied;
ASSERT(pRSmaStat->commitAppliedVer > 0);
// step 2: wait all triggered fetch tasks finished
// step 2: wait for all triggered fetch tasks to finish
int32_t nLoops = 0;
while (1) {
if (T_REF_VAL_GET(pStat) == 0) {
smaDebug("vgId:%d, rsma fetch tasks all finished", SMA_VID(pSma));
smaDebug("vgId:%d, rsma commit, fetch tasks are all finished", SMA_VID(pSma));
break;
} else {
smaDebug("vgId:%d, rsma fetch tasks not all finished yet", SMA_VID(pSma));
smaDebug("vgId:%d, rsma commit, fetch tasks are not all finished yet", SMA_VID(pSma));
}
++nLoops;
if (nLoops > 1000) {
......@@ -338,30 +340,29 @@ static int32_t tdProcessRSmaAsyncPreCommitImpl(SSma *pSma) {
* 1) This is a high-cost task and originally should not be put in asyncPreCommit.
* 2) But if put in asyncCommit, it would trigger taskInfo cloning frequently.
*/
nLoops = 0;
smaInfo("vgId:%d, start to wait for rsma qtask free, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
if (tdRSmaProcessExecImpl(pSma, RSMA_EXEC_COMMIT) < 0) {
return TSDB_CODE_FAILED;
}
int8_t old;
while (1) {
old = atomic_val_compare_exchange_8(&pRSmaStat->execStat, 0, 1);
if (old == 0) break;
if (++nLoops > 1000) {
smaInfo("vgId:%d, rsma commit, wait for all items to be consumed, TID:%p", SMA_VID(pSma), (void*)taosGetSelfPthreadId());
nLoops = 0;
while (atomic_load_64(&pRSmaStat->nBufItems) > 0) {
++nLoops;
if (nLoops > 1000) {
sched_yield();
nLoops = 0;
smaDebug("vgId:%d, wait for rsma qtask free, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
}
}
smaInfo("vgId:%d, end to wait for rsma qtask free, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
if (tdRSmaProcessExecImpl(pSma, RSMA_EXEC_COMMIT) < 0) {
atomic_store_8(&pRSmaStat->execStat, 0);
smaInfo("vgId:%d, rsma commit, all items are consumed, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
if (tdRSmaPersistExecImpl(pRSmaStat, RSMA_INFO_HASH(pRSmaStat)) < 0) {
return TSDB_CODE_FAILED;
}
smaInfo("vgId:%d, rsma commit, operator state commited, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
#if 0 // consuming task of qTaskInfo clone
// step 4: swap queue/qall and iQueue/iQall
// lock
taosWLockLatch(SMA_ENV_LOCK(pEnv));
// taosWLockLatch(SMA_ENV_LOCK(pEnv));
ASSERT(RSMA_INFO_HASH(pRSmaStat));
......@@ -376,13 +377,9 @@ static int32_t tdProcessRSmaAsyncPreCommitImpl(SSma *pSma) {
pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter);
}
atomic_store_64(&pRSmaStat->qBufSize, 0);
atomic_store_8(&pRSmaStat->execStat, 0);
// unlock
taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
// step 5: others
pRSmaStat->commitAppliedVer = pSma->pVnode->state.applied;
// taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
#endif
return TSDB_CODE_SUCCESS;
}
......@@ -398,13 +395,14 @@ static int32_t tdProcessRSmaAsyncCommitImpl(SSma *pSma) {
if (!pSmaEnv) {
return TSDB_CODE_SUCCESS;
}
#if 0
SRSmaStat *pRSmaStat = (SRSmaStat *)SMA_ENV_STAT(pSmaEnv);
// perform persist task for qTaskInfo operator
if (tdRSmaPersistExecImpl(pRSmaStat, RSMA_INFO_HASH(pRSmaStat)) < 0) {
return TSDB_CODE_FAILED;
}
#endif
return TSDB_CODE_SUCCESS;
}
......@@ -426,10 +424,10 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
// step 1: merge qTaskInfo and iQTaskInfo
// lock
taosWLockLatch(SMA_ENV_LOCK(pEnv));
// taosWLockLatch(SMA_ENV_LOCK(pEnv));
void *pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), NULL);
while (pIter) {
void *pIter = NULL;
while ((pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter))) {
tb_uid_t *pSuid = (tb_uid_t *)taosHashGetKey(pIter, NULL);
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)pIter;
if (RSMA_INFO_IS_DEL(pRSmaInfo)) {
......@@ -447,14 +445,13 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
SMA_VID(pSma), refVal, *pSuid);
}
pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter);
continue;
}
#if 0
if (pRSmaInfo->taskInfo[0]) {
if (pRSmaInfo->iTaskInfo[0]) {
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)pRSmaInfo->iTaskInfo[0];
tdFreeRSmaInfo(pSma, pRSmaInfo, true);
tdFreeRSmaInfo(pSma, pRSmaInfo, false);
pRSmaInfo->iTaskInfo[0] = NULL;
}
} else {
......@@ -463,8 +460,7 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
taosHashPut(RSMA_INFO_HASH(pRSmaStat), pSuid, sizeof(tb_uid_t), pIter, sizeof(pIter));
smaDebug("vgId:%d, rsma async post commit, migrated from iRsmaInfoHash for table:%" PRIi64, SMA_VID(pSma), *pSuid);
pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter);
#endif
}
for (int32_t i = 0; i < taosArrayGetSize(rsmaDeleted); ++i) {
......@@ -480,10 +476,9 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
taosHashRemove(RSMA_INFO_HASH(pRSmaStat), pSuid, sizeof(tb_uid_t));
}
taosArrayDestroy(rsmaDeleted);
// TODO: remove suid in files?
// unlock
taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
// taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
// step 2: cleanup outdated qtaskinfo files
tdCleanupQTaskInfoFiles(pSma, pRSmaStat);
......
......@@ -23,11 +23,13 @@ extern SSmaMgmt smaMgmt;
// declaration of static functions
static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pSma);
static SSmaEnv *tdNewSmaEnv(const SSma *pSma, int8_t smaType, const char *path);
static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, const char *path, SSmaEnv **pEnv);
static void *tdFreeTSmaStat(STSmaStat *pStat);
static void tdDestroyRSmaStat(void *pRSmaStat);
static int32_t tdNewSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv);
static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv);
static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pSma);
static int32_t tdRsmaStartExecutor(const SSma *pSma);
static int32_t tdRsmaStopExecutor(const SSma *pSma);
static void *tdFreeTSmaStat(STSmaStat *pStat);
static void tdDestroyRSmaStat(void *pRSmaStat);
/**
* @brief rsma init
......@@ -97,35 +99,42 @@ void smaCleanUp() {
}
}
static SSmaEnv *tdNewSmaEnv(const SSma *pSma, int8_t smaType, const char *path) {
static int32_t tdNewSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv) {
SSmaEnv *pEnv = NULL;
pEnv = (SSmaEnv *)taosMemoryCalloc(1, sizeof(SSmaEnv));
*ppEnv = pEnv;
if (!pEnv) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return NULL;
return TSDB_CODE_FAILED;
}
SMA_ENV_TYPE(pEnv) = smaType;
taosInitRWLatch(&(pEnv->lock));
(smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), *ppEnv)
: atomic_store_ptr(&SMA_RSMA_ENV(pSma), *ppEnv);
if (tdInitSmaStat(&SMA_ENV_STAT(pEnv), smaType, pSma) != TSDB_CODE_SUCCESS) {
tdFreeSmaEnv(pEnv);
return NULL;
*ppEnv = NULL;
(smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), NULL)
: atomic_store_ptr(&SMA_RSMA_ENV(pSma), NULL);
return TSDB_CODE_FAILED;
}
return pEnv;
return TSDB_CODE_SUCCESS;
}
static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, const char *path, SSmaEnv **pEnv) {
if (!pEnv) {
static int32_t tdInitSmaEnv(SSma *pSma, int8_t smaType, SSmaEnv **ppEnv) {
if (!ppEnv) {
terrno = TSDB_CODE_INVALID_PTR;
return TSDB_CODE_FAILED;
}
if (!(*pEnv)) {
if (!(*pEnv = tdNewSmaEnv(pSma, smaType, path))) {
if (!(*ppEnv)) {
if (tdNewSmaEnv(pSma, smaType, ppEnv) != TSDB_CODE_SUCCESS) {
return TSDB_CODE_FAILED;
}
}
......@@ -199,7 +208,7 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
* tdInitSmaStat invoked in other multithread environment later.
*/
if (!(*pSmaStat)) {
*pSmaStat = (SSmaStat *)taosMemoryCalloc(1, sizeof(SSmaStat));
*pSmaStat = (SSmaStat *)taosMemoryCalloc(1, sizeof(SSmaStat) + sizeof(TdThread) * tsNumOfVnodeRsmaThreads);
if (!(*pSmaStat)) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
......@@ -209,6 +218,7 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
SRSmaStat *pRSmaStat = (SRSmaStat *)(*pSmaStat);
pRSmaStat->pSma = (SSma *)pSma;
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_INIT);
tsem_init(&pRSmaStat->notEmpty, 0, 0);
// init smaMgmt
smaInit();
......@@ -231,9 +241,7 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
return TSDB_CODE_FAILED;
}
RSMA_FETCH_HASH(pRSmaStat) = taosHashInit(
RSMA_TASK_INFO_HASH_SLOT, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
if (!RSMA_FETCH_HASH(pRSmaStat)) {
if (tdRsmaStartExecutor(pSma) < 0) {
return TSDB_CODE_FAILED;
}
} else if (smaType == TSDB_SMA_TYPE_TIME_RANGE) {
......@@ -267,6 +275,7 @@ static void tdDestroyRSmaStat(void *pRSmaStat) {
smaDebug("vgId:%d, destroy rsma stat %p", SMA_VID(pSma), pRSmaStat);
// step 1: set rsma trigger stat cancelled
atomic_store_8(RSMA_TRIGGER_STAT(pStat), TASK_TRIGGER_STAT_CANCELLED);
tsem_destroy(&(pStat->notEmpty));
// step 2: destroy the rsma info and associated fetch tasks
if (taosHashGetSize(RSMA_INFO_HASH(pStat)) > 0) {
......@@ -279,17 +288,14 @@ static void tdDestroyRSmaStat(void *pRSmaStat) {
}
taosHashCleanup(RSMA_INFO_HASH(pStat));
// step 3: destroy the rsma fetch hash
taosHashCleanup(RSMA_FETCH_HASH(pStat));
// step 4: wait all triggered fetch tasks finished
// step 3: wait for all triggered fetch tasks to finish
int32_t nLoops = 0;
while (1) {
if (T_REF_VAL_GET((SSmaStat *)pStat) == 0) {
smaDebug("vgId:%d, rsma fetch tasks all finished", SMA_VID(pSma));
smaDebug("vgId:%d, rsma fetch tasks are all finished", SMA_VID(pSma));
break;
} else {
smaDebug("vgId:%d, rsma fetch tasks not all finished yet", SMA_VID(pSma));
smaDebug("vgId:%d, rsma fetch tasks are not all finished yet", SMA_VID(pSma));
}
++nLoops;
if (nLoops > 1000) {
......@@ -298,6 +304,9 @@ static void tdDestroyRSmaStat(void *pRSmaStat) {
}
}
// step 4:
tdRsmaStopExecutor(pSma);
// step 5: free pStat
taosMemoryFreeClear(pStat);
}
......@@ -388,17 +397,70 @@ int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType) {
pEnv = (smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_load_ptr(&SMA_TSMA_ENV(pSma))
: atomic_load_ptr(&SMA_RSMA_ENV(pSma));
if (!pEnv) {
char rname[TSDB_FILENAME_LEN] = {0};
if (tdInitSmaEnv(pSma, smaType, rname, &pEnv) < 0) {
if (tdInitSmaEnv(pSma, smaType, &pEnv) < 0) {
tdUnLockSma(pSma);
return TSDB_CODE_FAILED;
}
(smaType == TSDB_SMA_TYPE_TIME_RANGE) ? atomic_store_ptr(&SMA_TSMA_ENV(pSma), pEnv)
: atomic_store_ptr(&SMA_RSMA_ENV(pSma), pEnv);
}
tdUnLockSma(pSma);
return TSDB_CODE_SUCCESS;
};
void *tdRSmaExecutorFunc(void *param) {
setThreadName("vnode-rsma");
tdRSmaProcessExecImpl((SSma *)param, RSMA_EXEC_OVERFLOW);
return NULL;
}
static int32_t tdRsmaStartExecutor(const SSma *pSma) {
TdThreadAttr thAttr = {0};
taosThreadAttrInit(&thAttr);
taosThreadAttrSetDetachState(&thAttr, PTHREAD_CREATE_JOINABLE);
SSmaEnv *pEnv = SMA_RSMA_ENV(pSma);
SSmaStat *pStat = SMA_ENV_STAT(pEnv);
TdThread *pthread = (TdThread *)&pStat->data;
for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) {
if (taosThreadCreate(&pthread[i], &thAttr, tdRSmaExecutorFunc, (void *)pSma) != 0) {
terrno = TAOS_SYSTEM_ERROR(errno);
smaError("vgId:%d, failed to create pthread for rsma since %s", SMA_VID(pSma), terrstr());
return -1;
}
smaDebug("vgId:%d, success to create pthread for rsma", SMA_VID(pSma));
}
taosThreadAttrDestroy(&thAttr);
return 0;
}
static int32_t tdRsmaStopExecutor(const SSma *pSma) {
if (pSma && VND_IS_RSMA(pSma->pVnode)) {
SSmaEnv *pEnv = NULL;
SSmaStat *pStat = NULL;
SRSmaStat *pRSmaStat = NULL;
TdThread *pthread = NULL;
if (!(pEnv = SMA_RSMA_ENV(pSma)) || !(pStat = SMA_ENV_STAT(pEnv))) {
return 0;
}
pEnv->flag |= SMA_ENV_FLG_CLOSE;
pRSmaStat = (SRSmaStat *)pStat;
pthread = (TdThread *)&pStat->data;
for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) {
tsem_post(&(pRSmaStat->notEmpty));
}
for (int32_t i = 0; i < tsNumOfVnodeRsmaThreads; ++i) {
if (taosCheckPthreadValid(pthread[i])) {
smaDebug("vgId:%d, start to join pthread for rsma:%" PRId64, SMA_VID(pSma), pthread[i]);
taosThreadJoin(pthread[i], NULL);
}
}
}
return 0;
}
\ No newline at end of file
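
A generic, minimal sketch (plain POSIX threads and semaphores instead of the TdThread/tsem wrappers) of the shutdown handshake implemented by tdRsmaStopExecutor above: set the close flag, post the not-empty semaphore once per worker so every blocked thread wakes up, then join them all:

```c
/* Sketch only: wake-all-and-join shutdown for semaphore-blocked workers. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N_WORKERS 2

static sem_t notEmpty;
static volatile int closing = 0;

static void *worker(void *arg) {
  (void)arg;
  for (;;) {
    sem_wait(&notEmpty); /* sleep until an item arrives or shutdown begins */
    if (closing) break;  /* close flag set (cf. SMA_ENV_FLG_CLOSE): exit */
    /* ... consume one queued item here ... */
  }
  return NULL;
}

int main(void) {
  pthread_t tid[N_WORKERS];
  sem_init(&notEmpty, 0, 0);
  for (int i = 0; i < N_WORKERS; ++i) pthread_create(&tid[i], NULL, worker, NULL);

  closing = 1;                                             /* request shutdown */
  for (int i = 0; i < N_WORKERS; ++i) sem_post(&notEmpty); /* wake every worker */
  for (int i = 0; i < N_WORKERS; ++i) pthread_join(tid[i], NULL);

  sem_destroy(&notEmpty);
  puts("all workers joined");
  return 0;
}
```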
......@@ -375,6 +375,9 @@ int32_t tdCloneRSmaInfo(SSma *pSma, SRSmaInfo *pInfo) {
if (TABLE_IS_ROLLUP(mr.me.flags)) {
param = &mr.me.stbEntry.rsmaParam;
for (int32_t i = 0; i < TSDB_RETENTION_L2; ++i) {
if (!pInfo->iTaskInfo[i]) {
continue;
}
if (tdCloneQTaskInfo(pSma, pInfo->taskInfo[i], pInfo->iTaskInfo[i], param, pInfo->suid, i) < 0) {
goto _err;
}
......
......@@ -79,6 +79,10 @@ STQ* tqOpen(const char* path, SVnode* pVnode) {
ASSERT(0);
}
if (streamLoadTasks(pTq->pStreamMeta) < 0) {
ASSERT(0);
}
return pTq;
}
......@@ -648,17 +652,28 @@ int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask) {
// expand executor
if (pTask->taskLevel == TASK_LEVEL__SOURCE) {
pTask->pState = streamStateOpen(pTq->pStreamMeta->path, pTask);
if (pTask->pState == NULL) {
return -1;
}
SReadHandle handle = {
.meta = pTq->pVnode->pMeta,
.vnode = pTq->pVnode,
.initTqReader = 1,
.pStateBackend = pTask->pState,
};
pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &handle);
ASSERT(pTask->exec.executor);
} else if (pTask->taskLevel == TASK_LEVEL__AGG) {
pTask->pState = streamStateOpen(pTq->pStreamMeta->path, pTask);
if (pTask->pState == NULL) {
return -1;
}
SReadHandle mgHandle = {
.vnode = NULL,
.numOfVgroups = (int32_t)taosArrayGetSize(pTask->childEpInfo),
.pStateBackend = pTask->pState,
};
pTask->exec.executor = qCreateStreamExecTaskInfo(pTask->exec.qmsg, &mgHandle);
ASSERT(pTask->exec.executor);
......
......@@ -422,6 +422,8 @@ typedef struct {
STsdb *pTsdb; // [input]
SBlockIdx *pBlockIdxExp; // [input]
STSchema *pTSchema; // [input]
tb_uid_t suid;
tb_uid_t uid;
int32_t nFileSet;
int32_t iFileSet;
SArray *aDFileSet;
......@@ -494,6 +496,8 @@ static int32_t getNextRowFromFSLast(void *iter, TSDBROW **ppRow) {
if (!state->pBlockDataL) {
state->pBlockDataL = &state->blockDataL;
tBlockDataCreate(state->pBlockDataL);
}
code = tBlockDataInit(state->pBlockDataL, suid, suid ? 0 : uid, state->pTSchema);
if (code) goto _err;
......@@ -593,6 +597,9 @@ typedef struct SFSNextRowIter {
SFSNEXTROWSTATES state; // [input]
STsdb *pTsdb; // [input]
SBlockIdx *pBlockIdxExp; // [input]
STSchema *pTSchema; // [input]
tb_uid_t suid;
tb_uid_t uid;
int32_t nFileSet;
int32_t iFileSet;
SArray *aDFileSet;
......@@ -685,6 +692,10 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow) {
tMapDataGetItemByIdx(&state->blockMap, state->iBlock, &block, tGetBlock);
/* code = tsdbReadBlockData(state->pDataFReader, &state->blockIdx, &block, &state->blockData, NULL, NULL); */
tBlockDataReset(state->pBlockData);
code = tBlockDataInit(state->pBlockData, state->suid, state->uid, state->pTSchema);
if (code) goto _err;
code = tsdbReadDataBlock(state->pDataFReader, &block, state->pBlockData);
if (code) goto _err;
......@@ -958,16 +969,21 @@ static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTs
pIter->idx = (SBlockIdx){.suid = suid, .uid = uid};
pIter->fsLastState.state = (SFSLASTNEXTROWSTATES) SFSNEXTROW_FS;
pIter->fsLastState.state = (SFSLASTNEXTROWSTATES)SFSNEXTROW_FS;
pIter->fsLastState.pTsdb = pTsdb;
pIter->fsLastState.aDFileSet = pIter->pReadSnap->fs.aDFileSet;
pIter->fsLastState.pBlockIdxExp = &pIter->idx;
pIter->fsLastState.pTSchema = pTSchema;
pIter->fsLastState.suid = suid;
pIter->fsLastState.uid = uid;
pIter->fsState.state = SFSNEXTROW_FS;
pIter->fsState.pTsdb = pTsdb;
pIter->fsState.aDFileSet = pIter->pReadSnap->fs.aDFileSet;
pIter->fsState.pBlockIdxExp = &pIter->idx;
pIter->fsState.pTSchema = pTSchema;
pIter->fsState.suid = suid;
pIter->fsState.uid = uid;
pIter->input[0] = (TsdbNextRowState){&pIter->memRow, true, false, &pIter->memState, getNextRowFromMem, NULL};
pIter->input[1] = (TsdbNextRowState){&pIter->imemRow, true, false, &pIter->imemState, getNextRowFromMem, NULL};
......
......@@ -220,13 +220,6 @@ int vnodeCommit(SVnode *pVnode) {
vInfo("vgId:%d, start to commit, commit ID:%" PRId64 " version:%" PRId64, TD_VID(pVnode), pVnode->state.commitID,
pVnode->state.applied);
// preCommit
// smaSyncPreCommit(pVnode->pSma);
smaAsyncPreCommit(pVnode->pSma);
vnodeBufPoolUnRef(pVnode->inUse);
pVnode->inUse = NULL;
pVnode->state.commitTerm = pVnode->state.applyTerm;
// save info
......@@ -241,6 +234,16 @@ int vnodeCommit(SVnode *pVnode) {
}
walBeginSnapshot(pVnode->pWal, pVnode->state.applied);
// preCommit
// smaSyncPreCommit(pVnode->pSma);
if (smaAsyncPreCommit(pVnode->pSma) < 0) {
ASSERT(0);
return -1;
}
vnodeBufPoolUnRef(pVnode->inUse);
pVnode->inUse = NULL;
// commit each sub-system
if (metaCommit(pVnode->pMeta) < 0) {
ASSERT(0);
......@@ -248,7 +251,10 @@ int vnodeCommit(SVnode *pVnode) {
}
if (VND_IS_RSMA(pVnode)) {
smaAsyncCommit(pVnode->pSma);
if (smaAsyncCommit(pVnode->pSma) < 0) {
ASSERT(0);
return -1;
}
if (tsdbCommit(VND_RSMA0(pVnode)) < 0) {
ASSERT(0);
......@@ -285,7 +291,10 @@ int vnodeCommit(SVnode *pVnode) {
// postCommit
// smaSyncPostCommit(pVnode->pSma);
smaAsyncPostCommit(pVnode->pSma);
if (smaAsyncPostCommit(pVnode->pSma) < 0) {
ASSERT(0);
return -1;
}
// apply the commit (TODO)
walEndSnapshot(pVnode->pWal);
......
......@@ -301,10 +301,6 @@ int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg) {
return qWorkerProcessQueryMsg(&handle, pVnode->pQuery, pMsg, 0);
case TDMT_SCH_QUERY_CONTINUE:
return qWorkerProcessCQueryMsg(&handle, pVnode->pQuery, pMsg, 0);
case TDMT_VND_FETCH_RSMA:
return smaProcessFetch(pVnode->pSma, pMsg);
case TDMT_VND_EXEC_RSMA:
return smaProcessExec(pVnode->pSma, pMsg);
default:
vError("unknown msg type:%d in query queue", pMsg->msgType);
return TSDB_CODE_VND_APP_ERROR;
......@@ -382,14 +378,14 @@ static int32_t vnodeProcessTrimReq(SVnode *pVnode, int64_t version, void *pReq,
int32_t code = 0;
SVTrimDbReq trimReq = {0};
vInfo("vgId:%d, trim vnode request will be processed, time:%d", pVnode->config.vgId, trimReq.timestamp);
// decode
if (tDeserializeSVTrimDbReq(pReq, len, &trimReq) != 0) {
code = TSDB_CODE_INVALID_MSG;
goto _exit;
}
vInfo("vgId:%d, trim vnode request will be processed, time:%d", pVnode->config.vgId, trimReq.timestamp);
// process
code = tsdbDoRetention(pVnode->pTsdb, trimReq.timestamp);
if (code) goto _exit;
......
......@@ -893,7 +893,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, SRequestConnInfo *pConn, SArray*
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
SName name;
SName name = {0};
int32_t sver = 0;
int32_t tver = 0;
int32_t tbNum = taosArrayGetSize(pTables);
......
......@@ -100,7 +100,6 @@ extern "C" {
typedef struct SExplainGroup {
int32_t nodeNum;
int32_t physiPlanExecNum;
int32_t physiPlanNum;
int32_t physiPlanExecIdx;
SRWLatch lock;
SSubplan *plan;
......
......@@ -296,8 +296,6 @@ int32_t qExplainGenerateResNode(SPhysiNode *pNode, SExplainGroup *group, SExplai
QRY_ERR_JRET(qExplainGenerateResChildren(pNode, group, &resNode->pChildren));
++group->physiPlanNum;
*pResNode = resNode;
return TSDB_CODE_SUCCESS;
......@@ -1548,12 +1546,6 @@ int32_t qExplainAppendGroupResRows(void *pCtx, int32_t groupId, int32_t level) {
QRY_ERR_RET(qExplainGenerateResNode(group->plan->pNode, group, &node));
if ((EXPLAIN_MODE_ANALYZE == ctx->mode) && (group->physiPlanNum != group->physiPlanExecNum)) {
qError("physiPlanNum %d mismatch with physiExecNum %d in group %d", group->physiPlanNum, group->physiPlanExecNum,
groupId);
QRY_ERR_JRET(TSDB_CODE_QRY_APP_ERROR);
}
QRY_ERR_JRET(qExplainResNodeToRows(node, ctx, level));
_return:
......@@ -1578,12 +1570,9 @@ int32_t qExplainGetRspFromCtx(void *ctx, SRetrieveTableRsp **pRsp) {
SColumnInfoData *pInfoData = taosArrayGet(pBlock->pDataBlock, 0);
char buf[1024] = {0};
for (int32_t i = 0; i < rowNum; ++i) {
SQueryExplainRowInfo *row = taosArrayGet(pCtx->rows, i);
varDataCopy(buf, row->buf);
ASSERT(varDataTLen(row->buf) == row->len);
colDataAppend(pInfoData, i, buf, false);
colDataAppend(pInfoData, i, row->buf, false);
}
pBlock->info.rows = rowNum;
......
......@@ -119,6 +119,7 @@ SSDataBlock* createResDataBlock(SDataBlockDescNode* pNode);
EDealRes doTranslateTagExpr(SNode** pNode, void* pContext);
int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond, STableListInfo* pListInfo);
int32_t getGroupIdFromTagsVal(void* pMeta, uint64_t uid, SNodeList* pGroupNode, char* keyBuf, uint64_t* pGroupId);
int32_t getColInfoResultForGroupby(void* metaHandle, SNodeList* group, STableListInfo* pTableListInfo);
size_t getTableTagsBufLen(const SNodeList* pGroups);
SArray* createSortInfo(SNodeList* pNodeList);
......
......@@ -150,6 +150,7 @@ typedef struct {
SQueryTableDataCond tableCond;
int64_t recoverStartVer;
int64_t recoverEndVer;
SStreamState* pState;
} SStreamTaskInfo;
typedef struct {
......@@ -1016,7 +1017,7 @@ bool functionNeedToExecute(SqlFunctionCtx* pCtx);
bool isOverdue(TSKEY ts, STimeWindowAggSupp* pSup);
bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
bool isDeletedWindow(STimeWindow* pWin, uint64_t groupId, SAggSupporter* pSup);
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, int32_t uidCol, uint64_t* pID);
void printDataBlock(SSDataBlock* pBlock, const char* flag);
int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
......
(This diff has been collapsed.)
......@@ -352,12 +352,14 @@ int32_t qCreateExecTask(SReadHandle* readHandle, int32_t vgId, uint64_t taskId,
int32_t code = createExecTaskInfoImpl(pSubplan, pTask, readHandle, taskId, sql, model);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to createExecTaskInfoImpl, code: %s", tstrerror(code));
goto _error;
}
SDataSinkMgtCfg cfg = {.maxDataBlockNum = 10000, .maxDataBlockNumPerQuery = 5000};
code = dsDataSinkMgtInit(&cfg);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to dsDataSinkMgtInit, code: %s", tstrerror(code));
goto _error;
}
......@@ -365,6 +367,7 @@ int32_t qCreateExecTask(SReadHandle* readHandle, int32_t vgId, uint64_t taskId,
void* pSinkParam = NULL;
code = createDataSinkParam(pSubplan->pDataSink, &pSinkParam, pTaskInfo, readHandle);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to createDataSinkParam, code: %s", tstrerror(code));
goto _error;
}
......
......@@ -140,20 +140,6 @@ static int32_t doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock,
static void initCtxOutputBuffer(SqlFunctionCtx* pCtx, int32_t size);
static void doSetTableGroupOutputBuf(SOperatorInfo* pOperator, int32_t numOfOutput, uint64_t groupId);
// setup the output buffer for each operator
static bool hasNull(SColumn* pColumn, SColumnDataAgg* pStatis) {
if (TSDB_COL_IS_TAG(pColumn->flag) || TSDB_COL_IS_UD_COL(pColumn->flag) ||
pColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
return false;
}
if (pStatis != NULL && pStatis->numOfNull == 0) {
return false;
}
return true;
}
#if 0
static bool chkResultRowFromKey(STaskRuntimeEnv* pRuntimeEnv, SResultRowInfo* pResultRowInfo, char* pData,
int16_t bytes, bool masterscan, uint64_t uid) {
......@@ -381,7 +367,7 @@ static void functionCtxSave(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
static void functionCtxRestore(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
pCtx->input.colDataAggIsSet = pStatus->hasAgg;
pCtx->input.numOfRows = pStatus->numOfRows;
pCtx->input.numOfRows = pStatus->numOfRows;
pCtx->input.startRowIndex = pStatus->startOffset;
}
......@@ -3139,6 +3125,7 @@ int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result) {
initResultRow(resultRow);
pInfo->resultRowInfo.cur = (SResultRowPosition){.pageId = resultRow->pageId, .offset = resultRow->offset};
// releaseBufPage(pSup->pResultBuf, getBufPage(pSup->pResultBuf, pageId));
}
if (offset != length) {
......@@ -3330,7 +3317,11 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
pInfo->curGroupId = pInfo->pRes->info.groupId; // the first data block
pInfo->totalInputRows += pInfo->pRes->info.rows;
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.ekey);
if (order == pInfo->pFillInfo->order) {
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.ekey);
} else {
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.skey);
}
taosFillSetInputDataBlock(pInfo->pFillInfo, pInfo->pRes);
} else if (pInfo->curGroupId != pBlock->info.groupId) { // the new group data block
pInfo->existNewGroupBlock = pBlock;
......@@ -3699,13 +3690,20 @@ static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t
const char* id, SInterval* pInterval, int32_t fillType, int32_t order) {
SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pNotFillExpr, numOfNotFillCols, pValNode);
STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, win.skey);
w = getFirstQualifiedTimeWindow(win.skey, &w, pInterval, TSDB_ORDER_ASC);
int64_t startKey = (order == TSDB_ORDER_ASC) ? win.skey : win.ekey;
STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, startKey);
w = getFirstQualifiedTimeWindow(startKey, &w, pInterval, order);
pInfo->pFillInfo = taosCreateFillInfo(w.skey, numOfCols, numOfNotFillCols, capacity, pInterval, fillType, pColInfo,
pInfo->primaryTsCol, order, id);
pInfo->win = win;
if (order == TSDB_ORDER_ASC) {
pInfo->win.skey = win.skey;
pInfo->win.ekey = win.ekey;
} else {
pInfo->win.skey = win.ekey;
pInfo->win.ekey = win.skey;
}
pInfo->p = taosMemoryCalloc(numOfCols, POINTER_BYTES);
if (pInfo->pFillInfo == NULL || pInfo->p == NULL) {
......@@ -3882,9 +3880,9 @@ static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
tDeleteSSchemaWrapper(pSchemaInfo->qsw);
}
static int32_t sortTableGroup(STableListInfo* pTableListInfo, int32_t groupNum) {
static int32_t sortTableGroup(STableListInfo* pTableListInfo) {
taosArrayClear(pTableListInfo->pGroupList);
SArray* sortSupport = taosArrayInit(groupNum, sizeof(uint64_t));
SArray* sortSupport = taosArrayInit(16, sizeof(uint64_t));
if (sortSupport == NULL) return TSDB_CODE_OUT_OF_MEMORY;
for (int32_t i = 0; i < taosArrayGetSize(pTableListInfo->pTableList); i++) {
STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
......@@ -3962,48 +3960,26 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
if (pTableListInfo->map == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
int32_t keyLen = 0;
void* keyBuf = NULL;
SNode* node;
FOREACH(node, group) {
SExprNode* pExpr = (SExprNode*)node;
keyLen += pExpr->resType.bytes;
}
int32_t nullFlagSize = sizeof(int8_t) * LIST_LENGTH(group);
keyLen += nullFlagSize;
keyBuf = taosMemoryCalloc(1, keyLen);
if (keyBuf == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
bool assignUid = groupbyTbname(group);
int32_t groupNum = 0;
size_t numOfTables = taosArrayGetSize(pTableListInfo->pTableList);
for (int32_t i = 0; i < numOfTables; i++) {
STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
size_t numOfTables = taosArrayGetSize(pTableListInfo->pTableList);
if (assignUid) {
if (assignUid) {
for (int32_t i = 0; i < numOfTables; i++) {
STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
info->groupId = info->uid;
} else {
int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
taosHashPut(pTableListInfo->map, &(info->uid), sizeof(uint64_t), &info->groupId, sizeof(uint64_t));
}
} else {
int32_t code = getColInfoResultForGroupby(pHandle->meta, group, pTableListInfo);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
taosHashPut(pTableListInfo->map, &(info->uid), sizeof(uint64_t), &info->groupId, sizeof(uint64_t));
groupNum++;
}
taosMemoryFree(keyBuf);
if (pTableListInfo->needSortTableByGroupId) {
return sortTableGroup(pTableListInfo, groupNum);
return sortTableGroup(pTableListInfo);
}
return TDB_CODE_SUCCESS;
......@@ -4048,6 +4024,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
qError("failed to createScanTableListInfo, code: %s", tstrerror(code));
return NULL;
}
......@@ -4067,6 +4044,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
qError("failed to createScanTableListInfo, code: %s", tstrerror(code));
return NULL;
}
......@@ -4090,6 +4068,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
qError("failed to createScanTableListInfo, code: %s", tstrerror(code));
return NULL;
}
......@@ -4112,6 +4091,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
int32_t code = getTableList(pHandle->meta, pHandle->vnode, pScanPhyNode, pTagCond, pTagIndexCond, pTableListInfo);
if (code != TSDB_CODE_SUCCESS) {
pTaskInfo->code = terrno;
qError("failed to getTableList, code: %s", tstrerror(code));
return NULL;
}
......@@ -4610,6 +4590,10 @@ int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SRead
goto _complete;
}
if (pHandle && pHandle->pStateBackend) {
(*pTaskInfo)->streamInfo.pState = pHandle->pStateBackend;
}
(*pTaskInfo)->sql = sql;
sql = NULL;
(*pTaskInfo)->pSubplan = pPlan;
......
......@@ -1086,7 +1086,10 @@ static int32_t generateSessionScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSr
SColumnInfoData* pDestStartCol = taosArrayGet(pDestBlock->pDataBlock, START_TS_COLUMN_INDEX);
SColumnInfoData* pDestEndCol = taosArrayGet(pDestBlock->pDataBlock, END_TS_COLUMN_INDEX);
SColumnInfoData* pDestUidCol = taosArrayGet(pDestBlock->pDataBlock, UID_COLUMN_INDEX);
SColumnInfoData* pDestGpCol = taosArrayGet(pDestBlock->pDataBlock, GROUPID_COLUMN_INDEX);
SColumnInfoData* pDestCalStartTsCol = taosArrayGet(pDestBlock->pDataBlock, CALCULATE_START_TS_COLUMN_INDEX);
SColumnInfoData* pDestCalEndTsCol = taosArrayGet(pDestBlock->pDataBlock, CALCULATE_END_TS_COLUMN_INDEX);
int32_t dummy = 0;
for (int32_t i = 0; i < pSrcBlock->info.rows; i++) {
uint64_t groupId = getGroupId(pInfo->pTableScanOp, uidCol[i]);
......@@ -1100,9 +1103,13 @@ static int32_t generateSessionScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSr
SResultWindowInfo* pEndWin =
getCurSessionWindow(pInfo->sessionSup.pStreamAggSup, endData[i], endData[i], groupId, 0, &dummy);
ASSERT(pEndWin);
TSKEY ts = INT64_MIN;
colDataAppend(pDestStartCol, i, (const char*)&pStartWin->win.skey, false);
colDataAppend(pDestEndCol, i, (const char*)&pEndWin->win.ekey, false);
colDataAppendNULL(pDestUidCol, i);
colDataAppend(pDestGpCol, i, (const char*)&groupId, false);
colDataAppendNULL(pDestCalStartTsCol, i);
colDataAppendNULL(pDestCalEndTsCol, i);
pDestBlock->info.rows++;
}
return TSDB_CODE_SUCCESS;
......@@ -1157,13 +1164,13 @@ static int32_t generateScanRange(SStreamScanInfo* pInfo, SSDataBlock* pSrcBlock,
return code;
}
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid) {
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, int32_t uidCol, uint64_t* pID) {
SColumnInfoData* pStartTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX);
SColumnInfoData* pEndTsCol = taosArrayGet(pBlock->pDataBlock, END_TS_COLUMN_INDEX);
SColumnInfoData* pUidCol = taosArrayGet(pBlock->pDataBlock, UID_COLUMN_INDEX);
SColumnInfoData* pUidCol = taosArrayGet(pBlock->pDataBlock, uidCol);
colDataAppend(pStartTsCol, pBlock->info.rows, (const char*)pStartTs, false);
colDataAppend(pEndTsCol, pBlock->info.rows, (const char*)pEndTs, false);
colDataAppend(pUidCol, pBlock->info.rows, (const char*)pUid, false);
colDataAppend(pUidCol, pBlock->info.rows, (const char*)pID, false);
pBlock->info.rows++;
}
......@@ -1190,7 +1197,7 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
bool closedWin = isClosed && isSignleIntervalWindow(pInfo) &&
isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup);
if ((update || closedWin) && out) {
appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, &pBlock->info.uid);
appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, UID_COLUMN_INDEX, &pBlock->info.uid);
}
}
if (out) {
......@@ -1274,6 +1281,42 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStreamScanInfo* pInfo = pOperator->info;
#if 0
SStreamState* pState = pTaskInfo->streamInfo.pState;
if (pState) {
printf(">>>>>>>> stream write backend\n");
SWinKey key = {
.ts = 1,
.groupId = 2,
};
char tmp[100] = "abcdefg1";
if (streamStatePut(pState, &key, &tmp, strlen(tmp) + 1) < 0) {
ASSERT(0);
}
key.ts = 2;
char tmp2[100] = "abcdefg2";
if (streamStatePut(pState, &key, &tmp2, strlen(tmp2) + 1) < 0) {
ASSERT(0);
}
key.groupId = 5;
key.ts = 1;
char tmp3[100] = "abcdefg3";
if (streamStatePut(pState, &key, &tmp3, strlen(tmp3) + 1) < 0) {
ASSERT(0);
}
char* val2 = NULL;
int32_t sz;
if (streamStateGet(pState, &key, (void**)&val2, &sz) < 0) {
ASSERT(0);
}
printf("stream read %s %d\n", val2, sz);
streamFreeVal(val2);
}
#endif
qDebug("stream scan called");
if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) {
while (1) {
......@@ -2640,6 +2683,7 @@ int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags
int32_t code = getTableList(pHandle->meta, pHandle->vnode, pScanNode, pTagCond, pTagIndexCond, pTableListInfo);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to getTableList, code: %s", tstrerror(code));
return code;
}
......
......@@ -36,6 +36,7 @@
#define GET_DEST_SLOT_ID(_p) ((_p)->pExpr->base.resSchema.slotId)
static void doSetVal(SColumnInfoData* pDstColInfoData, int32_t rowIndex, const SGroupKeys* pKey);
static bool fillIfWindowPseudoColumn(SFillInfo* pFillInfo, SFillColInfo* pCol, SColumnInfoData* pDstColInfoData, int32_t rowIndex);
static void setNullRow(SSDataBlock* pBlock, SFillInfo* pFillInfo, int32_t rowIndex) {
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
......@@ -43,9 +44,8 @@ static void setNullRow(SSDataBlock* pBlock, SFillInfo* pFillInfo, int32_t rowInd
int32_t dstSlotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pDstColInfo = taosArrayGet(pBlock->pDataBlock, dstSlotId);
if (pCol->notFillCol) {
if (pDstColInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstColInfo, rowIndex, (const char*)&pFillInfo->currentKey, false);
} else {
bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfo, rowIndex);
if (!filled) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstColInfo, rowIndex, pKey);
......@@ -76,6 +76,35 @@ static void doSetUserSpecifiedValue(SColumnInfoData* pDst, SVariant* pVar, int32
}
}
//fill windows pseudo column, _wstart, _wend, _wduration and return true, otherwise return false
static bool fillIfWindowPseudoColumn(SFillInfo* pFillInfo, SFillColInfo* pCol, SColumnInfoData* pDstColInfoData, int32_t rowIndex) {
if (!pCol->notFillCol) {
return false;
}
if (pCol->pExpr->pExpr->nodeType == QUERY_NODE_COLUMN) {
if (pCol->pExpr->base.numOfParams != 1) {
return false;
}
if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_START) {
colDataAppend(pDstColInfoData, rowIndex, (const char*)&pFillInfo->currentKey, false);
return true;
} else if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_END) {
//TODO: include endpoint
SInterval* pInterval = &pFillInfo->interval;
int32_t step = (pFillInfo->order == TSDB_ORDER_ASC) ? 1 : -1;
int64_t windowEnd =
taosTimeAdd(pFillInfo->currentKey, pInterval->sliding * step, pInterval->slidingUnit, pInterval->precision);
colDataAppend(pDstColInfoData, rowIndex, (const char*)&windowEnd, false);
return true;
} else if (pCol->pExpr->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_DURATION) {
//TODO: include endpoint
colDataAppend(pDstColInfoData, rowIndex, (const char*)&pFillInfo->interval.sliding, false);
return true;
}
}
return false;
}
static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* pSrcBlock, int64_t ts,
bool outOfBound) {
SPoint point1, point2, point;
......@@ -92,10 +121,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol));
if (pDstColInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstColInfoData, index, (const char*)&pFillInfo->currentKey, false);
} else {
bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfoData, index);
if (!filled) {
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstColInfoData, index, pKey);
}
......@@ -106,10 +133,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol));
if (pDstColInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstColInfoData, index, (const char*)&pFillInfo->currentKey, false);
} else {
bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstColInfoData, index);
if (!filled) {
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstColInfoData, index, pKey);
}
......@@ -127,9 +152,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
int16_t type = pDstCol->info.type;
if (pCol->notFillCol) {
if (type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstCol, index, (const char*)&pFillInfo->currentKey, false);
} else {
bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDstCol, index);
if (!filled) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstCol, index, pKey);
......@@ -170,9 +194,8 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, slotId);
if (pCol->notFillCol) {
if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
} else {
bool filled = fillIfWindowPseudoColumn(pFillInfo, pCol, pDst, index);
if (!filled) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
......@@ -540,7 +563,7 @@ int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, TSKEY ekey, int32_t ma
int64_t numOfRes = -1;
if (numOfRows > 0) { // still fill gap within current data block, not generating data after the result set.
TSKEY lastKey = (TSDB_ORDER_ASC == pFillInfo->order ? tsList[pFillInfo->numOfRows - 1] : tsList[0]);
TSKEY lastKey = tsList[pFillInfo->numOfRows - 1];
numOfRes = taosTimeCountInterval(lastKey, pFillInfo->currentKey, pFillInfo->interval.sliding,
pFillInfo->interval.slidingUnit, pFillInfo->interval.precision);
numOfRes += 1;
......
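A worked example (assuming millisecond precision and a non-calendar sliding unit, so taosTimeAdd reduces to a plain addition) of the window pseudo-column values produced by the fillIfWindowPseudoColumn helper above:

```c
/* Sketch only: how _wend and _wduration relate to the current fill key. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
  int64_t currentKey = 1700000000000LL; /* _wstart of the filled row, in ms */
  int64_t sliding    = 600000LL;        /* 10-minute sliding interval, in ms */
  int32_t step       = 1;               /* +1 for ascending fill, -1 for descending */

  int64_t wend      = currentKey + sliding * step; /* _wend */
  int64_t wduration = sliding;                     /* _wduration */

  printf("_wstart=%lld _wend=%lld _wduration=%lld\n",
         (long long)currentKey, (long long)wend, (long long)wduration);
  return 0;
}
```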
......@@ -468,7 +468,7 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
pResInfo->isNullRes = (pResInfo->isNullRes == 1) ? 1 : (pResInfo->numOfRes == 0);
pResInfo->isNullRes = (pResInfo->numOfRes == 0) ? 1 : 0;
char* in = GET_ROWCELL_INTERBUF(pResInfo);
colDataAppend(pCol, pBlock->info.rows, in, pResInfo->isNullRes);
......@@ -498,7 +498,7 @@ int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock,
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
pResInfo->isNullRes = (pResInfo->isNullRes == 1) ? 1 : (pResInfo->numOfRes == 0);;
pResInfo->isNullRes = (pResInfo->numOfRes == 0) ? 1 : 0;
char* in = finalResult;
colDataAppend(pCol, pBlock->info.rows, in, pResInfo->isNullRes);
......@@ -663,8 +663,7 @@ int32_t sumFunction(SqlFunctionCtx* pCtx) {
// check for overflow
if (IS_FLOAT_TYPE(type) && (isinf(pSumRes->dsum) || isnan(pSumRes->dsum))) {
GET_RES_INFO(pCtx)->isNullRes = 1;
numOfElem = 1;
numOfElem = 0;
}
_sum_over:
......@@ -791,8 +790,7 @@ int32_t avgFunction(SqlFunctionCtx* pCtx) {
int32_t numOfRows = pInput->numOfRows;
if (IS_NULL_TYPE(type)) {
GET_RES_INFO(pCtx)->isNullRes = 1;
numOfElem = 1;
numOfElem = 0;
goto _avg_over;
}
@@ -1613,7 +1611,7 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
int32_t currentRow = pBlock->info.rows;
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
pEntryInfo->isNullRes = (pEntryInfo->isNullRes == 1) ? 1 : (pEntryInfo->numOfRes == 0);
pEntryInfo->isNullRes = (pEntryInfo->numOfRes == 0) ? 1 : 0;
if (pCol->info.type == TSDB_DATA_TYPE_FLOAT) {
float v = *(double*)&pRes->v;
@@ -1792,8 +1790,7 @@ int32_t stddevFunction(SqlFunctionCtx* pCtx) {
int32_t numOfRows = pInput->numOfRows;
if (IS_NULL_TYPE(type)) {
GET_RES_INFO(pCtx)->isNullRes = 1;
numOfElem = 1;
numOfElem = 0;
goto _stddev_over;
}
@@ -255,6 +255,13 @@ static int32_t sifInitOperParams(SIFParam **params, SOperatorNode *node, SIFCtx
if (node->opType == OP_TYPE_JSON_GET_VALUE) {
return code;
}
if ((node->pLeft != NULL && nodeType(node->pLeft) == QUERY_NODE_COLUMN) &&
(node->pRight != NULL && nodeType(node->pRight) == QUERY_NODE_VALUE)) {
SColumnNode *cn = (SColumnNode *)(node->pLeft);
if (cn->node.resType.type == TSDB_DATA_TYPE_JSON) {
SIF_ERR_RET(TSDB_CODE_QRY_INVALID_INPUT);
}
}
SIFParam *paramList = taosMemoryCalloc(nParam, sizeof(SIFParam));
if (NULL == paramList) {
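The block added to sifInitOperParams above rejects index filtering when the left operand is a JSON-typed column compared directly against a value node, returning TSDB_CODE_QRY_INVALID_INPUT before any filter parameters are built. A simplified sketch of the guard's shape, with all node types mocked for illustration:

```c
/* Simplified stand-ins for the operand check; not the real node structures. */
#include <stdio.h>

typedef enum { NODE_COLUMN, NODE_VALUE } MockNodeType;
typedef enum { TYPE_INT, TYPE_JSON } MockDataType;

typedef struct {
  MockNodeType nodeType;
  MockDataType resType;
} MockNode;

static int checkOperands(const MockNode *pLeft, const MockNode *pRight) {
  if (pLeft != NULL && pLeft->nodeType == NODE_COLUMN &&
      pRight != NULL && pRight->nodeType == NODE_VALUE &&
      pLeft->resType == TYPE_JSON) {
    return -1; /* stands in for SIF_ERR_RET(TSDB_CODE_QRY_INVALID_INPUT) */
  }
  return 0;
}

int main(void) {
  MockNode jsonTagCol = {NODE_COLUMN, TYPE_JSON};
  MockNode intCol     = {NODE_COLUMN, TYPE_INT};
  MockNode literal    = {NODE_VALUE, TYPE_INT};
  printf("json column vs literal: %d\n", checkOperands(&jsonTagCol, &literal)); /* -1: rejected */
  printf("int column vs literal:  %d\n", checkOperands(&intCol, &literal));     /*  0: accepted */
  return 0;
}
```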
@@ -817,6 +817,7 @@ void nodesDestroyNode(SNode* pNode) {
destroyLogicNode((SLogicNode*)pLogicNode);
nodesDestroyNode(pLogicNode->pWStartTs);
nodesDestroyNode(pLogicNode->pValues);
nodesDestroyList(pLogicNode->pFillExprs);
break;
}
case QUERY_NODE_LOGIC_PLAN_SORT: {
@@ -125,6 +125,37 @@ static int32_t skipInsertInto(char** pSql, SMsgBuf* pMsg) {
return TSDB_CODE_SUCCESS;
}
static char* tableNameGetPosition(SToken* pToken, char target) {
bool inEscape = false;
bool inQuote = false;
char quotaStr = 0;
for (uint32_t i = 0; i < pToken->n; ++i) {
if (*(pToken->z + i) == target && (!inEscape) && (!inQuote)) {
return pToken->z + i;
}
if (*(pToken->z + i) == TS_ESCAPE_CHAR) {
if (!inQuote) {
inEscape = !inEscape;
}
}
if (*(pToken->z + i) == '\'' || *(pToken->z + i) == '"') {
if (!inEscape) {
if (!inQuote) {
quotaStr = *(pToken->z + i);
inQuote = !inQuote;
} else if (quotaStr == *(pToken->z + i)) {
inQuote = !inQuote;
}
}
}
}
return NULL;
}
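A standalone usage sketch for the new tableNameGetPosition() helper above, which createSName() now uses in place of a plain character search: the separator is matched only outside backtick escapes and quoted strings, so a dot inside an escaped identifier is not mistaken for the db.table delimiter. SToken and TS_ESCAPE_CHAR are mocked below so the example builds on its own; the logic mirrors the function shown in the hunk.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TS_ESCAPE_CHAR '`' /* assumption: backtick, as used for escaped identifiers */

typedef struct {
  char    *z;
  uint32_t n;
} SToken; /* minimal mock of the parser token */

static char *tableNameGetPosition(SToken *pToken, char target) {
  bool inEscape = false;
  bool inQuote = false;
  char quoteChar = 0;
  for (uint32_t i = 0; i < pToken->n; ++i) {
    char c = pToken->z[i];
    if (c == target && !inEscape && !inQuote) {
      return pToken->z + i;
    }
    if (c == TS_ESCAPE_CHAR && !inQuote) {
      inEscape = !inEscape;
    }
    if ((c == '\'' || c == '"') && !inEscape) {
      if (!inQuote) {
        quoteChar = c;
        inQuote = true;
      } else if (quoteChar == c) {
        inQuote = false;
      }
    }
  }
  return NULL;
}

int main(void) {
  char plain[] = "db1.tb1";
  char escaped[] = "`my.db`.tb1";
  SToken t1 = {plain, (uint32_t)strlen(plain)};
  SToken t2 = {escaped, (uint32_t)strlen(escaped)};
  printf("plain:   '.' at offset %ld\n", (long)(tableNameGetPosition(&t1, '.') - plain));   /* 3 */
  printf("escaped: '.' at offset %ld\n", (long)(tableNameGetPosition(&t2, '.') - escaped)); /* 7 */
  return 0;
}
```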
static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, const char* dbName, SMsgBuf* pMsgBuf) {
const char* msg1 = "name too long";
const char* msg2 = "invalid database name";
@@ -132,7 +163,7 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
const char* msg4 = "invalid table name";
int32_t code = TSDB_CODE_SUCCESS;
char* p = strnchr(pTableName->z, TS_PATH_DELIMITER[0], pTableName->n, true);
char* p = tableNameGetPosition(pTableName, TS_PATH_DELIMITER[0]);
if (p != NULL) { // db has been specified in sql string so we ignore current db path
assert(*p == TS_PATH_DELIMITER[0]);
@@ -681,6 +712,11 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
break;
}
char tmpTokenBuf[TSDB_COL_NAME_LEN + 2] = {0}; // used for deleting Escape character backstick(`)
strncpy(tmpTokenBuf, sToken.z, sToken.n);
sToken.z = tmpTokenBuf;
sToken.n = strdequote(sToken.z);
col_id_t t = lastColIdx + 1;
col_id_t index = findCol(&sToken, t, nCols, pSchema);
if (index < 0 && t > 0) {
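The parseBoundColumns hunk above copies each bound-column token into a temporary buffer and strips the surrounding backticks or quotes before the schema lookup, so a column list such as (`ts`, `current`) resolves the same columns as the unquoted form. The stand-in below only removes one pair of surrounding quote characters and is not the real strdequote() implementation.

```c
#include <stdio.h>
#include <string.h>

/* simplified stand-in for strdequote(): strips one pair of surrounding
 * backticks or quotes in place and returns the new length */
static size_t mockStrdequote(char *z) {
  size_t n = strlen(z);
  if (n >= 2 && (z[0] == '`' || z[0] == '\'' || z[0] == '"') && z[n - 1] == z[0]) {
    memmove(z, z + 1, n - 2);
    z[n - 2] = '\0';
    return n - 2;
  }
  return n;
}

int main(void) {
  char tmpTokenBuf[64] = "`current`";
  size_t n = mockStrdequote(tmpTokenBuf);
  printf("dequoted column name: %s (len=%zu)\n", tmpTokenBuf, n); /* current (len=7) */
  return 0;
}
```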
@@ -1686,9 +1722,17 @@ static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, bool isStable, i
return TSDB_CODE_SUCCESS;
}
static int32_t checkTableName(const char* pTableName, SMsgBuf* pMsgBuf) {
if (NULL != strchr(pTableName, '.')) {
return generateSyntaxErrMsgExt(pMsgBuf, TSDB_CODE_PAR_INVALID_IDENTIFIER_NAME, "The table name cannot contain '.'");
}
return TSDB_CODE_SUCCESS;
}
static int32_t collectAutoCreateTableMetaKey(SInsertParseSyntaxCxt* pCxt, int32_t tableNo, SToken* pTbToken) {
SName name;
CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
CHECK_CODE(checkTableName(name.tname, &pCxt->msg));
CHECK_CODE(reserveTableMetaInCacheForInsert(&name, CATALOG_REQ_TYPE_VGROUP, tableNo, pCxt->pMetaCache));
return TSDB_CODE_SUCCESS;
}
@@ -1159,6 +1159,16 @@ void destoryParseMetaCache(SParseMetaCache* pMetaCache, bool request) {
taosHashCleanup(pMetaCache->pTableMeta);
taosHashCleanup(pMetaCache->pTableVgroup);
}
SInsertTablesMetaReq* p = taosHashIterate(pMetaCache->pInsertTables, NULL);
while (NULL != p) {
taosArrayDestroy(p->pTableMetaPos);
taosArrayDestroy(p->pTableMetaReq);
taosArrayDestroy(p->pTableVgroupPos);
taosArrayDestroy(p->pTableVgroupReq);
p = taosHashIterate(pMetaCache->pInsertTables, p);
}
taosHashCleanup(pMetaCache->pInsertTables);
taosHashCleanup(pMetaCache->pDbVgroup);
taosHashCleanup(pMetaCache->pDbCfg);
taosHashCleanup(pMetaCache->pDbInfo);
@@ -149,13 +163,10 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryStop) {
}
}
taosArrayDestroy(pResList);
QW_RET(code);
_return:
taosArrayDestroy(pResList);
return code;
taosArrayDestroy(pResList);
QW_RET(code);
}
int32_t qwGenerateSchHbRsp(SQWorker *mgmt, SQWSchStatus *sch, SQWHbInfo *hbInfo) {
(Two file diffs are collapsed on this page.)
@@ -358,7 +358,7 @@ int32_t streamDispatchAllBlocks(SStreamTask* pTask, const SStreamDataBlock* pDat
FAIL_SHUFFLE_DISPATCH:
if (pReqs) {
for (int32_t i = 0; i < vgSz; i++) {
taosArrayDestroy(pReqs[i].data);
taosArrayDestroyP(pReqs[i].data, taosMemoryFree);
taosArrayDestroy(pReqs[i].dataLen);
}
taosMemoryFree(pReqs);
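The failure path in streamDispatchAllBlocks above switches from taosArrayDestroy() to taosArrayDestroyP() with taosMemoryFree for pReqs[i].data, so the heap-allocated elements held by each array are released together with the array storage. The sketch below illustrates the difference using simplified stand-ins for the array API.

```c
/* Simplified stand-ins for the array API; not the real taosArray functions. */
#include <stdlib.h>
#include <string.h>

typedef struct {
  void **items;
  int    count;
} MockArray; /* stand-in for an SArray that stores heap-allocated elements */

static char *dupString(const char *s) {
  char *p = malloc(strlen(s) + 1);
  strcpy(p, s);
  return p;
}

static MockArray *mockArrayInit(void) {
  MockArray *pArr = calloc(1, sizeof(*pArr));
  pArr->items = calloc(4, sizeof(void *));
  return pArr;
}

static void mockArrayPush(MockArray *pArr, void *p) { pArr->items[pArr->count++] = p; }

/* analogous to taosArrayDestroy(): releases only the container */
static void mockArrayDestroy(MockArray *pArr) {
  free(pArr->items);
  free(pArr);
}

/* analogous to taosArrayDestroyP(): releases each element, then the container */
static void mockArrayDestroyP(MockArray *pArr, void (*freeFp)(void *)) {
  for (int i = 0; i < pArr->count; ++i) {
    freeFp(pArr->items[i]);
  }
  mockArrayDestroy(pArr);
}

int main(void) {
  MockArray *pData = mockArrayInit();
  mockArrayPush(pData, dupString("serialized block 1"));
  mockArrayPush(pData, dupString("serialized block 2"));
  /* a plain mockArrayDestroy() here would leak both strings */
  mockArrayDestroyP(pData, free);
  return 0;
}
```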
(The remaining file diffs in this commit are collapsed on this page.)