Commit de9b4358
Authored on Jul 21, 2022 by Shengliang Guan

    Merge remote-tracking branch 'origin/3.0' into fix/tsim

Parents: 5175138e, 8a2fb511

Showing 53 changed files with 2045 additions and 623 deletions (+2045 −623)
docs/zh/05-get-started/03-package.md                      +96  −40
docs/zh/05-get-started/06-first-use.md                    +0   −135
docs/zh/10-deployment/01-deploy.md                        +4   −8
docs/zh/13-operation/01-pkg-install.md                    +75  −3
include/common/tcommon.h                                  +1   −0
include/common/tmsg.h                                     +2   −1
include/libs/executor/executor.h                          +1   −1
include/libs/stream/tstream.h                             +1   −0
source/client/test/clientTests.cpp                        +1   −1
source/common/src/tdatablock.c                            +6   −0
source/common/src/tmsg.c                                  +3   −3
source/common/src/trow.c                                  +10  −5
source/common/test/dataformatTest.cpp                     +1   −1
source/dnode/mnode/impl/src/mndSubscribe.c                +4   −1
source/dnode/mnode/impl/src/mndTopic.c                    +8   −1
source/dnode/vnode/src/inc/tq.h                           +2   −1
source/dnode/vnode/src/sma/smaRollup.c                    +29  −26
source/dnode/vnode/src/tq/tq.c                            +3   −3
source/dnode/vnode/src/tq/tqMeta.c                        +2   −1
source/dnode/vnode/src/tq/tqPush.c                        +2   −0
source/dnode/vnode/src/tq/tqRead.c                        +1   −0
source/dnode/vnode/src/tsdb/tsdbRead.c                    +13  −7
source/libs/executor/inc/executorimpl.h                   +2   −1
source/libs/executor/src/executor.c                       +2   −1
source/libs/executor/src/executorMain.c                   +4   −0
source/libs/executor/src/executorimpl.c                   +42  −24
source/libs/executor/src/scanoperator.c                   +8   −3
source/libs/executor/src/tfill.c                          +47  −39
source/libs/parser/src/parInsert.c                        +3   −3
source/libs/parser/src/parInsertData.c                    +230 −2
source/libs/stream/src/streamData.c                       +2   −0
source/libs/stream/src/streamDispatch.c                   +2   −0
source/libs/sync/src/syncMain.c                           +32  −33
source/libs/sync/src/syncRaftCfg.c                        +5   −5
tests/script/tsim/insert/dupinsert.sim                    +176 −0
tests/script/tsim/insert/update0.sim                      +12  −12
tests/script/tsim/insert/update1_sort_merge.sim           +818 −0
tests/script/tsim/stream/basic1.sim                       +8   −8
tests/system-test/2-query/Now.py                          +8   −8
tests/system-test/2-query/distribute_agg_apercentile.py   +12  −12
tests/system-test/2-query/distribute_agg_avg.py           +22  −22
tests/system-test/2-query/distribute_agg_count.py         +26  −26
tests/system-test/2-query/distribute_agg_max.py           +28  −28
tests/system-test/2-query/distribute_agg_min.py           +28  −28
tests/system-test/2-query/distribute_agg_spread.py        +24  −24
tests/system-test/2-query/distribute_agg_sum.py           +22  −22
tests/system-test/2-query/irate.py                        +5   −5
tests/system-test/2-query/log.py                          +59  −59
tests/system-test/2-query/query_cols_tags_and_or.py       +15  −15
tests/system-test/7-tmq/TD-17699.py                       +129 −0
tools/shell/src/shellEngine.c                             +7   −3
tools/taos-tools                                          +1   −1
tools/taosws-rs                                           +1   −1
docs/zh/05-get-started/03-package.md

Front matter: `sidebar_label: 安装包` is unchanged; `title` changes from 使用安装包安装和卸载 ("Install and Uninstall Using Packages") to 使用安装包立即开始 ("Get Started Using Packages"). The `import Tabs from "@theme/Tabs";` line is unchanged.

@@ -169,72 +169,128 @@ (after the note that the install.sh script asks questions interactively)

Removed: the "## 卸载" (Uninstall) section, i.e. the <Tabs> block with the "apt-get 卸载", "Deb 卸载", "RPM 卸载" and "tar.gz 卸载" tabs and their commands and sample output (`sudo dpkg -r tdengine`, `sudo rpm -e tdengine`, `rmtaos`), together with the :::info notes about not mixing the tar.gz package with the deb or rpm packages and about recovering a partially deleted installation with `sudo rm -f /var/lib/dpkg/info/tdengine*` or `sudo rpm -e --noscripts tdengine`. This material now lives in docs/zh/13-operation/01-pkg-install.md (see its diff below).

Added: the quick-start material that used to be docs/zh/05-get-started/06-first-use.md (that file is deleted by this commit; its full content is reproduced below): the "## 启动" (Start) section with the systemctl commands, the "## TDengine 命令行 (CLI)" section with the demo SQL session, the "## 使用 taosBenchmark 体验写入速度" (insert-speed) section, and the "## 使用 TDengine CLI 体验查询速度" (query-speed) section with the sample queries against test.meters.
docs/zh/05-get-started/06-first-use.md
Deleted (file mode 100644 → 0). The file contained the quick-start guide below, which this commit moves into 03-package.md.

---
sidebar_label: 开始使用
title: 快速体验 TDengine
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgInstall from "./\_pkg_install.mdx";
import AptGetInstall from "./\_apt_get_install.mdx";

## Start

After installation, use the `systemctl` command to start the TDengine service process:

```bash
systemctl start taosd
```

Check whether the service is working:

```bash
systemctl status taosd
```

If the service process is active, the status command prints information like:

```
Active: active (running)
```

If the background service process is stopped, it prints:

```
Active: inactive (dead)
```

Once the TDengine service is working, you can access and experience TDengine through its command-line program `taos`.

Summary of systemctl commands:

- Start the service: `systemctl start taosd`
- Stop the service: `systemctl stop taosd`
- Restart the service: `systemctl restart taosd`
- Check the service status: `systemctl status taosd`

:::info

- The systemctl command needs _root_ privileges to run; if you are not the _root_ user, prefix the command with sudo.
- `systemctl stop taosd` does not stop the TDengine service immediately; it waits until the necessary data has been flushed to disk. With large data volumes this can take quite a while.
- If the system does not support `systemd`, the TDengine service can also be started by running `/usr/local/taos/bin/taosd` manually.

:::

## TDengine Command Line (CLI)

To make it easy to check TDengine's status and run ad hoc queries against its databases, TDengine provides a command-line application (hereafter the TDengine CLI), taos. To enter the TDengine command line, simply run `taos` in a Linux terminal on a machine where TDengine is installed:

```bash
taos
```

If the connection succeeds, a welcome message and version information are printed; if it fails, an error message is printed (see the [FAQ](/train-faq/faq) for help with terminal-to-server connection failures). The TDengine CLI prompt looks like this:

```cmd
taos>
```

In the TDengine CLI you can create and drop databases and tables, and insert into and query databases, using SQL commands. SQL statements entered in the terminal must end with a semicolon. For example:

```sql
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
           ts            |  speed |
========================================
 2019-07-15 00:00:00.000 |     10 |
 2019-07-15 01:00:00.000 |     20 |
Query OK, 2 row(s) in set (0.003128s)
```

Besides executing SQL statements, system administrators can use the TDengine CLI to check the running state of the system and add or remove user accounts. The TDengine CLI, together with the application driver, can also be installed and run standalone on Linux or Windows machines; see [here](../reference/taos-shell/) for more details.

## Experience Insert Speed with taosBenchmark

With the TDengine service started, run `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:

```bash
taosBenchmark
```

This command automatically creates a super table named meters in the database test. Under it are 10,000 tables named "d0" through "d9999", each with 10,000 rows, and each row has the four fields (ts, current, voltage, phase), with timestamps from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Every table carries the tags location and groupId: groupId is set to 1 through 10 and location is set to "California.SanFrancisco" or "California.LosAngeles".

This command finishes inserting the 100 million records very quickly. The exact time depends on the hardware, but even an ordinary PC server usually needs only a dozen or so seconds.

taosBenchmark itself has many options for configuring the number of tables, the number of records and so on; run `taosBenchmark --help` for the full list, and see [How to use taosBenchmark to performance-test TDengine](https://www.taosdata.com/2021/10/09/3111.html) for detailed usage.

## Experience Query Speed with the TDengine CLI

After inserting data with taosBenchmark as above, you can enter queries in the TDengine CLI to experience the query speed.

Count the total number of rows under the super table:

```sql
taos> select count(*) from test.meters;
```

Compute the average, maximum and minimum over the 100 million rows:

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```

Count the rows whose location is "California.SanFrancisco":

```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```

Compute the average, maximum and minimum over all rows with groupId = 10:

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```

Aggregate table d10 in 10-second windows (average, maximum and minimum):

```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
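The quick-start walkthrough above drives everything from the interactive `taos` CLI. The same statements can also be issued through the C client API that this commit's own test code (source/client/test/clientTests.cpp, below) exercises with taos_connect/taos_query. A minimal sketch, assuming the client header taos.h and library are installed and taosd is running locally with the default root/taosdata credentials:

```c
// Illustrative sketch only: the demo SQL from the quick start, driven through
// the TDengine C client instead of the interactive CLI.
#include <stdio.h>
#include <taos.h>

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) {
    printf("failed to connect to taosd\n");
    return 1;
  }

  const char *stmts[] = {
      "create database if not exists demo",
      "use demo",
      "create table if not exists t (ts timestamp, speed int)",
      "insert into t values ('2019-07-15 00:00:00', 10)",
      "select * from t",
  };

  for (int i = 0; i < 5; ++i) {
    TAOS_RES *res = taos_query(conn, stmts[i]);
    if (taos_errno(res) != 0) {
      printf("statement failed: %s\n", taos_errstr(res));
    }
    taos_free_result(res);
  }

  taos_close(conn);
  return 0;
}
```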
docs/zh/10-deployment/01-deploy.md

@@ -55,6 +55,8 @@ fqdn h1.taosdata.com

 // The port of this dnode; the default is 6030
 serverPort                6030
+```

Unchanged context: the parameters that must be modified are firstEp and fqdn; firstEp must be identical on every dnode, while fqdn must be set to the value of the dnode it resides on. Other parameters need not be changed unless you clearly know why. For a dnode to join the cluster, the cluster-related parameters in the table below must be exactly the same, otherwise it cannot join.

@@ -68,12 +70,9 @@ serverPort 6030

Under "## 启动集群" (Start the Cluster), the subsection heading "### 启动第一个数据节点" (Start the first dnode) is dropped. The instructions themselves are unchanged: follow the steps in "Get Started" to start the first dnode, for example h1.taosdata.com, then run taos to start the TDengine CLI and execute "SHOW DNODES"; the sample session shows the banner "Welcome to the TDengine shell from Linux, Client Version:3.0.0.0 / Copyright (c) 2022 by TAOS Data, Inc. All rights reserved."

@@ -85,15 +84,12 @@ id | endpoint | vnodes | support_vnodes | status | create_time | note |

 1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
 Query OK, 1 rows affected (0.007984s)

 taos>

The stray four-backtick closing fence (````) after this sample output is corrected to ```, and the "添加数据节点" (Add dnodes) heading is adjusted from level 3 (###) to level 2 (## 添加数据节点). The surrounding text is unchanged: the output above shows that the End Point of the just-started dnode is h1.taos.com:6030, which is the firstEp of this new cluster; adding further dnodes to the existing cluster takes the following steps ...
docs/zh/13-operation/01-pkg-install.md

@@ -8,9 +8,11 @@ import TabItem from "@theme/TabItem";

Unchanged context: this section covers installation and uninstallation in more depth, along with notes on upgrading.

-## 安装和卸载 (Install and Uninstall)
-关于安装和卸载,请参考 [安装和卸载](../get-started/package)
+## 安装 (Install)
+关于安装,请参考 [使用安装包立即开始](../get-started/package)

(Unchanged context: "## 安装目录说明", the installation directory layout.)

@@ -40,6 +42,76 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/

Unchanged context: the dynamic libraries under /usr/local/taos/driver are soft-linked into /usr/lib, and the headers under /usr/local/taos/include are soft-linked into /usr/include.

Added "## 卸载" (Uninstall) section:

<Tabs>
<TabItem label="apt-get 卸载" value="aptremove">

Content TBD.

</TabItem>
<TabItem label="Deb 卸载" value="debuninst">

The uninstall command is:

```
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
Removing tdengine (2.4.0.7) ...
TDengine is removed successfully!
```

</TabItem>
<TabItem label="RPM 卸载" value="rpmuninst">

The uninstall command is:

```
$ sudo rpm -e tdengine
TDengine is removed successfully!
```

</TabItem>
<TabItem label="tar.gz 卸载" value="taruninst">

The uninstall command is:

```
$ rmtaos
Nginx for TDengine is running, stopping it...
TDengine is removed successfully!
taosKeeper is removed successfully!
```

</TabItem>
</Tabs>

:::info

- TDengine provides several kinds of installation packages, but it is best not to use the tar.gz package together with the deb or rpm packages on one system; otherwise they interfere with each other and cause problems.
- If, after installing the deb package, part of the installation directory has been deleted by hand, uninstalling or reinstalling can fail. In that case clear the TDengine package installation records first, then reinstall:

```
$ sudo rm -f /var/lib/dpkg/info/tdengine*
```

- Likewise for the rpm package: if part of the installation directory has been deleted by hand and uninstalling or reinstalling fails, clear the installation records first, then reinstall:

```
$ sudo rpm -e --noscripts tdengine
```

:::

Unchanged context: "## 卸载和更新文件说明" — uninstalling keeps the configuration file, database files and log files (/etc/taos/taos.cfg, /var/lib/taos, /var/log/taos). If you are certain you no longer need them, you may delete them manually, but do so with great care: once deleted, the data is permanently lost and cannot be recovered.
include/common/tcommon.h

@@ -103,6 +103,7 @@ typedef struct SDataBlockInfo {
   int16_t     hasVarCol;
   uint32_t    capacity;
   // TODO: optimize and remove following
+  int64_t     version;    // used for stream, and need serialization
   int32_t     childId;    // used for stream, do not serialize
   EStreamType type;       // used for stream, do not serialize
   STimeWindow calWin;     // used for stream, do not serialize
...
include/common/tmsg.h

@@ -438,7 +438,7 @@ static FORCE_INLINE int32_t tDecodeSSchemaWrapperEx(SDecoder* pDecoder, SSchemaW
   return 0;
 }

-STSchema* tdGetSTSChemaFromSSChema(SSchema** pSchema, int32_t nCols);
+STSchema* tdGetSTSChemaFromSSChema(SSchema* pSchema, int32_t nCols, int32_t sver);

 typedef struct {
   char name[TSDB_TABLE_FNAME_LEN];
...
@@ -1359,6 +1359,7 @@ typedef struct {
   int32_t numOfCols;
   int64_t skey;
   int64_t ekey;
+  int64_t version;  // for stream
   char    data[];
 } SRetrieveTableRsp;
...
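The header change above is the API side of a refactor applied throughout this commit: `tdGetSTSChemaFromSSChema` now takes the SSchema array directly plus an explicit schema version, instead of a pointer-to-pointer. A hedged sketch of how a call site migrates (the wrapper variable is a hypothetical stand-in; the real call sites updated below are in dataformatTest.cpp and tq.c):

```c
// Hypothetical caller, for illustration only.
SSchema *pCols = pWrapper->pSchema;  // pWrapper: some existing SSchemaWrapper
int32_t  nCols = pWrapper->nCols;

// Before this commit:
//   STSchema *pTSchema = tdGetSTSChemaFromSSChema(&pCols, nCols);
// After this commit, pass the array itself and a schema version:
STSchema *pTSchema = tdGetSTSChemaFromSSChema(pCols, nCols, 1 /* sver */);
```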
include/libs/executor/executor.h

@@ -64,7 +64,7 @@ qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, SReadHandle* readers);
  * @param SReadHandle
  * @return
  */
-qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols);
+qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchemaWrapper);

 /**
  * Set the input data block for the stream scan.
...
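`qCreateQueueExecTaskInfo` gains an out-parameter through which the executor hands back a cloned schema wrapper describing the queried columns (the clone is filled in executor.c further down). A rough call-site sketch under that assumption, mirroring the updated callers in tq.c and tqMeta.c below:

```c
// Sketch of the new calling convention; variable names are illustrative.
int32_t         numOfCols = 0;
SSchemaWrapper *pQuerySchema = NULL;  // cloned by the executor, owned by the caller

qTaskInfo_t task = qCreateQueueExecTaskInfo(qmsg, &readHandle, &numOfCols, &pQuerySchema);
if (task == NULL) {
  // handle the error
}
```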
include/libs/stream/tstream.h

@@ -142,6 +142,7 @@ static FORCE_INLINE void* streamQueueNextItem(SStreamQueue* queue) {
     ASSERT(queue->qItem != NULL);
     return streamQueueCurItem(queue);
   } else {
+    queue->qItem = NULL;
     taosGetQitem(queue->qall, &queue->qItem);
     if (queue->qItem == NULL) {
       taosReadAllQitems(queue->queue, queue->qall);
...
source/client/test/clientTests.cpp

@@ -826,7 +826,7 @@ TEST(testCase, update_test) {
   TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
   ASSERT_NE(pConn, nullptr);

-  TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1");
+  TAOS_RES* pRes = taos_query(pConn, "select cast(0 as timestamp)-1y");
   if (taos_errno(pRes) != TSDB_CODE_SUCCESS) {
     printf("failed to create database, code:%s", taos_errstr(pRes));
     taos_free_result(pRes);
...
source/common/src/tdatablock.c

@@ -1163,9 +1163,15 @@ static int32_t doEnsureCapacity(SColumnInfoData* pColumn, const SDataBlockInfo*
 void colInfoDataCleanup(SColumnInfoData* pColumn, uint32_t numOfRows) {
   if (IS_VAR_DATA_TYPE(pColumn->info.type)) {
     pColumn->varmeta.length = 0;
+    if (pColumn->varmeta.offset > 0) {
+      memset(pColumn->varmeta.offset, 0, sizeof(int32_t) * numOfRows);
+    }
   } else {
     if (pColumn->nullbitmap != NULL) {
       memset(pColumn->nullbitmap, 0, BitmapLen(numOfRows));
+      if (pColumn->pData != NULL) {
+        memset(pColumn->pData, 0, pColumn->info.bytes * numOfRows);
+      }
     }
   }
 }
...
source/common/src/tmsg.c

@@ -4941,14 +4941,14 @@ int tDecodeSVCreateStbReq(SDecoder *pCoder, SVCreateStbReq *pReq) {
   return 0;
 }

-STSchema *tdGetSTSChemaFromSSChema(SSchema **pSchema, int32_t nCols) {
+STSchema *tdGetSTSChemaFromSSChema(SSchema *pSchema, int32_t nCols, int32_t sver) {
   STSchemaBuilder schemaBuilder = {0};
-  if (tdInitTSchemaBuilder(&schemaBuilder, 1) < 0) {
+  if (tdInitTSchemaBuilder(&schemaBuilder, sver) < 0) {
     return NULL;
   }

   for (int i = 0; i < nCols; i++) {
-    SSchema *schema = *pSchema + i;
+    SSchema *schema = pSchema + i;
     if (tdAddColToSchema(&schemaBuilder, schema->type, schema->flags, schema->colId, schema->bytes) < 0) {
       tdDestroyTSchemaBuilder(&schemaBuilder);
       return NULL;
...
source/common/src/trow.c

@@ -568,6 +568,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   int32_t maxVarDataLen = 0;
   int32_t iColVal = 0;
   void   *varBuf = NULL;
+  bool    isAlloc = false;

   ASSERT(nColVal > 1);
...
@@ -610,8 +611,11 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
     ++iColVal;
   }

+  if (!(*ppRow)) {
     *ppRow = (STSRow *)taosMemoryCalloc(
         1, sizeof(STSRow) + pTSchema->flen + varDataLen + TD_BITMAP_BYTES(pTSchema->numOfCols - 1));
+    isAlloc = true;
+  }

   if (!(*ppRow)) {
     terrno = TSDB_CODE_OUT_OF_MEMORY;
...
@@ -621,7 +625,9 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   if (maxVarDataLen > 0) {
     varBuf = taosMemoryMalloc(maxVarDataLen);
     if (!varBuf) {
+      if (isAlloc) {
         taosMemoryFreeClear(*ppRow);
+      }
       terrno = TSDB_CODE_OUT_OF_MEMORY;
       return -1;
     }
...
@@ -1323,12 +1329,11 @@ void tTSRowGetVal(STSRow *pRow, STSchema *pTSchema, int16_t iCol, SColVal *pColV
   SCellVal cv;
   SValue   value;

-  ASSERT(iCol > 0);
+  ASSERT((pTColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID) || (iCol > 0));

   if (TD_IS_TP_ROW(pRow)) {
     tdSTpRowGetVal(pRow, pTColumn->colId, pTColumn->type, pTSchema->flen, pTColumn->offset, iCol - 1, &cv);
   } else if (TD_IS_KV_ROW(pRow)) {
-    ASSERT(iCol > 0);
     tdSKvRowGetVal(pRow, pTColumn->colId, iCol - 1, &cv);
   } else {
     ASSERT(0);
...
source/common/test/dataformatTest.cpp

@@ -116,7 +116,7 @@ STSchema *genSTSchema(int16_t nCols) {
   }

   STSchema *pResult = NULL;
-  pResult = tdGetSTSChemaFromSSChema(&pSchema, nCols);
+  pResult = tdGetSTSChemaFromSSChema(pSchema, nCols, 1);

   taosMemoryFree(pSchema);
   return pResult;
...
source/dnode/mnode/impl/src/mndSubscribe.c

@@ -868,7 +868,10 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
   }

   // iter all vnode to delete handle
-  ASSERT(taosHashGetSize(pSub->consumerHash) == 0);
+  if (taosHashGetSize(pSub->consumerHash) != 0) {
+    sdbRelease(pSdb, pSub);
+    return -1;
+  }
   int32_t sz = taosArrayGetSize(pSub->unassignedVgs);
   for (int32_t i = 0; i < sz; i++) {
     SMqVgEp *pVgEp = taosArrayGetP(pSub->unassignedVgs, i);
...
source/dnode/mnode/impl/src/mndTopic.c

@@ -583,6 +583,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
   mndTransSetDbName(pTrans, pTopic->db, NULL);
   if (pTrans == NULL) {
     mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }
...
@@ -590,11 +591,17 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
   if (mndDropOffsetByTopic(pMnode, pTrans, dropReq.name) < 0) {
     ASSERT(0);
+    mndTransDrop(pTrans);
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }

+  // TODO check if rebalancing
   if (mndDropSubByTopic(pMnode, pTrans, dropReq.name) < 0) {
-    ASSERT(0);
+    /*ASSERT(0);*/
+    mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
+    mndTransDrop(pTrans);
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }
...
source/dnode/vnode/src/inc/tq.h

@@ -89,6 +89,7 @@ typedef struct {
     STqExecDb  execDb;
   };
   int32_t         numOfCols;       // number of out pout column, temporarily used
+  SSchemaWrapper* pSchemaWrapper;  // columns that are involved in query
 } STqExecHandle;

 typedef struct {
...
source/dnode/vnode/src/sma/smaRollup.c

@@ -579,9 +579,11 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
   while (1) {
     SSDataBlock *output = NULL;
     uint64_t     ts;
-    if (qExecTask(pItem->taskInfo, &output, &ts) < 0) {
+
+    int32_t code = qExecTask(pItem->taskInfo, &output, &ts);
+    if (code < 0) {
       smaError("vgId:%d, qExecTask for rsma table %" PRIi64 "l evel %" PRIi8 " failed since %s", SMA_VID(pSma), suid,
-               pItem->level, terrstr());
+               pItem->level, terrstr(code));
       goto _err;
     }
     if (!output) {
...
@@ -597,7 +599,6 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
     }

     taosArrayPush(pResult, output);
-  }

   if (taosArrayGetSize(pResult) > 0) {
 #if 1
...
@@ -616,17 +617,19 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
     if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pStat->submitVer, 1), pReq) < 0) {
       taosMemoryFreeClear(pReq);
-      smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
-               suid, pItem->level, terrstr());
+      smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s",
+               SMA_VID(pSma), suid, pItem->level, terrstr());
       goto _err;
     }

     taosMemoryFreeClear(pReq);
+    taosArrayClear(pResult);
   } else if (terrno == 0) {
     smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
   } else {
     smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
   }
   }

   tdDestroySDataBlockArray(pResult);
   return TSDB_CODE_SUCCESS;
...
source/dnode/vnode/src/tq/tq.c

@@ -526,8 +526,8 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
         .initTqReader = true,
         .version = ver,
     };
     pHandle->execHandle.execCol.task[i] =
-        qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols);
+        qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols,
+                                 &pHandle->execHandle.pSchemaWrapper);
     ASSERT(pHandle->execHandle.execCol.task[i]);
     void* scanner = NULL;
     qExtractStreamScanner(pHandle->execHandle.execCol.task[i], &scanner);
...
@@ -634,7 +634,7 @@ int32_t tqProcessTaskDeployReq(STQ* pTq, char* msg, int32_t msgLen) {
     ASSERT(pTask->tbSink.pSchemaWrapper->pSchema);
     pTask->tbSink.pTSchema =
-        tdGetSTSChemaFromSSChema(&pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols);
+        tdGetSTSChemaFromSSChema(pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols, 1);
     ASSERT(pTask->tbSink.pTSchema);
   }
...
source/dnode/vnode/src/tq/tqMeta.c

@@ -93,7 +93,8 @@ int32_t tqMetaOpen(STQ* pTq) {
           .version = handle.snapshotVer,
       };

-      handle.execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols);
+      handle.execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(
+          handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols, &handle.execHandle.pSchemaWrapper);
       ASSERT(handle.execHandle.execCol.task[i]);
       void* scanner = NULL;
       qExtractStreamScanner(handle.execHandle.execCol.task[i], &scanner);
...
source/dnode/vnode/src/tq/tqPush.c

@@ -249,6 +249,8 @@ int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver)
       return -1;
     }
     memcpy(data, msg, msgLen);
+    SSubmitReq* pReq = (SSubmitReq*)data;
+    pReq->version = ver;

     tqProcessStreamTrigger(pTq, data);
   }
...
source/dnode/vnode/src/tq/tqRead.c

@@ -314,6 +314,7 @@ int32_t tqRetrieveDataBlock(SSDataBlock* pBlock, STqReader* pReader) {
   pBlock->info.uid = pReader->msgIter.uid;
   pBlock->info.rows = pReader->msgIter.numOfRows;
+  pBlock->info.version = pReader->pMsg->version;

   while ((row = tGetSubmitBlkNext(&pReader->blkIter)) != NULL) {
     tdSTSRowIterReset(&iter, row);
...
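Together with the tqPush.c change just above, this forms a small version-propagation chain: the WAL version of each submit message is stamped onto the request and then copied onto every data block handed to stream processing, which is what the new `version` field added to SDataBlockInfo (include/common/tcommon.h) carries. A condensed sketch of the two steps, using the names from the diffs above; surrounding logic is omitted:

```c
// 1. tqPushMsg(): record the WAL version on the copied submit request.
SSubmitReq *pReq = (SSubmitReq *)data;
pReq->version = ver;

// 2. tqRetrieveDataBlock(): carry that version onto each retrieved block,
//    filling the new SDataBlockInfo.version field.
pBlock->info.version = pReader->pMsg->version;
```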
source/dnode/vnode/src/tsdb/tsdbRead.c

@@ -1943,17 +1943,20 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
   if (pDelFile) {
     SDelFReader* pDelFReader = NULL;
     code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
-    if (code) {
+    if (code != TSDB_CODE_SUCCESS) {
       goto _err;
     }

     SArray* aDelIdx = taosArrayInit(4, sizeof(SDelIdx));
     if (aDelIdx == NULL) {
+      tsdbDelFReaderClose(&pDelFReader);
       goto _err;
     }

     code = tsdbReadDelIdx(pDelFReader, aDelIdx, NULL);
-    if (code) {
+    if (code != TSDB_CODE_SUCCESS) {
+      taosArrayDestroy(aDelIdx);
+      tsdbDelFReaderClose(&pDelFReader);
       goto _err;
     }
...
@@ -1962,11 +1965,15 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
     if (pIdx != NULL) {
       code = tsdbReadDelData(pDelFReader, pIdx, pDelData, NULL);
+    }
+
+    taosArrayDestroy(aDelIdx);
+    tsdbDelFReaderClose(&pDelFReader);

     if (code != TSDB_CODE_SUCCESS) {
       goto _err;
     }
   }

   SDelData* p = NULL;
   if (pMemTbData != NULL) {
...
@@ -2532,8 +2539,7 @@ static int32_t checkForNeighborFileBlock(STsdbReader* pReader, STableBlockScanIn
     pDumpInfo->rowIndex =
         doMergeRowsInFileBlockImpl(pBlockData, pDumpInfo->rowIndex, key, pMerger, &pReader->verRange, step);

-    if (pDumpInfo->rowIndex >= pDumpInfo->totalRows) {
+    if (pDumpInfo->rowIndex >= pBlock->nRow) {
       *state = CHECK_FILEBLOCK_CONT;
     }
   }
...
source/libs/executor/inc/executorimpl.h

@@ -164,6 +164,7 @@ typedef struct {
   char*           dbname;
   int32_t         tversion;
   SSchemaWrapper* sw;
+  SSchemaWrapper* qsw;
 } SSchemaInfo;

 typedef struct SExecTaskInfo {
...
@@ -868,7 +869,7 @@ SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SReadHandle* re
                                                SExecTaskInfo* pTaskInfo);
 SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
-                                            SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup);
+                                            SExecTaskInfo* pTaskInfo);
 SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode, SExecTaskInfo* pTaskInfo);
...
source/libs/executor/src/executor.c

@@ -120,7 +120,7 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
   return code;
 }

-qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols) {
+qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchemaWrapper) {
   if (msg == NULL) {
     // TODO create raw scan
     return NULL;
...
@@ -154,6 +154,7 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
     }
   }

+  *pSchemaWrapper = tCloneSSchemaWrapper(((SExecTaskInfo*)pTaskInfo)->schemaInfo.qsw);
   return pTaskInfo;
 }
...
source/libs/executor/src/executorMain.c

@@ -199,6 +199,10 @@ int32_t qAsyncKillTask(qTaskInfo_t qinfo) {
 void qDestroyTask(qTaskInfo_t qTaskHandle) {
   SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)qTaskHandle;
+  if (pTaskInfo == NULL) {
+    return;
+  }
+
   qDebug("%s execTask completed, numOfRows:%" PRId64, GET_TASKID(pTaskInfo), pTaskInfo->pRoot->resultInfo.totalRows);

   queryCostStatis(pTaskInfo);  // print the query cost summary
...
source/libs/executor/src/executorimpl.c

@@ -1647,11 +1647,6 @@ static int32_t compressQueryColData(SColumnInfoData* pColRes, int32_t numOfRows,
                           colSize + COMP_OVERFLOW_BYTES, compressed, NULL, 0);
 }

-int32_t doFillTimeIntervalGapsInResults(struct SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t capacity) {
-  int32_t numOfRows = (int32_t)taosFillResultDataBlock(pFillInfo, pBlock, capacity - pBlock->info.rows);
-  return pBlock->info.rows;
-}
-
 void queryCostStatis(SExecTaskInfo* pTaskInfo) {
   STaskCostInfo* pSummary = &pTaskInfo->cost;
...
@@ -4147,35 +4142,62 @@ static STsdbReader* doCreateDataReader(STableScanPhysiNode* pTableScanNode, SRea
 static SArray* extractColumnInfo(SNodeList* pNodeList);

+SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode);
+
-int32_t extractTableSchemaInfo(SReadHandle* pHandle, uint64_t uid, SExecTaskInfo* pTaskInfo) {
+int32_t extractTableSchemaInfo(SReadHandle* pHandle, SScanPhysiNode* pScanNode, SExecTaskInfo* pTaskInfo) {
   SMetaReader mr = {0};
   metaReaderInit(&mr, pHandle->meta, 0);
-  int32_t code = metaGetTableEntryByUid(&mr, uid);
+  int32_t code = metaGetTableEntryByUid(&mr, pScanNode->uid);
   if (code != TSDB_CODE_SUCCESS) {
+    qError("failed to get the table meta, uid:0x%" PRIx64 ", suid:0x%" PRIx64 ", %s", pScanNode->uid, pScanNode->suid,
+           GET_TASKID(pTaskInfo));
     metaReaderClear(&mr);
     return terrno;
   }

-  pTaskInfo->schemaInfo.tablename = strdup(mr.me.name);
+  SSchemaInfo* pSchemaInfo = &pTaskInfo->schemaInfo;
+  pSchemaInfo->tablename = strdup(mr.me.name);

   if (mr.me.type == TSDB_SUPER_TABLE) {
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
-    pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
+    pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
   } else if (mr.me.type == TSDB_CHILD_TABLE) {
     tDecoderClear(&mr.coder);

     tb_uid_t suid = mr.me.ctbEntry.suid;
     metaGetTableEntryByUid(&mr, suid);
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
-    pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
+    pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
   } else {
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
   }

   metaReaderClear(&mr);
+
+  pSchemaInfo->qsw = extractQueriedColumnSchema(pScanNode);
   return TSDB_CODE_SUCCESS;
 }

+SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
+  int32_t numOfCols = LIST_LENGTH(pScanNode->pScanCols);
+
+  SSchemaWrapper* pqSw = taosMemoryCalloc(1, sizeof(SSchemaWrapper));
+  pqSw->pSchema = taosMemoryCalloc(numOfCols, sizeof(SSchema));
+
+  for (int32_t i = 0; i < numOfCols; ++i) {
+    STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanCols, i);
+    SColumnNode* pColNode = (SColumnNode*)pNode->pExpr;
+
+    SSchema* pSchema = &pqSw->pSchema[pqSw->nCols++];
+    pSchema->colId = pColNode->colId;
+    pSchema->type = pColNode->node.resType.type;
+    pSchema->bytes = pColNode->node.resType.bytes;
+    strncpy(pSchema->name, pColNode->colName, tListLen(pSchema->name));
+  }
+
+  return pqSw;
+}
+
 static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
   taosMemoryFreeClear(pSchemaInfo->dbname);
   if (pSchemaInfo->sw == NULL) {
...
@@ -4183,8 +4205,8 @@ static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
   }

   taosMemoryFree(pSchemaInfo->tablename);
-  taosMemoryFree(pSchemaInfo->sw->pSchema);
-  taosMemoryFree(pSchemaInfo->sw);
+  tDeleteSSchemaWrapper(pSchemaInfo->sw);
+  tDeleteSSchemaWrapper(pSchemaInfo->qsw);
 }

 static int32_t sortTableGroup(STableListInfo* pTableListInfo, int32_t groupNum) {
...
@@ -4385,7 +4407,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
         return NULL;
       }

-      code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
+      code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
       if (code) {
         pTaskInfo->code = terrno;
         return NULL;
...
@@ -4405,7 +4427,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
         return NULL;
       }

-      code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
+      code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
       if (code) {
         pTaskInfo->code = terrno;
         return NULL;
...
@@ -4422,11 +4444,6 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
       return createExchangeOperatorInfo(pHandle->pMsgCb->clientRpc, (SExchangePhysiNode*)pPhyNode, pTaskInfo);
     } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN == type) {
       STableScanPhysiNode* pTableScanNode = (STableScanPhysiNode*)pPhyNode;
-      STimeWindowAggSupp   twSup = {
-            .waterMark = pTableScanNode->watermark, .calTrigger = pTableScanNode->triggerType, .maxTs = INT64_MIN,
-      };
       if (pHandle->vnode) {
         int32_t code = createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags,
                                                pTableScanNode->groupSort, pHandle, pTableListInfo, pTagCond,
                                                pTagIndexCond, GET_TASKID(pTaskInfo));
...
@@ -4436,7 +4453,8 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
         }
       }

-      SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, pTaskInfo, &twSup);
+      pTaskInfo->schemaInfo.qsw = extractQueriedColumnSchema(&pTableScanNode->scan);
+      SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, pTaskInfo);
       return pOperator;
     } else if (QUERY_NODE_PHYSICAL_PLAN_SYSTABLE_SCAN == type) {
...
@@ -4487,7 +4505,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
         return NULL;
       }

-      code = extractTableSchemaInfo(pHandle, pScanNode->scan.uid, pTaskInfo);
+      code = extractTableSchemaInfo(pHandle, &pScanNode->scan, pTaskInfo);
       if (code != TSDB_CODE_SUCCESS) {
         pTaskInfo->code = code;
         return NULL;
...
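The net effect of the executor changes is that a task now keeps a second schema wrapper, `qsw`, describing only the queried columns: it is derived from the physical scan node rather than from table metadata, handed out to queue consumers as a clone, and released with `tDeleteSSchemaWrapper`. A simplified lifecycle sketch under those assumptions (names taken from the diff; error handling and the real operator tree are omitted):

```c
// Sketch only: how the new qsw field flows through the executor after this commit.
SSchemaInfo *pSchemaInfo = &pTaskInfo->schemaInfo;

// 1. While building the operator tree, the queried-column schema is derived
//    from the scan node (extractTableSchemaInfo / extractQueriedColumnSchema).
pSchemaInfo->qsw = extractQueriedColumnSchema(&pTableScanNode->scan);

// 2. Queue/stream consumers get their own copy through the new out-parameter
//    of qCreateQueueExecTaskInfo (see executor.c above).
SSchemaWrapper *pCopy = tCloneSSchemaWrapper(pSchemaInfo->qsw);

// 3. Both wrappers are released with tDeleteSSchemaWrapper, as the updated
//    cleanupTableSchemaInfo() now does.
tDeleteSSchemaWrapper(pCopy);
```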
source/libs/executor/src/scanoperator.c

@@ -1525,7 +1525,7 @@ static void destroyStreamScanOperatorInfo(void* param, int32_t numOfOutput) {
 }

 SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
-                                            SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup) {
+                                            SExecTaskInfo* pTaskInfo) {
   SStreamScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamScanInfo));
   SOperatorInfo*   pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
...
@@ -1539,6 +1539,12 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   pInfo->pTagCond = pTagCond;
+  pInfo->twAggSup = (STimeWindowAggSupp){
+      .waterMark = pTableScanNode->watermark,
+      .calTrigger = pTableScanNode->triggerType,
+      .maxTs = INT64_MIN,
+  };

   int32_t numOfCols = 0;
   pInfo->pColMatchInfo = extractColMatchInfo(pScanPhyNode->pScanCols, pDescNode, &numOfCols, COL_MATCH_FROM_COL_ID);
...
@@ -1591,7 +1597,7 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   }

   if (pTSInfo->interval.interval > 0) {
-    pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pTwSup->waterMark);
+    pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pInfo->twAggSup.waterMark);
   } else {
     pInfo->pUpdateInfo = NULL;
   }
...
@@ -1631,7 +1637,6 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   pInfo->deleteDataIndex = 0;
   pInfo->pDeleteDataRes = createPullDataBlock();
   pInfo->updateWin = (STimeWindow){.skey = INT64_MAX, .ekey = INT64_MAX};
-  pInfo->twAggSup = *pTwSup;

   pOperator->name = "StreamScanOperator";
   pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
...
source/libs/executor/src/tfill.c
浏览文件 @
de9b4358
...
@@ -66,12 +66,32 @@ static void setNullRow(SSDataBlock* pBlock, int64_t ts, int32_t rowIndex) {
...
@@ -66,12 +66,32 @@ static void setNullRow(SSDataBlock* pBlock, int64_t ts, int32_t rowIndex) {
static
void
doSetVal
(
SColumnInfoData
*
pDstColInfoData
,
int32_t
rowIndex
,
const
SGroupKeys
*
pKey
);
static
void
doSetVal
(
SColumnInfoData
*
pDstColInfoData
,
int32_t
rowIndex
,
const
SGroupKeys
*
pKey
);
static
void
doSetUserSpecifiedValue
(
SColumnInfoData
*
pDst
,
SVariant
*
pVar
,
int32_t
rowIndex
,
int64_t
currentKey
)
{
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_FLOAT
)
{
float
v
=
0
;
GET_TYPED_DATA
(
v
,
float
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
rowIndex
,
(
char
*
)
&
v
,
false
);
}
else
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_DOUBLE
)
{
double
v
=
0
;
GET_TYPED_DATA
(
v
,
double
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
rowIndex
,
(
char
*
)
&
v
,
false
);
}
else
if
(
IS_SIGNED_NUMERIC_TYPE
(
pDst
->
info
.
type
))
{
int64_t
v
=
0
;
GET_TYPED_DATA
(
v
,
int64_t
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
rowIndex
,
(
char
*
)
&
v
,
false
);
}
else
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_TIMESTAMP
)
{
colDataAppend
(
pDst
,
rowIndex
,
(
const
char
*
)
&
currentKey
,
false
);
}
else
{
// varchar/nchar data
colDataAppendNULL
(
pDst
,
rowIndex
);
}
}
static
void
doFillOneRow
(
SFillInfo
*
pFillInfo
,
SSDataBlock
*
pBlock
,
SSDataBlock
*
pSrcBlock
,
int64_t
ts
,
static
void
doFillOneRow
(
SFillInfo
*
pFillInfo
,
SSDataBlock
*
pBlock
,
SSDataBlock
*
pSrcBlock
,
int64_t
ts
,
bool
outOfBound
)
{
bool
outOfBound
)
{
SPoint
point1
,
point2
,
point
;
SPoint
point1
,
point2
,
point
;
int32_t
step
=
GET_FORWARD_DIRECTION_FACTOR
(
pFillInfo
->
order
);
int32_t
step
=
GET_FORWARD_DIRECTION_FACTOR
(
pFillInfo
->
order
);
//
set the primary timestamp column value
//
set the primary timestamp column value
int32_t
index
=
pBlock
->
info
.
rows
;
int32_t
index
=
pBlock
->
info
.
rows
;
// set the other values
// set the other values
...
@@ -160,30 +180,13 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
...
@@ -160,30 +180,13 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
}
else
{
// fill with user specified value for each column
}
else
{
// fill with user specified value for each column
for
(
int32_t
i
=
0
;
i
<
pFillInfo
->
numOfCols
;
++
i
)
{
for
(
int32_t
i
=
0
;
i
<
pFillInfo
->
numOfCols
;
++
i
)
{
SFillColInfo
*
pCol
=
&
pFillInfo
->
pFillCol
[
i
];
SFillColInfo
*
pCol
=
&
pFillInfo
->
pFillCol
[
i
];
if
(
TSDB_COL_IS_TAG
(
pCol
->
flag
)
/* || IS_VAR_DATA_TYPE(pCol->schema.type)*/
)
{
if
(
TSDB_COL_IS_TAG
(
pCol
->
flag
))
{
continue
;
continue
;
}
}
SVariant
*
pVar
=
&
pFillInfo
->
pFillCol
[
i
].
fillVal
;
SVariant
*
pVar
=
&
pFillInfo
->
pFillCol
[
i
].
fillVal
;
SColumnInfoData
*
pDst
=
taosArrayGet
(
pBlock
->
pDataBlock
,
i
);
SColumnInfoData
*
pDst
=
taosArrayGet
(
pBlock
->
pDataBlock
,
i
);
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_FLOAT
)
{
doSetUserSpecifiedValue
(
pDst
,
pVar
,
index
,
pFillInfo
->
currentKey
);
float
v
=
0
;
GET_TYPED_DATA
(
v
,
float
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
index
,
(
char
*
)
&
v
,
false
);
}
else
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_DOUBLE
)
{
double
v
=
0
;
GET_TYPED_DATA
(
v
,
double
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
index
,
(
char
*
)
&
v
,
false
);
}
else
if
(
IS_SIGNED_NUMERIC_TYPE
(
pDst
->
info
.
type
))
{
int64_t
v
=
0
;
GET_TYPED_DATA
(
v
,
int64_t
,
pVar
->
nType
,
&
pVar
->
i
);
colDataAppend
(
pDst
,
index
,
(
char
*
)
&
v
,
false
);
}
else
if
(
pDst
->
info
.
type
==
TSDB_DATA_TYPE_TIMESTAMP
)
{
colDataAppend
(
pDst
,
index
,
(
const
char
*
)
&
pFillInfo
->
currentKey
,
false
);
}
else
{
// varchar/nchar data
colDataAppendNULL
(
pDst
,
index
);
}
}
}
}
}
...
@@ -273,7 +276,7 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
        return outputRows;
      }
    } else {
-     assert(pFillInfo->currentKey == ts);
+     ASSERT(pFillInfo->currentKey == ts);
      int32_t index = pBlock->info.rows;

      if (pFillInfo->type == TSDB_FILL_NEXT && (pFillInfo->index + 1) < pFillInfo->numOfRows) {
...
@@ -295,27 +298,32 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
        SColumnInfoData* pSrc = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
        char*            src = colDataGetData(pSrc, pFillInfo->index);
-       if (i == 0 || (/*pCol->functionId != FUNCTION_COUNT &&*/ !colDataIsNull_s(pSrc, pFillInfo->index)) /*||
-           (pCol->functionId == FUNCTION_COUNT && GET_INT64_VAL(src) != 0)*/) {
+       if (/*i == 0 || (*/ !colDataIsNull_s(pSrc, pFillInfo->index)) {
          bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
          colDataAppend(pDst, index, src, isNull);
          saveColData(pFillInfo->prev, i, src, isNull);
+       } else {
+         if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
+           colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
        } else {  // i > 0 and data is null , do interpolation
          if (pFillInfo->type == TSDB_FILL_PREV) {
-           SGroupKeys* pKey = taosArrayGet(pFillInfo->prev, i);
+           SArray*     p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
+           SGroupKeys* pKey = taosArrayGet(p, i);
            doSetVal(pDst, index, pKey);
          } else if (pFillInfo->type == TSDB_FILL_LINEAR) {
            bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
            colDataAppend(pDst, index, src, isNull);
            saveColData(pFillInfo->prev, i, src, isNull);
            // todo:
          } else if (pFillInfo->type == TSDB_FILL_NULL) {
            colDataAppendNULL(pDst, index);
          } else if (pFillInfo->type == TSDB_FILL_NEXT) {
-           SGroupKeys* pKey = taosArrayGet(pFillInfo->next, i);
+           SArray*     p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
+           SGroupKeys* pKey = taosArrayGet(p, i);
            doSetVal(pDst, index, pKey);
          } else {
            SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
-           colDataAppend(pDst, index, (char*)&pVar->i, false);
+           doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
          }
+         }
        }
      }
    }
  }
...
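The tfill.c hunks above do two things: every user-specified FILL(VALUE, ...) column now goes through the new doSetUserSpecifiedValue() helper instead of an inline type switch, and FILL(PREV)/FILL(NEXT) pick the saved prev or next group keys according to FILL_IS_ASC_FILL(), so a descending scan fills from the correct side. The sketch below is a minimal, self-contained illustration of the same per-type dispatch; FillVariant, ColType and setUserSpecifiedValue are hypothetical simplified stand-ins, not the TDengine types used in the diff.

/* Minimal sketch of the per-type fill-value dispatch (hypothetical types). */
#include <stdint.h>
#include <stdio.h>

typedef enum { COL_FLOAT, COL_DOUBLE, COL_BIGINT, COL_TIMESTAMP, COL_VARCHAR } ColType;

typedef struct {
  double d; /* numeric fill value supplied by the user */
} FillVariant;

static void setUserSpecifiedValue(ColType type, const FillVariant* var, int64_t currentKey) {
  if (type == COL_FLOAT) {
    float v = (float)var->d; /* convert once, on demand, to the column type */
    printf("fill float  %f\n", v);
  } else if (type == COL_DOUBLE) {
    double v = var->d;
    printf("fill double %f\n", v);
  } else if (type == COL_BIGINT) {
    int64_t v = (int64_t)var->d;
    printf("fill int64  %lld\n", (long long)v);
  } else if (type == COL_TIMESTAMP) {
    /* the primary timestamp column is always filled with the current key */
    printf("fill ts     %lld\n", (long long)currentKey);
  } else {
    /* varchar/nchar columns are filled with NULL */
    printf("fill NULL\n");
  }
}

int main(void) {
  FillVariant var = {.d = 9.9};
  ColType cols[] = {COL_TIMESTAMP, COL_FLOAT, COL_DOUBLE, COL_BIGINT, COL_VARCHAR};
  for (int i = 0; i < 5; ++i) setUserSpecifiedValue(cols[i], &var, 1651568340010LL);
  return 0;
}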
source/libs/parser/src/parInsert.c
View file @ de9b4358
...
@@ -1200,7 +1200,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
      *gotRow = true;
#ifdef TD_DEBUG_PRINT_ROW
-     STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&schema, spd->numOfCols);
+     STSchema* pSTSchema = tdGetSTSChemaFromSSChema(schema, spd->numOfCols, 1);
      tdSRowPrint(row, pSTSchema, __func__);
      taosMemoryFree(pSTSchema);
#endif
...
@@ -1972,7 +1972,7 @@ int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, in
    }
  }
#ifdef TD_DEBUG_PRINT_ROW
-   STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
+   STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
    tdSRowPrint(row, pSTSchema, __func__);
    taosMemoryFree(pSTSchema);
#endif
...
@@ -2057,7 +2057,7 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
#ifdef TD_DEBUG_PRINT_ROW
    if (rowEnd) {
-     STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
+     STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
      tdSRowPrint(row, pSTSchema, __func__);
      taosMemoryFree(pSTSchema);
    }
...
source/libs/parser/src/parInsertData.c
View file @ de9b4358
...
@@ -19,6 +19,7 @@
#include "parInt.h"
#include "parUtil.h"
#include "querynodes.h"
+#include "tRealloc.h"

#define IS_RAW_PAYLOAD(t) \
  (((int)(t)) == PAYLOAD_TYPE_RAW)  // 0: K-V payload for non-prepare insert, 1: rawPayload for prepare insert
...
@@ -34,6 +35,32 @@ typedef struct SBlockKeyInfo {
  SBlockKeyTuple* pKeyTuple;
} SBlockKeyInfo;

+typedef struct {
+  int32_t   index;
+  SArray*   rowArray;  // array of merged rows(mem allocated by tRealloc/free by tFree)
+  STSchema* pSchema;
+  int64_t   tbUid;  // suid for child table, uid for normal table
+} SBlockRowMerger;
+
+static FORCE_INLINE void tdResetSBlockRowMerger(SBlockRowMerger* pMerger) {
+  if (pMerger) {
+    pMerger->index = -1;
+  }
+}
+
+static void tdFreeSBlockRowMerger(SBlockRowMerger* pMerger) {
+  if (pMerger) {
+    int32_t size = taosArrayGetSize(pMerger->rowArray);
+    for (int32_t i = 0; i < size; ++i) {
+      tFree(*(void**)taosArrayGet(pMerger->rowArray, i));
+    }
+    taosArrayDestroy(pMerger->rowArray);
+    taosMemoryFreeClear(pMerger->pSchema);
+    taosMemoryFree(pMerger);
+  }
+}
+
static int32_t rowDataCompar(const void* lhs, const void* rhs) {
  TSKEY left = *(TSKEY*)lhs;
  TSKEY right = *(TSKEY*)rhs;
...
@@ -328,7 +355,7 @@ void sortRemoveDataBlockDupRowsRaw(STableDataBlocks* dataBuf) {
}

// data block is disordered, sort it in ascending order
-int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) {
+static int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) {
  SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
  int16_t     nRows = pBlocks->numOfRows;
...
@@ -396,6 +423,201 @@ int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKey
  return 0;
}
+static void* tdGetCurRowFromBlockMerger(SBlockRowMerger* pBlkRowMerger) {
+  if (pBlkRowMerger && (pBlkRowMerger->index >= 0)) {
+    ASSERT(pBlkRowMerger->index < taosArrayGetSize(pBlkRowMerger->rowArray));
+    return *(void**)taosArrayGet(pBlkRowMerger->rowArray, pBlkRowMerger->index);
+  }
+  return NULL;
+}
+
+static int32_t tdBlockRowMerge(STableMeta* pTableMeta, SBlockKeyTuple* pEndKeyTp, int32_t nDupRows,
+                               SBlockRowMerger** pBlkRowMerger, int32_t rowSize) {
+  ASSERT(nDupRows > 1);
+  SBlockKeyTuple* pStartKeyTp = pEndKeyTp - (nDupRows - 1);
+  ASSERT(pStartKeyTp->skey == pEndKeyTp->skey);
+
+  // TODO: optimization if end row is all normal
+#if 0
+  STSRow* pEndRow = (STSRow*)pEndKeyTp->payloadAddr;
+  if(isNormal(pEndRow)) { // set the end row if it is normal and return directly
+    pStartKeyTp->payloadAddr = pEndKeyTp->payloadAddr;
+    return TSDB_CODE_SUCCESS;
+  }
+#endif
+
+  if (!(*pBlkRowMerger)) {
+    (*pBlkRowMerger) = taosMemoryCalloc(1, sizeof(**pBlkRowMerger));
+    if (!(*pBlkRowMerger)) {
+      terrno = TSDB_CODE_OUT_OF_MEMORY;
+      return TSDB_CODE_FAILED;
+    }
+    (*pBlkRowMerger)->index = -1;
+    if (!(*pBlkRowMerger)->rowArray) {
+      (*pBlkRowMerger)->rowArray = taosArrayInit(1, sizeof(void*));
+      if (!(*pBlkRowMerger)->rowArray) {
+        terrno = TSDB_CODE_OUT_OF_MEMORY;
+        return TSDB_CODE_FAILED;
+      }
+    }
+  }
+
+  if ((*pBlkRowMerger)->pSchema) {
+    if ((*pBlkRowMerger)->pSchema->version != pTableMeta->sversion) {
+      taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
+    } else {
+      if ((*pBlkRowMerger)->tbUid != (pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid)) {
+        taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
+      }
+    }
+  }
+
+  if (!(*pBlkRowMerger)->pSchema) {
+    (*pBlkRowMerger)->pSchema =
+        tdGetSTSChemaFromSSChema(pTableMeta->schema, pTableMeta->tableInfo.numOfColumns, pTableMeta->sversion);
+    if (!(*pBlkRowMerger)->pSchema) {
+      terrno = TSDB_CODE_OUT_OF_MEMORY;
+      return TSDB_CODE_FAILED;
+    }
+    (*pBlkRowMerger)->tbUid = pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid;
+  }
+
+  void* pDestRow = NULL;
+  ++((*pBlkRowMerger)->index);
+  if ((*pBlkRowMerger)->index < taosArrayGetSize((*pBlkRowMerger)->rowArray)) {
+    void* pAlloc = *(void**)taosArrayGet((*pBlkRowMerger)->rowArray, (*pBlkRowMerger)->index);
+    if (tRealloc((uint8_t**)&pAlloc, rowSize) != 0) {
+      return TSDB_CODE_FAILED;
+    }
+    pDestRow = pAlloc;
+  } else {
+    if (tRealloc((uint8_t**)&pDestRow, rowSize) != 0) {
+      return TSDB_CODE_FAILED;
+    }
+    taosArrayPush((*pBlkRowMerger)->rowArray, &pDestRow);
+  }
+
+  // merge rows to pDestRow
+  STSchema* pSchema = (*pBlkRowMerger)->pSchema;
+  SArray*   pArray = taosArrayInit(pSchema->numOfCols, sizeof(SColVal));
+  for (int32_t i = 0; i < pSchema->numOfCols; ++i) {
+    SColVal colVal = {0};
+    for (int32_t j = 0; j < nDupRows; ++j) {
+      tTSRowGetVal((pEndKeyTp - j)->payloadAddr, pSchema, i, &colVal);
+      if (!colVal.isNone) {
+        break;
+      }
+    }
+    taosArrayPush(pArray, &colVal);
+  }
+  if (tdSTSRowNew(pArray, pSchema, (STSRow**)&pDestRow) < 0) {
+    taosArrayDestroy(pArray);
+    return TSDB_CODE_FAILED;
+  }
+
+  taosArrayDestroy(pArray);
+  return TSDB_CODE_SUCCESS;
+}
+// data block is disordered, sort it in ascending order, and merge dup rows if exists
+static int sortMergeDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo,
+                                     SBlockRowMerger** ppBlkRowMerger) {
+  SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
+  STableMeta* pTableMeta = dataBuf->pTableMeta;
+  int16_t     nRows = pBlocks->numOfRows;
+
+  // size is less than the total size, since duplicated rows may be removed.
+
+  // allocate memory
+  size_t nAlloc = nRows * sizeof(SBlockKeyTuple);
+  if (pBlkKeyInfo->pKeyTuple == NULL || pBlkKeyInfo->maxBytesAlloc < nAlloc) {
+    char* tmp = taosMemoryRealloc(pBlkKeyInfo->pKeyTuple, nAlloc);
+    if (tmp == NULL) {
+      return TSDB_CODE_TSC_OUT_OF_MEMORY;
+    }
+    pBlkKeyInfo->pKeyTuple = (SBlockKeyTuple*)tmp;
+    pBlkKeyInfo->maxBytesAlloc = (int32_t)nAlloc;
+  }
+  memset(pBlkKeyInfo->pKeyTuple, 0, nAlloc);
+
+  tdResetSBlockRowMerger(*ppBlkRowMerger);
+
+  int32_t         extendedRowSize = getExtendedRowSize(dataBuf);
+  SBlockKeyTuple* pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
+  char*           pBlockData = pBlocks->data + pBlocks->schemaLen;
+  int             n = 0;
+  while (n < nRows) {
+    pBlkKeyTuple->skey = TD_ROW_KEY((STSRow*)pBlockData);
+    pBlkKeyTuple->payloadAddr = pBlockData;
+    pBlkKeyTuple->index = n;
+
+    // next loop
+    pBlockData += extendedRowSize;
+    ++pBlkKeyTuple;
+    ++n;
+  }
+
+  if (!dataBuf->ordered) {
+    pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
+
+    taosSort(pBlkKeyTuple, nRows, sizeof(SBlockKeyTuple), rowDataComparStable);
+
+    pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
+    bool    hasDup = false;
+    int32_t nextPos = 0;
+    int32_t i = 0;
+    int32_t j = 1;
+
+    while (j < nRows) {
+      TSKEY ti = (pBlkKeyTuple + i)->skey;
+      TSKEY tj = (pBlkKeyTuple + j)->skey;
+      if (ti == tj) {
+        ++j;
+        continue;
+      }
+
+      if ((j - i) > 1) {
+        if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
+          return TSDB_CODE_FAILED;
+        }
+        (pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
+        if (!hasDup) {
+          hasDup = true;
+        }
+        i = j;
+      } else {
+        if (hasDup) {
+          memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
+        }
+        ++i;
+      }
+
+      ++nextPos;
+      ++j;
+    }
+
+    if ((j - i) > 1) {
+      ASSERT((pBlkKeyTuple + i)->skey == (pBlkKeyTuple + j - 1)->skey);
+      if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
+        return TSDB_CODE_FAILED;
+      }
+      (pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
+    } else if (hasDup) {
+      memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
+    }
+
+    dataBuf->ordered = true;
+    pBlocks->numOfRows = nextPos + 1;
+  }
+
+  dataBuf->size = sizeof(SSubmitBlk) + pBlocks->numOfRows * extendedRowSize;
+  dataBuf->prevTS = INT64_MIN;
+
+  return TSDB_CODE_SUCCESS;
+}
// Erase the empty space reserved for binary data
static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SBlockKeyTuple* blkKeyTuple,
                         bool isRawPayload) {
...
@@ -464,6 +686,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
  STableDataBlocks** p = taosHashIterate(pHashObj, NULL);
  STableDataBlocks*  pOneTableBlock = *p;
  SBlockKeyInfo      blkKeyInfo = {0};  // share by pOneTableBlock
+ SBlockRowMerger*   pBlkRowMerger = NULL;
+
  while (pOneTableBlock) {
    SSubmitBlk* pBlocks = (SSubmitBlk*)pOneTableBlock->pData;
    if (pBlocks->numOfRows > 0) {
...
@@ -473,6 +697,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
          getDataBlockFromList(pVnodeDataBlockHashList, &pOneTableBlock->vgId, sizeof(pOneTableBlock->vgId),
                               TSDB_PAYLOAD_SIZE, INSERT_HEAD_SIZE, 0, pOneTableBlock->pTableMeta, &dataBuf,
                               pVnodeDataBlockList, NULL);
      if (ret != TSDB_CODE_SUCCESS) {
+       tdFreeSBlockRowMerger(pBlkRowMerger);
        taosHashCleanup(pVnodeDataBlockHashList);
        destroyBlockArrayList(pVnodeDataBlockList);
        taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
...
@@ -490,6 +715,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
        if (tmp != NULL) {
          dataBuf->pData = tmp;
        } else {  // failed to allocate memory, free already allocated memory and return error code
+         tdFreeSBlockRowMerger(pBlkRowMerger);
          taosHashCleanup(pVnodeDataBlockHashList);
          destroyBlockArrayList(pVnodeDataBlockList);
          taosMemoryFreeClear(dataBuf->pData);
...
@@ -501,7 +727,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
      if (isRawPayload) {
        sortRemoveDataBlockDupRowsRaw(pOneTableBlock);
      } else {
-       if ((code = sortRemoveDataBlockDupRows(pOneTableBlock, &blkKeyInfo)) != 0) {
+       if ((code = sortMergeDataBlockDupRows(pOneTableBlock, &blkKeyInfo, &pBlkRowMerger)) != 0) {
+         tdFreeSBlockRowMerger(pBlkRowMerger);
          taosHashCleanup(pVnodeDataBlockHashList);
          destroyBlockArrayList(pVnodeDataBlockList);
          taosMemoryFreeClear(dataBuf->pData);
...
@@ -529,6 +756,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
  }

  // free the table data blocks;
+ tdFreeSBlockRowMerger(pBlkRowMerger);
  taosHashCleanup(pVnodeDataBlockHashList);
  taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
  *pVgDataBlocks = pVnodeDataBlockList;
...
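In parInsertData.c the client-side write path now merges rows that share a timestamp instead of keeping only one of them: sortMergeDataBlockDupRows() sorts the key tuples with the stable comparator rowDataComparStable and, for each run of equal timestamps, tdBlockRowMerge() builds one merged row by walking the duplicates from the newest backwards and taking, for every column, the first value that is not "None" (i.e. the column was actually supplied). Below is a minimal sketch of that column-wise rule using plain ints and a hypothetical NONE sentinel instead of the real STSRow/SColVal types.

/* Minimal sketch of tdBlockRowMerge's column-wise precedence (hypothetical types). */
#include <stdio.h>

#define NONE (-1) /* hypothetical sentinel for "column not supplied in this insert" */
#define NCOLS 4

static void mergeDupRows(const int rows[][NCOLS], int nRows, int out[NCOLS]) {
  for (int c = 0; c < NCOLS; ++c) {
    out[c] = NONE;
    for (int r = nRows - 1; r >= 0; --r) { /* newest duplicate first */
      if (rows[r][c] != NONE) {
        out[c] = rows[r][c];
        break;
      }
    }
  }
}

int main(void) {
  /* three inserts with the same timestamp, each supplying different columns */
  const int rows[3][NCOLS] = {
      {10, 20, NONE, NONE},
      {NONE, 30, 40, NONE},
      {NONE, NONE, NONE, 50},
  };
  int merged[NCOLS];
  mergeDupRows(rows, 3, merged);
  for (int c = 0; c < NCOLS; ++c) printf("col%d=%d ", c, merged[c]);
  printf("\n"); /* col0=10 col1=30 col2=40 col3=50 */
  return 0;
}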
source/libs/stream/src/streamData.c
View file @ de9b4358
...
@@ -34,6 +34,7 @@ int32_t streamDispatchReqToData(const SStreamDispatchReq* pReq, SStreamDataBlock
    // TODO: refactor
    pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
    pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
+   pDataBlock->info.version = be64toh(pRetrieve->version);
    pDataBlock->info.type = pRetrieve->streamBlockType;
    pDataBlock->info.childId = pReq->upstreamChildId;
...
@@ -54,6 +55,7 @@ int32_t streamRetrieveReqToData(const SStreamRetrieveReq* pReq, SStreamDataBlock
    // TODO: refactor
    pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
    pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
+   pDataBlock->info.version = be64toh(pRetrieve->version);
    pDataBlock->info.type = pRetrieve->streamBlockType;
...
source/libs/stream/src/streamDispatch.c
View file @ de9b4358
...
@@ -108,6 +108,7 @@ int32_t streamBroadcastToChildren(SStreamTask* pTask, const SSDataBlock* pBlock)
  pRetrieve->numOfCols = htonl(numOfCols);
  pRetrieve->skey = htobe64(pBlock->info.window.skey);
  pRetrieve->ekey = htobe64(pBlock->info.window.ekey);
+ pRetrieve->version = htobe64(pBlock->info.version);

  int32_t actualLen = 0;
  blockEncode(pBlock, pRetrieve->data, &actualLen, numOfCols, false);
...
@@ -182,6 +183,7 @@ static int32_t streamAddBlockToDispatchMsg(const SSDataBlock* pBlock, SStreamDis
  pRetrieve->numOfRows = htonl(pBlock->info.rows);
  pRetrieve->skey = htobe64(pBlock->info.window.skey);
  pRetrieve->ekey = htobe64(pBlock->info.window.ekey);
+ pRetrieve->version = htobe64(pBlock->info.version);

  int32_t numOfCols = (int32_t)taosArrayGetSize(pBlock->pDataBlock);
  pRetrieve->numOfCols = htonl(numOfCols);
...
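The stream changes above carry the source block's version across the dispatch/retrieve messages: the sender stores it with htobe64() and the receiver restores it with be64toh(), matching the existing skey/ekey handling. A small sketch of that round trip, assuming a Linux/glibc host where the conversions come from <endian.h>; WireRetrieve is a simplified stand-in for the real message struct.

/* Sketch of the new version field's wire handling (simplified message struct). */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint64_t version; /* field added by this commit, sketched here in isolation */
} WireRetrieve;

int main(void) {
  int64_t blockVersion = 42;

  WireRetrieve msg;
  msg.version = htobe64((uint64_t)blockVersion);   /* encode, as in streamDispatch.c */

  int64_t decoded = (int64_t)be64toh(msg.version); /* decode, as in streamData.c */
  printf("version on the wire decodes back to %lld\n", (long long)decoded);
  return 0;
}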
source/libs/sync/src/syncMain.c
View file @ de9b4358
...
@@ -1550,12 +1550,12 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
    char logBuf[256 + 256];
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(logBuf, sizeof(logBuf),
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64 ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64
+              ", snap:%" PRId64 ", snap-tm:%" PRIu64 ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
...
@@ -1573,12 +1573,12 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
    char* s = (char*)taosMemoryMalloc(len);
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(s, len,
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64 ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64
+              ", snap:%" PRId64 ", snap-tm:%" PRIu64 ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
...
@@ -1621,12 +1621,12 @@ void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) {
    char logBuf[256 + 256];
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(logBuf, sizeof(logBuf),
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64 ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64
+              ", snap:%" PRId64 ", snap-tm:%" PRIu64 ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
...
@@ -1642,12 +1642,12 @@ void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) {
    char* s = (char*)taosMemoryMalloc(len);
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(s, len,
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64 ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64
+              ", snap:%" PRId64 ", snap-tm:%" PRIu64 ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
...
@@ -1675,11 +1675,10 @@ char* syncNode2SimpleStr(const SSyncNode* pSyncNode) {
  SyncIndex logBeginIndex = pSyncNode->pLogStore->syncLogBeginIndex(pSyncNode->pLogStore);
  snprintf(s, len,
-          "vgId:%d, sync %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-          ", snapshot:%" PRId64 ", standby:%d, "
-          "replica-num:%d, "
-          "lconfig:%" PRId64 ", changing:%d, restore:%d",
+          "vgId:%d, sync %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+          ", sby:%d, "
+          "r-num:%d, "
+          "lcfg:%" PRId64 ", chging:%d, rsto:%d",
           pSyncNode->vgId, syncUtilState2String(pSyncNode->state), pSyncNode->pRaftStore->currentTerm,
           pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, pSyncNode->pRaftCfg->isStandBy,
           pSyncNode->replicaNum, pSyncNode->pRaftCfg->lastConfigIndex, pSyncNode->changing, pSyncNode->restoreFinish);
...
@@ -2977,7 +2976,7 @@ void syncLogSendAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMs
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "send sync-append-entries to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64
+          ", pterm:%" PRIu64 ", cmt:%" PRId64
           ", "
           "datalen:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
...
@@ -2992,7 +2991,7 @@ void syncLogRecvAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMs
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "recv sync-append-entries from %s:%d {term:%" PRIu64 ", pre-index:%" PRIu64 ", pre-term:%" PRIu64
-          ", commit:%" PRIu64 ", pterm:%" PRIu64
+          ", cmt:%" PRIu64 ", pterm:%" PRIu64
           ", "
           "datalen:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->commitIndex, pMsg->privateTerm,
...
@@ -3007,7 +3006,7 @@ void syncLogSendAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntries
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "send sync-append-entries-batch to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
+          ", pterm:%" PRIu64 ", cmt:%" PRId64 ", datalen:%d, count:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
           pMsg->dataLen, pMsg->dataCount, s);
  syncNodeEventLog(pSyncNode, logBuf);
...
@@ -3020,7 +3019,7 @@ void syncLogRecvAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntries
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "recv sync-append-entries-batch from %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
+          ", pterm:%" PRIu64 ", cmt:%" PRId64 ", datalen:%d, count:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
           pMsg->dataLen, pMsg->dataCount, s);
  syncNodeEventLog(pSyncNode, logBuf);
...
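The syncMain.c hunks only shorten the field names in the node event/error log lines (term to tm, commit to cmt, first to fst, last to lst, snapshot to snap, standby to sby, and so on) so the fixed "256 + 256"-byte buffers are less likely to truncate the message. A minimal sketch of the abbreviated format with dummy values:

/* Sketch of the abbreviated sync event-log line; values are dummies. */
#include <inttypes.h>
#include <stdio.h>

int main(void) {
  char     logBuf[512];
  int      vgId = 2;
  uint64_t term = 7;
  int64_t  commitIndex = 120, firstIndex = 100, lastIndex = 130;

  snprintf(logBuf, sizeof(logBuf),
           "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64,
           vgId, "leader", "heartbeat", term, commitIndex, firstIndex, lastIndex);
  puts(logBuf);
  return 0;
}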
source/libs/sync/src/syncRaftCfg.c
View file @ de9b4358
...
@@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
char *syncCfg2Str(SSyncCfg *pSyncCfg) {
  cJSON *pJson = syncCfg2Json(pSyncCfg);
  char  *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
...
@@ -109,10 +109,10 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
  if (pSyncCfg != NULL) {
    int32_t len = 512;
    char   *s = taosMemoryMalloc(len);
    memset(s, 0, len);

-   snprintf(s, len, "{replica-num:%d, my-index:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
+   snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
    char *p = s + strlen(s);
    for (int i = 0; i < pSyncCfg->replicaNum; ++i) {
      /*
...
@@ -206,7 +206,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
char *raftCfg2Str(SRaftCfg *pRaftCfg) {
  cJSON *pJson = raftCfg2Json(pRaftCfg);
  char  *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
...
@@ -285,7 +285,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
    (pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
  }

  cJSON  *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
  int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
  ASSERT(code == 0);
...
tests/script/tsim/insert/dupinsert.sim
0 → 100644
View file @ de9b4358
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print =============== create database
sql drop database if exists d0
sql create database d0 keep 365000d,365000d,365000d
sql use d0
print =============== create super table
sql create table if not exists stb (ts timestamp, c1 int unsigned, c2 double, c3 binary(10), c4 nchar(10), c5 double) tags (city binary(20),district binary(20));
sql show stables
if $rows != 1 then
return -1
endi
print =============== create child table
sql create table ct1 using stb tags("BeiJing", "ChaoYang")
sql create table ct2 using stb tags("BeiJing", "HaiDian")
sql create table ct3 using stb tags("BeiJing", "PingGu")
sql create table ct4 using stb tags("BeiJing", "YanQing")
sql show tables
if $rows != 4 then
print rows $rows != 4
return -1
endi
print =============== step 1 insert records into ct1 - taosd merge
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10, 20);
sql insert into ct1(ts,c1,c2,c3,c4) values('2022-05-03 16:59:00.011', 11, NULL, 'binary', 'nchar');
sql insert into ct1 values('2022-05-03 16:59:00.016', 16, NULL, NULL, 'nchar', NULL);
sql insert into ct1 values('2022-05-03 16:59:00.016', 17, NULL, NULL, 'nchar', 170);
sql insert into ct1 values('2022-05-03 16:59:00.020', 20, NULL, NULL, 'nchar', 200);
sql insert into ct1 values('2022-05-03 16:59:00.016', 18, NULL, NULL, 'nchar', 180);
sql insert into ct1 values('2022-05-03 16:59:00.021', 21, NULL, NULL, 'nchar', 210);
sql insert into ct1 values('2022-05-03 16:59:00.022', 22, NULL, NULL, 'nchar', 220);
print =============== step 2 insert records into ct1/ct2 - taosc merge for 2022-05-03 16:59:00.010
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10,10), ('2022-05-03 16:59:00.010',20,10.0), ('2022-05-03 16:59:00.010',30,NULL) ct2(ts,c1) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',20) ct1(ts,c2) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',100) ct1(ts,c3) values('2022-05-03 16:59:00.010','bin1'), ('2022-05-03 16:59:00.010','bin2') ct1(ts,c4,c5) values('2022-05-03 16:59:00.010',NULL,NULL), ('2022-05-03 16:59:00.010','nchar4',1000.01) ct2(ts,c2,c3,c4,c5) values('2022-05-03 16:59:00.010',20,'xkl','zxc',10);
print =============== step 3 insert records into ct3
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.020', 10,10);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.021', 10,10), ('2022-05-03 16:59:00.021',20,20.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.022', 30,30), ('2022-05-03 16:59:00.022',40,40.0),('2022-05-03 16:59:00.022',50,50.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.023', 60,60), ('2022-05-03 16:59:00.023',70,70.0),('2022-05-03 16:59:00.023',80,80.0), ('2022-05-03 16:59:00.023',90,90.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.024', 100,100), ('2022-05-03 16:59:00.025',110,110.0),('2022-05-03 16:59:00.025',120,120.0), ('2022-05-03 16:59:00.025',130,130.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.030', 140,140), ('2022-05-03 16:59:00.030',150,150.0),('2022-05-03 16:59:00.031',160,160.0), ('2022-05-03 16:59:00.030',170,170.0), ('2022-05-03 16:59:00.031',180,180.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.042', 190,190), ('2022-05-03 16:59:00.041',200,200.0),('2022-05-03 16:59:00.040',210,210.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.050', 220,220), ('2022-05-03 16:59:00.051',230,230.0),('2022-05-03 16:59:00.052',240,240.0);
print =============== step 4 insert records into ct4
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.020', 10,'b0','n0');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.021', 20,'b1','n1'), ('2022-05-03 16:59:00.021',30,'b2','n2');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.022', 40,'b3','n3'), ('2022-05-03 16:59:00.022',40,'b4','n4'),('2022-05-03 16:59:00.022',50,'b5','n5');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.023', 60,'b6','n6'), ('2022-05-03 16:59:00.024',70,'b7','n7'),('2022-05-03 16:59:00.024',80,'b8','n8'), ('2022-05-03 16:59:00.023',90,'b9','n9');
print =============== step 5 query records of ct1 from memory(taosc and taosd merge)
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print =============== step 6 query records of ct2 from memory(taosc and taosd merge)
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
print =============== step 7 query records of ct3 from memory
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
print =============== step 8 query records of ct4 from memory
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
#==================== reboot to trigger commit data to file
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s start
print =============== step 9 query records of ct1 from file
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
print =============== step 10 query records of ct2 from file
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
print =============== step 11 query records of ct3 from file
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
print =============== step 12 query records of ct4 from file
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
\ No newline at end of file
tests/script/tsim/insert/update0.sim
View file @ de9b4358
...
@@ -79,8 +79,8 @@ if $rows != 3 then
  return -1
endi
-if $data01 != 103 then
-  print data01 $data01 != 103
+if $data01 != 303 then
+  print data01 $data01 != 303
  return -1
endi
...
@@ -89,8 +89,8 @@ if $data11 != 80 then
  return -1
endi
-if $data21 != 40 then
-  print data21 $data21 != 40
+if $data21 != 60 then
+  print data21 $data21 != 60
  return -1
endi
...
@@ -138,8 +138,8 @@ if $rows != 3 then
  return -1
endi
-if $data01 != 103 then
-  print data01 $data01 != 103
+if $data01 != 303 then
+  print data01 $data01 != 303
  return -1
endi
...
@@ -148,8 +148,8 @@ if $data11 != 80 then
  return -1
endi
-if $data21 != 40 then
-  print data21 $data21 != 40
+if $data21 != 60 then
+  print data21 $data21 != 60
  return -1
endi
...
@@ -208,8 +208,8 @@ if $data01 != 10 then
  return -1
endi
-if $data11 != 103 then
-  print data11 $data11 != 103
+if $data11 != 303 then
+  print data11 $data11 != 303
  return -1
endi
...
@@ -218,8 +218,8 @@ if $data21 != NULL then
  return -1
endi
-if $data31 != 40 then
-  print data31 $data31 != 40
+if $data31 != 60 then
+  print data31 $data31 != 60
  return -1
endi
...
tests/script/tsim/insert/update1_sort_merge.sim
0 → 100644
View file @ de9b4358
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print =============== create database
sql drop database if exists d0
sql create database d0 keep 365000d,365000d,365000d
sql use d0
print =============== create super table
sql create table if not exists stb (ts timestamp, c1 int unsigned, c2 double, c3 binary(10), c4 nchar(10), c5 double) tags (city binary(20),district binary(20));
sql show stables
if $rows != 1 then
return -1
endi
print =============== create child table
sql create table ct1 using stb tags("BeiJing", "ChaoYang")
sql create table ct2 using stb tags("BeiJing", "HaiDian")
sql create table ct3 using stb tags("BeiJing", "PingGu")
sql create table ct4 using stb tags("BeiJing", "YanQing")
sql show tables
if $rows != 4 then
print rows $rows != 4
return -1
endi
print =============== step 1 insert records into ct1 - taosd merge
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10, 20);
sql insert into ct1(ts,c1,c2,c3,c4) values('2022-05-03 16:59:00.011', 11, NULL, 'binary', 'nchar');
sql insert into ct1 values('2022-05-03 16:59:00.016', 16, NULL, NULL, 'nchar', NULL);
sql insert into ct1 values('2022-05-03 16:59:00.016', 17, NULL, NULL, 'nchar', 170);
sql insert into ct1 values('2022-05-03 16:59:00.020', 20, NULL, NULL, 'nchar', 200);
sql insert into ct1 values('2022-05-03 16:59:00.016', 18, NULL, NULL, 'nchar', 180);
sql insert into ct1 values('2022-05-03 16:59:00.021', 21, NULL, NULL, 'nchar', 210);
sql insert into ct1 values('2022-05-03 16:59:00.022', 22, NULL, NULL, 'nchar', 220);
print =============== step 2 insert records into ct1/ct2 - taosc merge for 2022-05-03 16:59:00.010
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10,10), ('2022-05-03 16:59:00.010',20,10.0), ('2022-05-03 16:59:00.010',30,NULL) ct2(ts,c1) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',20) ct1(ts,c2) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',100) ct1(ts,c3) values('2022-05-03 16:59:00.010','bin1'), ('2022-05-03 16:59:00.010','bin2') ct1(ts,c4,c5) values('2022-05-03 16:59:00.010',NULL,NULL), ('2022-05-03 16:59:00.010','nchar4',1000.01) ct2(ts,c2,c3,c4,c5) values('2022-05-03 16:59:00.010',20,'xkl','zxc',10);
print =============== step 3 insert records into ct3
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.020', 10,10);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.021', 10,10), ('2022-05-03 16:59:00.021',20,20.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.022', 30,30), ('2022-05-03 16:59:00.022',40,40.0),('2022-05-03 16:59:00.022',50,50.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.023', 60,60), ('2022-05-03 16:59:00.023',70,70.0),('2022-05-03 16:59:00.023',80,80.0), ('2022-05-03 16:59:00.023',90,90.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.024', 100,100), ('2022-05-03 16:59:00.025',110,110.0),('2022-05-03 16:59:00.025',120,120.0), ('2022-05-03 16:59:00.025',130,130.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.030', 140,140), ('2022-05-03 16:59:00.030',150,150.0),('2022-05-03 16:59:00.031',160,160.0), ('2022-05-03 16:59:00.030',170,170.0), ('2022-05-03 16:59:00.031',180,180.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.042', 190,190), ('2022-05-03 16:59:00.041',200,200.0),('2022-05-03 16:59:00.040',210,210.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.050', 220,220), ('2022-05-03 16:59:00.051',230,230.0),('2022-05-03 16:59:00.052',240,240.0);
print =============== step 4 insert records into ct4
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.020', 10,'b0','n0');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.021', 20,'b1','n1'), ('2022-05-03 16:59:00.021',30,'b2','n2');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.022', 40,'b3','n3'), ('2022-05-03 16:59:00.022',40,'b4','n4'),('2022-05-03 16:59:00.022',50,'b5','n5');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.023', 60,'b6','n6'), ('2022-05-03 16:59:00.024',70,'b7','n7'),('2022-05-03 16:59:00.024',80,'b8','n8'), ('2022-05-03 16:59:00.023',90,'b9','n9');
print =============== step 5 query records of ct1 from memory(taosc and taosd merge)
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
if $data01 != 30 then
print data01 $data01 != 30
return -1
endi
if $data02 != 100.000000000 then
print data02 $data02 != 100.000000000
return -1
endi
if $data03 != bin2 then
print data03 $data03 != bin2
return -1
endi
if $data04 != nchar4 then
print data04 $data04 != nchar4
return -1
endi
if $data05 != 1000.010000000 then
print data05 $data05 != 1000.010000000
return -1
endi
if $data11 != 11 then
print data11 $data11 != 11
return -1
endi
if $data12 != NULL then
print data12 $data12 != NULL
return -1
endi
if $data13 != binary then
print data13 $data13 != binary
return -1
endi
if $data14 != nchar then
print data14 $data14 != nchar
return -1
endi
if $data15 != NULL then
print data15 $data15 != NULL
return -1
endi
if $data51 != 22 then
print data51 $data51 != 22
return -1
endi
if $data52 != NULL then
print data52 $data52 != NULL
return -1
endi
if $data53 != NULL then
print data53 $data53 != NULL
return -1
endi
if $data54 != nchar then
print data54 $data54 != nchar
return -1
endi
if $data55 != 220.000000000 then
print data55 $data55 != 220.000000000
return -1
endi
print =============== step 6 query records of ct2 from memory(taosc and taosd merge)
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
if $data01 != 20 then
print data01 $data01 != 20
return -1
endi
if $data02 != 20.000000000 then
print data02 $data02 != 20.000000000
return -1
endi
if $data03 != xkl then
print data03 $data03 != xkl
return -1
endi
if $data04 != zxc then
print data04 $data04 != zxc
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
print =============== step 7 query records of ct3 from memory
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 20 then
print data11 $data11 != 20
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 100 then
print data41 $data41 != 100
return -1
endi
if $data51 != 130 then
print data51 $data51 != 130
return -1
endi
if $data61 != 170 then
print data61 $data61 != 170
return -1
endi
if $data71 != 180 then
print data71 $data71 != 180
return -1
endi
if $data81 != 210 then
print data81 $data81 != 210
return -1
endi
if $data91 != 200 then
print data91 $data91 != 200
return -1
endi
if $data[10][1] != 190 then
print data[10][1] $data[10][1] != 190
return -1
endi
if $data[11][1] != 220 then
print data[11][1] $data[11][1] != 220
return -1
endi
if $data[12][1] != 230 then
print data[12][1] $data[12][1] != 230
return -1
endi
if $data[13][1] != 240 then
print data[13][1] $data[13][1] != 240
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
if $data15 != 20.000000000 then
print data15 $data15 != 20.000000000
return -1
endi
if $data25 != 50.000000000 then
print data25 $data25 != 50.000000000
return -1
endi
if $data35 != 90.000000000 then
print data35 $data35 != 90.000000000
return -1
endi
if $data45 != 100.000000000 then
print data45 $data45 != 100.000000000
return -1
endi
if $data55 != 130.000000000 then
print data55 $data55 != 130.000000000
return -1
endi
if $data65 != 170.000000000 then
print data65 $data65 != 170.000000000
return -1
endi
if $data75 != 180.000000000 then
print data75 $data75 != 180.000000000
return -1
endi
if $data85 != 210.000000000 then
print data85 $data85 != 210.000000000
return -1
endi
if $data95 != 200.000000000 then
print data95 $data95 != 200.000000000
return -1
endi
if $data[10][5] != 190.000000000 then
print data[10][5] $data[10][5] != 190.000000000
return -1
endi
if $data[11][5] != 220.000000000 then
print data[11][5] $data[11][5] != 220.000000000
return -1
endi
if $data[12][5] != 230.000000000 then
print data[12][5] $data[12][5] != 230.000000000
return -1
endi
if $data[13][5] != 240.000000000 then
print data[13][5] $data[13][5] != 240.000000000
return -1
endi
print =============== step 8 query records of ct4 from memory
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 30 then
print data11 $data11 != 30
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 80 then
print data41 $data41 != 80
return -1
endi
if $data03 != b0 then
print data03 $data03 != b0
return -1
endi
if $data13 != b2 then
print data13 $data13 != b2
return -1
endi
if $data23 != b5 then
print data23 $data23 != b5
return -1
endi
if $data33 != b9 then
print data33 $data33 != b9
return -1
endi
if $data43 != b8 then
print data43 $data43 != b8
return -1
endi
if $data04 != n0 then
print data04 $data04 != n0
return -1
endi
if $data14 != n2 then
print data14 $data14 != n2
return -1
endi
if $data24 != n5 then
print data24 $data24 != n5
return -1
endi
if $data34 != n9 then
print data34 $data34 != n9
return -1
endi
if $data44 != n8 then
print data44 $data44 != n8
return -1
endi
#==================== reboot to trigger commit data to file
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s start
print =============== step 9 query records of ct1 from file
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
if $data01 != 30 then
print data01 $data01 != 30
return -1
endi
if $data02 != 100.000000000 then
print data02 $data02 != 100.000000000
return -1
endi
if $data03 != bin2 then
print data03 $data03 != bin2
return -1
endi
if $data04 != nchar4 then
print data04 $data04 != nchar4
return -1
endi
if $data05 != 1000.010000000 then
print data05 $data05 != 1000.010000000
return -1
endi
if $data11 != 11 then
print data11 $data11 != 11
return -1
endi
if $data12 != NULL then
print data12 $data12 != NULL
return -1
endi
if $data13 != binary then
print data13 $data13 != binary
return -1
endi
if $data14 != nchar then
print data14 $data14 != nchar
return -1
endi
if $data15 != NULL then
print data15 $data15 != NULL
return -1
endi
if $data51 != 22 then
print data51 $data51 != 22
return -1
endi
if $data52 != NULL then
print data52 $data52 != NULL
return -1
endi
if $data53 != NULL then
print data53 $data53 != NULL
return -1
endi
if $data54 != nchar then
print data54 $data54 != nchar
return -1
endi
if $data55 != 220.000000000 then
print data55 $data55 != 220.000000000
return -1
endi
print =============== step 10 query records of ct2 from file
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
if $data01 != 20 then
print data01 $data01 != 20
return -1
endi
if $data02 != 20.000000000 then
print data02 $data02 != 20.000000000
return -1
endi
if $data03 != xkl then
print data03 $data03 != xkl
return -1
endi
if $data04 != zxc then
print data04 $data04 != zxc
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
print =============== step 11 query records of ct3 from file
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 20 then
print data11 $data11 != 20
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 100 then
print data41 $data41 != 100
return -1
endi
if $data51 != 130 then
print data51 $data51 != 130
return -1
endi
if $data61 != 170 then
print data61 $data61 != 170
return -1
endi
if $data71 != 180 then
print data71 $data71 != 180
return -1
endi
if $data81 != 210 then
print data81 $data81 != 210
return -1
endi
if $data91 != 200 then
print data91 $data91 != 200
return -1
endi
if $data[10][1] != 190 then
print data[10][1] $data[10][1] != 190
return -1
endi
if $data[11][1] != 220 then
print data[11][1] $data[11][1] != 220
return -1
endi
if $data[12][1] != 230 then
print data[12][1] $data[12][1] != 230
return -1
endi
if $data[13][1] != 240 then
print data[13][1] $data[13][1] != 240
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
if $data15 != 20.000000000 then
print data15 $data15 != 20.000000000
return -1
endi
if $data25 != 50.000000000 then
print data25 $data25 != 50.000000000
return -1
endi
if $data35 != 90.000000000 then
print data35 $data35 != 90.000000000
return -1
endi
if $data45 != 100.000000000 then
print data45 $data45 != 100.000000000
return -1
endi
if $data55 != 130.000000000 then
print data55 $data55 != 130.000000000
return -1
endi
if $data65 != 170.000000000 then
print data65 $data65 != 170.000000000
return -1
endi
if $data75 != 180.000000000 then
print data75 $data75 != 180.000000000
return -1
endi
if $data85 != 210.000000000 then
print data85 $data85 != 210.000000000
return -1
endi
if $data95 != 200.000000000 then
print data95 $data95 != 200.000000000
return -1
endi
if $data[10][5] != 190.000000000 then
print data[10][5] $data[10][5] != 190.000000000
return -1
endi
if $data[11][5] != 220.000000000 then
print data[11][5] $data[11][5] != 220.000000000
return -1
endi
if $data[12][5] != 230.000000000 then
print data[12][5] $data[12][5] != 230.000000000
return -1
endi
if $data[13][5] != 240.000000000 then
print data[13][5] $data[13][5] != 240.000000000
return -1
endi
print =============== step 12 query records of ct4 from file
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 30 then
print data11 $data11 != 30
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 80 then
print data41 $data41 != 80
return -1
endi
if $data03 != b0 then
print data03 $data03 != b0
return -1
endi
if $data13 != b2 then
print data13 $data13 != b2
return -1
endi
if $data23 != b5 then
print data23 $data23 != b5
return -1
endi
if $data33 != b9 then
print data33 $data33 != b9
return -1
endi
if $data43 != b8 then
print data43 $data43 != b8
return -1
endi
if $data04 != n0 then
print data04 $data04 != n0
return -1
endi
if $data14 != n2 then
print data14 $data14 != n2
return -1
endi
if $data24 != n5 then
print data24 $data24 != n5
return -1
endi
if $data34 != n9 then
print data34 $data34 != n9
return -1
endi
if $data44 != n8 then
print data44 $data44 != n8
return -1
endi
\ No newline at end of file
tests/script/tsim/stream/basic1.sim (view file @ de9b4358)

@@ -391,13 +391,13 @@ if $data02 != 4 then
   return -1
 endi
-if $data03 != 14 then
-  print ======$data03
+if $data03 != 50 then
+  print ======$data03 != 50
   return -1
 endi
-if $data04 != 4 then
-  print ======$data04
+if $data04 != 20 then
+  print ======$data04 != 20
   return -1
 endi
...
@@ -421,13 +421,13 @@ if $data12 != 4 then
   return -1
 endi
-if $data13 != 10 then
-  print ======$data13
+if $data13 != 46 then
+  print ======$data13 != 46
   return -1
 endi
-if $data14 != 3 then
-  print ======$data14
+if $data14 != 20 then
+  print ======$data14 != 20
   return -1
 endi
...
tests/system-test/2-query/Now.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_apercentile.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_avg.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_count.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_max.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_min.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_spread.py (view file @ de9b4358)
tests/system-test/2-query/distribute_agg_sum.py (view file @ de9b4358)
tests/system-test/2-query/irate.py (view file @ de9b4358)
tests/system-test/2-query/log.py (view file @ de9b4358)
tests/system-test/2-query/query_cols_tags_and_or.py (view file @ de9b4358)

tests/system-test/7-tmq/TD-17699.py (new file, 0 → 100644, view file @ de9b4358)
import sys
import time
import socket
import os
import threading
import taos
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
sys.path.append("./7-tmq")
from tmqCommon import *

class TDTestCase:
    paraDict = {'dbName':     'db1',
                'dropFlag':   1,
                'event':      '',
                'vgroups':    2,
                'stbName':    'stb0',
                'colPrefix':  'c',
                'tagPrefix':  't',
                'colSchema':  [{'type': 'INT', 'count': 2}, {'type': 'binary', 'len': 16, 'count': 1}, {'type': 'timestamp', 'count': 1}],
                'tagSchema':  [{'type': 'INT', 'count': 1}, {'type': 'binary', 'len': 20, 'count': 1}],
                'ctbPrefix':  'ctb',
                'ctbStartIdx': 0,
                'ctbNum':     100,
                'rowsPerTbl': 1000,
                'batchNum':   1000,
                'startTs':    1640966400000,  # 2022-01-01 00:00:00.000
                'pollDelay':  20,
                'showMsg':    1,
                'showRow':    1}

    cdbName = 'cdb'
    # some parameter to consumer processor
    consumerId = 0
    expectrowcnt = 0
    topicList = ''
    ifcheckdata = 0
    ifManualCommit = 1
    groupId = 'group.id:cgrp1'
    autoCommit = 'enable.auto.commit:false'
    autoCommitInterval = 'auto.commit.interval.ms:1000'
    autoOffset = 'auto.offset.reset:earliest'

    pollDelay = 20
    showMsg = 1
    showRow = 1

    hostname = socket.gethostname()

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        logSql = False
        tdSql.init(conn.cursor(), logSql)

    def tmqCase1(self):
        tdLog.printNoPrefix("======== test case 1: ")
        tdLog.info("step 1: create database, stb, ctb and insert data")

        tmqCom.initConsumerTable(self.cdbName)

        tdCom.create_database(tdSql, self.paraDict["dbName"], self.paraDict["dropFlag"])

        self.paraDict["stbName"] = 'stb1'
        tdCom.create_stable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], column_elm_list=self.paraDict["colSchema"], tag_elm_list=self.paraDict["tagSchema"], count=1, default_stbname_prefix=self.paraDict["stbName"])
        tdCom.create_ctable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], tag_elm_list=self.paraDict['tagSchema'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])
        tmqCom.insert_data_2(tdSql, self.paraDict["dbName"], self.paraDict["ctbPrefix"], self.paraDict["ctbNum"], self.paraDict["rowsPerTbl"], self.paraDict["batchNum"], self.paraDict["startTs"], self.paraDict["ctbStartIdx"])
        # pThread1 = tmqCom.asyncInsertData(paraDict=self.paraDict)

        self.paraDict["stbName"] = 'stb2'
        self.paraDict["ctbPrefix"] = 'newctb'
        self.paraDict["batchNum"] = 1000
        tdCom.create_stable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], column_elm_list=self.paraDict["colSchema"], tag_elm_list=self.paraDict["tagSchema"], count=1, default_stbname_prefix=self.paraDict["stbName"])
        tdCom.create_ctable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], tag_elm_list=self.paraDict['tagSchema'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])
        # tmqCom.insert_data_2(tdSql,self.paraDict["dbName"],self.paraDict["ctbPrefix"],self.paraDict["ctbNum"],self.paraDict["rowsPerTbl"],self.paraDict["batchNum"],self.paraDict["startTs"],self.paraDict["ctbStartIdx"])
        pThread2 = tmqCom.asyncInsertData(paraDict=self.paraDict)

        tdLog.info("create topics from db")
        topicName1 = 'UpperCasetopic_%s' % (self.paraDict['dbName'])
        tdSql.execute("create topic %s as database %s" % (topicName1, self.paraDict['dbName']))

        topicList = topicName1 + ',' + topicName1
        keyList = '%s,%s,%s,%s' % (self.groupId, self.autoCommit, self.autoCommitInterval, self.autoOffset)
        self.expectrowcnt = self.paraDict["rowsPerTbl"] * self.paraDict["ctbNum"] * 2
        tmqCom.insertConsumerInfo(self.consumerId, self.expectrowcnt, topicList, keyList, self.ifcheckdata, self.ifManualCommit)

        tdLog.info("start consume processor")
        tmqCom.startTmqSimProcess(self.pollDelay, self.paraDict["dbName"], self.showMsg, self.showRow, self.cdbName)

        tmqCom.getStartConsumeNotifyFromTmqsim()
        tdLog.info("drop one stable")
        self.paraDict["stbName"] = 'stb1'
        tdSql.execute("drop table %s.%s" % (self.paraDict['dbName'], self.paraDict['stbName']))
        tmqCom.drop_ctable(tdSql, dbname=self.paraDict['dbName'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])

        # pThread2.join()

        tdLog.info("wait result from consumer, then check it")
        expectRows = 1
        resultList = tmqCom.selectConsumeResult(expectRows)

        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        if not (totalConsumeRows >= self.expectrowcnt / 2 and totalConsumeRows <= self.expectrowcnt):
            tdLog.info("act consume rows: %d, expect consume rows: between %d and %d" % (totalConsumeRows, self.expectrowcnt / 2, self.expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        time.sleep(10)
        tdSql.query("drop topic %s" % topicName1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")

    def run(self):
        tdSql.prepare()
        self.tmqCase1()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
tools/shell/src/shellEngine.c (view file @ de9b4358)

@@ -685,7 +685,7 @@ int32_t shellHorizontalPrintResult(TAOS_RES *tres, const char *sql) {
   uint64_t resShowMaxNum = UINT64_MAX;
-  if (shell.args.commands == NULL && shell.args.file[0] == 0 && !shellIsLimitQuery(sql) && !shellIsShowQuery(sql)) {
+  if (shell.args.commands == NULL && shell.args.file[0] == 0 && !shellIsLimitQuery(sql)) {
     resShowMaxNum = SHELL_DEFAULT_RES_SHOW_NUM;
   }
...
@@ -706,8 +706,12 @@ int32_t shellHorizontalPrintResult(TAOS_RES *tres, const char *sql) {
   } else if (showMore) {
     printf("\r\n");
     printf(" Notice: The result shows only the first %d rows.\r\n", SHELL_DEFAULT_RES_SHOW_NUM);
+    if (shellIsShowQuery(sql)) {
+      printf(" You can use '>>' to redirect the whole set of the result to a specified file.\r\n");
+    } else {
     printf(" You can use the `LIMIT` clause to get fewer result to show.\r\n");
     printf(" Or use '>>' to redirect the whole set of the result to a specified file.\r\n");
+    }
     printf("\r\n");
     printf(" You can use Ctrl+C to stop the underway fetching.\r\n");
     printf("\r\n");
...
taos-tools @ 0b8a3373 (compare 69b558cc ... 0b8a3373)
-Subproject commit 69b558ccbfe54a4407fe23eeae2e67c540f59e55
+Subproject commit 0b8a3373bb7548f8106d13e7d3b0a988d3c4d48a

taosws-rs @ c5fded26 (compare 267a96fb ... c5fded26)
-Subproject commit 267a96fb09fc2ba14acfa47f7d3678def64c29c5
+Subproject commit c5fded266d3b10508e38bf3285bb7ecf798bc343