taosdata / TDengine
Commit a3d4dce3
Authored on Aug 08, 2022 by dapan1121

Merge branch '3.0' into fix/TD-18076

Parents: ea4904fb, 87c1656f

Showing 62 changed files with 959 additions and 663 deletions (+959 −663)
Changed files:

docs/examples/go/query/sync/main.go  +1 -1
docs/examples/python/native_insert_example.py  +8 -8
docs/zh/05-get-started/03-package.md  +56 -48
docs/zh/17-operation/01-pkg-install.md  +2 -5
docs/zh/27-train-faq/01-faq.md  +200 -241
examples/rust  +1 -0
include/common/tdatablock.h  +1 -1
include/common/tmsg.h  +0 -3
include/common/tmsgdef.h  +1 -0
include/common/ttypes.h  +0 -2
include/libs/function/function.h  +1 -1
include/libs/nodes/plannodes.h  +2 -0
include/libs/stream/tstream.h  +32 -3
include/util/taoserror.h  +2 -0
source/client/src/clientEnv.c  +1 -1
source/client/src/clientMain.c  +7 -2
source/common/src/tdatablock.c  +7 -9
source/dnode/mgmt/mgmt_vnode/src/vmHandle.c  +1 -0
source/dnode/mgmt/node_mgmt/src/dmTransport.c  +2 -1
source/dnode/vnode/src/inc/vnodeInt.h  +1 -0
source/dnode/vnode/src/meta/metaQuery.c  +1 -1
source/dnode/vnode/src/sma/smaRollup.c  +146 -56
source/dnode/vnode/src/tq/tq.c  +2 -0
source/dnode/vnode/src/vnd/vnodeQuery.c  +1 -1
source/dnode/vnode/src/vnd/vnodeSvr.c  +2 -0
source/dnode/vnode/src/vnd/vnodeSync.c  +9 -1
source/libs/catalog/src/ctgAsync.c  +1 -1
source/libs/command/inc/commandInt.h  +1 -0
source/libs/command/src/explain.c  +5 -0
source/libs/executor/src/joinoperator.c  +15 -9
source/libs/function/src/builtinsimpl.c  +8 -3
source/libs/nodes/src/nodesCloneFuncs.c  +1 -0
source/libs/nodes/src/nodesCodeFuncs.c  +14 -0
source/libs/nodes/src/nodesUtilFuncs.c  +3 -0
source/libs/parser/src/parAstParser.c  +7 -0
source/libs/parser/test/mockCatalog.cpp  +6 -0
source/libs/parser/test/parShowToUse.cpp  +9 -0
source/libs/planner/src/planLogicCreater.c  +2 -0
source/libs/planner/src/planPhysiCreater.c  +1 -0
source/libs/stream/src/stream.c  +1 -0
source/libs/stream/src/streamDispatch.c  +6 -1
source/libs/stream/src/streamExec.c  +6 -2
source/libs/stream/src/streamRecover.c  +93 -31
source/libs/stream/src/streamTask.c  +2 -2
source/libs/sync/inc/syncInt.h  +2 -0
source/libs/sync/src/syncMain.c  +14 -3
source/libs/sync/src/syncReplication.c  +23 -2
source/libs/transport/inc/transComm.h  +2 -2
source/libs/transport/src/transComm.c  +3 -4
source/util/src/terror.c  +2 -0
tests/docs-examples-test/go.sh  +1 -1
tests/docs-examples-test/python.sh  +47 -0
tests/parallel_test/collect_cases.sh  +1 -1
tests/parallel_test/run_case.sh  +2 -0
tests/script/tsim/sma/rsmaCreateInsertQuery.sim  +2 -2
tests/script/tsim/sma/rsmaPersistenceRecovery.sim  +4 -4
tests/system-test/1-insert/create_retentions.py  +1 -1
tests/system-test/2-query/distribute_agg_spread.py  +66 -73
tests/system-test/2-query/distribute_agg_stddev.py  +55 -67
tests/system-test/2-query/distribute_agg_sum.py  +56 -63
tests/system-test/fulltest.sh  +9 -6
tools/taos-tools  +1 -0
docs/examples/go/query/sync/main.go

@@ -31,6 +31,6 @@ func main() {
 			log.Fatalln("scan error:\n", err)
 			return
 		}
-		log.Fatalln(r.ts, r.current)
+		log.Println(r.ts, r.current)
 	}
 }
docs/examples/python/native_insert_example.py

 import taos

-lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
-         "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
-         "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
-         "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3",
-         "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
-         "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
-         "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
-         "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2"]
+lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,'California.SanFrancisco',2",
+         "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,'California.LosAngeles',3",
+         "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,'California.LosAngeles',2",
+         "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,'California.LosAngeles',3",
+         "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,'California.SanFrancisco',3",
+         "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,'California.SanFrancisco',2",
+         "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,'California.SanFrancisco',2",
+         "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,'California.LosAngeles',2"]

 def get_connection() -> taos.TaosConnection:
 ...
docs/zh/05-get-started/03-package.md

@@ -46,19 +46,19 @@ (the apt-get method applies only to Debian or Ubuntu systems)
 </TabItem>
 <TabItem label="Deb 安装" value="debinst">

-1. Download the deb package from the official site, e.g. TDengine-server-2.4.0.7-Linux-x64.deb;
-2. Go to the directory containing TDengine-server-2.4.0.7-Linux-x64.deb and run the following install command:
+1. Download the deb package from the official site, e.g. TDengine-server-3.0.0.10002-Linux-x64.deb;
+2. Go to the directory containing TDengine-server-3.0.0.10002-Linux-x64.deb and run the following install command:

 ```
-$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
-(Reading database ... 137504 files and directories currently installed.)
-Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
-TDengine is removed successfully!
-Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
-Setting up tdengine (2.4.0.7) ...
+$ sudo dpkg -i TDengine-server-3.0.0.10002-Linux-x64.deb
+Selecting previously unselected package tdengine.
+(Reading database ... 119653 files and directories currently installed.)
+Preparing to unpack TDengine-server-3.0.0.10002-Linux-x64.deb ...
+Unpacking tdengine (3.0.0.10002) ...
+Setting up tdengine (3.0.0.10002) ...
 Start to install TDengine...

-System hostname is: ubuntu-1804
+System hostname is: v3cluster-0002

 Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
 OR leave it blank to build one:

@@ -68,92 +68,100 @@ Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /e
 To configure TDengine : edit /etc/taos/taos.cfg
 To start TDengine     : sudo systemctl start taosd
-To access TDengine    : taos -h ubuntu-1804 to login into TDengine server
+To access TDengine    : taos -h v3cluster-0002 to login into TDengine server

 TDengine is installed successfully!
 ```

 </TabItem>
 <TabItem label="RPM 安装" value="rpminst">

-1. Download the rpm package from the official site, e.g. TDengine-server-2.4.0.7-Linux-x64.rpm;
-2. Go to the directory containing TDengine-server-2.4.0.7-Linux-x64.rpm and run the following install command:
+1. Download the rpm package from the official site, e.g. TDengine-server-3.0.0.10002-Linux-x64.rpm;
+2. Go to the directory containing TDengine-server-3.0.0.10002-Linux-x64.rpm and run the following install command:

 ```
-$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
+$ sudo rpm -ivh TDengine-server-3.0.0.10002-Linux-x64.rpm
 Preparing...                          ################################# [100%]
 Stop taosd service success!
 Updating / installing...
-   1:tdengine-2.4.0.7-3               ################################# [100%]
+   1:tdengine-3.0.0.10002-3           ################################# [100%]
 Start to install TDengine...

-System hostname is: centos7
+System hostname is: chenhaoran01

 Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
 OR leave it blank to build one:

 Enter your email address for priority support or enter empty to skip:

 Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.

 To configure TDengine : edit /etc/taos/taos.cfg
 To start TDengine     : sudo systemctl start taosd
-To access TDengine    : taos -h centos7 to login into TDengine server
+To access TDengine    : taos -h chenhaoran01 to login into TDengine server

 TDengine is installed successfully!
 ```

 </TabItem>
 <TabItem label="tar.gz 安装" value="tarinst">

-1. Download the tar.gz package from the official site, e.g. TDengine-server-2.4.0.7-Linux-x64.tar.gz;
-2. Go to the directory containing TDengine-server-2.4.0.7-Linux-x64.tar.gz, extract the archive, enter the sub-directory, and run the install.sh script inside:
+1. Download the tar.gz package from the official site, e.g. TDengine-server-3.0.0.10002-Linux-x64.tar.gz;
+2. Go to the directory containing TDengine-server-3.0.0.10002-Linux-x64.tar.gz, extract the archive, enter the sub-directory, and run the install.sh script inside:

 ```
-$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-TDengine-enterprise-server-2.4.0.7/
-TDengine-enterprise-server-2.4.0.7/driver/
-TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
-TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
-TDengine-enterprise-server-2.4.0.7/install.sh
-TDengine-enterprise-server-2.4.0.7/examples/
+$ tar -zxvf TDengine-server-3.0.0.10002-Linux-x64.tar.gz
+TDengine-server-3.0.0.10002/
+TDengine-server-3.0.0.10002/driver/
+TDengine-server-3.0.0.10002/driver/libtaos.so.3.0.0.10002
+TDengine-server-3.0.0.10002/driver/vercomp.txt
+TDengine-server-3.0.0.10002/release_note
+TDengine-server-3.0.0.10002/taos.tar.gz
+TDengine-server-3.0.0.10002/install.sh
 ...

 $ ll
-total 43816
-drwxrwxr-x  3 ubuntu ubuntu     4096 Feb 22 09:31 ./
-drwxr-xr-x 20 ubuntu ubuntu     4096 Feb 22 09:30 ../
-drwxrwxr-x  4 ubuntu ubuntu     4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
--rw-rw-r--  1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
+total 56832
+drwxr-xr-x  3 root root     4096 Aug  8 10:29 ./
+drwxrwxrwx  6 root root     4096 Aug  5 16:45 ../
+drwxr-xr-x  4 root root     4096 Aug  4 18:03 TDengine-server-3.0.0.10002/
+-rwxr-xr-x  1 root root 58183066 Aug  8 10:28 TDengine-server-3.0.0.10002-Linux-x64.tar.gz*

-$ cd TDengine-enterprise-server-2.4.0.7/
+$ cd TDengine-server-3.0.0.10002/

 $ ll
-total 40784
-drwxrwxr-x  4 ubuntu ubuntu     4096 Feb 22 09:30 ./
-drwxrwxr-x  3 ubuntu ubuntu     4096 Feb 22 09:31 ../
-drwxrwxr-x  2 ubuntu ubuntu     4096 Feb 22 09:30 driver/
-drwxrwxr-x 10 ubuntu ubuntu     4096 Feb 22 09:30 examples/
--rwxrwxr-x  1 ubuntu ubuntu    33294 Feb 22 09:30 install.sh*
--rw-rw-r--  1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
+total 51612
+drwxr-xr-x  4 root root     4096 Aug  4 18:03 ./
+drwxr-xr-x  3 root root     4096 Aug  8 10:29 ../
+drwxr-xr-x  2 root root     4096 Aug  4 18:03 driver/
+drwxr-xr-x 11 root root     4096 Aug  4 18:03 examples/
+-rwxr-xr-x  1 root root    30980 Aug  4 18:03 install.sh*
+-rw-r--r--  1 root root     6724 Aug  4 18:03 release_note
+-rw-r--r--  1 root root 52793079 Aug  4 18:03 taos.tar.gz

 $ sudo ./install.sh
-Start to update TDengine...
+Start to install TDengine...
 Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
 Nginx for TDengine is updated successfully!
 System hostname is: v3cluster-0002

 Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
 OR leave it blank to build one:

 Enter your email address for priority support or enter empty to skip:

 To configure TDengine : edit /etc/taos/taos.cfg
-To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
+To configure taosadapter (if has) : edit /etc/taos/taosadapter.toml
 To start TDengine     : sudo systemctl start taosd
-To access TDengine    : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
+To access TDengine    : taos -h v3cluster-0002 to login into TDengine server
 TDengine is updated successfully!
 Install taoskeeper as a standalone service
 taoskeeper is installed, enable it by `systemctl enable taoskeeper`
 TDengine is installed successfully!
 ```

 :::info
 ...
docs/zh/17-operation/01-pkg-install.md

@@ -56,8 +56,8 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
 ```
 $ sudo dpkg -r tdengine
-(Reading database ... 137504 files and directories currently installed.)
-Removing tdengine (2.4.0.7) ...
+(Reading database ... 120119 files and directories currently installed.)
+Removing tdengine (3.0.0.10002) ...
 TDengine is removed successfully!
 ```

@@ -81,10 +81,7 @@ TDengine is removed successfully!
 ```
 $ rmtaos
 Nginx for TDengine is running, stopping it...
 TDengine is removed successfully!
 taosKeeper is removed successfully!
 ```

 </TabItem>
 ...
docs/zh/27-train-faq/01-faq.md

(This diff is collapsed and not shown.)
examples/rust @ 7ed7a977

+Subproject commit 7ed7a97715388fa144718764d6bf20f9bfc29a12
include/common/tdatablock.h

@@ -246,7 +246,7 @@ void blockDebugShowDataBlocks(const SArray* dataBlocks, const char* flag);
 // for debug
 char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** dumpBuf);

-int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
+int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SSDataBlock* pDataBlocks, STSchema* pTSchema, int32_t vgId,
                                     tb_uid_t suid);

 char* buildCtbNameByGroupId(const char* stbName, uint64_t groupId);
 ...
include/common/tmsg.h

@@ -2658,7 +2658,6 @@ typedef struct {
 } SVgEpSet;

 typedef struct {
-  int64_t refId;
   int64_t suid;
   int8_t  level;
 } SRSmaFetchMsg;

@@ -2666,7 +2665,6 @@
 static FORCE_INLINE int32_t tEncodeSRSmaFetchMsg(SEncoder* pCoder, const SRSmaFetchMsg* pReq) {
   if (tStartEncode(pCoder) < 0) return -1;

-  if (tEncodeI64(pCoder, pReq->refId) < 0) return -1;
   if (tEncodeI64(pCoder, pReq->suid) < 0) return -1;
   if (tEncodeI8(pCoder, pReq->level) < 0) return -1;

@@ -2677,7 +2675,6 @@ static FORCE_INLINE int32_t tEncodeSRSmaFetchMsg(SEncoder* pCoder, const SRSmaFe
 static FORCE_INLINE int32_t tDecodeSRSmaFetchMsg(SDecoder* pCoder, SRSmaFetchMsg* pReq) {
   if (tStartDecode(pCoder) < 0) return -1;

-  if (tDecodeI64(pCoder, &pReq->refId) < 0) return -1;
   if (tDecodeI64(pCoder, &pReq->suid) < 0) return -1;
   if (tDecodeI8(pCoder, &pReq->level) < 0) return -1;
 ...
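For orientation, here is a minimal, hedged sketch of how the slimmed-down SRSmaFetchMsg (now carrying only suid and level) can be round-tripped with the SEncoder/SDecoder helpers that appear elsewhere in this commit (tEncodeSize, tEncoderInit, tDecoderInit). The helper name rsmaFetchMsgRoundTrip is hypothetical and not part of the commit.

```
// Hypothetical helper (not in this commit): encode an SRSmaFetchMsg into a
// heap buffer and decode it back, following the same pattern that
// tdRSmaFetchSend() and smaProcessFetch() use later in this diff.
static int32_t rsmaFetchMsgRoundTrip(int64_t suid, int8_t level) {
  SRSmaFetchMsg msg = {.suid = suid, .level = level};

  int32_t contLen = 0;
  int32_t ret = 0;
  tEncodeSize(tEncodeSRSmaFetchMsg, &msg, contLen, ret);  // compute the encoded size
  if (ret < 0) return -1;

  void *buf = taosMemoryCalloc(1, contLen);
  if (buf == NULL) return -1;

  SEncoder encoder = {0};
  tEncoderInit(&encoder, buf, contLen);
  if (tEncodeSRSmaFetchMsg(&encoder, &msg) < 0) {
    tEncoderClear(&encoder);
    taosMemoryFree(buf);
    return -1;
  }
  tEncoderClear(&encoder);

  SRSmaFetchMsg out = {0};
  SDecoder      decoder = {0};
  tDecoderInit(&decoder, buf, contLen);
  if (tDecodeSRSmaFetchMsg(&decoder, &out) < 0) {
    tDecoderClear(&decoder);
    taosMemoryFree(buf);
    return -1;
  }
  tDecoderClear(&decoder);
  taosMemoryFree(buf);

  // the decoded message should match what was encoded
  return (out.suid == suid && out.level == level) ? 0 : -1;
}
```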
include/common/tmsgdef.h

@@ -200,6 +200,7 @@ enum {
   TD_DEF_MSG_TYPE(TDMT_VND_CANCEL_SMA, "vnode-cancel-sma", NULL, NULL)
   TD_DEF_MSG_TYPE(TDMT_VND_DROP_SMA, "vnode-drop-sma", NULL, NULL)
   TD_DEF_MSG_TYPE(TDMT_VND_SUBMIT_RSMA, "vnode-submit-rsma", SSubmitReq, SSubmitRsp)
+  TD_DEF_MSG_TYPE(TDMT_VND_FETCH_RSMA, "vnode-fetch-rsma", SRSmaFetchMsg, NULL)
   TD_DEF_MSG_TYPE(TDMT_VND_DELETE, "delete-data", SVDeleteReq, SVDeleteRsp)
   TD_DEF_MSG_TYPE(TDMT_VND_ALTER_CONFIG, "alter-config", NULL, NULL)
   TD_DEF_MSG_TYPE(TDMT_VND_ALTER_REPLICA, "alter-replica", NULL, NULL)
 ...
include/common/ttypes.h

@@ -354,8 +354,6 @@ void operateVal(void *dst, void *s1, void *s2, int32_t optr, int32_t type);
 void *getDataMin(int32_t type);
 void *getDataMax(int32_t type);

-#define SET_DOUBLE_NULL(v) (*(uint64_t *)(v) = TSDB_DATA_DOUBLE_NULL)
-#define SET_BIGINT_NULL(v) (*(uint64_t *)(v) = TSDB_DATA_BIGINT_NULL)

 #ifdef __cplusplus
 }
 ...
include/libs/function/function.h

@@ -67,7 +67,7 @@ typedef struct SResultRowEntryInfo {
   bool     initialized : 1;  // output buffer has been initialized
   bool     complete : 1;     // query has completed
   uint8_t  isNullRes : 6;    // the result is null
-  uint16_t numOfRes;         // num of output result in current buffer
+  uint16_t numOfRes;         // num of output result in current buffer. NOT NULL RESULT
 } SResultRowEntryInfo;

 // determine the real data need to calculated the result
 ...
include/libs/nodes/plannodes.h

@@ -121,6 +121,7 @@ typedef struct SProjectLogicNode {
   SLogicNode node;
   SNodeList* pProjections;
   char       stmtName[TSDB_TABLE_NAME_LEN];
+  bool       ignoreGroupId;
 } SProjectLogicNode;

 typedef struct SIndefRowsFuncLogicNode {

@@ -344,6 +345,7 @@ typedef struct SProjectPhysiNode {
   SPhysiNode node;
   SNodeList* pProjections;
   bool       mergeDataBlock;
+  bool       ignoreGroupId;
 } SProjectPhysiNode;

 typedef struct SIndefRowsFuncPhysiNode {
 ...
include/libs/stream/tstream.h

@@ -226,11 +226,36 @@ typedef struct {
   int32_t nodeId;
   int32_t childId;
   int32_t taskId;
-  int64_t checkpointVer;
-  int64_t processedVer;
+  // int64_t checkpointVer;
+  // int64_t processedVer;
   SEpSet  epSet;
 } SStreamChildEpInfo;

+typedef struct {
+  int32_t nodeId;
+  int32_t childId;
+  int64_t stateSaveVer;
+  int64_t stateProcessedVer;
+} SStreamCheckpointInfo;
+
+typedef struct {
+  int64_t streamId;
+  int64_t checkTs;
+  int32_t checkpointId;  // incremental
+  int32_t taskId;
+  SArray* checkpointVer;  // SArray<SStreamCheckpointInfo>
+} SStreamMultiVgCheckpointInfo;
+
+typedef struct {
+  int32_t taskId;
+  int32_t checkpointId;  // incremental
+} SStreamCheckpointKey;
+
+typedef struct {
+  int32_t taskId;
+  SArray* checkpointVer;
+} SStreamRecoveringState;
+
 typedef struct SStreamTask {
   int64_t streamId;
   int32_t taskId;

@@ -256,6 +281,8 @@ typedef struct SStreamTask {
   // children info
   SArray* childEpInfo;  // SArray<SStreamChildEpInfo*>
+  int32_t nextCheckId;
+  SArray* checkpointInfo;  // SArray<SStreamCheckpointInfo>

   // exec
   STaskExec exec;

@@ -445,6 +472,7 @@ typedef struct {
 int32_t tDecodeStreamDispatchReq(SDecoder* pDecoder, SStreamDispatchReq* pReq);
 int32_t tDecodeStreamRetrieveReq(SDecoder* pDecoder, SStreamRetrieveReq* pReq);
+void    tFreeStreamDispatchReq(SStreamDispatchReq* pReq);

 int32_t streamSetupTrigger(SStreamTask* pTask);

@@ -468,6 +496,7 @@ typedef struct SStreamMeta {
   TTB*      pTaskDb;
   TTB*      pStateDb;
   SHashObj* pTasks;
+  SHashObj* pRecoveringState;
   void*     ahandle;
   TXN       txn;
   FTaskExpand* expandFunc;
 ...
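To make the relationship between the new structs concrete, here is a minimal sketch (the helper name buildMultiVgCheckpoint is hypothetical, not part of this commit) of how a sink task's per-child SStreamCheckpointInfo array is wrapped into an SStreamMultiVgCheckpointInfo, mirroring what streamCheckSinkLevel() does in streamRecover.c later in this diff.

```
// Hypothetical helper (not in this commit): wrap a sink task's per-child
// checkpoint array into the multi-vgroup checkpoint record that gets
// persisted, mirroring streamCheckSinkLevel() in streamRecover.c.
static void buildMultiVgCheckpoint(SStreamTask* pTask, SStreamMultiVgCheckpointInfo* pOut) {
  pOut->streamId = pTask->streamId;
  pOut->taskId = pTask->taskId;
  pOut->checkTs = taosGetTimestampMs();         // when this checkpoint was taken
  pOut->checkpointId = 0;                       // streamCheckSinkLevel() starts from 0
  pOut->checkpointVer = pTask->checkpointInfo;  // SArray<SStreamCheckpointInfo>, one entry per upstream child
}
```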
include/util/taoserror.h

@@ -610,6 +610,8 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_RSMA_QTASKINFO_CREATE   TAOS_DEF_ERROR_CODE(0, 0x3152)
 #define TSDB_CODE_RSMA_FILE_CORRUPTED     TAOS_DEF_ERROR_CODE(0, 0x3153)
 #define TSDB_CODE_RSMA_REMOVE_EXISTS      TAOS_DEF_ERROR_CODE(0, 0x3154)
+#define TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP TAOS_DEF_ERROR_CODE(0, 0x3155)
+#define TSDB_CODE_RSMA_EMPTY_INFO         TAOS_DEF_ERROR_CODE(0, 0x3156)

 //index
 #define TSDB_CODE_INDEX_REBUILDING        TAOS_DEF_ERROR_CODE(0, 0x3200)
 ...
source/client/src/clientEnv.c

@@ -126,7 +126,7 @@ void *openTransporter(const char *user, const char *auth, int32_t numOfThread) {
   rpcInit.numOfThreads = numOfThread;
   rpcInit.cfp = processMsgFromServer;
   rpcInit.rfp = clientRpcRfp;
-  rpcInit.tfp = clientRpcTfp;
+  // rpcInit.tfp = clientRpcTfp;
   rpcInit.sessions = 1024;
   rpcInit.connType = TAOS_CONN_CLIENT;
   rpcInit.user = (char *)user;
 ...
source/client/src/clientMain.c

@@ -658,12 +658,17 @@ typedef struct SqlParseWrapper {
   SQuery*      pQuery;
 } SqlParseWrapper;

+static void destoryTablesReq(void *p) {
+  STablesReq *pRes = (STablesReq *)p;
+  taosArrayDestroy(pRes->pTables);
+}
+
 static void destorySqlParseWrapper(SqlParseWrapper *pWrapper) {
   taosArrayDestroy(pWrapper->catalogReq.pDbVgroup);
   taosArrayDestroy(pWrapper->catalogReq.pDbCfg);
   taosArrayDestroy(pWrapper->catalogReq.pDbInfo);
-  taosArrayDestroy(pWrapper->catalogReq.pTableMeta);
-  taosArrayDestroy(pWrapper->catalogReq.pTableHash);
+  taosArrayDestroyEx(pWrapper->catalogReq.pTableMeta, destoryTablesReq);
+  taosArrayDestroyEx(pWrapper->catalogReq.pTableHash, destoryTablesReq);
   taosArrayDestroy(pWrapper->catalogReq.pUdf);
   taosArrayDestroy(pWrapper->catalogReq.pIndex);
   taosArrayDestroy(pWrapper->catalogReq.pUser);
 ...
source/common/src/tdatablock.c

@@ -1874,21 +1874,20 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
 * @brief TODO: Assume that the final generated result it less than 3M
 *
 * @param pReq
- * @param pDataBlocks
+ * @param pDataBlock
 * @param vgId
- * @param suid  // TODO: check with Liao whether suid response is reasonable
+ * @param suid
 *
 * TODO: colId should be set
 */
-int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
+int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SSDataBlock* pDataBlock, STSchema* pTSchema, int32_t vgId,
                                     tb_uid_t suid) {
-  int32_t sz = taosArrayGetSize(pDataBlocks);
   int32_t bufSize = sizeof(SSubmitReq);
+  int32_t sz = 1;
   for (int32_t i = 0; i < sz; ++i) {
-    SDataBlockInfo* pBlkInfo = &((SSDataBlock*)taosArrayGet(pDataBlocks, i))->info;
-
-    int32_t numOfCols = taosArrayGetSize(pDataBlocks);
-    bufSize += pBlkInfo->rows * (TD_ROW_HEAD_LEN + pBlkInfo->rowSize + BitmapLen(numOfCols));
+    const SDataBlockInfo* pBlkInfo = &pDataBlock->info;
+
+    int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
+    bufSize += pBlkInfo->rows * (TD_ROW_HEAD_LEN + pBlkInfo->rowSize + BitmapLen(colNum));
     bufSize += sizeof(SSubmitBlk);
   }

@@ -1905,7 +1904,6 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
   tdSRowInit(&rb, pTSchema->version);

   for (int32_t i = 0; i < sz; ++i) {
-    SSDataBlock* pDataBlock = taosArrayGet(pDataBlocks, i);
     int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
     int32_t rows = pDataBlock->info.rows;
     // int32_t rowSize = pDataBlock->info.rowSize;
 ...
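To make the signature change concrete, here is a minimal sketch of the new call shape: after this commit buildSubmitReqFromDataBlock() takes a single SSDataBlock instead of an SArray of blocks, as in the tdRSmaFetchAndSubmitResult() call later in this diff. The wrapper name submitOneBlock is hypothetical, and hooking the request into the write path is left out.

```
// Hypothetical wrapper (not in this commit): build a submit request from one
// result block using the updated single-block signature. Error handling is
// trimmed and the hand-off of pReq to the write path is omitted.
static int32_t submitOneBlock(const SSDataBlock* pBlock, STSchema* pTSchema, int32_t vgId, tb_uid_t suid) {
  SSubmitReq* pReq = NULL;
  if (buildSubmitReqFromDataBlock(&pReq, pBlock, pTSchema, vgId, suid) < 0) {
    return TSDB_CODE_FAILED;
  }
  // ... hand pReq over to the write path here, then release it
  taosMemoryFreeClear(pReq);
  return TSDB_CODE_SUCCESS;
}
```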
source/dnode/mgmt/mgmt_vnode/src/vmHandle.c

@@ -347,6 +347,7 @@ SArray *vmGetMsgHandles() {
   if (dmSetMgmtHandle(pArray, TDMT_VND_TABLES_META, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_SCH_CANCEL_TASK, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_SCH_DROP_TASK, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
+  if (dmSetMgmtHandle(pArray, TDMT_VND_FETCH_RSMA, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_CREATE_STB, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_DROP_TTL_TABLE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_STB, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
 ...
source/dnode/mgmt/node_mgmt/src/dmTransport.c

@@ -255,7 +255,8 @@ static inline void dmReleaseHandle(SRpcHandleInfo *pHandle, int8_t type) {
 static bool rpcRfp(int32_t code, tmsg_t msgType) {
   if (code == TSDB_CODE_RPC_REDIRECT || code == TSDB_CODE_RPC_NETWORK_UNAVAIL || code == TSDB_CODE_NODE_NOT_DEPLOYED ||
       code == TSDB_CODE_SYN_NOT_LEADER || code == TSDB_CODE_APP_NOT_READY || code == TSDB_CODE_RPC_BROKEN_LINK) {
     if (msgType == TDMT_SCH_QUERY || msgType == TDMT_SCH_MERGE_QUERY || msgType == TDMT_SCH_FETCH ||
         msgType == TDMT_SCH_MERGE_FETCH) {
       return false;
     }
     return true;
 ...
source/dnode/vnode/src/inc/vnodeInt.h

@@ -187,6 +187,7 @@ int32_t smaAsyncPreCommit(SSma* pSma);
 int32_t smaAsyncCommit(SSma* pSma);
 int32_t smaAsyncPostCommit(SSma* pSma);
 int32_t smaDoRetention(SSma* pSma, int64_t now);
+int32_t smaProcessFetch(SSma* pSma, void* pMsg);

 int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg);
 int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg);
 ...
source/dnode/vnode/src/meta/metaQuery.c

@@ -481,7 +481,7 @@ int64_t metaGetTbNum(SMeta *pMeta) {
   /* int64_t num = 0; */
   /* vnodeGetAllCtbNum(pMeta->pVnode, &num); */

-  return pMeta->pVnode->config.vndStats.numOfCTables;
+  return pMeta->pVnode->config.vndStats.numOfCTables + pMeta->pVnode->config.vndStats.numOfNTables;
 }

 // N.B. Called by statusReq per second
 ...
source/dnode/vnode/src/sma/smaRollup.c

@@ -36,16 +36,14 @@ static int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputT
                                  int8_t level);
 static SRSmaInfo *tdAcquireRSmaInfoBySuid(SSma *pSma, int64_t suid);
 static void       tdReleaseRSmaInfo(SSma *pSma, SRSmaInfo *pInfo);
-static int32_t    tdRSmaFetchAndSubmitResult(qTaskInfo_t taskInfo, SRSmaInfoItem *pItem, STSchema *pTSchema,
-                                             int64_t suid, SRSmaStat *pStat, int8_t blkType);
+static int32_t    tdRSmaFetchAndSubmitResult(SSma *pSma, qTaskInfo_t taskInfo, SRSmaInfoItem *pItem, STSchema *pTSchema,
+                                             int64_t suid, int8_t blkType);
 static void       tdRSmaFetchTrigger(void *param, void *tmrId);
+static int32_t    tdRSmaFetchSend(SSma *pSma, SRSmaInfo *pInfo, int8_t level);
 static int32_t    tdRSmaQTaskInfoIterInit(SRSmaQTaskInfoIter *pIter, STFile *pTFile);
 static int32_t    tdRSmaQTaskInfoIterNextBlock(SRSmaQTaskInfoIter *pIter, bool *isFinish);
 static int32_t    tdRSmaQTaskInfoRestore(SSma *pSma, int8_t type, SRSmaQTaskInfoIter *pIter);
 static int32_t    tdRSmaQTaskInfoItemRestore(SSma *pSma, const SRSmaQTaskInfoItem *infoItem);
 static int32_t    tdRSmaRestoreQTaskInfoInit(SSma *pSma, int64_t *nTables);
 static int32_t    tdRSmaRestoreQTaskInfoReload(SSma *pSma, int8_t type, int64_t qTaskFileVer);
 static int32_t    tdRSmaRestoreTSDataReload(SSma *pSma);

@@ -604,11 +602,8 @@ _end:
   return code;
 }

-static int32_t tdRSmaFetchAndSubmitResult(qTaskInfo_t taskInfo, SRSmaInfoItem *pItem, STSchema *pTSchema, int64_t suid,
-                                          SRSmaStat *pStat, int8_t blkType) {
-  SArray *pResult = NULL;
-  SSma   *pSma = pStat->pSma;
-
+static int32_t tdRSmaFetchAndSubmitResult(SSma *pSma, qTaskInfo_t taskInfo, SRSmaInfoItem *pItem, STSchema *pTSchema,
+                                          int64_t suid, int8_t blkType) {
   while (1) {
     SSDataBlock *output = NULL;
     uint64_t     ts;

@@ -619,30 +614,20 @@ static int32_t tdRSmaFetchAndSubmitResult(qTaskInfo_t taskInfo, SRSmaInfoItem *p
                pItem->level, terrstr(code));
       goto _err;
     }
-    if (!output) {
-      break;
-    }
-
-    if (!pResult) {
-      pResult = taosArrayInit(1, sizeof(SSDataBlock));
-      if (!pResult) {
-        terrno = TSDB_CODE_OUT_OF_MEMORY;
-        goto _err;
-      }
-    }
-
-    taosArrayPush(pResult, output);
-
-    if (taosArrayGetSize(pResult) > 0) {
-#if 1
+    if (output) {
+#if 0
       char flag[10] = {0};
       snprintf(flag, 10, "level %" PRIi8, pItem->level);
+      SArray *pResult = taosArrayInit(1, sizeof(SSDataBlock));
+      taosArrayPush(pResult, output);
       blockDebugShowDataBlocks(pResult, flag);
+      taosArrayDestroy(pResult);
 #endif
       STsdb      *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb[0] : pSma->pRSmaTsdb[1]);
       SSubmitReq *pReq = NULL;
       // TODO: the schema update should be handled later(TD-17965)
-      if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) < 0) {
+      if (buildSubmitReqFromDataBlock(&pReq, output, pTSchema, SMA_VID(pSma), suid) < 0) {
         smaError("vgId:%d, build submit req for rsma stable %" PRIi64 " level %" PRIi8 " failed since %s",
                  SMA_VID(pSma), suid, pItem->level, terrstr());
         goto _err;

@@ -659,18 +644,17 @@ static int32_t tdRSmaFetchAndSubmitResult(qTaskInfo_t taskInfo, SRSmaInfoItem *p
                SMA_VID(pSma), suid, pItem->level, output->info.version);

       taosMemoryFreeClear(pReq);
-      taosArrayClear(pResult);
     } else if (terrno == 0) {
       smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
+      break;
     } else {
       smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, terrstr());
       goto _err;
     }
   }

-  tdDestroySDataBlockArray(pResult);
   return TSDB_CODE_SUCCESS;
 _err:
-  tdDestroySDataBlockArray(pResult);
   return TSDB_CODE_FAILED;
 }

@@ -694,11 +678,9 @@ static int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int32_t inputType
     return TSDB_CODE_FAILED;
   }

-  SSmaEnv       *pEnv = SMA_RSMA_ENV(pSma);
-  SRSmaStat     *pStat = SMA_RSMA_STAT(pEnv->pStat);
   SRSmaInfoItem *pItem = RSMA_INFO_ITEM(pInfo, idx);

-  tdRSmaFetchAndSubmitResult(RSMA_INFO_QTASK(pInfo, idx), pItem, pInfo->pTSchema, suid, pStat,
+  tdRSmaFetchAndSubmitResult(pSma, RSMA_INFO_QTASK(pInfo, idx), pItem, pInfo->pTSchema, suid,
                              STREAM_INPUT__DATA_SUBMIT);

   atomic_store_8(&pItem->triggerStat, TASK_TRIGGER_STAT_ACTIVE);

@@ -724,11 +706,13 @@ static SRSmaInfo *tdAcquireRSmaInfoBySuid(SSma *pSma, int64_t suid) {
   SRSmaInfo *pRSmaInfo = NULL;

   if (!pEnv) {
+    terrno = TSDB_CODE_RSMA_INVALID_ENV;
     return NULL;
   }

   pStat = (SRSmaStat *)SMA_ENV_STAT(pEnv);
   if (!pStat || !RSMA_INFO_HASH(pStat)) {
+    terrno = TSDB_CODE_RSMA_INVALID_STAT;
     return NULL;
   }

@@ -743,12 +727,12 @@ static SRSmaInfo *tdAcquireRSmaInfoBySuid(SSma *pSma, int64_t suid) {
     taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
     return pRSmaInfo;
   }
+  taosRUnLockLatch(SMA_ENV_LOCK(pEnv));

   if (RSMA_COMMIT_STAT(pStat) == 0) {  // return NULL if not in committing stat
-    taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
     return NULL;
   }
-  taosRUnLockLatch(SMA_ENV_LOCK(pEnv));

   // clone the SRSmaInfo from iRsmaInfoHash to rsmaInfoHash if in committing stat
   SRSmaInfo *pCowRSmaInfo = NULL;

@@ -779,7 +763,7 @@ static SRSmaInfo *tdAcquireRSmaInfoBySuid(SSma *pSma, int64_t suid) {
     ASSERT(!pCowRSmaInfo);
   }

   if (pCowRSmaInfo) {
     tdRefRSmaInfo(pSma, pCowRSmaInfo);
   }
   // unlock

@@ -1323,7 +1307,7 @@ _err:
 }

 /**
- * @brief trigger to get rsma result
+ * @brief trigger to get rsma result in async mode
 *
 * @param param
 * @param tmrId

@@ -1357,8 +1341,7 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
              " refId:%d",
              SMA_VID(pSma), pItem->level, rsmaTriggerStat, smaMgmt.rsetId, pRSmaInfo->refId);
     if (rsmaTriggerStat == TASK_TRIGGER_STAT_PAUSED) {
-      taosTmrReset(tdRSmaFetchTrigger, pItem->maxDelay > 5000 ? 5000 : pItem->maxDelay, pItem, smaMgmt.tmrHandle,
-                   &pItem->tmrId);
+      taosTmrReset(tdRSmaFetchTrigger, 5000, pItem, smaMgmt.tmrHandle, &pItem->tmrId);
     }
     return;
   }

@@ -1372,16 +1355,8 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
     case TASK_TRIGGER_STAT_ACTIVE: {
       smaDebug("vgId:%d, fetch rsma level %" PRIi8 " data for table:%" PRIi64 " since stat is active",
               SMA_VID(pSma), pItem->level, pRSmaInfo->suid);
-      // sync procedure => async process
-      SSDataBlock dataBlock = {.info.type = STREAM_GET_ALL};
-      qTaskInfo_t taskInfo = pRSmaInfo->taskInfo[pItem->level - 1];
-      qSetMultiStreamInput(taskInfo, &dataBlock, 1, STREAM_INPUT__DATA_BLOCK);
-      tdRSmaFetchAndSubmitResult(taskInfo, pItem, pRSmaInfo->pTSchema, pRSmaInfo->suid, pStat,
-                                 STREAM_INPUT__DATA_BLOCK);
-      tdCleanupStreamInputDataBlock(taskInfo);
+      // async process
+      tdRSmaFetchSend(pSma, pRSmaInfo, pItem->level);
     } break;
     case TASK_TRIGGER_STAT_PAUSED: {
       smaDebug("vgId:%d, not fetch rsma level %" PRIi8 " data for table:%" PRIi64 " since stat is paused",

@@ -1404,3 +1379,118 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
 _end:
   tdReleaseSmaRef(smaMgmt.rsetId, pRSmaInfo->refId);
 }
+
+/**
+ * @brief put rsma fetch msg to fetch queue
+ *
+ * @param pSma
+ * @param pInfo
+ * @param level
+ * @return int32_t
+ */
+int32_t tdRSmaFetchSend(SSma *pSma, SRSmaInfo *pInfo, int8_t level) {
+  SRSmaFetchMsg fetchMsg = {.suid = pInfo->suid, .level = level};
+  int32_t       ret = 0;
+  int32_t       contLen = 0;
+  SEncoder      encoder = {0};
+  tEncodeSize(tEncodeSRSmaFetchMsg, &fetchMsg, contLen, ret);
+  if (ret < 0) {
+    terrno = TSDB_CODE_OUT_OF_MEMORY;
+    tEncoderClear(&encoder);
+    goto _err;
+  }
+
+  void *pBuf = rpcMallocCont(contLen + sizeof(SMsgHead));
+  tEncoderInit(&encoder, POINTER_SHIFT(pBuf, sizeof(SMsgHead)), contLen);
+  if (tEncodeSRSmaFetchMsg(&encoder, &fetchMsg) < 0) {
+    terrno = TSDB_CODE_OUT_OF_MEMORY;
+    tEncoderClear(&encoder);
+  }
+  tEncoderClear(&encoder);
+
+  ((SMsgHead *)pBuf)->vgId = SMA_VID(pSma);
+  ((SMsgHead *)pBuf)->contLen = contLen + sizeof(SMsgHead);
+
+  SRpcMsg rpcMsg = {
+      .code = 0,
+      .msgType = TDMT_VND_FETCH_RSMA,
+      .pCont = pBuf,
+      .contLen = contLen,
+  };
+
+  if ((terrno = tmsgPutToQueue(&pSma->pVnode->msgCb, FETCH_QUEUE, &rpcMsg)) != 0) {
+    smaError("vgId:%d, failed to put rsma fetch msg into fetch-queue for suid:%" PRIi64 " level:%" PRIi8 " since %s",
+             SMA_VID(pSma), pInfo->suid, level, terrstr());
+    goto _err;
+  }
+
+  smaDebug("vgId:%d, success to put rsma fetch msg into fetch-queue for suid:%" PRIi64 " level:%" PRIi8, SMA_VID(pSma),
+           pInfo->suid, level);
+
+  return TSDB_CODE_SUCCESS;
+_err:
+  return TSDB_CODE_FAILED;
+}
+
+/**
+ * @brief fetch rsma data of level 2/3 and submit
+ *
+ * @param pSma
+ * @param pMsg
+ * @return int32_t
+ */
+int32_t smaProcessFetch(SSma *pSma, void *pMsg) {
+  SRpcMsg       *pRpcMsg = (SRpcMsg *)pMsg;
+  SRSmaFetchMsg  req = {0};
+  SDecoder       decoder = {0};
+  void          *pBuf = NULL;
+  SRSmaInfo     *pInfo = NULL;
+  SRSmaInfoItem *pItem = NULL;
+
+  if (!pRpcMsg || pRpcMsg->contLen < sizeof(SMsgHead)) {
+    terrno = TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP;
+    return -1;
+  }
+
+  pBuf = POINTER_SHIFT(pRpcMsg->pCont, sizeof(SMsgHead));
+
+  tDecoderInit(&decoder, pBuf, pRpcMsg->contLen);
+  if (tDecodeSRSmaFetchMsg(&decoder, &req) < 0) {
+    terrno = TSDB_CODE_INVALID_MSG;
+    goto _err;
+  }
+
+  pInfo = tdAcquireRSmaInfoBySuid(pSma, req.suid);
+  if (!pInfo) {
+    if (terrno == TSDB_CODE_SUCCESS) {
+      terrno = TSDB_CODE_RSMA_EMPTY_INFO;
+    }
+    smaWarn("vgId:%d, failed to process rsma fetch msg for suid:%" PRIi64 " level:%" PRIi8 " since %s", SMA_VID(pSma),
+            req.suid, req.level, terrstr());
+    goto _err;
+  }
+
+  pItem = RSMA_INFO_ITEM(pInfo, req.level - 1);
+
+  SSDataBlock dataBlock = {.info.type = STREAM_GET_ALL};
+  qTaskInfo_t taskInfo = RSMA_INFO_QTASK(pInfo, req.level - 1);
+  if ((terrno = qSetMultiStreamInput(taskInfo, &dataBlock, 1, STREAM_INPUT__DATA_BLOCK)) < 0) {
+    goto _err;
+  }
+  if (tdRSmaFetchAndSubmitResult(pSma, taskInfo, pItem, pInfo->pTSchema, pInfo->suid, STREAM_INPUT__DATA_BLOCK) < 0) {
+    goto _err;
+  }
+
+  tdCleanupStreamInputDataBlock(taskInfo);
+
+  tdReleaseRSmaInfo(pSma, pInfo);
+  tDecoderClear(&decoder);
+  smaDebug("vgId:%d, success to process rsma fetch msg for suid:%" PRIi64 " level:%" PRIi8, SMA_VID(pSma), req.suid,
+           req.level);
+  return TSDB_CODE_SUCCESS;
+_err:
+  tdReleaseRSmaInfo(pSma, pInfo);
+  tDecoderClear(&decoder);
+  smaError("vgId:%d, failed to process rsma fetch msg since %s", SMA_VID(pSma), terrstr());
+  return TSDB_CODE_FAILED;
+}
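The net effect of the smaRollup.c changes above is that the fetch trigger no longer runs the rollup query inline; it only enqueues a TDMT_VND_FETCH_RSMA message, and smaProcessFetch() does the work on the fetch queue. A minimal sketch of the sending side is below; the wrapper name requestRSmaFetch is hypothetical and not part of this commit.

```
// Hypothetical wrapper (not in this commit): request an asynchronous rollup
// fetch for one level of a super table. tdRSmaFetchSend() only enqueues a
// TDMT_VND_FETCH_RSMA message on the vnode's FETCH_QUEUE; the actual fetch and
// submit happen later in smaProcessFetch().
static int32_t requestRSmaFetch(SSma *pSma, SRSmaInfo *pInfo, int8_t level) {
  int32_t code = tdRSmaFetchSend(pSma, pInfo, level);
  if (code != TSDB_CODE_SUCCESS) {
    smaError("vgId:%d, async rsma fetch request failed since %s", SMA_VID(pSma), terrstr());
  }
  return code;
}
```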
source/dnode/vnode/src/tq/tq.c

@@ -859,8 +859,10 @@ void vnodeEnqueueStreamMsg(SVnode* pVnode, SRpcMsg* pMsg) {
   tDecoderInit(&decoder, msgBody, msgLen);
   if (tDecodeStreamDispatchReq(&decoder, &req) < 0) {
     code = TSDB_CODE_MSG_DECODE_ERROR;
+    tDecoderClear(&decoder);
     goto FAIL;
   }
+  tDecoderClear(&decoder);

   int32_t taskId = req.taskId;
 ...
source/dnode/vnode/src/vnd/vnodeQuery.c

@@ -473,7 +473,7 @@ int32_t vnodeGetTimeSeriesNum(SVnode *pVnode, int64_t *num) {
     int numOfCols = 0;
     vnodeGetStbColumnNum(pVnode, id, &numOfCols);

-    *num += ctbNum * numOfCols;
+    *num += ctbNum * (numOfCols - 1);
   }

   metaCloseStbCursor(pCur);
 ...
source/dnode/vnode/src/vnd/vnodeSvr.c

@@ -325,6 +325,8 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
       return vnodeGetTableCfg(pVnode, pMsg, true);
     case TDMT_VND_BATCH_META:
       return vnodeGetBatchMeta(pVnode, pMsg);
+    case TDMT_VND_FETCH_RSMA:
+      return smaProcessFetch(pVnode->pSma, pMsg);
     case TDMT_VND_CONSUME:
       return tqProcessPollReq(pVnode->pTq, pMsg);
     case TDMT_STREAM_TASK_RUN:
 ...
source/dnode/vnode/src/vnd/vnodeSync.c

@@ -141,6 +141,10 @@ static void inline vnodeHandleWriteMsg(SVnode *pVnode, SRpcMsg *pMsg) {
   }

   if (rsp.info.handle != NULL) {
     tmsgSendRsp(&rsp);
+  } else {
+    if (rsp.pCont) {
+      rpcFreeCont(rsp.pCont);
+    }
   }
 }

@@ -299,6 +303,10 @@ void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
     vnodePostBlockMsg(pVnode, pMsg);
     if (rsp.info.handle != NULL) {
       tmsgSendRsp(&rsp);
+    } else {
+      if (rsp.pCont) {
+        rpcFreeCont(rsp.pCont);
+      }
     }

     vGTrace("vgId:%d, msg:%p is freed, code:0x%x index:%" PRId64, vgId, pMsg, rsp.code, pMsg->info.conn.applyIndex);
 ...
source/libs/catalog/src/ctgAsync.c

@@ -1082,7 +1082,7 @@ _return:
     ctgReleaseVgInfoToCache(pCtg, dbCache);
   }

-  if (pTask->res) {
+  if (pTask->res || code) {
     ctgHandleTaskEnd(pTask, code);
   }
 ...
source/libs/command/inc/commandInt.h

@@ -58,6 +58,7 @@ extern "C" {
 #define EXPLAIN_RATIO_TIME_FORMAT     "Ratio: %f"
 #define EXPLAIN_MERGE_FORMAT          "Merge"
 #define EXPLAIN_MERGE_KEYS_FORMAT     "Merge Key: "
+#define EXPLAIN_IGNORE_GROUPID_FORMAT "Ignore Group Id: %s"
 #define EXPLAIN_PLANNING_TIME_FORMAT  "Planning Time: %.3f ms"
 #define EXPLAIN_EXEC_TIME_FORMAT      "Execution Time: %.3f ms"
 ...
source/libs/command/src/explain.c

@@ -612,6 +612,11 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
       EXPLAIN_ROW_END();
       QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));

+      EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
+      EXPLAIN_ROW_APPEND(EXPLAIN_IGNORE_GROUPID_FORMAT, pPrjNode->ignoreGroupId ? "true" : "false");
+      EXPLAIN_ROW_END();
+      QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
+
       if (pPrjNode->node.pConditions) {
         EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
         QRY_ERR_RET(nodesNodeToSQL(pPrjNode->node.pConditions, tbuf + VARSTR_HEADER_SIZE,
 ...
source/libs/executor/src/joinoperator.c

@@ -256,7 +256,7 @@ static int32_t mergeJoinJoinDownstreamTsRanges(SOperatorInfo* pOperator, int64_t
   SArray* rightRowLocations = taosArrayInit(8, sizeof(SRowLocation));
   SArray* rightCreatedBlocks = taosArrayInit(8, POINTER_BYTES);
+  int32_t code = TSDB_CODE_SUCCESS;
   mergeJoinGetDownStreamRowsEqualTimeStamp(pOperator, 0, pJoinInfo->leftCol.slotId, pJoinInfo->pLeft,
                                            pJoinInfo->leftPos, timestamp, leftRowLocations, leftCreatedBlocks);
   mergeJoinGetDownStreamRowsEqualTimeStamp(pOperator, 1, pJoinInfo->rightCol.slotId, pJoinInfo->pRight,

@@ -264,15 +264,21 @@ static int32_t mergeJoinJoinDownstreamTsRanges(SOperatorInfo* pOperator, int64_t
   size_t leftNumJoin = taosArrayGetSize(leftRowLocations);
   size_t rightNumJoin = taosArrayGetSize(rightRowLocations);

-  blockDataEnsureCapacity(pRes, *nRows + leftNumJoin * rightNumJoin);
-  for (int32_t i = 0; i < leftNumJoin; ++i) {
-    for (int32_t j = 0; j < rightNumJoin; ++j) {
-      SRowLocation* leftRow = taosArrayGet(leftRowLocations, i);
-      SRowLocation* rightRow = taosArrayGet(rightRowLocations, j);
-      mergeJoinJoinLeftRight(pOperator, pRes, *nRows, leftRow->pDataBlock, leftRow->pos, rightRow->pDataBlock,
-                             rightRow->pos);
-      ++*nRows;
+  code = blockDataEnsureCapacity(pRes, *nRows + leftNumJoin * rightNumJoin);
+  if (code != TSDB_CODE_SUCCESS) {
+    qError("%s can not ensure block capacity for join. left: %zu, right: %zu", GET_TASKID(pOperator->pTaskInfo),
+           leftNumJoin, rightNumJoin);
+  }
+
+  if (code == TSDB_CODE_SUCCESS) {
+    for (int32_t i = 0; i < leftNumJoin; ++i) {
+      for (int32_t j = 0; j < rightNumJoin; ++j) {
+        SRowLocation* leftRow = taosArrayGet(leftRowLocations, i);
+        SRowLocation* rightRow = taosArrayGet(rightRowLocations, j);
+        mergeJoinJoinLeftRight(pOperator, pRes, *nRows, leftRow->pDataBlock, leftRow->pos, rightRow->pDataBlock,
+                               rightRow->pos);
+        ++*nRows;
+      }
     }
   }

   for (int i = 0; i < taosArrayGetSize(rightCreatedBlocks); ++i) {
     SSDataBlock* pBlock = taosArrayGetP(rightCreatedBlocks, i);
 ...
source/libs/function/src/builtinsimpl.c

@@ -3845,14 +3845,17 @@ int32_t spreadFunctionMerge(SqlFunctionCtx* pCtx) {
   SSpreadInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));

   int32_t start = pInput->startRowIndex;
   for (int32_t i = start; i < start + pInput->numOfRows; ++i) {
     char*        data = colDataGetData(pCol, i);
     SSpreadInfo* pInputInfo = (SSpreadInfo*)varDataVal(data);
-    spreadTransferInfo(pInputInfo, pInfo);
+    if (pInputInfo->hasResult) {
+      spreadTransferInfo(pInputInfo, pInfo);
+    }
   }

   SET_VAL(GET_RES_INFO(pCtx), 1, 1);
+  if (pInfo->hasResult) {
+    GET_RES_INFO(pCtx)->numOfRes = 1;
+  }
   return TSDB_CODE_SUCCESS;
 }

@@ -3861,6 +3864,8 @@ int32_t spreadFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
   SSpreadInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
   if (pInfo->hasResult == true) {
     SET_DOUBLE_VAL(&pInfo->result, pInfo->max - pInfo->min);
+  } else {
+    GET_RES_INFO(pCtx)->isNullRes = 1;
   }
   return functionFinalize(pCtx, pBlock);
 }
 ...
source/libs/nodes/src/nodesCloneFuncs.c

@@ -390,6 +390,7 @@ static int32_t logicProjectCopy(const SProjectLogicNode* pSrc, SProjectLogicNode
   COPY_BASE_OBJECT_FIELD(node, logicNodeCopy);
   CLONE_NODE_LIST_FIELD(pProjections);
   COPY_CHAR_ARRAY_FIELD(stmtName);
+  COPY_SCALAR_FIELD(ignoreGroupId);
   return TSDB_CODE_SUCCESS;
 }
 ...
source/libs/nodes/src/nodesCodeFuncs.c

@@ -655,6 +655,7 @@ static int32_t jsonToLogicScanNode(const SJson* pJson, void* pObj) {
 }

 static const char* jkProjectLogicPlanProjections = "Projections";
+static const char* jkProjectLogicPlanIgnoreGroupId = "IgnoreGroupId";

 static int32_t logicProjectNodeToJson(const void* pObj, SJson* pJson) {
   const SProjectLogicNode* pNode = (const SProjectLogicNode*)pObj;

@@ -663,6 +664,9 @@ static int32_t logicProjectNodeToJson(const void* pObj, SJson* pJson) {
   if (TSDB_CODE_SUCCESS == code) {
     code = nodeListToJson(pJson, jkProjectLogicPlanProjections, pNode->pProjections);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = tjsonAddIntegerToObject(pJson, jkProjectLogicPlanIgnoreGroupId, pNode->ignoreGroupId);
+  }

   return code;
 }

@@ -674,6 +678,9 @@ static int32_t jsonToLogicProjectNode(const SJson* pJson, void* pObj) {
   if (TSDB_CODE_SUCCESS == code) {
     code = jsonToNodeList(pJson, jkProjectLogicPlanProjections, &pNode->pProjections);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = tjsonGetBoolValue(pJson, jkProjectLogicPlanIgnoreGroupId, &pNode->ignoreGroupId);
+  }

   return code;
 }

@@ -1689,6 +1696,7 @@ static int32_t jsonToPhysiSysTableScanNode(const SJson* pJson, void* pObj) {
 static const char* jkProjectPhysiPlanProjections = "Projections";
 static const char* jkProjectPhysiPlanMergeDataBlock = "MergeDataBlock";
+static const char* jkProjectPhysiPlanIgnoreGroupId = "IgnoreGroupId";

 static int32_t physiProjectNodeToJson(const void* pObj, SJson* pJson) {
   const SProjectPhysiNode* pNode = (const SProjectPhysiNode*)pObj;

@@ -1700,6 +1708,9 @@ static int32_t physiProjectNodeToJson(const void* pObj, SJson* pJson) {
   if (TSDB_CODE_SUCCESS == code) {
     code = tjsonAddBoolToObject(pJson, jkProjectPhysiPlanMergeDataBlock, pNode->mergeDataBlock);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = tjsonAddBoolToObject(pJson, jkProjectPhysiPlanIgnoreGroupId, pNode->ignoreGroupId);
+  }

   return code;
 }

@@ -1714,6 +1725,9 @@ static int32_t jsonToPhysiProjectNode(const SJson* pJson, void* pObj) {
   if (TSDB_CODE_SUCCESS == code) {
     code = tjsonGetBoolValue(pJson, jkProjectPhysiPlanMergeDataBlock, &pNode->mergeDataBlock);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = tjsonGetBoolValue(pJson, jkProjectPhysiPlanIgnoreGroupId, &pNode->ignoreGroupId);
+  }

   return code;
 }
 ...
source/libs/nodes/src/nodesUtilFuncs.c

@@ -392,6 +392,9 @@ static void destroyDataSinkNode(SDataSinkNode* pNode) { nodesDestroyNode((SNode*
 static void destroyExprNode(SExprNode* pExpr) { taosArrayDestroy(pExpr->pAssociation); }

 static void destroyTableCfg(STableCfg* pCfg) {
+  if (NULL == pCfg) {
+    return;
+  }
   taosArrayDestroy(pCfg->pFuncs);
   taosMemoryFree(pCfg->pComment);
   taosMemoryFree(pCfg->pSchemas);
 ...
source/libs/parser/src/parAstParser.c

@@ -339,6 +339,11 @@ static int32_t collectMetaKeyFromShowBnodes(SCollectMetaKeyCxt* pCxt, SShowStmt*
                                  pCxt->pMetaCache);
 }

+static int32_t collectMetaKeyFromShowCluster(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
+  return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_CLUSTER,
+                                 pCxt->pMetaCache);
+}
+
 static int32_t collectMetaKeyFromShowDatabases(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
   return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_DATABASES,
                                  pCxt->pMetaCache);

@@ -547,6 +552,8 @@ static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt) {
       return collectMetaKeyFromShowSnodes(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_BNODES_STMT:
       return collectMetaKeyFromShowBnodes(pCxt, (SShowStmt*)pStmt);
+    case QUERY_NODE_SHOW_CLUSTER_STMT:
+      return collectMetaKeyFromShowCluster(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_DATABASES_STMT:
       return collectMetaKeyFromShowDatabases(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_FUNCTIONS_STMT:
 ...
source/libs/parser/test/mockCatalog.cpp

@@ -119,6 +119,12 @@ void generateInformationSchema(MockCatalogService* mcs) {
         .addColumn("dnode_id", TSDB_DATA_TYPE_INT);
     builder.done();
   }
+  {
+    ITableBuilder& builder = mcs->createTableBuilder(TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_CLUSTER,
+                                                     TSDB_SYSTEM_TABLE, 1)
+                                 .addColumn("id", TSDB_DATA_TYPE_BIGINT);
+    builder.done();
+  }
 }

 void generatePerformanceSchema(MockCatalogService* mcs) {
 ...
source/libs/parser/test/parShowToUse.cpp

@@ -25,6 +25,15 @@ class ParserShowToUseTest : public ParserDdlTest {};
 // todo SHOW apps
 // todo SHOW connections

+TEST_F(ParserShowToUseTest, showCluster) {
+  useDb("root", "test");
+
+  setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
+    ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_SELECT_STMT);
+  });
+
+  run("SHOW CLUSTER");
+}
+
 TEST_F(ParserShowToUseTest, showConsumers) {
   useDb("root", "test");
 ...
source/libs/planner/src/planLogicCreater.c

@@ -865,6 +865,7 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel
   TSWAP(pProject->node.pLimit, pSelect->pLimit);
   TSWAP(pProject->node.pSlimit, pSelect->pSlimit);
+  pProject->ignoreGroupId = (NULL == pSelect->pPartitionByList);
   pProject->node.groupAction =
       (!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR;
   pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;

@@ -1078,6 +1079,7 @@ static int32_t createSetOpProjectLogicNode(SLogicPlanContext* pCxt, SSetOperator
   if (NULL == pSetOperator->pOrderByList) {
     TSWAP(pProject->node.pLimit, pSetOperator->pLimit);
   }
+  pProject->ignoreGroupId = true;

   int32_t code = TSDB_CODE_SUCCESS;
 ...
source/libs/planner/src/planPhysiCreater.c

@@ -998,6 +998,7 @@ static int32_t createProjectPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChild
   }

   pProject->mergeDataBlock = projectCanMergeDataBlock(pProjectLogicNode);
+  pProject->ignoreGroupId = pProjectLogicNode->ignoreGroupId;

   int32_t code = TSDB_CODE_SUCCESS;
   if (0 == LIST_LENGTH(pChildren)) {
 ...
source/libs/stream/src/stream.c

@@ -136,6 +136,7 @@ int32_t streamTaskEnqueue(SStreamTask* pTask, SStreamDispatchReq* pReq, SRpcMsg*
   pRsp->pCont = buf;
   pRsp->contLen = sizeof(SMsgHead) + sizeof(SStreamDispatchRsp);
   tmsgSendRsp(pRsp);
+  tFreeStreamDispatchReq(pReq);
   return status == TASK_INPUT_STATUS__NORMAL ? 0 : -1;
 }
 ...
source/libs/stream/src/streamDispatch.c

@@ -62,6 +62,11 @@ int32_t tDecodeStreamDispatchReq(SDecoder* pDecoder, SStreamDispatchReq* pReq) {
   return 0;
 }

+void tFreeStreamDispatchReq(SStreamDispatchReq* pReq) {
+  taosArrayDestroyP(pReq->data, taosMemoryFree);
+  taosArrayDestroy(pReq->dataLen);
+}
+
 int32_t tEncodeStreamRetrieveReq(SEncoder* pEncoder, const SStreamRetrieveReq* pReq) {
   if (tStartEncode(pEncoder) < 0) return -1;
   if (tEncodeI64(pEncoder, pReq->streamId) < 0) return -1;

@@ -279,7 +284,7 @@ int32_t streamDispatchAllBlocks(SStreamTask* pTask, const SStreamDataBlock* pDat
   }
   code = 0;
 FAIL_FIXED_DISPATCH:
-  taosArrayDestroy(req.data);
+  taosArrayDestroyP(req.data, taosMemoryFree);
   taosArrayDestroy(req.dataLen);
   return code;
 ...
source/libs/stream/src/streamExec.c

@@ -15,7 +15,7 @@
 #include "streamInc.h"

-static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes) {
+static int32_t streamTaskExecImpl(SStreamTask* pTask, const void* data, SArray* pRes) {
   void* exec = pTask->exec.executor;

   // set input

@@ -82,14 +82,16 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
   return 0;
 }

+#if 0
 static FORCE_INLINE int32_t streamUpdateVer(SStreamTask* pTask, SStreamDataBlock* pBlock) {
   ASSERT(pBlock->type == STREAM_INPUT__DATA_BLOCK);
   int32_t             childId = pBlock->childId;
   int64_t             ver = pBlock->sourceVer;
   SStreamChildEpInfo* pChildInfo = taosArrayGetP(pTask->childEpInfo, childId);
-  pChildInfo->processedVer = ver;
+  /*pChildInfo-> = ver;*/
   return 0;
 }
+#endif

 int32_t streamPipelineExec(SStreamTask* pTask, int32_t batchNum) {
   ASSERT(pTask->taskLevel != TASK_LEVEL__SINK);

@@ -198,6 +200,8 @@ int32_t streamExecForAll(SStreamTask* pTask) {
     streamTaskExecImpl(pTask, data, pRes);
     qDebug("stream task %d exec end", pTask->taskId);

+    streamFreeQitem(data);
+
     if (taosArrayGetSize(pRes) != 0) {
       SStreamDataBlock* qRes = taosAllocateQitem(sizeof(SStreamDataBlock), DEF_QITEM);
       if (qRes == NULL) {
 ...
source/libs/stream/src/streamRecover.c
浏览文件 @
a3d4dce3
...
...
@@ -87,63 +87,95 @@ int32_t tDecodeSMStreamTaskRecoverRsp(SDecoder* pDecoder, SMStreamTaskRecoverRsp
return
0
;
}
typedef
struct
{
int32_t
vgId
;
int32_t
childId
;
int64_t
ver
;
}
SStreamVgVerCheckpoint
;
int32_t
tEncodeSStreamVgVerCheckpoint
(
SEncoder
*
pEncoder
,
const
SStreamVgVerCheckpoint
*
pCheckpoint
)
{
if
(
tEncodeI32
(
pEncoder
,
pCheckpoint
->
vgId
)
<
0
)
return
-
1
;
int32_t
tEncodeSStreamCheckpointInfo
(
SEncoder
*
pEncoder
,
const
SStreamCheckpointInfo
*
pCheckpoint
)
{
if
(
tEncodeI32
(
pEncoder
,
pCheckpoint
->
nodeId
)
<
0
)
return
-
1
;
if
(
tEncodeI32
(
pEncoder
,
pCheckpoint
->
childId
)
<
0
)
return
-
1
;
if
(
tEncodeI64
(
pEncoder
,
pCheckpoint
->
v
er
)
<
0
)
return
-
1
;
if
(
tEncodeI64
(
pEncoder
,
pCheckpoint
->
stateProcessedV
er
)
<
0
)
return
-
1
;
return
0
;
}
int32_t
tDecodeSStream
VgVerCheckpoint
(
SDecoder
*
pDecoder
,
SStreamVgVerCheckpoint
*
pCheckpoint
)
{
if
(
tDecodeI32
(
pDecoder
,
&
pCheckpoint
->
vg
Id
)
<
0
)
return
-
1
;
int32_t
tDecodeSStream
CheckpointInfo
(
SDecoder
*
pDecoder
,
SStreamCheckpointInfo
*
pCheckpoint
)
{
if
(
tDecodeI32
(
pDecoder
,
&
pCheckpoint
->
node
Id
)
<
0
)
return
-
1
;
if
(
tDecodeI32
(
pDecoder
,
&
pCheckpoint
->
childId
)
<
0
)
return
-
1
;
if
(
tDecodeI64
(
pDecoder
,
&
pCheckpoint
->
v
er
)
<
0
)
return
-
1
;
if
(
tDecodeI64
(
pDecoder
,
&
pCheckpoint
->
stateProcessedV
er
)
<
0
)
return
-
1
;
return
0
;
}
typedef
struct
{
int64_t
streamId
;
int64_t
checkTs
;
int64_t
checkpointId
;
int32_t
taskId
;
SArray
*
checkpointVer
;
// SArray<SStreamVgCheckpointVer>
}
SStreamAggVerCheckpoint
;
int32_t
tEncodeSStreamAggVerCheckpoint
(
SEncoder
*
pEncoder
,
const
SStreamAggVerCheckpoint
*
pCheckpoint
)
{
int32_t
tEncodeSStreamMultiVgCheckpointInfo
(
SEncoder
*
pEncoder
,
const
SStreamMultiVgCheckpointInfo
*
pCheckpoint
)
{
if
(
tEncodeI64
(
pEncoder
,
pCheckpoint
->
streamId
)
<
0
)
return
-
1
;
if
(
tEncodeI64
(
pEncoder
,
pCheckpoint
->
checkTs
)
<
0
)
return
-
1
;
if
(
tEncodeI
64
(
pEncoder
,
pCheckpoint
->
checkpointId
)
<
0
)
return
-
1
;
if
(
tEncodeI
32
(
pEncoder
,
pCheckpoint
->
checkpointId
)
<
0
)
return
-
1
;
if
(
tEncodeI32
(
pEncoder
,
pCheckpoint
->
taskId
)
<
0
)
return
-
1
;
int32_t
sz
=
taosArrayGetSize
(
pCheckpoint
->
checkpointVer
);
if
(
tEncodeI32
(
pEncoder
,
sz
)
<
0
)
return
-
1
;
for
(
int32_t
i
=
0
;
i
<
sz
;
i
++
)
{
SStream
VgVerCheckpoint
*
pOneVgCkpoint
=
taosArrayGet
(
pCheckpoint
->
checkpointVer
,
i
);
if
(
tEncodeSStream
VgVerCheckpoint
(
pEncoder
,
pOneVgCkpoint
)
<
0
)
return
-
1
;
SStream
CheckpointInfo
*
pOneVgCkpoint
=
taosArrayGet
(
pCheckpoint
->
checkpointVer
,
i
);
if
(
tEncodeSStream
CheckpointInfo
(
pEncoder
,
pOneVgCkpoint
)
<
0
)
return
-
1
;
}
return
0
;
}
int32_t
tDecodeSStream
AggVerCheckpoint
(
SDecoder
*
pDecoder
,
SStreamAggVerCheckpoint
*
pCheckpoint
)
{
int32_t
tDecodeSStream
MultiVgCheckpointInfo
(
SDecoder
*
pDecoder
,
SStreamMultiVgCheckpointInfo
*
pCheckpoint
)
{
if
(
tDecodeI64
(
pDecoder
,
&
pCheckpoint
->
streamId
)
<
0
)
return
-
1
;
if
(
tDecodeI64
(
pDecoder
,
&
pCheckpoint
->
checkTs
)
<
0
)
return
-
1
;
if
(
tDecodeI
64
(
pDecoder
,
&
pCheckpoint
->
checkpointId
)
<
0
)
return
-
1
;
if
(
tDecodeI
32
(
pDecoder
,
&
pCheckpoint
->
checkpointId
)
<
0
)
return
-
1
;
if
(
tDecodeI32
(
pDecoder
,
&
pCheckpoint
->
taskId
)
<
0
)
return
-
1
;
int32_t
sz
;
if
(
tDecodeI32
(
pDecoder
,
&
sz
)
<
0
)
return
-
1
;
for
(
int32_t
i
=
0
;
i
<
sz
;
i
++
)
{
SStream
VgVerCheckpoint
oneVgCheckpoint
;
if
(
tDecodeSStream
VgVerCheckpoint
(
pDecoder
,
&
oneVgCheckpoint
)
<
0
)
return
-
1
;
SStream
CheckpointInfo
oneVgCheckpoint
;
if
(
tDecodeSStream
CheckpointInfo
(
pDecoder
,
&
oneVgCheckpoint
)
<
0
)
return
-
1
;
taosArrayPush
(
pCheckpoint
->
checkpointVer
,
&
oneVgCheckpoint
);
}
return
0
;
}
int32_t streamCheckSinkLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  void* buf = NULL;
  ASSERT(pTask->taskLevel == TASK_LEVEL__SINK);
  int32_t sz = taosArrayGetSize(pTask->checkpointInfo);

  SStreamMultiVgCheckpointInfo checkpoint;
  checkpoint.checkpointId = 0;
  checkpoint.checkTs = taosGetTimestampMs();
  checkpoint.streamId = pTask->streamId;
  checkpoint.taskId = pTask->taskId;
  checkpoint.checkpointVer = pTask->checkpointInfo;

  int32_t len;
  int32_t code;
  tEncodeSize(tEncodeSStreamMultiVgCheckpointInfo, &checkpoint, len, code);
  if (code < 0) {
    return -1;
  }
  buf = taosMemoryCalloc(1, len);
  if (buf == NULL) {
    return -1;
  }

  SEncoder encoder;
  tEncoderInit(&encoder, buf, len);
  tEncodeSStreamMultiVgCheckpointInfo(&encoder, &checkpoint);
  tEncoderClear(&encoder);

  SStreamCheckpointKey key = {
      .taskId = pTask->taskId,
      .checkpointId = checkpoint.checkpointId,
  };

  if (tdbTbUpsert(pMeta->pStateDb, &key, sizeof(SStreamCheckpointKey), buf, len, &pMeta->txn) < 0) {
    ASSERT(0);
    goto FAIL;
  }
  taosMemoryFree(buf);
  return 0;

FAIL:
  if (buf) taosMemoryFree(buf);
  return -1;
}

int32_t streamRecoverSinkLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  ASSERT(pTask->taskLevel == TASK_LEVEL__SINK);
  // load status
...
...
@@ -154,9 +186,39 @@ int32_t streamRecoverSinkLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  }
  SDecoder decoder;
  tDecoderInit(&decoder, pVal, vLen);
- SStreamAggVerCheckpoint aggCheckpoint;
- tDecodeSStreamAggVerCheckpoint(&decoder, &aggCheckpoint);
- /*pTask->*/
+ SStreamMultiVgCheckpointInfo aggCheckpoint;
+ tDecodeSStreamMultiVgCheckpointInfo(&decoder, &aggCheckpoint);
  tDecoderClear(&decoder);

  pTask->nextCheckId = aggCheckpoint.checkpointId + 1;
  pTask->checkpointInfo = aggCheckpoint.checkpointVer;

  return 0;
}

int32_t streamCheckAggLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  ASSERT(pTask->taskLevel == TASK_LEVEL__AGG);
  // save and copy state
  // save state info
  return 0;
}

int32_t streamRecoverAggLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  ASSERT(pTask->taskLevel == TASK_LEVEL__AGG);
  // try recover sink level
  // after all sink level recovered, choose current state backend to recover
  return 0;
}

int32_t streamCheckSourceLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  ASSERT(pTask->taskLevel == TASK_LEVEL__SOURCE);
  // try recover agg level
  //
  return 0;
}

int32_t streamRecoverSourceLevel(SStreamMeta* pMeta, SStreamTask* pTask) {
  ASSERT(pTask->taskLevel == TASK_LEVEL__SOURCE);
  return 0;
}
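The comments in the stub functions above hint at a level-ordered recovery: sink tasks are recovered first, then aggregation tasks, then source tasks. A hedged sketch of that dispatch order follows; the levels mirror TASK_LEVEL__SINK/AGG/SOURCE from the diff, but the SToyTask type and the recoverAll() driver are hypothetical illustrations, not TDengine code.

#include <stdio.h>

/* Simplified stand-ins for the three task levels named in the diff. */
typedef enum { LEVEL_SOURCE = 0, LEVEL_AGG = 1, LEVEL_SINK = 2 } ETaskLevel;

typedef struct { int id; ETaskLevel level; } SToyTask;

static void recoverTask(const SToyTask* t) {
  /* Placeholder for streamRecoverSinkLevel / streamRecoverAggLevel /
   * streamRecoverSourceLevel. */
  printf("recover task %d at level %d\n", t->id, t->level);
}

/* Hypothetical driver: recover sink tasks first, then agg, then source,
 * matching the ordering suggested by the comments in the stubs. */
static void recoverAll(SToyTask* tasks, int n) {
  const ETaskLevel order[] = {LEVEL_SINK, LEVEL_AGG, LEVEL_SOURCE};
  for (int pass = 0; pass < 3; pass++) {
    for (int i = 0; i < n; i++) {
      if (tasks[i].level == order[pass]) recoverTask(&tasks[i]);
    }
  }
}

int main(void) {
  SToyTask tasks[] = {{1, LEVEL_SOURCE}, {2, LEVEL_AGG}, {3, LEVEL_SINK}};
  recoverAll(tasks, 3);  // recovers task 3, then 2, then 1
  return 0;
}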
...
...
source/libs/stream/src/streamTask.c
...
...
@@ -34,7 +34,7 @@ int32_t tEncodeStreamEpInfo(SEncoder* pEncoder, const SStreamChildEpInfo* pInfo)
  if (tEncodeI32(pEncoder, pInfo->taskId) < 0) return -1;
  if (tEncodeI32(pEncoder, pInfo->nodeId) < 0) return -1;
  if (tEncodeI32(pEncoder, pInfo->childId) < 0) return -1;
- if (tEncodeI64(pEncoder, pInfo->processedVer) < 0) return -1;
+ /*if (tEncodeI64(pEncoder, pInfo->processedVer) < 0) return -1;*/
  if (tEncodeSEpSet(pEncoder, &pInfo->epSet) < 0) return -1;
  return 0;
}
...
...
@@ -43,7 +43,7 @@ int32_t tDecodeStreamEpInfo(SDecoder* pDecoder, SStreamChildEpInfo* pInfo) {
  if (tDecodeI32(pDecoder, &pInfo->taskId) < 0) return -1;
  if (tDecodeI32(pDecoder, &pInfo->nodeId) < 0) return -1;
  if (tDecodeI32(pDecoder, &pInfo->childId) < 0) return -1;
- if (tDecodeI64(pDecoder, &pInfo->processedVer) < 0) return -1;
+ /*if (tDecodeI64(pDecoder, &pInfo->processedVer) < 0) return -1;*/
  if (tDecodeSEpSet(pDecoder, &pInfo->epSet) < 0) return -1;
  return 0;
}
...
...
source/libs/sync/inc/syncInt.h
...
...
@@ -192,9 +192,11 @@ int32_t syncNodeRestartElectTimer(SSyncNode* pSyncNode, int32_t ms);
 int32_t syncNodeResetElectTimer(SSyncNode* pSyncNode);
 int32_t syncNodeStartHeartbeatTimer(SSyncNode* pSyncNode);
 int32_t syncNodeStartNowHeartbeatTimer(SSyncNode* pSyncNode);
+int32_t syncNodeStartHeartbeatTimerMS(SSyncNode* pSyncNode, int32_t ms);
 int32_t syncNodeStopHeartbeatTimer(SSyncNode* pSyncNode);
 int32_t syncNodeRestartHeartbeatTimer(SSyncNode* pSyncNode);
 int32_t syncNodeRestartNowHeartbeatTimer(SSyncNode* pSyncNode);
+int32_t syncNodeRestartNowHeartbeatTimerMS(SSyncNode* pSyncNode, int32_t ms);

 // utils --------------
 int32_t syncNodeSendMsgById(const SRaftId* destRaftId, SSyncNode* pSyncNode, SRpcMsg* pMsg);
...
...
source/libs/sync/src/syncMain.c
...
...
@@ -1322,10 +1322,10 @@ int32_t syncNodeStartHeartbeatTimer(SSyncNode* pSyncNode) {
   return ret;
 }

-int32_t syncNodeStartNowHeartbeatTimer(SSyncNode* pSyncNode) {
+int32_t syncNodeStartHeartbeatTimerMS(SSyncNode* pSyncNode, int32_t ms) {
   int32_t ret = 0;
   if (syncEnvIsStart()) {
-    taosTmrReset(pSyncNode->FpHeartbeatTimerCB, 1, pSyncNode, gSyncEnv->pTimerManager, &pSyncNode->pHeartbeatTimer);
+    taosTmrReset(pSyncNode->FpHeartbeatTimerCB, ms, pSyncNode, gSyncEnv->pTimerManager, &pSyncNode->pHeartbeatTimer);
     atomic_store_64(&pSyncNode->heartbeatTimerLogicClock, pSyncNode->heartbeatTimerLogicClockUser);
   } else {
     sError("vgId:%d, start heartbeat timer error, sync env is stop", pSyncNode->vgId);
...
...
@@ -1333,13 +1333,18 @@ int32_t syncNodeStartNowHeartbeatTimer(SSyncNode* pSyncNode) {
   do {
     char logBuf[128];
-    snprintf(logBuf, sizeof(logBuf), "start heartbeat timer, ms:%d", 1);
+    snprintf(logBuf, sizeof(logBuf), "start heartbeat timer, ms:%d", ms);
     syncNodeEventLog(pSyncNode, logBuf);
   } while (0);

   return ret;
 }

+int32_t syncNodeStartNowHeartbeatTimer(SSyncNode* pSyncNode) {
+  int32_t ret = syncNodeStartHeartbeatTimerMS(pSyncNode, 1);
+  return ret;
+}
+
 int32_t syncNodeStopHeartbeatTimer(SSyncNode* pSyncNode) {
   int32_t ret = 0;
   atomic_add_fetch_64(&pSyncNode->heartbeatTimerLogicClockUser, 1);
...
...
@@ -1363,6 +1368,12 @@ int32_t syncNodeRestartNowHeartbeatTimer(SSyncNode* pSyncNode) {
   return 0;
 }

+int32_t syncNodeRestartNowHeartbeatTimerMS(SSyncNode* pSyncNode, int32_t ms) {
+  syncNodeStopHeartbeatTimer(pSyncNode);
+  syncNodeStartHeartbeatTimerMS(pSyncNode, ms);
+  return 0;
+}
+
 // utils --------------
 int32_t syncNodeSendMsgById(const SRaftId* destRaftId, SSyncNode* pSyncNode, SRpcMsg* pMsg) {
   SEpSet epSet;
...
...
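The syncMain.c hunks above generalize the heartbeat-timer helpers: the fixed 1 ms "now" variants become thin wrappers over new *MS functions that take the interval as a parameter, and a restart-with-interval helper is added. A toy sketch of the same refactor shape follows; ToyTimer and its helpers are hypothetical stand-ins for the SSyncNode timer state, not the real sync API.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the sync node's heartbeat timer state. */
typedef struct { int32_t intervalMs; int started; } ToyTimer;

/* Parameterized helper, analogous to syncNodeStartHeartbeatTimerMS(). */
static int32_t startTimerMS(ToyTimer* t, int32_t ms) {
  t->intervalMs = ms;
  t->started = 1;
  printf("start heartbeat timer, ms:%d\n", ms);
  return 0;
}

/* The old fixed-interval entry point becomes a one-line wrapper,
 * matching syncNodeStartNowHeartbeatTimer() in the diff. */
static int32_t startTimerNow(ToyTimer* t) { return startTimerMS(t, 1); }

/* Restart with an explicit interval, analogous to
 * syncNodeRestartNowHeartbeatTimerMS(): stop, then start with ms. */
static int32_t restartTimerMS(ToyTimer* t, int32_t ms) {
  t->started = 0;
  return startTimerMS(t, ms);
}

int main(void) {
  ToyTimer t = {0};
  startTimerNow(&t);       // ms:1
  restartTimerMS(&t, 50);  // ms:50
  return 0;
}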
source/libs/sync/src/syncReplication.c
...
...
@@ -200,9 +200,23 @@ int32_t syncNodeAppendEntriesPeersSnapshot2(SSyncNode* pSyncNode) {
     // send msg
     syncNodeAppendEntriesBatch(pSyncNode, pDestId, pMsg);
     syncAppendEntriesBatchDestroy(pMsg);
+
+    // speed up
+    if (pMsg->dataCount > 0 && pMsg->prevLogIndex < pSyncNode->commitIndex) {
+      ret = 1;
+      do {
+        char     logBuf[128];
+        char     host[64];
+        uint16_t port;
+        syncUtilU642Addr(pDestId->addr, host, sizeof(host), &port);
+        snprintf(logBuf, sizeof(logBuf), "speed up for %s:%d, pre-index:%ld", host, port, pMsg->prevLogIndex);
+        syncNodeEventLog(pSyncNode, logBuf);
+      } while (0);
+    }
   }

-  return 0;
+  return ret;
 }

 int32_t syncNodeAppendEntriesPeersSnapshot(SSyncNode* pSyncNode) {
...
...
@@ -309,7 +323,14 @@ int32_t syncNodeReplicate(SSyncNode* pSyncNode) {
       break;
   }

+  if (ret > 0) {
+    // speed up replicate
+    int32_t ms = pSyncNode->heartbeatTimerMS < 50 ? pSyncNode->heartbeatTimerMS : 50;
+    syncNodeRestartNowHeartbeatTimerMS(pSyncNode, ms);
+  } else {
     syncNodeRestartHeartbeatTimer(pSyncNode);
+  }

   return ret;
 }
...
...
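syncNodeReplicate() above shortens the next heartbeat only when the batch it just sent is still behind the commit index, clamping the interval to at most 50 ms but never raising it above the configured heartbeatTimerMS. A small standalone check of that clamp is below; chooseHeartbeatMs() is a hypothetical helper used purely for illustration.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the clamp in syncNodeReplicate():
 * speed up to at most 50 ms, but never exceed the configured interval. */
static int32_t chooseHeartbeatMs(int32_t configuredMs, int needSpeedUp) {
  if (!needSpeedUp) return configuredMs;
  return configuredMs < 50 ? configuredMs : 50;
}

int main(void) {
  assert(chooseHeartbeatMs(1000, 1) == 50);    // large interval is clamped down
  assert(chooseHeartbeatMs(30, 1) == 30);      // small interval is kept as-is
  assert(chooseHeartbeatMs(1000, 0) == 1000);  // no speed-up: unchanged
  printf("heartbeat clamp behaves as expected\n");
  return 0;
}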
source/libs/transport/inc/transComm.h
...
...
@@ -105,13 +105,13 @@ typedef SRpcCtxVal STransCtxVal;
 typedef SRpcInfo     STrans;
 typedef SRpcConnInfo STransHandleInfo;

-// ref mgt
-// handle
+// ref mgt handle
 typedef struct SExHandle {
   void*   handle;
   int64_t refId;
   void*   pThrd;
 } SExHandle;

 /*convet from fqdn to ip */
 typedef struct SCvtAddr {
   char ip[TSDB_FQDN_LEN];
...
...
source/libs/transport/src/transComm.c
...
...
@@ -222,14 +222,13 @@ SAsyncPool* transAsyncPoolCreate(uv_loop_t* loop, int sz, void* arg, AsyncCB cb)
   pool->asyncs = taosMemoryCalloc(1, sizeof(uv_async_t) * pool->nAsync);

   for (int i = 0; i < pool->nAsync; i++) {
-    uv_async_t* async = &(pool->asyncs[i]);
-    uv_async_init(loop, async, cb);
-
     SAsyncItem* item = taosMemoryCalloc(1, sizeof(SAsyncItem));
     item->pThrd = arg;
     QUEUE_INIT(&item->qmsg);
     taosThreadMutexInit(&item->mtx, NULL);

+    uv_async_t* async = &(pool->asyncs[i]);
+    uv_async_init(loop, async, cb);
     async->data = item;
   }
   return pool;
...
...
@@ -238,7 +237,7 @@ SAsyncPool* transAsyncPoolCreate(uv_loop_t* loop, int sz, void* arg, AsyncCB cb)
 void transAsyncPoolDestroy(SAsyncPool* pool) {
   for (int i = 0; i < pool->nAsync; i++) {
     uv_async_t* async = &(pool->asyncs[i]);
     // uv_close((uv_handle_t*)async, NULL);
     SAsyncItem* item = async->data;
     taosThreadMutexDestroy(&item->mtx);
     taosMemoryFree(item);
...
...
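transAsyncPoolCreate() above now builds each SAsyncItem completely before registering the uv_async_t handle and attaching the item to it, which shrinks the window in which an initialized handle exists without usable per-handle state. A minimal libuv sketch of that ordering follows (it assumes libuv is installed and links with -luv); the Item type and names are illustrative, not TDengine's.

// Build (assumption): cc async_order.c -luv
#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

typedef struct {
  int payload;  // stand-in for the per-handle state (SAsyncItem in the diff)
} Item;

static void on_async(uv_async_t* handle) {
  Item* item = handle->data;  // data was attached before the handle was signalled
  printf("async fired, payload=%d\n", item->payload);
  uv_close((uv_handle_t*)handle, NULL);  // let uv_run() return
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();
  uv_async_t async;

  // Prepare the item completely first, mirroring the reordered pool code...
  Item* item = calloc(1, sizeof(Item));
  item->payload = 42;

  // ...then register the handle and attach the fully built item.
  uv_async_init(loop, &async, on_async);
  async.data = item;

  uv_async_send(&async);
  uv_run(loop, UV_RUN_DEFAULT);
  free(item);
  return 0;
}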
source/util/src/terror.c
...
...
@@ -614,6 +614,8 @@ TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_INVALID_STAT,        "Invalid rsma state"
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_QTASKINFO_CREATE,     "Rsma qtaskinfo creation error")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_FILE_CORRUPTED,       "Rsma file corrupted")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_REMOVE_EXISTS,        "Rsma remove exists")
+TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP,   "Rsma fetch msg is messed up")
+TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_EMPTY_INFO,           "Rsma info is empty")

//index
TAOS_DEFINE_ERROR(TSDB_CODE_INDEX_REBUILDING,          "Index is rebuilding")
...
...
tests/docs-examples-test/go.sh
...
...
@@ -4,7 +4,7 @@ set -e
taosd >> /dev/null 2>&1 &
taosadapter >> /dev/null 2>&1 &
sleep 10
cd ../../docs/examples/go
go mod tidy
...
...
tests/docs-examples-test/python.sh
0 → 100644
#!/bin/bash
set -e

taosd >> /dev/null 2>&1 &
taosadapter >> /dev/null 2>&1 &
sleep 10
cd ../../docs/examples/python

# 1
taos -s "create database if not exists log"
python3 connect_example.py

# 2
taos -s "drop database if exists power"
python3 native_insert_example.py

# 3
taos -s "drop database power"
python3 bind_param_example.py

# 4
taos -s "drop database power"
python3 multi_bind_example.py

# 5
python3 query_example.py

# 6
python3 async_query_example.py

# 7
taos -s "drop database if exists test"
python3 line_protocol_example.py

# 8
taos -s "drop database test"
python3 telnet_line_protocol_example.py

# 9
taos -s "drop database test"
python3 json_protocol_example.py

# 10
# python3 subscribe_demo.py
tests/parallel_test/collect_cases.sh
...
...
@@ -41,7 +41,7 @@ fi
cat ../script/jenkins/basic.txt |grep -v "^#" |grep -v "^$" |sed "s/^/,,script,/" >> $case_file
grep "^python" ../system-test/fulltest.sh |sed "s/^/,,system-test,/" >> $case_file
grep "^python" ../develop-test/fulltest.sh |sed "s/^/,,develop-test,/" >> $case_file
find ../docs-examples-test/ -name "*.sh" -printf '%f\n' | xargs -I {} echo ",,docs-examples-test,bash {}" >> $case_file

# tar source code for run.sh to use
# if [ $ent -eq 0 ]; then
#   cd ../../../
...
...
tests/parallel_test/run_case.sh
...
...
@@ -50,12 +50,14 @@ if [ $ent -eq 0 ]; then
    export LD_LIBRARY_PATH=/home/TDengine/debug/build/lib
    ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
    ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
+   ln -s /home/TDengine/include/client/taos.h /usr/include/taos.h 2>/dev/null
    CONTAINER_TESTDIR=/home/TDengine
else
    export PATH=$PATH:/home/TDinternal/debug/build/bin
    export LD_LIBRARY_PATH=/home/TDinternal/debug/build/lib
    ln -s /home/TDinternal/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
    ln -s /home/TDinternal/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
+   ln -s /home/TDinternal/community/include/client/taos.h /usr/include/taos.h 2>/dev/null
    CONTAINER_TESTDIR=/home/TDinternal/community
fi
mkdir -p /var/lib/taos/subscribe
...
...
tests/script/tsim/sma/rsmaCreateInsertQuery.sim
...
...
@@ -29,8 +29,8 @@ sql insert into ct1 values(now, 10);
sql insert into ct1 values(now+1s, 1);
sql insert into ct1 values(now+2s, 100);

-print =============== wait maxdelay 15+1 seconds for results
-sleep 16000
+print =============== wait maxdelay 15+2 seconds for results
+sleep 17000

print =============== select * from retention level 2 from memory
sql select * from ct1;
...
...
tests/script/tsim/sma/rsmaPersistenceRecovery.sim
...
...
@@ -29,8 +29,8 @@ sql insert into ct1 values(now, 10, 10.0);
sql insert into ct1 values(now+1s, 1, 1.0);
sql insert into ct1 values(now+2s, 100, 100.0);

-print =============== wait maxdelay 5+1 seconds for results
-sleep 6000
+print =============== wait maxdelay 5+2 seconds for results
+sleep 7000

print =============== select * from retention level 2 from memory
sql select * from ct1;
...
...
@@ -135,8 +135,8 @@ print =============== insert after rsma qtaskinfo recovery
sql insert into ct1 values(now, 50, 500.0);
sql insert into ct1 values(now+1s, 40, 40.0);

-print =============== wait maxdelay 5+1 seconds for results
-sleep 6000
+print =============== wait maxdelay 5+2 seconds for results
+sleep 7000

print =============== select * from retention level 2 from file and memory after rsma qtaskinfo recovery
sql select * from ct1;
...
...
tests/system-test/1-insert/create_retentions.py
...
...
@@ -187,7 +187,7 @@ class TDTestCase:
             tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.{stb} tags ({i+1})')

     def __insert_data(self, rows, ctb_num=20, dbname=DBNAME, rsma=False, rsma_type="sum"):
-        tdLog.printNoPrefix("==========step: start inser data into tables now.....")
+        tdLog.printNoPrefix("==========step: start insert data into tables now.....")
         # from ...pytest.util.common import DataSet
         data = DataSet()
         data.get_order_set(rows)
...
...
tests/system-test/2-query/distribute_agg_spread.py
...
...
@@ -6,13 +6,10 @@ import random

class TDTestCase:
-    updatecfgDict = {'debugFlag': 143, "cDebugFlag": 143, "uDebugFlag": 143, "rpcDebugFlag": 143, "tmrDebugFlag": 143,
-                     "jniDebugFlag": 143, "simDebugFlag": 143, "dDebugFlag": 143, "dDebugFlag": 143, "vDebugFlag": 143, "mDebugFlag": 143,
-                     "qDebugFlag": 143, "wDebugFlag": 143, "sDebugFlag": 143, "tsdbDebugFlag": 143, "tqDebugFlag": 143, "fsDebugFlag": 143,
-                     "udfDebugFlag": 143, "maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}
+    updatecfgDict = {"maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}

    def init(self, conn, logSql):
-        tdLog.debug("start to execute %s" % __file__)
+        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        self.vnode_disbutes = None
        self.ts = 1537146000000
...
...
@@ -31,60 +28,61 @@ class TDTestCase:
        same_result = tdSql.queryResult

        if spread_result != same_result:
-            tdLog.exit(" max function work not as expected, sql : %s " % spread_sql)
+            tdLog.exit(f" max function work not as expected, sql : {spread_sql} ")
        else:
-            tdLog.info(" max function work as expected, sql : %s " % spread_sql)
+            tdLog.info(f" max function work as expected, sql : {spread_sql} ")

-    def prepare_datas_of_distribute(self):
+    def prepare_datas_of_distribute(self, dbname="testdb"):

        # prepate datas for 20 tables distributed at different vgroups
-        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
-        tdSql.execute(" use testdb ")
+        tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
+        tdSql.execute(f" use {dbname} ")
        tdSql.execute(
-        '''create table stb1
+        f'''create table {dbname}.stb1
        (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
        tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
        '''
        )
        tdSql.execute(
-            '''
-            create table t1
+            f'''
+            create table {dbname}.t1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            '''
        )
        for i in range(20):
-            tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+            tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

        for i in range(9):
            tdSql.execute(
-                f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
            tdSql.execute(
-                f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )

        for i in range(1,21):
            if i ==1 or i == 4:
                continue
            else:
-                tbname = "ct"+f'{i}'
+                tbname = f"ct{i}"
                for j in range(9):
                    tdSql.execute(
-                        f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
+                        f"insert into {dbname}.{tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
                    )

-        tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-        tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

        tdSql.execute(
-            f'''insert into t1 values
+            f'''insert into {dbname}.t1 values
            ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
            ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
...
...
@@ -100,11 +98,11 @@ class TDTestCase:
            '''
        )

-        tdLog.info(" prepare data for distributed_aggregate done! ")
+        tdLog.info(f" prepare data for distributed_aggregate done! ")

-    def check_distribute_datas(self):
+    def check_distribute_datas(self, dbname="testdb"):
        # get vgroup_ids of all
-        tdSql.query("show vgroups ")
+        tdSql.query(f"show {dbname}.vgroups ")
        vgroups = tdSql.queryResult

        vnode_tables={}
...
...
@@ -112,9 +110,8 @@ class TDTestCase:
        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check sub_table of per vnode ,make sure sub_table has been distributed
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
...
...
@@ -126,9 +123,9 @@ class TDTestCase:
            if len(v)>=2:
                count+=1
        if count < 2:
-            tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
+            tdLog.exit(f" the datas of all not satisfy sub_table has been distributed ")

-    def check_spread_distribute_diff_vnode(self,col_name):
+    def check_spread_distribute_diff_vnode(self,col_name, dbname="testdb"):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
...
...
@@ -142,13 +139,13 @@ class TDTestCase:
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
        tbname_ins = ""
        for tbname in distribute_tbnames:
-            tbname_ins += "'%s' ," % tbname
+            tbname_ins += f"'{tbname}' ,"
        tbname_filters = tbname_ins[:-1]

-        spread_sql = f"select spread({col_name}) from stb1 where tbname in ({tbname_filters})"
+        spread_sql = f"select spread({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters})"

-        same_sql = f"select max({col_name}) - min({col_name}) from stb1 where tbname in ({tbname_filters})"
+        same_sql = f"select max({col_name}) - min({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters})"

        tdSql.query(spread_sql)
        spread_result = tdSql.queryResult
...
...
@@ -157,20 +154,20 @@ class TDTestCase:
        same_result = tdSql.queryResult

        if spread_result !=same_result:
-            tdLog.exit(" spread function work not as expected, sql : %s " % spread_sql)
+            tdLog.exit(f" spread function work not as expected, sql : {spread_sql} ")
        else:
-            tdLog.info(" spread function work as expected, sql : %s " % spread_sql)
+            tdLog.info(f" spread function work as expected, sql : {spread_sql} ")

-    def check_spread_status(self):
+    def check_spread_status(self, dbname="testdb"):
        # check max function work status
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
-            tablenames.append(table_name[0])
+            tablenames.append(f"{dbname}.{table_name[0]}")

-        tdSql.query("desc stb1")
+        tdSql.query(f"desc {dbname}.stb1")
        col_names = tdSql.queryResult
        colnames = []
...
...
@@ -185,80 +182,76 @@ class TDTestCase:
        # check max function for different vnode
        for colname in colnames:
-            if colname.startswith("c"):
+            if colname.startswith(f"c"):
                self.check_spread_distribute_diff_vnode(colname)
            else:
                # self.check_spread_distribute_diff_vnode(colname) # bug for tag
                pass

-    def distribute_agg_query(self):
+    def distribute_agg_query(self, dbname="testdb"):
        # basic filter
-        tdSql.query("select spread(c1) from stb1 where c1 is null")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 where c1 is null")
        tdSql.checkRows(1)

-        tdSql.query("select spread(c1) from stb1 where t1=1")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 where t1=1")
        tdSql.checkData(0,0,8.000000000)

-        tdSql.query("select spread(c1+c2) from stb1 where c1 =1 ")
+        tdSql.query(f"select spread(c1+c2) from {dbname}.stb1 where c1 =1 ")
        tdSql.checkData(0,0,0.000000000)

-        tdSql.query("select spread(c1) from stb1 where tbname=\"ct2\"")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 where tbname=\"ct2\"")
        tdSql.checkData(0,0,8.000000000)

-        tdSql.query("select spread(c1) from stb1 partition by tbname")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 partition by tbname")
        tdSql.checkRows(20)

-        tdSql.query("select spread(c1) from stb1 where t1> 4 partition by tbname")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
-        tdSql.query("select spread(c1) from stb1 union all select max(c1)-min(c1) from stb1 ")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 union all select max(c1)-min(c1) from {dbname}.stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,28.000000000)

        # join
-        tdSql.execute(" create database if not exists db ")
-        tdSql.execute(" use db ")
-        tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-        tdSql.execute(" create table tb1 using st tags(1) ")
-        tdSql.execute(" create table tb2 using st tags(2) ")
+        tdSql.execute(f" create database if not exists db ")
+        tdSql.execute(f" use db ")
+        tdSql.execute(f" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+        tdSql.execute(f" create table db.tb1 using db.st tags(1) ")
+        tdSql.execute(f" create table db.tb2 using db.st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
-            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
-            tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-        tdSql.query("select spread(tb1.c1), spread(tb2.c2) from tb1, tb2 where tb1.ts=tb2.ts")
+        tdSql.query(f"select spread(tb1.c1), spread(tb2.c2) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
        tdSql.checkRows(1)
-        tdSql.checkData(0,0,9.000000000)
+        tdSql.checkData(0,0,9.00000)

        # group by
-        tdSql.execute(" use testdb ")
-        tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
+        tdSql.execute(f" use {dbname} ")
+        tdSql.query(f" select max(c1),c1 from {dbname}.stb1 group by t1 ")
        tdSql.checkRows(20)

-        tdSql.query(" select max(c1),c1 from stb1 group by c1 ")
+        tdSql.query(f" select max(c1),c1 from {dbname}.stb1 group by c1 ")
        tdSql.checkRows(30)

-        tdSql.query(" select max(c1),c2 from stb1 group by c2 ")
+        tdSql.query(f" select max(c1),c2 from {dbname}.stb1 group by c2 ")
        tdSql.checkRows(31)

        # partition by tbname or partition by tag
-        tdSql.query("select spread(c1) from stb1 partition by tbname")
+        tdSql.query(f"select spread(c1) from {dbname}.stb1 partition by tbname")
        query_data = tdSql.queryResult

        # nest query for support max
-        tdSql.query("select spread(c2+2)+1 from (select max(c1) c2 from stb1)")
+        tdSql.query(f"select spread(c2+2)+1 from (select max(c1) c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,1.000000000)
-        tdSql.query("select spread(c1+2)+1 as c2 from (select ts ,c1 ,c2 from stb1)")
+        tdSql.query(f"select spread(c1+2)+1 as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,29.000000000)
-        tdSql.query("select spread(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+        tdSql.query(f"select spread(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,29.000000000)

        # mixup with other functions
-        tdSql.query("select max(c1),count(c1),last(c2,c3),spread(c1) from stb1")
+        tdSql.query(f"select max(c1),count(c1),last(c2,c3),spread(c1) from {dbname}.stb1")
        tdSql.checkData(0,0,28)
        tdSql.checkData(0,1,184)
        tdSql.checkData(0,2,-99999)
...
...
@@ -275,7 +268,7 @@ class TDTestCase:
    def stop(self):
        tdSql.close()
-        tdLog.success("%s successfully executed" % __file__)
+        tdLog.success(f"{__file__} successfully executed")

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
tests/system-test/2-query/distribute_agg_stddev.py
...
...
@@ -7,10 +7,7 @@ import platform
import math

class TDTestCase:
-    updatecfgDict = {'debugFlag': 143, "cDebugFlag": 143, "uDebugFlag": 143, "rpcDebugFlag": 143, "tmrDebugFlag": 143,
-                     "jniDebugFlag": 143, "simDebugFlag": 143, "dDebugFlag": 143, "dDebugFlag": 143, "vDebugFlag": 143, "mDebugFlag": 143,
-                     "qDebugFlag": 143, "wDebugFlag": 143, "sDebugFlag": 143, "tsdbDebugFlag": 143, "tqDebugFlag": 143, "fsDebugFlag": 143,
-                     "udfDebugFlag": 143, "maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}
+    updatecfgDict = {"maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}

    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
...
...
@@ -45,55 +42,56 @@ class TDTestCase:
        else:
            tdLog.exit(" sql:%s; row:0 col:0 data:%d , expect:%d" % (stddev_sql, tdSql.queryResult[0][0], stddev_result))

-    def prepare_datas_of_distribute(self):
+    def prepare_datas_of_distribute(self, dbname="testdb"):

        # prepate datas for 20 tables distributed at different vgroups
-        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
-        tdSql.execute(" use testdb ")
+        tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
+        tdSql.execute(f" use {dbname} ")
        tdSql.execute(
-        '''create table stb1
+        f'''create table {dbname}.stb1
        (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
        tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
        '''
        )
        tdSql.execute(
-            '''
-            create table t1
+            f'''
+            create table {dbname}.t1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            '''
        )
        for i in range(20):
-            tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+            tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

        for i in range(9):
            tdSql.execute(
-                f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
            tdSql.execute(
-                f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )

        for i in range(1,21):
            if i ==1 or i == 4:
                continue
            else:
-                tbname = "ct"+f'{i}'
+                tbname = f"ct{i}"
                for j in range(9):
                    tdSql.execute(
-                        f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
+                        f"insert into {dbname}.{tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
                    )

-        tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-        tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

        tdSql.execute(
-            f'''insert into t1 values
+            f'''insert into {dbname}.t1 values
            ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
            ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
...
...
@@ -109,11 +107,11 @@ class TDTestCase:
            '''
        )

-        tdLog.info(" prepare data for distributed_aggregate done! ")
+        tdLog.info(f" prepare data for distributed_aggregate done! ")

-    def check_distribute_datas(self):
+    def check_distribute_datas(self, dbname="testdb"):
        # get vgroup_ids of all
-        tdSql.query("show vgroups ")
+        tdSql.query(f"show {dbname}.vgroups ")
        vgroups = tdSql.queryResult

        vnode_tables={}
...
...
@@ -121,9 +119,8 @@ class TDTestCase:
        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check sub_table of per vnode ,make sure sub_table has been distributed
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
...
...
@@ -135,9 +132,9 @@ class TDTestCase:
            if len(v)>=2:
                count+=1
        if count < 2:
-            tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
+            tdLog.exit(f" the datas of all not satisfy sub_table has been distributed ")

-    def check_stddev_distribute_diff_vnode(self,col_name):
+    def check_stddev_distribute_diff_vnode(self,col_name, dbname="testdb"):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
...
...
@@ -155,9 +152,9 @@ class TDTestCase:
        tbname_filters = tbname_ins[:-1]

-        stddev_sql = f"select stddev({col_name}) from stb1 where tbname in ({tbname_filters});"
+        stddev_sql = f"select stddev({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"

-        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
+        same_sql = f"select {col_name} from {dbname}.stb1 where tbname in ({tbname_filters}) and {col_name} is not null "

        tdSql.query(same_sql)
        pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
...
...
@@ -175,17 +172,16 @@ class TDTestCase:
        tdSql.query(stddev_sql)
        tdSql.checkData(0,0,stddev_result)

-    def check_stddev_status(self):
+    def check_stddev_status(self, dbname="testdb"):
        # check max function work status
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
-            tablenames.append(table_name[0])
+            tablenames.append(f"{dbname}.{table_name[0]}")

-        tdSql.query("desc stb1")
+        tdSql.query(f"desc {dbname}.stb1")
        col_names = tdSql.queryResult
        colnames = []
...
...
@@ -197,50 +193,42 @@ class TDTestCase:
        for colname in colnames:
            if colname.startswith("c"):
                self.check_stddev_functions(tablename,colname)
            else:
                # self.check_stddev_functions(tablename,colname)
                pass

        # check max function for different vnode
        for colname in colnames:
            if colname.startswith("c"):
                self.check_stddev_distribute_diff_vnode(colname)
            else:
                # self.check_stddev_distribute_diff_vnode(colname) # bug for tag
                pass

-    def distribute_agg_query(self):
+    def distribute_agg_query(self, dbname="testdb"):
        # basic filter
-        tdSql.query(" select stddev(c1) from stb1 ")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 ")
        tdSql.checkData(0,0,6.694663959)

-        tdSql.query(" select stddev(a) from (select stddev(c1) a from stb1 partition by tbname) ")
+        tdSql.query(f"select stddev(a) from (select stddev(c1) a from {dbname}.stb1 partition by tbname) ")
        tdSql.checkData(0,0,0.156797505)

-        tdSql.query(" select stddev(c1) from stb1 where t1=1")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 where t1=1")
        tdSql.checkData(0,0,2.581988897)

-        tdSql.query("select stddev(c1+c2) from stb1 where c1 =1 ")
+        tdSql.query(f"select stddev(c1+c2) from {dbname}.stb1 where c1 =1 ")
        tdSql.checkData(0,0,0.000000000)

-        tdSql.query("select stddev(c1) from stb1 where tbname=\"ct2\"")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 where tbname=\"ct2\"")
        tdSql.checkData(0,0,2.581988897)

-        tdSql.query("select stddev(c1) from stb1 partition by tbname")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 partition by tbname")
        tdSql.checkRows(20)

-        tdSql.query("select stddev(c1) from stb1 where t1> 4 partition by tbname")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
-        tdSql.query("select stddev(c1) from stb1 union all select stddev(c1) from stb1 ")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 union all select stddev(c1) from {dbname}.stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,6.694663959)

-        tdSql.query("select stddev(a) from (select stddev(c1) a from stb1 union all select stddev(c1) a from stb1)")
+        tdSql.query(f"select stddev(a) from (select stddev(c1) a from {dbname}.stb1 union all select stddev(c1) a from {dbname}.stb1)")
        tdSql.checkRows(1)
        tdSql.checkData(0,0,0.000000000)
...
...
@@ -248,38 +236,38 @@ class TDTestCase:
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
-        tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-        tdSql.execute(" create table tb1 using st tags(1) ")
-        tdSql.execute(" create table tb2 using st tags(2) ")
+        tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+        tdSql.execute(" create table db.tb1 using db.st tags(1) ")
+        tdSql.execute(" create table db.tb2 using db.st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
-            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
-            tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-        tdSql.query("select stddev(tb1.c1), stddev(tb2.c2) from tb1, tb2 where tb1.ts=tb2.ts")
+        tdSql.query("select stddev(tb1.c1), stddev(tb2.c2) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
        tdSql.checkRows(1)
        tdSql.checkData(0,0,2.872281323)
        tdSql.checkData(0,1,2.872281323)

        # group by
-        tdSql.execute(" use testdb ")
+        tdSql.execute(f" use {dbname} ")

        # partition by tbname or partition by tag
-        tdSql.query("select stddev(c1) from stb1 partition by tbname")
+        tdSql.query(f"select stddev(c1) from {dbname}.stb1 partition by tbname")
        tdSql.checkRows(20)

        # nest query for support max
-        tdSql.query("select stddev(c2+2)+1 from (select stddev(c1) c2 from stb1)")
+        tdSql.query(f"select stddev(c2+2)+1 from (select stddev(c1) c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,1.000000000)
-        tdSql.query("select stddev(c1+2) as c2 from (select ts ,c1 ,c2 from stb1)")
+        tdSql.query(f"select stddev(c1+2) as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,6.694663959)
-        tdSql.query("select stddev(a+2) as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+        tdSql.query(f"select stddev(a+2) as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,6.694663959)

        # mixup with other functions
-        tdSql.query("select max(c1),count(c1),last(c2,c3),sum(c1+c2),avg(c1),stddev(c1) from stb1")
+        tdSql.query(f"select max(c1),count(c1),last(c2,c3),sum(c1+c2),avg(c1),stddev(c1) from {dbname}.stb1")
        tdSql.checkData(0,0,28)
        tdSql.checkData(0,1,184)
        tdSql.checkData(0,2,-99999)
...
...
tests/system-test/2-query/distribute_agg_sum.py
...
...
@@ -7,10 +7,7 @@ import platform

class TDTestCase:
-    updatecfgDict = {'debugFlag': 143, "cDebugFlag": 143, "uDebugFlag": 143, "rpcDebugFlag": 143, "tmrDebugFlag": 143,
-                     "jniDebugFlag": 143, "simDebugFlag": 143, "dDebugFlag": 143, "dDebugFlag": 143, "vDebugFlag": 143, "mDebugFlag": 143,
-                     "qDebugFlag": 143, "wDebugFlag": 143, "sDebugFlag": 143, "tsdbDebugFlag": 143, "tqDebugFlag": 143, "fsDebugFlag": 143,
-                     "udfDebugFlag": 143, "maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}
+    updatecfgDict = {"maxTablesPerVnode": 2, "minTablesPerVnode": 2, "tableIncStepPerVnode": 2}

    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
...
...
@@ -34,55 +31,56 @@ class TDTestCase:
        tdSql.query(sum_sql)
        tdSql.checkData(0,0,pre_sum)

-    def prepare_datas_of_distribute(self):
+    def prepare_datas_of_distribute(self, dbname="testdb"):

        # prepate datas for 20 tables distributed at different vgroups
-        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
-        tdSql.execute(" use testdb ")
+        tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
+        tdSql.execute(f" use {dbname} ")
        tdSql.execute(
-        '''create table stb1
+        f'''create table {dbname}.stb1
        (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
        tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
        '''
        )
        tdSql.execute(
-            '''
-            create table t1
+            f'''
+            create table {dbname}.t1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            '''
        )
        for i in range(20):
-            tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+            tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

        for i in range(9):
            tdSql.execute(
-                f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
            tdSql.execute(
-                f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+                f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )

        for i in range(1,21):
            if i ==1 or i == 4:
                continue
            else:
-                tbname = "ct"+f'{i}'
+                tbname = f"ct{i}"
                for j in range(9):
                    tdSql.execute(
-                        f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
+                        f"insert into {dbname}.{tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
                    )

-        tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-        tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-        tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-        tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+        tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

        tdSql.execute(
-            f'''insert into t1 values
+            f'''insert into {dbname}.t1 values
            ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
            ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
...
...
@@ -98,11 +96,11 @@ class TDTestCase:
            '''
        )

-        tdLog.info(" prepare data for distributed_aggregate done! ")
+        tdLog.info(f" prepare data for distributed_aggregate done! ")

-    def check_distribute_datas(self):
+    def check_distribute_datas(self, dbname="testdb"):
        # get vgroup_ids of all
-        tdSql.query("show vgroups ")
+        tdSql.query(f"show {dbname}.vgroups ")
        vgroups = tdSql.queryResult

        vnode_tables={}
...
...
@@ -110,9 +108,8 @@ class TDTestCase:
        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check sub_table of per vnode ,make sure sub_table has been distributed
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
...
...
@@ -124,9 +121,9 @@ class TDTestCase:
            if len(v)>=2:
                count+=1
        if count < 2:
-            tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
+            tdLog.exit(f" the datas of all not satisfy sub_table has been distributed ")

-    def check_sum_distribute_diff_vnode(self,col_name):
+    def check_sum_distribute_diff_vnode(self,col_name, dbname="testdb"):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
...
...
@@ -144,9 +141,9 @@ class TDTestCase:
        tbname_filters = tbname_ins[:-1]

-        sum_sql = f"select sum({col_name}) from stb1 where tbname in ({tbname_filters});"
+        sum_sql = f"select sum({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"

-        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
+        same_sql = f"select {col_name} from {dbname}.stb1 where tbname in ({tbname_filters}) and {col_name} is not null "

        tdSql.query(same_sql)
        pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
...
...
@@ -157,16 +154,16 @@ class TDTestCase:
        tdSql.query(sum_sql)
        tdSql.checkData(0,0,pre_sum)

-    def check_sum_status(self):
+    def check_sum_status(self, dbname="testdb"):
        # check max function work status
-        tdSql.query("show tables like 'ct%'")
+        tdSql.query(f"show {dbname}.tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
-            tablenames.append(table_name[0])
+            tablenames.append(f"{dbname}.{table_name[0]}")

-        tdSql.query("desc stb1")
+        tdSql.query(f"desc {dbname}.stb1")
        col_names = tdSql.queryResult
        colnames = []
...
...
@@ -183,79 +180,75 @@ class TDTestCase:
        for colname in colnames:
            if colname.startswith("c"):
                self.check_sum_distribute_diff_vnode(colname)
            else:
                # self.check_sum_distribute_diff_vnode(colname) # bug for tag
                pass

-    def distribute_agg_query(self):
+    def distribute_agg_query(self, dbname="testdb"):
        # basic filter
-        tdSql.query(" select sum(c1) from stb1 ")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 ")
        tdSql.checkData(0,0,2592)

-        tdSql.query(" select sum(a) from (select sum(c1) a from stb1 partition by tbname) ")
+        tdSql.query(f"select sum(a) from (select sum(c1) a from {dbname}.stb1 partition by tbname) ")
        tdSql.checkData(0,0,2592)

-        tdSql.query(" select sum(c1) from stb1 where t1=1")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 where t1=1")
        tdSql.checkData(0,0,54)

-        tdSql.query("select sum(c1+c2) from stb1 where c1 =1 ")
+        tdSql.query(f"select sum(c1+c2) from {dbname}.stb1 where c1 =1 ")
        tdSql.checkData(0,0,22224.000000000)

-        tdSql.query("select sum(c1) from stb1 where tbname=\"ct2\"")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 where tbname=\"ct2\"")
        tdSql.checkData(0,0,54)

-        tdSql.query("select sum(c1) from stb1 partition by tbname")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 partition by tbname")
        tdSql.checkRows(20)

-        tdSql.query("select sum(c1) from stb1 where t1> 4 partition by tbname")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
-        tdSql.query("select sum(c1) from stb1 union all select sum(c1) from stb1 ")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 union all select sum(c1) from {dbname}.stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,2592)

-        tdSql.query("select sum(a) from (select sum(c1) a from stb1 union all select sum(c1) a from stb1)")
+        tdSql.query(f"select sum(a) from (select sum(c1) a from {dbname}.stb1 union all select sum(c1) a from {dbname}.stb1)")
        tdSql.checkRows(1)
        tdSql.checkData(0,0,5184)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
-        tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-        tdSql.execute(" create table tb1 using st tags(1) ")
-        tdSql.execute(" create table tb2 using st tags(2) ")
+        tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+        tdSql.execute(" create table db.tb1 using db.st tags(1) ")
+        tdSql.execute(" create table db.tb2 using db.st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
-            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
-            tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
+            tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-        tdSql.query("select sum(tb1.c1), sum(tb2.c2) from tb1, tb2 where tb1.ts=tb2.ts")
+        tdSql.query("select sum(tb1.c1), sum(tb2.c2) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
        tdSql.checkRows(1)
        tdSql.checkData(0,0,45)
        tdSql.checkData(0,1,45.000000000)

        # group by
-        tdSql.execute(" use testdb ")
+        tdSql.execute(f"use {dbname} ")

        # partition by tbname or partition by tag
-        tdSql.query("select sum(c1) from stb1 partition by tbname")
+        tdSql.query(f"select sum(c1) from {dbname}.stb1 partition by tbname")
        tdSql.checkRows(20)

        # nest query for support max
-        tdSql.query("select abs(c2+2)+1 from (select sum(c1) c2 from stb1)")
+        tdSql.query(f"select abs(c2+2)+1 from (select sum(c1) c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,2595.000000000)
-        tdSql.query("select sum(c1+2) as c2 from (select ts ,c1 ,c2 from stb1)")
+        tdSql.query(f"select sum(c1+2) as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,2960.000000000)
-        tdSql.query("select sum(a+2) as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+        tdSql.query(f"select sum(a+2) as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
        tdSql.checkData(0,0,2960.000000000)

        # mixup with other functions
-        tdSql.query("select max(c1),count(c1),last(c2,c3),sum(c1+c2) from stb1")
+        tdSql.query(f"select max(c1),count(c1),last(c2,c3),sum(c1+c2) from {dbname}.stb1")
        tdSql.checkData(0,0,28)
        tdSql.checkData(0,1,184)
        tdSql.checkData(0,2,-99999)
...
...
tests/system-test/fulltest.sh
...
...
@@ -90,6 +90,12 @@ python3 ./test.py -f 2-query/distribute_agg_max.py
 python3 ./test.py -f 2-query/distribute_agg_max.py -R
 python3 ./test.py -f 2-query/distribute_agg_min.py
 python3 ./test.py -f 2-query/distribute_agg_min.py -R
+python3 ./test.py -f 2-query/distribute_agg_spread.py
+python3 ./test.py -f 2-query/distribute_agg_spread.py -R
+python3 ./test.py -f 2-query/distribute_agg_stddev.py
+python3 ./test.py -f 2-query/distribute_agg_stddev.py -R
+python3 ./test.py -f 2-query/distribute_agg_sum.py
+python3 ./test.py -f 2-query/distribute_agg_sum.py -R
...
...
@@ -156,9 +162,6 @@ python3 ./test.py -f 2-query/function_stateduration.py
 python3 ./test.py -f 2-query/statecount.py
 python3 ./test.py -f 2-query/tail.py
 python3 ./test.py -f 2-query/ttl_comment.py
-python3 ./test.py -f 2-query/distribute_agg_sum.py
-python3 ./test.py -f 2-query/distribute_agg_spread.py
-python3 ./test.py -f 2-query/distribute_agg_stddev.py
 python3 ./test.py -f 2-query/twa.py
 python3 ./test.py -f 2-query/irate.py
 python3 ./test.py -f 2-query/function_null.py
...
...
taos-tools @ 3c7dafee
Subproject commit 3c7dafeea3e558968165b73bee0f51024898e3da