Commit 40073e19
Authored Aug 10, 2021 by Cary Xu

    Merge branch 'master' into hotfix/td-5765

Parents: 3c566750, 368deabb

Showing 45 changed files with 3,055 additions and 797 deletions (+3055, -797).
.drone.yml  +81 -27
packaging/tools/makepkg.sh  +1 -1
src/client/inc/tsclient.h  +488 -19
src/client/src/tscParseInsert.c  +131 -453
src/client/src/tscPrepare.c  +1 -1
src/client/src/tscUtil.c  +24 -114
src/common/inc/tdataformat.h  +81 -26
src/common/src/tdataformat.c  +35 -27
src/connector/grafanaplugin  +1 -1
src/inc/taosmsg.h  +1 -0
src/kit/taosdemo/taosdemo.c  +8 -8
src/plugins/http/src/httpParser.c  +34 -8
src/plugins/http/src/httpUtil.c  +12 -4
src/query/inc/qExecutor.h  +0 -1
src/query/src/qExecutor.c  +130 -41
src/query/src/qPlan.c  +12 -4
src/rpc/src/rpcTcp.c  +4 -0
src/tsdb/inc/tsdbMeta.h  +3 -5
src/tsdb/src/tsdbCommit.c  +3 -2
src/tsdb/src/tsdbCompact.c  +1 -1
src/tsdb/src/tsdbMemTable.c  +10 -12
src/tsdb/src/tsdbMeta.c  +54 -26
src/tsdb/src/tsdbRead.c  +3 -3
src/tsdb/src/tsdbReadImpl.c  +5 -5
tests/pytest/alter/alterColMultiTimes.py  +67 -0
tests/pytest/fulltest.sh  +7 -6
tests/pytest/insert/schemalessInsert.py  +1 -1
tests/pytest/tools/taosdemoAllTest/NanoTestCase/nano_samples.csv  +100 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv  +100 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json  +63 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json  +63 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json  +63 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py  +115 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json  +88 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json  +84 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json  +62 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json  +84 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoInsert.py  +156 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.json  +92 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py  +157 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuerycsv.json  +110 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json  +32 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py  +125 -0
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py  +362 -0
tests/script/jenkins/basic.txt  +1 -1
.drone.yml
@@ -25,15 +25,14 @@ steps:
   - master
 ---
 kind: pipeline
-name: test_arm64
+name: test_arm64_bionic
 platform:
   os: linux
   arch: arm64
 steps:
 - name: build
-  image: gcc
+  image: arm64v8/ubuntu:bionic
   commands:
   - apt-get update
   - apt-get install -y cmake build-essential
@@ -48,9 +47,87 @@ steps:
   branch:
   - develop
   - master
+  - 2.0
 ---
 kind: pipeline
-name: test_arm
+name: test_arm64_focal
+platform:
+  os: linux
+  arch: arm64
+steps:
+- name: build
+  image: arm64v8/ubuntu:focal
+  commands:
+  - echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
+  - apt-get update
+  - apt-get install -y -qq cmake build-essential
+  - mkdir debug
+  - cd debug
+  - cmake .. -DCPUTYPE=aarch64 > /dev/null
+  - make
+trigger:
+  event:
+  - pull_request
+when:
+  branch:
+  - develop
+  - master
+  - 2.0
+---
+kind: pipeline
+name: test_arm64_centos7
+platform:
+  os: linux
+  arch: arm64
+steps:
+- name: build
+  image: arm64v8/centos:7
+  commands:
+  - yum install -y gcc gcc-c++ make cmake git
+  - mkdir debug
+  - cd debug
+  - cmake .. -DCPUTYPE=aarch64 > /dev/null
+  - make
+trigger:
+  event:
+  - pull_request
+when:
+  branch:
+  - develop
+  - master
+  - 2.0
+---
+kind: pipeline
+name: test_arm64_centos8
+platform:
+  os: linux
+  arch: arm64
+steps:
+- name: build
+  image: arm64v8/centos:8
+  commands:
+  - dnf install -y gcc gcc-c++ make cmake epel-release git libarchive
+  - mkdir debug
+  - cd debug
+  - cmake .. -DCPUTYPE=aarch64 > /dev/null
+  - make
+trigger:
+  event:
+  - pull_request
+when:
+  branch:
+  - develop
+  - master
+  - 2.0
+---
+kind: pipeline
+name: test_arm_bionic
 platform:
   os: linux
@@ -73,7 +150,6 @@ steps:
   branch:
   - develop
   - master
 ---
 kind: pipeline
 name: build_trusty
@@ -174,25 +250,3 @@ steps:
   - develop
   - master
----
-kind: pipeline
-name: goodbye
-platform:
-  os: linux
-  arch: amd64
-steps:
-- name: 64-bit
-  image: alpine
-  commands:
-  - echo 64-bit is good.
-  when:
-    branch:
-    - develop
-    - master
-depends_on:
-- test_arm64
-- test_amd64
\ No newline at end of file
packaging/tools/makepkg.sh
@@ -35,7 +35,7 @@ fi
 if [ "$pagMode" == "lite" ]; then
   strip ${build_dir}/bin/taosd
   strip ${build_dir}/bin/taos
-  bin_files="${build_dir}/bin/taosd ${build_dir}/bin/taos ${script_dir}/remove.sh"
+  bin_files="${build_dir}/bin/taosd ${build_dir}/bin/taos ${script_dir}/remove.sh ${script_dir}/startPre.sh"
 else
   bin_files="${build_dir}/bin/taosd ${build_dir}/bin/taos ${build_dir}/bin/taosdump ${build_dir}/bin/taosdemo ${build_dir}/bin/tarbitrator \
     ${script_dir}/remove.sh ${script_dir}/set_core.sh ${script_dir}/startPre.sh ${script_dir}/taosd-dump-cfg.gdb"
src/client/inc/tsclient.h
@@ -84,9 +84,14 @@ typedef struct SParamInfo {
 } SParamInfo;
 
 typedef struct SBoundColumn {
-  bool    hasVal;   // denote if current column has bound or not
   int32_t offset;   // all column offset value
+  int32_t toffset;  // first part offset for SDataRow TODO: get offset from STSchema on future
+  uint8_t valStat;  // denote if current column bound or not(0 means has val, 1 means no val)
 } SBoundColumn;
+typedef enum {
+  VAL_STAT_HAS = 0x0,    // 0 means has val
+  VAL_STAT_NONE = 0x01,  // 1 means no val
+} EValStat;
 
 typedef struct {
   uint16_t schemaColIdx;
@@ -99,32 +104,106 @@ typedef enum _COL_ORDER_STATUS {
   ORDER_STATUS_ORDERED = 1,
   ORDER_STATUS_DISORDERED = 2,
 } EOrderStatus;
 
 typedef struct SParsedDataColInfo {
   int16_t        numOfCols;
   int16_t        numOfBound;
-  int32_t *      boundedColumns;  // bounded column idx according to schema
+  uint16_t       flen;            // TODO: get from STSchema
+  uint16_t       allNullLen;      // TODO: get from STSchema
+  uint16_t       extendedVarLen;
+  int32_t *      boundedColumns;  // bound column idx according to schema
   SBoundColumn * cols;
   SBoundIdxInfo *colIdxInfo;
-  int8_t         orderStatus;  // bounded columns:
+  int8_t         orderStatus;  // bound columns
 } SParsedDataColInfo;
 
-#define IS_DATA_COL_ORDERED(s) ((s) == (int8_t)ORDER_STATUS_ORDERED)
+#define IS_DATA_COL_ORDERED(spd) ((spd->orderStatus) == (int8_t)ORDER_STATUS_ORDERED)
 
 typedef struct {
-  SSchema *   pSchema;
-  int16_t     sversion;
-  int32_t     flen;
-  uint16_t    nCols;
-  void *      buf;
-  void *      pDataBlock;
-  SSubmitBlk *pSubmitBlk;
+  int32_t dataLen;  // len of SDataRow
+  int32_t kvLen;    // len of SKVRow
+} SMemRowInfo;
+typedef struct {
+  uint8_t      memRowType;
+  uint8_t      compareStat;  // 0 unknown, 1 need compare, 2 no need
+  TDRowTLenT   dataRowInitLen;
+  TDRowTLenT   kvRowInitLen;
+  SMemRowInfo *rowInfo;
 } SMemRowBuilder;
 
-typedef struct {
-  TDRowLenT allNullLen;
-} SMemRowHelper;
+typedef enum {
+  ROW_COMPARE_UNKNOWN = 0,
+  ROW_COMPARE_NEED = 1,
+  ROW_COMPARE_NO_NEED = 2,
+} ERowCompareStat;
+
+int tsParseTime(SStrToken *pToken, int64_t *time, char **next, char *error, int16_t timePrec);
+
+int  initMemRowBuilder(SMemRowBuilder *pBuilder, uint32_t nRows, uint32_t nCols, uint32_t nBoundCols,
+                       int32_t allNullLen);
+void destroyMemRowBuilder(SMemRowBuilder *pBuilder);
+
+/**
+ * @brief
+ *
+ * @param memRowType
+ * @param spd
+ * @param idx the absolute bound index of columns
+ * @return FORCE_INLINE
+ */
+static FORCE_INLINE void tscGetMemRowAppendInfo(SSchema *pSchema, uint8_t memRowType, SParsedDataColInfo *spd,
+                                                int32_t idx, int32_t *toffset, int16_t *colId) {
+  int32_t schemaIdx = 0;
+  if (IS_DATA_COL_ORDERED(spd)) {
+    schemaIdx = spd->boundedColumns[idx];
+    if (isDataRowT(memRowType)) {
+      *toffset = (spd->cols + schemaIdx)->toffset;  // the offset of firstPart
+    } else {
+      *toffset = idx * sizeof(SColIdx);  // the offset of SColIdx
+    }
+  } else {
+    ASSERT(idx == (spd->colIdxInfo + idx)->boundIdx);
+    schemaIdx = (spd->colIdxInfo + idx)->schemaColIdx;
+    if (isDataRowT(memRowType)) {
+      *toffset = (spd->cols + schemaIdx)->toffset;
+    } else {
+      *toffset = ((spd->colIdxInfo + idx)->finalIdx) * sizeof(SColIdx);
+    }
+  }
+  *colId = pSchema[schemaIdx].colId;
+}
+
+/**
+ * @brief Applicable to consume by multi-columns
+ *
+ * @param row
+ * @param value
+ * @param isCopyVarData In some scenario, the varVal is copied to row directly before calling tdAppend***ColVal()
+ * @param colId
+ * @param colType
+ * @param idx index in SSchema
+ * @param pBuilder
+ * @param spd
+ * @return FORCE_INLINE
+ */
+static FORCE_INLINE void tscAppendMemRowColVal(SMemRow row, const void *value, bool isCopyVarData, int16_t colId,
+                                               int8_t colType, int32_t toffset, SMemRowBuilder *pBuilder,
+                                               int32_t rowNum) {
+  tdAppendMemRowColVal(row, value, isCopyVarData, colId, colType, toffset);
+  if (pBuilder->compareStat == ROW_COMPARE_NEED) {
+    SMemRowInfo *pRowInfo = pBuilder->rowInfo + rowNum;
+    tdGetColAppendDeltaLen(value, colType, &pRowInfo->dataLen, &pRowInfo->kvLen);
+  }
+}
+
+// Applicable to consume by one row
+static FORCE_INLINE void tscAppendMemRowColValEx(SMemRow row, const void *value, bool isCopyVarData, int16_t colId,
+                                                 int8_t colType, int32_t toffset, int32_t *dataLen, int32_t *kvLen,
+                                                 uint8_t compareStat) {
+  tdAppendMemRowColVal(row, value, isCopyVarData, colId, colType, toffset);
+  if (compareStat == ROW_COMPARE_NEED) {
+    tdGetColAppendDeltaLen(value, colType, dataLen, kvLen);
+  }
+}
 
 typedef struct STableDataBlocks {
   SName  tableName;
   int8_t tsSource;  // where does the UNIX timestamp come from, server or client
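A note on the data structures above: SMemRowInfo and ERowCompareStat exist so the client can track, per row, how large the row would be in the dense SDataRow layout versus the sparse SKVRow layout, and only pay for a comparison (ROW_COMPARE_NEED) when the better choice is not obvious up front. The snippet below is a minimal standalone sketch of that bookkeeping; RowSizeInfo and chooseEncoding are illustrative names, not the TDengine API.

    #include <stdio.h>

    /* Hypothetical per-row bookkeeping, mirroring the idea of SMemRowInfo. */
    typedef struct {
      int dataLen;  /* projected size in the dense (SDataRow-like) layout */
      int kvLen;    /* projected size in the sparse (SKVRow-like) layout  */
    } RowSizeInfo;

    /* Pick the smaller encoding once every column value has been accounted for. */
    static const char *chooseEncoding(const RowSizeInfo *info) {
      return (info->kvLen < info->dataLen) ? "kv-row" : "data-row";
    }

    int main(void) {
      RowSizeInfo sparseRow = {420, 60};   /* e.g. 3 of 50 columns bound: kv wins */
      RowSizeInfo denseRow  = {420, 470};  /* e.g. all columns bound: dense wins  */
      printf("sparse row -> %s\n", chooseEncoding(&sparseRow));
      printf("dense  row -> %s\n", chooseEncoding(&denseRow));
      return 0;
    }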
@@ -146,7 +225,7 @@ typedef struct STableDataBlocks {
   uint32_t    numOfAllocedParams;
   uint32_t    numOfParams;
   SParamInfo *params;
-  SMemRowHelper rowHelper;
+  SMemRowBuilder rowBuilder;
 } STableDataBlocks;
 
 typedef struct {
@@ -435,8 +514,398 @@ int16_t getNewResColId(SSqlCmd* pCmd);
 int32_t schemaIdxCompar(const void *lhs, const void *rhs);
 int32_t boundIdxCompar(const void *lhs, const void *rhs);
-int     initSMemRowHelper(SMemRowHelper *pHelper, SSchema *pSSchema, uint16_t nCols, uint16_t allNullColsLen);
-int32_t getExtendedRowSize(STableComInfo *tinfo);
+static FORCE_INLINE int32_t getExtendedRowSize(STableDataBlocks *pBlock) {
+  ASSERT(pBlock->rowSize == pBlock->pTableMeta->tableInfo.rowSize);
+  return pBlock->rowSize + TD_MEM_ROW_DATA_HEAD_SIZE + pBlock->boundColumnInfo.extendedVarLen;
+}
+
+static FORCE_INLINE void checkAndConvertMemRow(SMemRow row, int32_t dataLen, int32_t kvLen) {
+  if (isDataRow(row)) {
+    if (kvLen < (dataLen * KVRatioConvert)) {
+      memRowSetConvert(row);
+    }
+  } else if (kvLen > dataLen) {
+    memRowSetConvert(row);
+  }
+}
+
+static FORCE_INLINE void initSMemRow(SMemRow row, uint8_t memRowType, STableDataBlocks *pBlock, int16_t nBoundCols) {
+  memRowSetType(row, memRowType);
+  if (isDataRowT(memRowType)) {
+    dataRowSetVersion(memRowDataBody(row), pBlock->pTableMeta->sversion);
+    dataRowSetLen(memRowDataBody(row), (TDRowLenT)(TD_DATA_ROW_HEAD_SIZE + pBlock->boundColumnInfo.flen));
+  } else {
+    ASSERT(nBoundCols > 0);
+    memRowSetKvVersion(row, pBlock->pTableMeta->sversion);
+    kvRowSetNCols(memRowKvBody(row), nBoundCols);
+    kvRowSetLen(memRowKvBody(row), (TDRowLenT)(TD_KV_ROW_HEAD_SIZE + sizeof(SColIdx) * nBoundCols));
+  }
+}
+
+/**
+ * TODO: Move to tdataformat.h and refactor when STSchema available.
+ *  - fetch flen and toffset from STSChema and remove param spd
+ */
+static FORCE_INLINE void convertToSDataRow(SMemRow dest, SMemRow src, SSchema *pSchema, int nCols,
+                                           SParsedDataColInfo *spd) {
+  ASSERT(isKvRow(src));
+  SKVRow   kvRow = memRowKvBody(src);
+  SDataRow dataRow = memRowDataBody(dest);
+
+  memRowSetType(dest, SMEM_ROW_DATA);
+  dataRowSetVersion(dataRow, memRowKvVersion(src));
+  dataRowSetLen(dataRow, (TDRowLenT)(TD_DATA_ROW_HEAD_SIZE + spd->flen));
+
+  int32_t kvIdx = 0;
+  for (int i = 0; i < nCols; ++i) {
+    SSchema *schema = pSchema + i;
+    void *   val = tdGetKVRowValOfColEx(kvRow, schema->colId, &kvIdx);
+    tdAppendDataColVal(dataRow, val != NULL ? val : getNullValue(schema->type), true, schema->type,
+                       (spd->cols + i)->toffset);
+  }
+}
+
+// TODO: Move to tdataformat.h and refactor when STSchema available.
+static FORCE_INLINE void convertToSKVRow(SMemRow dest, SMemRow src, SSchema *pSchema, int nCols, int nBoundCols,
+                                         SParsedDataColInfo *spd) {
+  ASSERT(isDataRow(src));
+
+  SDataRow dataRow = memRowDataBody(src);
+  SKVRow   kvRow = memRowKvBody(dest);
+
+  memRowSetType(dest, SMEM_ROW_KV);
+  memRowSetKvVersion(kvRow, dataRowVersion(dataRow));
+  kvRowSetNCols(kvRow, nBoundCols);
+  kvRowSetLen(kvRow, (TDRowLenT)(TD_KV_ROW_HEAD_SIZE + sizeof(SColIdx) * nBoundCols));
+
+  int32_t toffset = 0, kvOffset = 0;
+  for (int i = 0; i < nCols; ++i) {
+    if ((spd->cols + i)->valStat == VAL_STAT_HAS) {
+      SSchema *schema = pSchema + i;
+      toffset = (spd->cols + i)->toffset;
+      void *val = tdGetRowDataOfCol(dataRow, schema->type, toffset + TD_DATA_ROW_HEAD_SIZE);
+      tdAppendKvColVal(kvRow, val, true, schema->colId, schema->type, kvOffset);
+      kvOffset += sizeof(SColIdx);
+    }
+  }
+}
+
+// TODO: Move to tdataformat.h and refactor when STSchema available.
+static FORCE_INLINE void convertSMemRow(SMemRow dest, SMemRow src, STableDataBlocks *pBlock) {
+  STableMeta *        pTableMeta = pBlock->pTableMeta;
+  STableComInfo       tinfo = tscGetTableInfo(pTableMeta);
+  SSchema *           pSchema = tscGetTableSchema(pTableMeta);
+  SParsedDataColInfo *spd = &pBlock->boundColumnInfo;
+
+  ASSERT(dest != src);
+
+  if (isDataRow(src)) {
+    // TODO: Can we use pBlock -> numOfParam directly?
+    ASSERT(spd->numOfBound > 0);
+    convertToSKVRow(dest, src, pSchema, tinfo.numOfColumns, spd->numOfBound, spd);
+  } else {
+    convertToSDataRow(dest, src, pSchema, tinfo.numOfColumns, spd);
+  }
+}
+
+static bool isNullStr(SStrToken *pToken) {
+  return (pToken->type == TK_NULL) || ((pToken->type == TK_STRING) && (pToken->n != 0) &&
+                                       (strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0));
+}
+
+static FORCE_INLINE int32_t tscToDouble(SStrToken *pToken, double *value, char **endPtr) {
+  errno = 0;
+  *value = strtold(pToken->z, endPtr);
+
+  // not a valid integer number, return error
+  if ((*endPtr - pToken->z) != pToken->n) {
+    return TK_ILLEGAL;
+  }
+
+  return pToken->type;
+}
+
+static uint8_t TRUE_VALUE = (uint8_t)TSDB_TRUE;
+static uint8_t FALSE_VALUE = (uint8_t)TSDB_FALSE;
+static FORCE_INLINE int32_t tsParseOneColumnKV(SSchema *pSchema, SStrToken *pToken, SMemRow row, char *msg, char **str,
+                                               bool primaryKey, int16_t timePrec, int32_t toffset, int16_t colId,
+                                               int32_t *dataLen, int32_t *kvLen, uint8_t compareStat) {
+  int64_t iv;
+  int32_t ret;
+  char *  endptr = NULL;
+
+  if (IS_NUMERIC_TYPE(pSchema->type) && pToken->n == 0) {
+    return tscInvalidOperationMsg(msg, "invalid numeric data", pToken->z);
+  }
+
+  switch (pSchema->type) {
+    case TSDB_DATA_TYPE_BOOL: {  // bool
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        if ((pToken->type == TK_BOOL || pToken->type == TK_STRING) && (pToken->n != 0)) {
+          if (strncmp(pToken->z, "true", pToken->n) == 0) {
+            tscAppendMemRowColValEx(row, &TRUE_VALUE, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+          } else if (strncmp(pToken->z, "false", pToken->n) == 0) {
+            tscAppendMemRowColValEx(row, &FALSE_VALUE, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+          } else {
+            return tscSQLSyntaxErrMsg(msg, "invalid bool data", pToken->z);
+          }
+        } else if (pToken->type == TK_INTEGER) {
+          iv = strtoll(pToken->z, NULL, 10);
+          tscAppendMemRowColValEx(row, ((iv == 0) ? &FALSE_VALUE : &TRUE_VALUE), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+        } else if (pToken->type == TK_FLOAT) {
+          double dv = strtod(pToken->z, NULL);
+          tscAppendMemRowColValEx(row, ((dv == 0) ? &FALSE_VALUE : &TRUE_VALUE), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+        } else {
+          return tscInvalidOperationMsg(msg, "invalid bool data", pToken->z);
+        }
+      }
+      break;
+    }
+
+    case TSDB_DATA_TYPE_TINYINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid tinyint data", pToken->z);
+        } else if (!IS_VALID_TINYINT(iv)) {
+          return tscInvalidOperationMsg(msg, "data overflow", pToken->z);
+        }
+        uint8_t tmpVal = (uint8_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_UTINYINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid unsigned tinyint data", pToken->z);
+        } else if (!IS_VALID_UTINYINT(iv)) {
+          return tscInvalidOperationMsg(msg, "unsigned tinyint data overflow", pToken->z);
+        }
+        uint8_t tmpVal = (uint8_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_SMALLINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid smallint data", pToken->z);
+        } else if (!IS_VALID_SMALLINT(iv)) {
+          return tscInvalidOperationMsg(msg, "smallint data overflow", pToken->z);
+        }
+        int16_t tmpVal = (int16_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_USMALLINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid unsigned smallint data", pToken->z);
+        } else if (!IS_VALID_USMALLINT(iv)) {
+          return tscInvalidOperationMsg(msg, "unsigned smallint data overflow", pToken->z);
+        }
+        uint16_t tmpVal = (uint16_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_INT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid int data", pToken->z);
+        } else if (!IS_VALID_INT(iv)) {
+          return tscInvalidOperationMsg(msg, "int data overflow", pToken->z);
+        }
+        int32_t tmpVal = (int32_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_UINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid unsigned int data", pToken->z);
+        } else if (!IS_VALID_UINT(iv)) {
+          return tscInvalidOperationMsg(msg, "unsigned int data overflow", pToken->z);
+        }
+        uint32_t tmpVal = (uint32_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_BIGINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid bigint data", pToken->z);
+        } else if (!IS_VALID_BIGINT(iv)) {
+          return tscInvalidOperationMsg(msg, "bigint data overflow", pToken->z);
+        }
+        tscAppendMemRowColValEx(row, &iv, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_UBIGINT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
+        if (ret != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid unsigned bigint data", pToken->z);
+        } else if (!IS_VALID_UBIGINT((uint64_t)iv)) {
+          return tscInvalidOperationMsg(msg, "unsigned bigint data overflow", pToken->z);
+        }
+        uint64_t tmpVal = (uint64_t)iv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_FLOAT:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        double dv;
+        if (TK_ILLEGAL == tscToDouble(pToken, &dv, &endptr)) {
+          return tscInvalidOperationMsg(msg, "illegal float data", pToken->z);
+        }
+        if (((dv == HUGE_VAL || dv == -HUGE_VAL) && errno == ERANGE) || dv > FLT_MAX || dv < -FLT_MAX || isinf(dv) ||
+            isnan(dv)) {
+          return tscInvalidOperationMsg(msg, "illegal float data", pToken->z);
+        }
+        float tmpVal = (float)dv;
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_DOUBLE:
+      if (isNullStr(pToken)) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        double dv;
+        if (TK_ILLEGAL == tscToDouble(pToken, &dv, &endptr)) {
+          return tscInvalidOperationMsg(msg, "illegal double data", pToken->z);
+        }
+        if (((dv == HUGE_VAL || dv == -HUGE_VAL) && errno == ERANGE) || isinf(dv) || isnan(dv)) {
+          return tscInvalidOperationMsg(msg, "illegal double data", pToken->z);
+        }
+        tscAppendMemRowColValEx(row, &dv, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_BINARY:
+      // binary data cannot be null-terminated char string, otherwise the last char of the string is lost
+      if (pToken->type == TK_NULL) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        // too long values will return invalid sql, not be truncated automatically
+        if (pToken->n + VARSTR_HEADER_SIZE > pSchema->bytes) {  // todo refactor
+          return tscInvalidOperationMsg(msg, "string data overflow", pToken->z);
+        }
+        // STR_WITH_SIZE_TO_VARSTR(payload, pToken->z, pToken->n);
+        char *rowEnd = memRowEnd(row);
+        STR_WITH_SIZE_TO_VARSTR(rowEnd, pToken->z, pToken->n);
+        tscAppendMemRowColValEx(row, rowEnd, false, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_NCHAR:
+      if (pToken->type == TK_NULL) {
+        tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      } else {
+        // if the converted output len is over than pColumnModel->bytes, return error: 'Argument list too long'
+        int32_t output = 0;
+        char *  rowEnd = memRowEnd(row);
+        if (!taosMbsToUcs4(pToken->z, pToken->n, (char *)varDataVal(rowEnd), pSchema->bytes - VARSTR_HEADER_SIZE,
+                           &output)) {
+          char buf[512] = {0};
+          snprintf(buf, tListLen(buf), "%s", strerror(errno));
+          return tscInvalidOperationMsg(msg, buf, pToken->z);
+        }
+        varDataSetLen(rowEnd, output);
+        tscAppendMemRowColValEx(row, rowEnd, false, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+
+    case TSDB_DATA_TYPE_TIMESTAMP: {
+      if (pToken->type == TK_NULL) {
+        if (primaryKey) {
+          // When building SKVRow primaryKey, we should not skip even with NULL value.
+          int64_t tmpVal = 0;
+          tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+        } else {
+          tscAppendMemRowColValEx(row, getNullValue(pSchema->type), true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+        }
+      } else {
+        int64_t tmpVal;
+        if (tsParseTime(pToken, &tmpVal, str, msg, timePrec) != TSDB_CODE_SUCCESS) {
+          return tscInvalidOperationMsg(msg, "invalid timestamp", pToken->z);
+        }
+        tscAppendMemRowColValEx(row, &tmpVal, true, colId, pSchema->type, toffset, dataLen, kvLen, compareStat);
+      }
+      break;
+    }
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
 
 #ifdef __cplusplus
 }
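convertToSDataRow() above walks the table schema and, for every column, either copies the bound value out of the KV row or writes the type's NULL value into the dense row. The toy program below reproduces that walk with plain integers so the cursor-style lookup (kvIdx in the real code) is easier to follow; KvCol and kvToDense are hypothetical names, and the sentinel -1 simply stands in for getNullValue().

    #include <stdio.h>

    /* Toy (colId, value) pair standing in for one SKVRow entry; entries are
     * assumed to be sorted by colId, as they are in the real KV row. */
    typedef struct {
      int colId;
      int value;
    } KvCol;

    /* Fill a dense array of nCols slots from the sparse kv list, writing a
     * sentinel "NULL" (-1) for columns that were never bound. */
    static void kvToDense(const KvCol *kv, int nKv, int *dense, int nCols) {
      int k = 0;  /* cursor over the kv list, like kvIdx in convertToSDataRow() */
      for (int colId = 0; colId < nCols; ++colId) {
        if (k < nKv && kv[k].colId == colId) {
          dense[colId] = kv[k++].value;
        } else {
          dense[colId] = -1;  /* NULL placeholder */
        }
      }
    }

    int main(void) {
      KvCol kv[] = {{0, 100}, {3, 42}};  /* only columns 0 and 3 were bound */
      int   dense[5];
      kvToDense(kv, 2, dense, 5);
      for (int i = 0; i < 5; ++i) printf("col %d = %d\n", i, dense[i]);
      return 0;
    }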
src/client/src/tscParseInsert.c
@@ -38,43 +38,60 @@ enum {
   TSDB_USE_CLI_TS = 1,
 };
 
-static uint8_t TRUE_VALUE = (uint8_t)TSDB_TRUE;
-static uint8_t FALSE_VALUE = (uint8_t)TSDB_FALSE;
-
 static int32_t tscAllocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t *numOfRows);
 static int32_t parseBoundColumns(SInsertStatementParam *pInsertParam, SParsedDataColInfo *pColInfo, SSchema *pSchema,
                                  char *str, char **end);
 
-int32_t getExtendedRowSize(STableComInfo *tinfo) {
-  return tinfo->rowSize + PAYLOAD_HEADER_LEN + PAYLOAD_COL_HEAD_LEN * tinfo->numOfColumns;
-}
-
-int initSMemRowHelper(SMemRowHelper *pHelper, SSchema *pSSchema, uint16_t nCols, uint16_t allNullColsLen) {
-  pHelper->allNullLen = allNullColsLen;  //  TODO: get allNullColsLen when creating or altering table meta
-  if (pHelper->allNullLen == 0) {
-    for (uint16_t i = 0; i < nCols; ++i) {
-      uint8_t type = pSSchema[i].type;
-      int32_t typeLen = TYPE_BYTES[type];
-      pHelper->allNullLen += typeLen;
-      if (TSDB_DATA_TYPE_BINARY == type) {
-        pHelper->allNullLen += (VARSTR_HEADER_SIZE + CHAR_BYTES);
-      } else if (TSDB_DATA_TYPE_NCHAR == type) {
-        int len = VARSTR_HEADER_SIZE + TSDB_NCHAR_SIZE;
-        pHelper->allNullLen += len;
-      }
-    }
-  }
-  return 0;
-}
-
-static int32_t tscToDouble(SStrToken *pToken, double *value, char **endPtr) {
-  errno = 0;
-  *value = strtold(pToken->z, endPtr);
-
-  // not a valid integer number, return error
-  if ((*endPtr - pToken->z) != pToken->n) {
-    return TK_ILLEGAL;
-  }
-
-  return pToken->type;
-}
+int initMemRowBuilder(SMemRowBuilder *pBuilder, uint32_t nRows, uint32_t nCols, uint32_t nBoundCols,
+                      int32_t allNullLen) {
+  ASSERT(nRows >= 0 && nCols > 0 && (nBoundCols <= nCols));
+  if (nRows > 0) {
+    // already init(bind multiple rows by single column)
+    if (pBuilder->compareStat == ROW_COMPARE_NEED && (pBuilder->rowInfo != NULL)) {
+      return TSDB_CODE_SUCCESS;
+    }
+  }
+
+  if (nBoundCols == 0) {  // file input
+    pBuilder->memRowType = SMEM_ROW_DATA;
+    pBuilder->compareStat = ROW_COMPARE_NO_NEED;
+    return TSDB_CODE_SUCCESS;
+  } else {
+    float boundRatio = ((float)nBoundCols / (float)nCols);
+
+    if (boundRatio < KVRatioKV) {
+      pBuilder->memRowType = SMEM_ROW_KV;
+      pBuilder->compareStat = ROW_COMPARE_NO_NEED;
+      return TSDB_CODE_SUCCESS;
+    } else if (boundRatio > KVRatioData) {
+      pBuilder->memRowType = SMEM_ROW_DATA;
+      pBuilder->compareStat = ROW_COMPARE_NO_NEED;
+      return TSDB_CODE_SUCCESS;
+    }
+    pBuilder->compareStat = ROW_COMPARE_NEED;
+
+    if (boundRatio < KVRatioPredict) {
+      pBuilder->memRowType = SMEM_ROW_KV;
+    } else {
+      pBuilder->memRowType = SMEM_ROW_DATA;
+    }
+  }
+
+  pBuilder->dataRowInitLen = TD_MEM_ROW_DATA_HEAD_SIZE + allNullLen;
+  pBuilder->kvRowInitLen = TD_MEM_ROW_KV_HEAD_SIZE + nBoundCols * sizeof(SColIdx);
+
+  if (nRows > 0) {
+    pBuilder->rowInfo = tcalloc(nRows, sizeof(SMemRowInfo));
+    if (pBuilder->rowInfo == NULL) {
+      return TSDB_CODE_TSC_OUT_OF_MEMORY;
+    }
+
+    for (int i = 0; i < nRows; ++i) {
+      (pBuilder->rowInfo + i)->dataLen = pBuilder->dataRowInitLen;
+      (pBuilder->rowInfo + i)->kvLen = pBuilder->kvRowInitLen;
+    }
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
 
 int tsParseTime(SStrToken *pToken, int64_t *time, char **next, char *error, int16_t timePrec) {
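initMemRowBuilder() above picks a row layout from the ratio of bound columns to schema columns: very sparse inserts go straight to the KV layout, very dense ones to the data layout, and anything in between is guessed and then re-checked per row (ROW_COMPARE_NEED). The sketch below mirrors that decision; the threshold values are made up for illustration, since the real KVRatioKV/KVRatioData/KVRatioPredict constants are defined elsewhere in the tree and do not appear in this diff.

    #include <stdio.h>

    typedef enum { ROW_DATA, ROW_KV } RowType;

    /* Illustrative thresholds only; not the real KVRatio* constants. */
    static const float kRatioKvOnly   = 0.25f;
    static const float kRatioDataOnly = 0.90f;
    static const float kRatioPredict  = 0.50f;

    /* Few bound columns: kv row. Almost all bound: data row.
     * Otherwise guess, and ask the caller to compare per row. */
    static RowType chooseRowType(int nBoundCols, int nCols, int *needCompare) {
      *needCompare = 0;
      if (nBoundCols == 0) return ROW_DATA;  /* file import binds every column */
      float ratio = (float)nBoundCols / (float)nCols;
      if (ratio < kRatioKvOnly) return ROW_KV;
      if (ratio > kRatioDataOnly) return ROW_DATA;
      *needCompare = 1;
      return (ratio < kRatioPredict) ? ROW_KV : ROW_DATA;
    }

    int main(void) {
      int cmp = 0;
      RowType t = chooseRowType(3, 50, &cmp);
      printf("3/50 bound -> %s (compare per row: %d)\n", t == ROW_KV ? "kv" : "data", cmp);
      return 0;
    }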
@@ -146,10 +163,6 @@ int tsParseTime(SStrToken *pToken, int64_t *time, char **next, char *error, int1
   return TSDB_CODE_SUCCESS;
 }
 
-static bool isNullStr(SStrToken *pToken) {
-  return (pToken->type == TK_NULL) || ((pToken->type == TK_STRING) && (pToken->n != 0) &&
-                                       (strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0));
-}
-
 int32_t tsParseOneColumn(SSchema *pSchema, SStrToken *pToken, char *payload, char *msg, char **str, bool primaryKey,
                          int16_t timePrec) {
   int64_t iv;
@@ -400,342 +413,6 @@ int32_t tsParseOneColumn(SSchema *pSchema, SStrToken *pToken, char *payload, cha
   return TSDB_CODE_SUCCESS;
 }
 
-static FORCE_INLINE TDRowLenT tsSetPayloadColValue(char *payloadStart, char *payload, int16_t columnId,
-                                                   uint8_t columnType, const void *value, uint16_t valueLen,
-                                                   TDRowTLenT tOffset) {
-  payloadColSetId(payload, columnId);
-  payloadColSetType(payload, columnType);
-  memcpy(POINTER_SHIFT(payloadStart, tOffset), value, valueLen);
-  return valueLen;
-}
-
-static int32_t tsParseOneColumnKV(SSchema *pSchema, SStrToken *pToken, char *payloadStart, char *primaryKeyStart,
-                                  char *payload, char *msg, char **str, bool primaryKey, int16_t timePrec,
-                                  TDRowTLenT tOffset, TDRowLenT *sizeAppend, TDRowLenT *dataRowColDeltaLen,
-                                  TDRowLenT *kvRowColLen) {
-  int64_t iv;
-  int32_t ret;
-  char *  endptr = NULL;
-
-  if (IS_NUMERIC_TYPE(pSchema->type) && pToken->n == 0) {
-    return tscInvalidOperationMsg(msg, "invalid numeric data", pToken->z);
-  }
-
-  switch (pSchema->type) {
-    case TSDB_DATA_TYPE_BOOL: {  // bool
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_BOOL), TYPE_BYTES[TSDB_DATA_TYPE_BOOL], tOffset);
-      } else {
-        if ((pToken->type == TK_BOOL || pToken->type == TK_STRING) && (pToken->n != 0)) {
-          if (strncmp(pToken->z, "true", pToken->n) == 0) {
-            *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &TRUE_VALUE,
-                                               TYPE_BYTES[TSDB_DATA_TYPE_BOOL], tOffset);
-            *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_BOOL]);
-          } else if (strncmp(pToken->z, "false", pToken->n) == 0) {
-            *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &FALSE_VALUE,
-                                               TYPE_BYTES[TSDB_DATA_TYPE_BOOL], tOffset);
-            *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_BOOL]);
-          } else {
-            return tscSQLSyntaxErrMsg(msg, "invalid bool data", pToken->z);
-          }
-        } else if (pToken->type == TK_INTEGER) {
-          iv = strtoll(pToken->z, NULL, 10);
-          *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                             ((iv == 0) ? &FALSE_VALUE : &TRUE_VALUE), TYPE_BYTES[TSDB_DATA_TYPE_BOOL], tOffset);
-          *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_BOOL]);
-        } else if (pToken->type == TK_FLOAT) {
-          double dv = strtod(pToken->z, NULL);
-          *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                             ((dv == 0) ? &FALSE_VALUE : &TRUE_VALUE), TYPE_BYTES[TSDB_DATA_TYPE_BOOL], tOffset);
-          *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_BOOL]);
-        } else {
-          return tscInvalidOperationMsg(msg, "invalid bool data", pToken->z);
-        }
-      }
-      break;
-    }
-
-    case TSDB_DATA_TYPE_TINYINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_TINYINT), TYPE_BYTES[TSDB_DATA_TYPE_TINYINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid tinyint data", pToken->z);
-        } else if (!IS_VALID_TINYINT(iv)) {
-          return tscInvalidOperationMsg(msg, "data overflow", pToken->z);
-        }
-
-        uint8_t tmpVal = (uint8_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_TINYINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_TINYINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_UTINYINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_UTINYINT), TYPE_BYTES[TSDB_DATA_TYPE_UTINYINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid unsigned tinyint data", pToken->z);
-        } else if (!IS_VALID_UTINYINT(iv)) {
-          return tscInvalidOperationMsg(msg, "unsigned tinyint data overflow", pToken->z);
-        }
-
-        uint8_t tmpVal = (uint8_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_UTINYINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_UTINYINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_SMALLINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_SMALLINT), TYPE_BYTES[TSDB_DATA_TYPE_SMALLINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid smallint data", pToken->z);
-        } else if (!IS_VALID_SMALLINT(iv)) {
-          return tscInvalidOperationMsg(msg, "smallint data overflow", pToken->z);
-        }
-
-        int16_t tmpVal = (int16_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_SMALLINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_SMALLINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_USMALLINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_USMALLINT), TYPE_BYTES[TSDB_DATA_TYPE_USMALLINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid unsigned smallint data", pToken->z);
-        } else if (!IS_VALID_USMALLINT(iv)) {
-          return tscInvalidOperationMsg(msg, "unsigned smallint data overflow", pToken->z);
-        }
-
-        uint16_t tmpVal = (uint16_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_USMALLINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_USMALLINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_INT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_INT), TYPE_BYTES[TSDB_DATA_TYPE_INT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid int data", pToken->z);
-        } else if (!IS_VALID_INT(iv)) {
-          return tscInvalidOperationMsg(msg, "int data overflow", pToken->z);
-        }
-
-        int32_t tmpVal = (int32_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_INT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_INT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_UINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_UINT), TYPE_BYTES[TSDB_DATA_TYPE_UINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid unsigned int data", pToken->z);
-        } else if (!IS_VALID_UINT(iv)) {
-          return tscInvalidOperationMsg(msg, "unsigned int data overflow", pToken->z);
-        }
-
-        uint32_t tmpVal = (uint32_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_UINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_UINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_BIGINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_BIGINT), TYPE_BYTES[TSDB_DATA_TYPE_BIGINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, true);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid bigint data", pToken->z);
-        } else if (!IS_VALID_BIGINT(iv)) {
-          return tscInvalidOperationMsg(msg, "bigint data overflow", pToken->z);
-        }
-
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &iv,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_BIGINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_BIGINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_UBIGINT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_UBIGINT), TYPE_BYTES[TSDB_DATA_TYPE_UBIGINT], tOffset);
-      } else {
-        ret = tStrToInteger(pToken->z, pToken->type, pToken->n, &iv, false);
-        if (ret != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid unsigned bigint data", pToken->z);
-        } else if (!IS_VALID_UBIGINT((uint64_t)iv)) {
-          return tscInvalidOperationMsg(msg, "unsigned bigint data overflow", pToken->z);
-        }
-
-        uint64_t tmpVal = (uint64_t)iv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_UBIGINT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_UBIGINT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_FLOAT:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_FLOAT), TYPE_BYTES[TSDB_DATA_TYPE_FLOAT], tOffset);
-      } else {
-        double dv;
-        if (TK_ILLEGAL == tscToDouble(pToken, &dv, &endptr)) {
-          return tscInvalidOperationMsg(msg, "illegal float data", pToken->z);
-        }
-        if (((dv == HUGE_VAL || dv == -HUGE_VAL) && errno == ERANGE) || dv > FLT_MAX || dv < -FLT_MAX || isinf(dv) ||
-            isnan(dv)) {
-          return tscInvalidOperationMsg(msg, "illegal float data", pToken->z);
-        }
-
-        float tmpVal = (float)dv;
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &tmpVal,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_FLOAT], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_FLOAT]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_DOUBLE:
-      if (isNullStr(pToken)) {
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                           getNullValue(TSDB_DATA_TYPE_DOUBLE), TYPE_BYTES[TSDB_DATA_TYPE_DOUBLE], tOffset);
-      } else {
-        double dv;
-        if (TK_ILLEGAL == tscToDouble(pToken, &dv, &endptr)) {
-          return tscInvalidOperationMsg(msg, "illegal double data", pToken->z);
-        }
-        if (((dv == HUGE_VAL || dv == -HUGE_VAL) && errno == ERANGE) || isinf(dv) || isnan(dv)) {
-          return tscInvalidOperationMsg(msg, "illegal double data", pToken->z);
-        }
-
-        *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type, &dv,
-                                           TYPE_BYTES[TSDB_DATA_TYPE_DOUBLE], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_DOUBLE]);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_BINARY:
-      // binary data cannot be null-terminated char string, otherwise the last char of the string is lost
-      if (pToken->type == TK_NULL) {
-        payloadColSetId(payload, pSchema->colId);
-        payloadColSetType(payload, pSchema->type);
-        memcpy(POINTER_SHIFT(payloadStart, tOffset), getNullValue(TSDB_DATA_TYPE_BINARY), VARSTR_HEADER_SIZE + CHAR_BYTES);
-        *sizeAppend = (TDRowLenT)(VARSTR_HEADER_SIZE + CHAR_BYTES);
-      } else {
-        // too long values will return invalid sql, not be truncated automatically
-        if (pToken->n + VARSTR_HEADER_SIZE > pSchema->bytes) {  // todo refactor
-          return tscInvalidOperationMsg(msg, "string data overflow", pToken->z);
-        }
-        // STR_WITH_SIZE_TO_VARSTR(payload, pToken->z, pToken->n);
-        payloadColSetId(payload, pSchema->colId);
-        payloadColSetType(payload, pSchema->type);
-        varDataSetLen(POINTER_SHIFT(payloadStart, tOffset), pToken->n);
-        memcpy(varDataVal(POINTER_SHIFT(payloadStart, tOffset)), pToken->z, pToken->n);
-        *sizeAppend = (TDRowLenT)(VARSTR_HEADER_SIZE + pToken->n);
-        *dataRowColDeltaLen += (TDRowLenT)(pToken->n - CHAR_BYTES);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + VARSTR_HEADER_SIZE + pToken->n);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_NCHAR:
-      if (pToken->type == TK_NULL) {
-        payloadColSetId(payload, pSchema->colId);
-        payloadColSetType(payload, pSchema->type);
-        memcpy(POINTER_SHIFT(payloadStart, tOffset), getNullValue(TSDB_DATA_TYPE_NCHAR), VARSTR_HEADER_SIZE + TSDB_NCHAR_SIZE);
-        *sizeAppend = (TDRowLenT)(VARSTR_HEADER_SIZE + TSDB_NCHAR_SIZE);
-      } else {
-        // if the converted output len is over than pColumnModel->bytes, return error: 'Argument list too long'
-        int32_t output = 0;
-        payloadColSetId(payload, pSchema->colId);
-        payloadColSetType(payload, pSchema->type);
-        if (!taosMbsToUcs4(pToken->z, pToken->n, varDataVal(POINTER_SHIFT(payloadStart, tOffset)),
-                           pSchema->bytes - VARSTR_HEADER_SIZE, &output)) {
-          char buf[512] = {0};
-          snprintf(buf, tListLen(buf), "%s", strerror(errno));
-          return tscInvalidOperationMsg(msg, buf, pToken->z);
-        }
-        varDataSetLen(POINTER_SHIFT(payloadStart, tOffset), output);
-        *sizeAppend = (TDRowLenT)(VARSTR_HEADER_SIZE + output);
-        *dataRowColDeltaLen += (TDRowLenT)(output - sizeof(uint32_t));
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + VARSTR_HEADER_SIZE + output);
-      }
-      break;
-
-    case TSDB_DATA_TYPE_TIMESTAMP: {
-      if (pToken->type == TK_NULL) {
-        if (primaryKey) {
-          // When building SKVRow primaryKey, we should not skip even with NULL value.
-          int64_t tmpVal = 0;
-          *sizeAppend = tsSetPayloadColValue(payloadStart, primaryKeyStart, pSchema->colId, pSchema->type, &tmpVal,
-                                             TYPE_BYTES[TSDB_DATA_TYPE_TIMESTAMP], tOffset);
-          *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_TIMESTAMP]);
-        } else {
-          *sizeAppend = tsSetPayloadColValue(payloadStart, payload, pSchema->colId, pSchema->type,
-                                             getNullValue(TSDB_DATA_TYPE_TIMESTAMP), TYPE_BYTES[TSDB_DATA_TYPE_TIMESTAMP], tOffset);
-        }
-      } else {
-        int64_t tmpVal;
-        if (tsParseTime(pToken, &tmpVal, str, msg, timePrec) != TSDB_CODE_SUCCESS) {
-          return tscInvalidOperationMsg(msg, "invalid timestamp", pToken->z);
-        }
-        *sizeAppend = tsSetPayloadColValue(payloadStart, primaryKey ? primaryKeyStart : payload, pSchema->colId,
-                                           pSchema->type, &tmpVal, TYPE_BYTES[TSDB_DATA_TYPE_TIMESTAMP], tOffset);
-        *kvRowColLen += (TDRowLenT)(sizeof(SColIdx) + TYPE_BYTES[TSDB_DATA_TYPE_TIMESTAMP]);
-      }
-      break;
-    }
-  }
-
-  return TSDB_CODE_SUCCESS;
-}
 
 /*
  * The server time/client time should not be mixed up in one sql string
  * Do not employ sort operation is not involved if server time is used.
@@ -777,31 +454,24 @@ int tsParseOneRow(char **str, STableDataBlocks *pDataBlocks, int16_t timePrec, i
   int32_t   index = 0;
   SStrToken sToken = {0};
 
-  SMemRowHelper *pHelper = &pDataBlocks->rowHelper;
-  char *         payload = pDataBlocks->pData + pDataBlocks->size;
+  char *row = pDataBlocks->pData + pDataBlocks->size;  // skip the SSubmitBlk header
 
   SParsedDataColInfo *spd = &pDataBlocks->boundColumnInfo;
-  SSchema *           schema = tscGetTableSchema(pDataBlocks->pTableMeta);
+  STableMeta *        pTableMeta = pDataBlocks->pTableMeta;
+  SSchema *           schema = tscGetTableSchema(pTableMeta);
+  SMemRowBuilder *    pBuilder = &pDataBlocks->rowBuilder;
+  int32_t             dataLen = pBuilder->dataRowInitLen;
+  int32_t             kvLen = pBuilder->kvRowInitLen;
+  bool                isParseBindParam = false;
 
-  TDRowTLenT dataRowLen = pHelper->allNullLen;
-  TDRowTLenT kvRowLen = TD_MEM_ROW_KV_VER_SIZE;
-  TDRowTLenT payloadValOffset = 0;
-  TDRowLenT  colValOffset = 0;
-  ASSERT(dataRowLen > 0);
-
-  payloadSetNCols(payload, spd->numOfBound);
-  payloadValOffset = payloadValuesOffset(payload);  // rely on payloadNCols
-  // payloadSetTLen(payload, payloadValOffset);
-
-  char *kvPrimaryKeyStart = payload + PAYLOAD_HEADER_LEN;    // primaryKey in 1st column tuple
-  char *kvStart = kvPrimaryKeyStart + PAYLOAD_COL_HEAD_LEN;  // the column tuple behind the primaryKey
+  initSMemRow(row, pBuilder->memRowType, pDataBlocks, spd->numOfBound);
 
   // 1. set the parsed value from sql string
   for (int i = 0; i < spd->numOfBound; ++i) {
     // the start position in data block buffer of current value in sql
     int32_t colIndex = spd->boundedColumns[i];
 
-    char *start = payload + spd->cols[colIndex].offset;
+    char *start = row + spd->cols[colIndex].offset;
 
     SSchema *pSchema = &schema[colIndex];  // get colId here
@@ -810,6 +480,9 @@ int tsParseOneRow(char **str, STableDataBlocks *pDataBlocks, int16_t timePrec, i
     *str += index;
 
     if (sToken.type == TK_QUESTION) {
+      if (!isParseBindParam) {
+        isParseBindParam = true;
+      }
       if (pInsertParam->insertType != TSDB_QUERY_TYPE_STMT_INSERT) {
         return tscSQLSyntaxErrMsg(pInsertParam->msg, "? only allowed in binding insertion", *str);
       }
@@ -860,54 +533,45 @@ int tsParseOneRow(char **str, STableDataBlocks *pDataBlocks, int16_t timePrec, i
       sToken.n -= 2 + cnt;
     }
 
     bool      isPrimaryKey = (colIndex == PRIMARYKEY_TIMESTAMP_COL_INDEX);
-    TDRowLenT dataRowDeltaColLen = 0;  // When combine the data as SDataRow, the delta len between all NULL columns.
-    TDRowLenT kvRowColLen = 0;
-    TDRowLenT colValAppended = 0;
-
-    if (!IS_DATA_COL_ORDERED(spd->orderStatus)) {
-      ASSERT(spd->colIdxInfo != NULL);
-      if (!isPrimaryKey) {
-        kvStart = POINTER_SHIFT(kvPrimaryKeyStart, spd->colIdxInfo[i].finalIdx * PAYLOAD_COL_HEAD_LEN);
-      } else {
-        ASSERT(spd->colIdxInfo[i].finalIdx == 0);
-      }
-    }
-    // the primary key locates in 1st column
-    int32_t ret = tsParseOneColumnKV(pSchema, &sToken, payload, kvPrimaryKeyStart, kvStart, pInsertParam->msg, str,
-                                     isPrimaryKey, timePrec, payloadValOffset + colValOffset, &colValAppended,
-                                     &dataRowDeltaColLen, &kvRowColLen);
+    int32_t   toffset = -1;
+    int16_t   colId = -1;
+    tscGetMemRowAppendInfo(schema, pBuilder->memRowType, spd, i, &toffset, &colId);
+
+    int32_t ret = tsParseOneColumnKV(pSchema, &sToken, row, pInsertParam->msg, str, isPrimaryKey, timePrec, toffset,
+                                     colId, &dataLen, &kvLen, pBuilder->compareStat);
     if (ret != TSDB_CODE_SUCCESS) {
       return ret;
     }
 
     if (isPrimaryKey) {
-      if (tsCheckTimestamp(pDataBlocks, payloadValues(payload)) != TSDB_CODE_SUCCESS) {
+      TSKEY tsKey = memRowKey(row);
+      if (tsCheckTimestamp(pDataBlocks, (const char *)&tsKey) != TSDB_CODE_SUCCESS) {
         tscInvalidOperationMsg(pInsertParam->msg, "client time/server time can not be mixed up", sToken.z);
         return TSDB_CODE_TSC_INVALID_TIME_STAMP;
       }
-      payloadColSetOffset(kvPrimaryKeyStart, colValOffset);
-    } else {
-      payloadColSetOffset(kvStart, colValOffset);
-      if (IS_DATA_COL_ORDERED(spd->orderStatus)) {
-        kvStart += PAYLOAD_COL_HEAD_LEN;  // move to next column
-      }
     }
-
-    colValOffset += colValAppended;
-    kvRowLen += kvRowColLen;
-    dataRowLen += dataRowDeltaColLen;
   }
 
-  if (kvRowLen < dataRowLen) {
-    payloadSetType(payload, SMEM_ROW_KV);
-  } else {
-    payloadSetType(payload, SMEM_ROW_DATA);
+  if (!isParseBindParam) {
+    // 2. check and set convert flag
+    if (pBuilder->compareStat == ROW_COMPARE_NEED) {
+      checkAndConvertMemRow(row, dataLen, kvLen);
+    }
+
+    // 3. set the null value for the columns that do not assign values
+    if ((spd->numOfBound < spd->numOfCols) && isDataRow(row) && !isNeedConvertRow(row)) {
+      SDataRow dataRow = memRowDataBody(row);
+      for (int32_t i = 0; i < spd->numOfCols; ++i) {
+        if (spd->cols[i].valStat == VAL_STAT_NONE) {
+          tdAppendDataColVal(dataRow, getNullValue(schema[i].type), true, schema[i].type, spd->cols[i].toffset);
+        }
+      }
+    }
   }
 
-  *len = (int32_t)(payloadValOffset + colValOffset);
-  payloadSetTLen(payload, *len);
+  *len = getExtendedRowSize(pDataBlocks);
 
   return TSDB_CODE_SUCCESS;
 }
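The rewritten tsParseOneRow() above still enforces, via tsCheckTimestamp(), that a single insert block cannot mix client-supplied timestamps with server-generated ones. The fragment below sketches that rule in isolation under assumed names (TsSource, checkTimestampSource); the real client keeps the equivalent state in the STableDataBlocks tsSource field rather than in a free-standing function like this.

    #include <stdio.h>

    /* Assumed names; the real client keeps this state in STableDataBlocks. */
    typedef enum { TS_UNKNOWN, TS_SERVER, TS_CLIENT } TsSource;

    /* The first row fixes the timestamp source for the block; later rows must
     * match, otherwise the insert is rejected ("client time/server time can
     * not be mixed up"). */
    static int checkTimestampSource(TsSource *blockSource, TsSource rowSource) {
      if (*blockSource == TS_UNKNOWN) {
        *blockSource = rowSource;
        return 0;
      }
      return (*blockSource == rowSource) ? 0 : -1;
    }

    int main(void) {
      TsSource src = TS_UNKNOWN;
      printf("%d\n", checkTimestampSource(&src, TS_CLIENT));  /* 0: first row  */
      printf("%d\n", checkTimestampSource(&src, TS_CLIENT));  /* 0: consistent */
      printf("%d\n", checkTimestampSource(&src, TS_SERVER));  /* -1: mixed     */
      return 0;
    }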
...
@@ -957,11 +621,13 @@ int32_t tsParseValues(char **str, STableDataBlocks *pDataBlock, int maxRows, SIn
   int32_t precision = tinfo.precision;
-  int32_t extendedRowSize = getExtendedRowSize(&tinfo);
+  int32_t extendedRowSize = getExtendedRowSize(pDataBlock);

-  initSMemRowHelper(&pDataBlock->rowHelper, tscGetTableSchema(pDataBlock->pTableMeta),
-                    tscGetNumOfColumns(pDataBlock->pTableMeta), 0);
+  if (TSDB_CODE_SUCCESS !=
+      (code = initMemRowBuilder(&pDataBlock->rowBuilder, 0, tinfo.numOfColumns, pDataBlock->boundColumnInfo.numOfBound,
+                                pDataBlock->boundColumnInfo.allNullLen))) {
+    return code;
+  }

   while (1) {
     index = 0;
     sToken = tStrGetToken(*str, &index, false);
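initMemRowBuilder is handed the bound-column count and the all-NULL row length so it can pre-size the builder and guess which layout a row will end up in. A hedged sketch of that idea follows: the 0.4 threshold mirrors the KVRatioPredict constant added in tdataformat.h in this same commit, but the struct and function names here are illustrative, not the real API.

// Hedged sketch: predict the initial row layout from the bound/total column ratio.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint32_t nCols;
  uint32_t nBoundCols;
  bool     startAsKvRow;
} RowBuilderSketch;

static int initRowBuilderSketch(RowBuilderSketch *b, uint32_t nCols, uint32_t nBoundCols) {
  if (nCols == 0 || nBoundCols > nCols) return -1;
  b->nCols = nCols;
  b->nBoundCols = nBoundCols;
  // Few bound columns => the sparse KV layout is predicted to be smaller.
  b->startAsKvRow = ((double)nBoundCols / nCols) < 0.4;
  return 0;
}

int main(void) {
  RowBuilderSketch b;
  initRowBuilderSketch(&b, 50, 5);
  printf("start as %s\n", b.startAsKvRow ? "kv row" : "data row");
  return 0;
}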
...
@@ -1010,19 +676,37 @@ int32_t tsParseValues(char **str, STableDataBlocks *pDataBlock, int maxRows, SIn
 void tscSetBoundColumnInfo(SParsedDataColInfo *pColInfo, SSchema *pSchema, int32_t numOfCols) {
   pColInfo->numOfCols = numOfCols;
   pColInfo->numOfBound = numOfCols;
-  pColInfo->orderStatus = ORDER_STATUS_ORDERED;
+  pColInfo->orderStatus = ORDER_STATUS_ORDERED;  // default is ORDERED for non-bound mode
   pColInfo->boundedColumns = calloc(pColInfo->numOfCols, sizeof(int32_t));
   pColInfo->cols = calloc(pColInfo->numOfCols, sizeof(SBoundColumn));
   pColInfo->colIdxInfo = NULL;
+  pColInfo->flen = 0;
+  pColInfo->allNullLen = 0;

+  int32_t nVar = 0;
   for (int32_t i = 0; i < pColInfo->numOfCols; ++i) {
+    uint8_t type = pSchema[i].type;
     if (i > 0) {
       pColInfo->cols[i].offset = pSchema[i - 1].bytes + pColInfo->cols[i - 1].offset;
+      pColInfo->cols[i].toffset = pColInfo->flen;
     }
+    pColInfo->flen += TYPE_BYTES[type];
+    switch (type) {
+      case TSDB_DATA_TYPE_BINARY:
+        pColInfo->allNullLen += (VARSTR_HEADER_SIZE + CHAR_BYTES);
+        ++nVar;
+        break;
+      case TSDB_DATA_TYPE_NCHAR:
+        pColInfo->allNullLen += (VARSTR_HEADER_SIZE + TSDB_NCHAR_SIZE);
+        ++nVar;
+        break;
+      default:
+        break;
+    }
     pColInfo->cols[i].hasVal = true;
     pColInfo->boundedColumns[i] = i;
   }
+  pColInfo->allNullLen += pColInfo->flen;
+  pColInfo->extendedVarLen = (uint16_t)(nVar * sizeof(VarDataOffsetT));
 }

 int32_t tscAllocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t *numOfRows) {
...
@@ -1122,35 +806,29 @@ int tscSortRemoveDataBlockDupRows(STableDataBlocks *dataBuf, SBlockKeyInfo *pBlk
   if (dataBuf->tsSource == TSDB_USE_SERVER_TS) {
     assert(dataBuf->ordered);
   }

   // allocate memory
   size_t nAlloc = nRows * sizeof(SBlockKeyTuple);
   if (pBlkKeyInfo->pKeyTuple == NULL || pBlkKeyInfo->maxBytesAlloc < nAlloc) {
     size_t nRealAlloc = nAlloc + 10 * sizeof(SBlockKeyTuple);
     char * tmp = trealloc(pBlkKeyInfo->pKeyTuple, nRealAlloc);
     if (tmp == NULL) {
       return TSDB_CODE_TSC_OUT_OF_MEMORY;
     }
     pBlkKeyInfo->pKeyTuple = (SBlockKeyTuple *)tmp;
     pBlkKeyInfo->maxBytesAlloc = (int32_t)nRealAlloc;
   }
   memset(pBlkKeyInfo->pKeyTuple, 0, nAlloc);

+  int32_t         extendedRowSize = getExtendedRowSize(dataBuf);
   SBlockKeyTuple *pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
   char *          pBlockData = pBlocks->data;
-  TDRowTLenT      totolPayloadTLen = 0;
-  TDRowTLenT      payloadTLen = 0;
   int             n = 0;
   while (n < nRows) {
-    pBlkKeyTuple->skey = payloadTSKey(pBlockData);
+    pBlkKeyTuple->skey = memRowKey(pBlockData);
     pBlkKeyTuple->payloadAddr = pBlockData;
-    payloadTLen = payloadTLen(pBlockData);
-#if 0
-    ASSERT(payloadNCols(pBlockData) <= 4096);
-    ASSERT(payloadTLen(pBlockData) < 65536);
-#endif
-    totolPayloadTLen += payloadTLen;
     // next loop
-    pBlockData += payloadTLen;
+    pBlockData += extendedRowSize;
     ++pBlkKeyTuple;
     ++n;
   }
...
@@ -1167,7 +845,6 @@ int tscSortRemoveDataBlockDupRows(STableDataBlocks *dataBuf, SBlockKeyInfo *pBlk
     TSKEY tj = (pBlkKeyTuple + j)->skey;
     if (ti == tj) {
-      totolPayloadTLen -= payloadTLen(pBlkKeyTuple + j);
       ++j;
       continue;
     }
...
@@ -1183,17 +860,15 @@ int tscSortRemoveDataBlockDupRows(STableDataBlocks *dataBuf, SBlockKeyInfo *pBlk
     pBlocks->numOfRows = i + 1;
   }

-  dataBuf->size = sizeof(SSubmitBlk) + totolPayloadTLen;
+  dataBuf->size = sizeof(SSubmitBlk) + pBlocks->numOfRows * extendedRowSize;
   dataBuf->prevTS = INT64_MIN;

   return 0;
 }
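The dedup pass this hunk adjusts still works the same way: after sorting by timestamp key, it makes one sweep that keeps the first tuple seen for each distinct key. A standalone sketch of that two-pointer sweep is shown below; SBlockKeyTuple is reduced to its key field here, whereas the real struct also carries the row payload address.

// Sketch of the post-sort dedup sweep (first occurrence per timestamp wins).
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;
typedef struct { TSKEY skey; } KeyTuple;

static int dedupByKey(KeyTuple *t, int n) {
  if (n <= 1) return n;
  int i = 0;
  for (int j = 1; j < n; ++j) {
    if (t[j].skey == t[i].skey) continue;  // duplicate timestamp: drop the later tuple
    t[++i] = t[j];                         // otherwise compact it next to the kept ones
  }
  return i + 1;
}

int main(void) {
  KeyTuple rows[] = {{1}, {1}, {2}, {3}, {3}, {3}, {4}};
  printf("%d distinct timestamps\n", dedupByKey(rows, 7));  // prints 4
  return 0;
}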
 static int32_t doParseInsertStatement(SInsertStatementParam *pInsertParam, char **str, STableDataBlocks *dataBuf, int32_t *totalNum) {
-  STableComInfo tinfo = tscGetTableInfo(dataBuf->pTableMeta);
   int32_t maxNumOfRows;
-  int32_t code = tscAllocateMemIfNeed(dataBuf, getExtendedRowSize(&tinfo), &maxNumOfRows);
+  int32_t code = tscAllocateMemIfNeed(dataBuf, getExtendedRowSize(dataBuf), &maxNumOfRows);
   if (TSDB_CODE_SUCCESS != code) {
     return TSDB_CODE_TSC_OUT_OF_MEMORY;
   }
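tscAllocateMemIfNeed sizes the data block from the extended per-row size (the raw row plus the SMemRow and var-data headers) and reports back how many rows fit. The arithmetic is simple; the sketch below shows it with assumed header and row sizes, which are not the library's constants.

// Illustrative capacity arithmetic behind tscAllocateMemIfNeed.
#include <stdio.h>

static int rowsThatFit(int allocSize, int headerSize, int extendedRowSize) {
  if (extendedRowSize <= 0 || allocSize <= headerSize) return 0;
  return (allocSize - headerSize) / extendedRowSize;
}

int main(void) {
  int headerSize = 64;        // submit-block header (assumed)
  int extendedRowSize = 220;  // raw row size plus per-row headers (assumed)
  printf("%d rows fit in 16 KB\n", rowsThatFit(16 * 1024, headerSize, extendedRowSize));
  return 0;
}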
...
@@ -1531,7 +1206,7 @@ static int32_t parseBoundColumns(SInsertStatementParam *pInsertParam, SParsedDat
   pColInfo->numOfBound = 0;
   memset(pColInfo->boundedColumns, 0, sizeof(int32_t) * nCols);
   for (int32_t i = 0; i < nCols; ++i) {
-    pColInfo->cols[i].hasVal = false;
+    pColInfo->cols[i].valStat = VAL_STAT_NONE;
   }

   int32_t code = TSDB_CODE_SUCCESS;
...
@@ -1570,12 +1245,12 @@ static int32_t parseBoundColumns(SInsertStatementParam *pInsertParam, SParsedDat
       int32_t nScanned = 0, t = lastColIdx + 1;
       while (t < nCols) {
         if (strncmp(sToken.z, pSchema[t].name, sToken.n) == 0 && strlen(pSchema[t].name) == sToken.n) {
-          if (pColInfo->cols[t].hasVal == true) {
+          if (pColInfo->cols[t].valStat == VAL_STAT_HAS) {
             code = tscInvalidOperationMsg(pInsertParam->msg, "duplicated column name", sToken.z);
             goto _clean;
           }
-          pColInfo->cols[t].hasVal = true;
+          pColInfo->cols[t].valStat = VAL_STAT_HAS;
           pColInfo->boundedColumns[pColInfo->numOfBound] = t;
           ++pColInfo->numOfBound;
           findColumnIndex = true;
...
@@ -1593,12 +1268,12 @@ static int32_t parseBoundColumns(SInsertStatementParam *pInsertParam, SParsedDat
       int32_t nRemain = nCols - nScanned;
       while (t < nRemain) {
         if (strncmp(sToken.z, pSchema[t].name, sToken.n) == 0 && strlen(pSchema[t].name) == sToken.n) {
-          if (pColInfo->cols[t].hasVal == true) {
+          if (pColInfo->cols[t].valStat == VAL_STAT_HAS) {
             code = tscInvalidOperationMsg(pInsertParam->msg, "duplicated column name", sToken.z);
             goto _clean;
           }
-          pColInfo->cols[t].hasVal = true;
+          pColInfo->cols[t].valStat = VAL_STAT_HAS;
           pColInfo->boundedColumns[pColInfo->numOfBound] = t;
           ++pColInfo->numOfBound;
           findColumnIndex = true;
...
@@ -1833,7 +1508,7 @@ int tsParseInsertSql(SSqlObj *pSql) {
         goto _clean;
       }

-      if (dataBuf->boundColumnInfo.cols[0].hasVal == false) {
+      if (dataBuf->boundColumnInfo.cols[0].valStat == VAL_STAT_NONE) {
         code = tscInvalidOperationMsg(pInsertParam->msg, "primary timestamp column can not be null", NULL);
         goto _clean;
       }
...
@@ -2044,15 +1719,18 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRow
     goto _error;
   }

-  tscAllocateMemIfNeed(pTableDataBlock, getExtendedRowSize(&tinfo), &maxRows);
+  tscAllocateMemIfNeed(pTableDataBlock, getExtendedRowSize(pTableDataBlock), &maxRows);
   tokenBuf = calloc(1, TSDB_MAX_BYTES_PER_ROW);
   if (tokenBuf == NULL) {
     code = TSDB_CODE_TSC_OUT_OF_MEMORY;
     goto _error;
   }

-  initSMemRowHelper(&pTableDataBlock->rowHelper, tscGetTableSchema(pTableDataBlock->pTableMeta),
-                    tscGetNumOfColumns(pTableDataBlock->pTableMeta), 0);
+  if (TSDB_CODE_SUCCESS !=
+      (ret = initMemRowBuilder(&pTableDataBlock->rowBuilder, 0, tinfo.numOfColumns, pTableDataBlock->numOfParams,
+                               pTableDataBlock->boundColumnInfo.allNullLen))) {
+    goto _error;
+  }

   while ((readLen = tgetline(&line, &n, fp)) != -1) {
     if (('\r' == line[readLen - 1]) || ('\n' == line[readLen - 1])) {
...
...
src/client/src/tscPrepare.c
...
@@ -299,7 +299,7 @@ static int fillColumnsNull(STableDataBlocks* pBlock, int32_t rowNum) {
   SSchema *schema = (SSchema *)pBlock->pTableMeta->schema;

   for (int32_t i = 0; i < spd->numOfCols; ++i) {
-    if (!spd->cols[i].hasVal) {  // current column do not have any value to insert, set it to null
+    if (spd->cols[i].valStat == VAL_STAT_NONE) {  // current column do not have any value to insert, set it to null
       for (int32_t n = 0; n < rowNum; ++n) {
         char *ptr = pBlock->pData + sizeof(SSubmitBlk) + pBlock->rowSize * n + offset;
...
...
src/client/src/tscUtil.c
...
@@ -1776,101 +1776,6 @@ int32_t tscGetDataBlockFromList(SHashObj* pHashList, int64_t id, int32_t size, i
   return TSDB_CODE_SUCCESS;
 }

-static SMemRow tdGenMemRowFromBuilder(SMemRowBuilder *pBuilder) {
-  SSchema *pSchema = pBuilder->pSchema;
-  char *   p = (char *)pBuilder->buf;
-  int      toffset = 0;
-  uint16_t nCols = pBuilder->nCols;
-
-  uint8_t  memRowType = payloadType(p);
-  uint16_t nColsBound = payloadNCols(p);
-  if (pBuilder->nCols <= 0 || nColsBound <= 0) {
-    return NULL;
-  }
-
-  char *pVals = POINTER_SHIFT(p, payloadValuesOffset(p));
-
-  SMemRow *memRow = (SMemRow)pBuilder->pDataBlock;
-  memRowSetType(memRow, memRowType);
-
-  // ----------------- Raw payload structure for row:
-  /* |<------------ Head ------------->|<----------- body of column data tuple ------------------->|
-   * |                                 |<----------------- flen ------------->|<--- value part --->|
-   * |SMemRowType| dataTLen |  nCols   |  colId  | colType | offset   | ... | value |...|...|... |
-   * +-----------+----------+----------+--------------------------------------|--------------------|
-   * | uint8_t   | uint32_t | uint16_t | int16_t | uint8_t | uint16_t | ... |.......|...|...|... |
-   * +-----------+----------+----------+--------------------------------------+--------------------|
-   *  1. offset in column data tuple starts from the value part in case of uint16_t overflow.
-   *  2. dataTLen: total length including the header and body.
-   */
-
-  if (memRowType == SMEM_ROW_DATA) {
-    SDataRow trow = (SDataRow)memRowDataBody(memRow);
-    dataRowSetLen(trow, (TDRowLenT)(TD_DATA_ROW_HEAD_SIZE + pBuilder->flen));
-    dataRowSetVersion(trow, pBuilder->sversion);
-
-    p = (char *)payloadBody(pBuilder->buf);
-    uint16_t i = 0, j = 0;
-    while (j < nCols) {
-      if (i >= nColsBound) {
-        break;
-      }
-      int16_t colId = payloadColId(p);
-      if (colId == pSchema[j].colId) {
-        // ASSERT(payloadColType(p) == pSchema[j].type);
-        tdAppendColVal(trow, POINTER_SHIFT(pVals, payloadColOffset(p)), pSchema[j].type, toffset);
-        toffset += TYPE_BYTES[pSchema[j].type];
-        p = payloadNextCol(p);
-        ++i;
-        ++j;
-      } else if (colId < pSchema[j].colId) {
-        p = payloadNextCol(p);
-        ++i;
-      } else {
-        tdAppendColVal(trow, getNullValue(pSchema[j].type), pSchema[j].type, toffset);
-        toffset += TYPE_BYTES[pSchema[j].type];
-        ++j;
-      }
-    }
-
-    while (j < nCols) {
-      tdAppendColVal(trow, getNullValue(pSchema[j].type), pSchema[j].type, toffset);
-      toffset += TYPE_BYTES[pSchema[j].type];
-      ++j;
-    }
-
-#if 0  // no need anymore
-    while (i < nColsBound) {
-      p = payloadNextCol(p);
-      ++i;
-    }
-#endif
-
-  } else if (memRowType == SMEM_ROW_KV) {
-    SKVRow kvRow = (SKVRow)memRowKvBody(memRow);
-    kvRowSetLen(kvRow, (TDRowLenT)(TD_KV_ROW_HEAD_SIZE + sizeof(SColIdx) * nColsBound));
-    kvRowSetNCols(kvRow, nColsBound);
-    memRowSetKvVersion(memRow, pBuilder->sversion);
-
-    p = (char *)payloadBody(pBuilder->buf);
-    int i = 0;
-    while (i < nColsBound) {
-      int16_t colId = payloadColId(p);
-      uint8_t colType = payloadColType(p);
-      tdAppendKvColVal(kvRow, POINTER_SHIFT(pVals, payloadColOffset(p)), colId, colType, &toffset);
-      // toffset += sizeof(SColIdx);
-      p = payloadNextCol(p);
-      ++i;
-    }
-  } else {
-    ASSERT(0);
-  }
-  int32_t rowTLen = memRowTLen(memRow);
-  pBuilder->pDataBlock = (char *)pBuilder->pDataBlock + rowTLen;  // next row
-  pBuilder->pSubmitBlk->dataLen += rowTLen;
-
-  return memRow;
-}
-
 // Erase the empty space reserved for binary data
 static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SInsertStatementParam* insertParam,
                          SBlockKeyTuple* blkKeyTuple) {
...
@@ -1902,10 +1807,11 @@ static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SI
     int32_t schemaSize = sizeof(STColumn) * numOfCols;
     pBlock->schemaLen = schemaSize;
   } else {
-    for (int32_t j = 0; j < tinfo.numOfColumns; ++j) {
-      flen += TYPE_BYTES[pSchema[j].type];
+    if (IS_RAW_PAYLOAD(insertParam->payloadType)) {
+      for (int32_t j = 0; j < tinfo.numOfColumns; ++j) {
+        flen += TYPE_BYTES[pSchema[j].type];
+      }
     }
     pBlock->schemaLen = 0;
   }
...
@@ -1932,18 +1838,19 @@ static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SI
       pBlock->dataLen += memRowTLen(memRow);
     }
   } else {
-    SMemRowBuilder rowBuilder;
-    rowBuilder.pSchema = pSchema;
-    rowBuilder.sversion = pTableMeta->sversion;
-    rowBuilder.flen = flen;
-    rowBuilder.nCols = tinfo.numOfColumns;
-    rowBuilder.pDataBlock = pDataBlock;
-    rowBuilder.pSubmitBlk = pBlock;
-    rowBuilder.buf = p;
-
     for (int32_t i = 0; i < numOfRows; ++i) {
-      rowBuilder.buf = (blkKeyTuple + i)->payloadAddr;
-      tdGenMemRowFromBuilder(&rowBuilder);
+      char *payload = (blkKeyTuple + i)->payloadAddr;
+      if (isNeedConvertRow(payload)) {
+        convertSMemRow(pDataBlock, payload, pTableDataBlock);
+        TDRowTLenT rowTLen = memRowTLen(pDataBlock);
+        pDataBlock = POINTER_SHIFT(pDataBlock, rowTLen);
+        pBlock->dataLen += rowTLen;
+      } else {
+        TDRowTLenT rowTLen = memRowTLen(payload);
+        memcpy(pDataBlock, payload, rowTLen);
+        pDataBlock = POINTER_SHIFT(pDataBlock, rowTLen);
+        pBlock->dataLen += rowTLen;
+      }
     }
   }
...
@@ -1956,9 +1863,9 @@ static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SI
 static int32_t getRowExpandSize(STableMeta* pTableMeta) {
   int32_t  result = TD_MEM_ROW_DATA_HEAD_SIZE;
   int32_t  columns = tscGetNumOfColumns(pTableMeta);
   SSchema *pSchema = tscGetTableSchema(pTableMeta);
   for (int32_t i = 0; i < columns; i++) {
     if (IS_VAR_DATA_TYPE((pSchema + i)->type)) {
       result += TYPE_BYTES[TSDB_DATA_TYPE_BINARY];
     }
   }
...
@@ -2004,7 +1911,7 @@ int32_t tscMergeTableDataBlocks(SInsertStatementParam *pInsertParam, bool freeBl
     SSubmitBlk *pBlocks = (SSubmitBlk *)pOneTableBlock->pData;
     if (pBlocks->numOfRows > 0) {
       // the maximum expanded size in byte when a row-wise data is converted to SDataRow format
-      int32_t expandSize = getRowExpandSize(pOneTableBlock->pTableMeta);
+      int32_t expandSize = isRawPayload ? getRowExpandSize(pOneTableBlock->pTableMeta) : 0;
       STableDataBlocks *dataBuf = NULL;
       int32_t ret = tscGetDataBlockFromList(pVnodeDataBlockHashList, pOneTableBlock->vgId, TSDB_PAYLOAD_SIZE,
...
@@ -2017,7 +1924,8 @@ int32_t tscMergeTableDataBlocks(SInsertStatementParam *pInsertParam, bool freeBl
         return ret;
       }

       int64_t destSize = dataBuf->size + pOneTableBlock->size + pBlocks->numOfRows * expandSize +
                          sizeof(STColumn) * tscGetNumOfColumns(pOneTableBlock->pTableMeta);
       if (dataBuf->nAllocSize < destSize) {
         dataBuf->nAllocSize = (uint32_t)(destSize * 1.5);
...
@@ -2061,7 +1969,9 @@ int32_t tscMergeTableDataBlocks(SInsertStatementParam *pInsertParam, bool freeBl
                  pBlocks->numOfRows, pBlocks->sversion, blkKeyInfo.pKeyTuple->skey, pLastKeyTuple->skey);
       }

-      int32_t len = pBlocks->numOfRows * (pOneTableBlock->rowSize + expandSize) +
-                    sizeof(STColumn) * tscGetNumOfColumns(pOneTableBlock->pTableMeta);
+      int32_t len = pBlocks->numOfRows *
+                        (isRawPayload ? (pOneTableBlock->rowSize + expandSize) : getExtendedRowSize(pOneTableBlock)) +
+                    sizeof(STColumn) * tscGetNumOfColumns(pOneTableBlock->pTableMeta);

       pBlocks->tid = htonl(pBlocks->tid);
       pBlocks->uid = htobe64(pBlocks->uid);
...
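getRowExpandSize above charges a fixed memrow/data-row header plus one var-data offset slot per BINARY/NCHAR column, and tscMergeTableDataBlocks now applies that expansion only for raw payloads. The arithmetic is small enough to show concretely; the header and slot sizes in this sketch are assumptions for illustration, not the library's exact constants.

// Worked example of the per-row expansion budget used when merging data blocks.
#include <stdio.h>

int main(void) {
  int headSize = 1 + 2 + 2;  // assumed: row-type byte + uint16 length + int16 schema version
  int varOffsetSlot = 4;     // assumed: one offset slot per variable-length column
  int numVarCols = 3;        // e.g. two BINARY columns and one NCHAR column
  printf("expand size = %d bytes per row\n", headSize + numVarCols * varOffsetSlot);
  return 0;
}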
src/common/inc/tdataformat.h
...
@@ -186,6 +186,7 @@ typedef void *SDataRow;
 #define TD_DATA_ROW_HEAD_SIZE (sizeof(uint16_t) + sizeof(int16_t))

 #define dataRowLen(r) (*(TDRowLenT *)(r))  // 0~65535
+#define dataRowEnd(r) POINTER_SHIFT(r, dataRowLen(r))
 #define dataRowVersion(r) (*(int16_t *)POINTER_SHIFT(r, sizeof(int16_t)))
 #define dataRowTuple(r) POINTER_SHIFT(r, TD_DATA_ROW_HEAD_SIZE)
 #define dataRowTKey(r) (*(TKEY *)(dataRowTuple(r)))
...
@@ -201,14 +202,18 @@ void tdFreeDataRow(SDataRow row);
 void tdInitDataRow(SDataRow row, STSchema *pSchema);
 SDataRow tdDataRowDup(SDataRow row);

 // offset here not include dataRow header length
-static FORCE_INLINE int tdAppendColVal(SDataRow row, const void *value, int8_t type, int32_t offset) {
+static FORCE_INLINE int tdAppendDataColVal(SDataRow row, const void *value, bool isCopyVarData, int8_t type,
+                                           int32_t offset) {
   ASSERT(value != NULL);
   int32_t toffset = offset + TD_DATA_ROW_HEAD_SIZE;

   if (IS_VAR_DATA_TYPE(type)) {
     *(VarDataOffsetT *)POINTER_SHIFT(row, toffset) = dataRowLen(row);
-    memcpy(POINTER_SHIFT(row, dataRowLen(row)), value, varDataTLen(value));
+    if (isCopyVarData) {
+      memcpy(POINTER_SHIFT(row, dataRowLen(row)), value, varDataTLen(value));
+    }
     dataRowLen(row) += varDataTLen(value);
   } else {
     if (offset == 0) {
...
@@ -223,6 +228,12 @@ static FORCE_INLINE int tdAppendColVal(SDataRow row, const void *value, int8_t t
   return 0;
 }

+// offset here not include dataRow header length
+static FORCE_INLINE int tdAppendColVal(SDataRow row, const void *value, int8_t type, int32_t offset) {
+  return tdAppendDataColVal(row, value, true, type, offset);
+}
+
 // NOTE: offset here including the header size
 static FORCE_INLINE void *tdGetRowDataOfCol(SDataRow row, int8_t type, int32_t offset) {
   if (IS_VAR_DATA_TYPE(type)) {
...
@@ -328,11 +339,10 @@ static FORCE_INLINE void dataColReset(SDataCol *pDataCol) { pDataCol->len = 0; }
 int tdAllocMemForCol(SDataCol *pCol, int maxPoints);

 void dataColInit(SDataCol *pDataCol, STColumn *pCol, int maxPoints);
-void dataColAppendVal(SDataCol *pCol, const void *value, int numOfRows, int maxPoints);
+int dataColAppendVal(SDataCol *pCol, const void *value, int numOfRows, int maxPoints);
 void dataColSetOffset(SDataCol *pCol, int nEle);

 bool isNEleNull(SDataCol *pCol, int nEle);
-void dataColSetNEleNull(SDataCol *pCol, int nEle, int maxPoints);

 // Get the data pointer from a column-wised data
 static FORCE_INLINE const void *tdGetColDataOfRow(SDataCol *pCol, int row) {
...
@@ -357,13 +367,11 @@ static FORCE_INLINE int32_t dataColGetNEleLen(SDataCol *pDataCol, int rows) {
 }

 typedef struct {
-  int      maxRowSize;
   int      maxCols;    // max number of columns
   int      maxPoints;  // max number of points
   int      numOfRows;
   int      numOfCols;  // Total number of cols
   int      sversion;   // TODO: set sversion
   SDataCol *cols;
 } SDataCols;
...
@@ -407,7 +415,7 @@ static FORCE_INLINE TSKEY dataColsKeyLast(SDataCols *pCols) {
   }
 }

-SDataCols *tdNewDataCols(int maxRowSize, int maxCols, int maxRows);
+SDataCols *tdNewDataCols(int maxCols, int maxRows);
 void tdResetDataCols(SDataCols *pCols);
 int tdInitDataCols(SDataCols *pCols, STSchema *pSchema);
 SDataCols *tdDupDataCols(SDataCols *pCols, bool keepData);
...
@@ -475,9 +483,10 @@ static FORCE_INLINE void *tdGetKVRowIdxOfCol(SKVRow row, int16_t colId) {
 }

 // offset here not include kvRow header length
-static FORCE_INLINE int tdAppendKvColVal(SKVRow row, const void *value, int16_t colId, int8_t type, int32_t *offset) {
+static FORCE_INLINE int tdAppendKvColVal(SKVRow row, const void *value, bool isCopyValData, int16_t colId, int8_t type,
+                                         int32_t offset) {
   ASSERT(value != NULL);
-  int32_t  toffset = *offset + TD_KV_ROW_HEAD_SIZE;
+  int32_t  toffset = offset + TD_KV_ROW_HEAD_SIZE;
   SColIdx *pColIdx = (SColIdx *)POINTER_SHIFT(row, toffset);
   char *   ptr = (char *)POINTER_SHIFT(row, kvRowLen(row));
...
@@ -485,10 +494,12 @@ static FORCE_INLINE int tdAppendKvColVal(SKVRow row, const void *value, int16_t
   pColIdx->offset = kvRowLen(row);  // offset of pColIdx including the TD_KV_ROW_HEAD_SIZE

   if (IS_VAR_DATA_TYPE(type)) {
-    memcpy(ptr, value, varDataTLen(value));
+    if (isCopyValData) {
+      memcpy(ptr, value, varDataTLen(value));
+    }
     kvRowLen(row) += varDataTLen(value);
   } else {
-    if (*offset == 0) {
+    if (offset == 0) {
       ASSERT(type == TSDB_DATA_TYPE_TIMESTAMP);
       TKEY tvalue = tdGetTKEY(*(TSKEY *)value);
       memcpy(ptr, (void *)(&tvalue), TYPE_BYTES[type]);
...
@@ -497,7 +508,6 @@ static FORCE_INLINE int tdAppendKvColVal(SKVRow row, const void *value, int16_t
     }
     kvRowLen(row) += TYPE_BYTES[type];
   }
-  *offset += sizeof(SColIdx);

   return 0;
 }
...
@@ -592,12 +602,24 @@ typedef void *SMemRow;
 #define TD_MEM_ROW_DATA_HEAD_SIZE (TD_MEM_ROW_TYPE_SIZE + TD_DATA_ROW_HEAD_SIZE)
 #define TD_MEM_ROW_KV_HEAD_SIZE (TD_MEM_ROW_TYPE_SIZE + TD_MEM_ROW_KV_VER_SIZE + TD_KV_ROW_HEAD_SIZE)

-#define SMEM_ROW_DATA 0U  // SDataRow
-#define SMEM_ROW_KV 1U    // SKVRow
+#define SMEM_ROW_DATA 0x0U      // SDataRow
+#define SMEM_ROW_KV 0x01U       // SKVRow
+#define SMEM_ROW_CONVERT 0x80U  // SMemRow convert flag
+
+#define KVRatioKV (0.2f)  // all bool
+#define KVRatioPredict (0.4f)
+#define KVRatioData (0.75f)  // all bigint
+#define KVRatioConvert (0.9f)

-#define memRowType(r) (*(uint8_t *)(r))
+#define memRowType(r) ((*(uint8_t *)(r)) & 0x01)
+#define memRowSetType(r, t) ((*(uint8_t *)(r)) = (t))  // set the total byte in case of dirty memory
+#define memRowSetConvert(r) ((*(uint8_t *)(r)) = (((*(uint8_t *)(r)) & 0x7F) | SMEM_ROW_CONVERT))  // highest bit
+#define isDataRowT(t) (SMEM_ROW_DATA == (((uint8_t)(t)) & 0x01))
 #define isDataRow(r) (SMEM_ROW_DATA == memRowType(r))
+#define isKvRowT(t) (SMEM_ROW_KV == (((uint8_t)(t)) & 0x01))
 #define isKvRow(r) (SMEM_ROW_KV == memRowType(r))
+#define isNeedConvertRow(r) (((*(uint8_t *)(r)) & 0x80) == SMEM_ROW_CONVERT)

 #define memRowDataBody(r) POINTER_SHIFT(r, TD_MEM_ROW_TYPE_SIZE)  // section after flag
 #define memRowKvBody(r) \
...
@@ -614,6 +636,14 @@ typedef void *SMemRow;
 #define memRowLen(r) (isDataRow(r) ? memRowDataLen(r) : memRowKvLen(r))
 #define memRowTLen(r) (isDataRow(r) ? memRowDataTLen(r) : memRowKvTLen(r))  // using uint32_t/int32_t to store the TLen

+static FORCE_INLINE char *memRowEnd(SMemRow row) {
+  if (isDataRow(row)) {
+    return (char *)dataRowEnd(memRowDataBody(row));
+  } else {
+    return (char *)kvRowEnd(memRowKvBody(row));
+  }
+}
+
 #define memRowDataVersion(r) dataRowVersion(memRowDataBody(r))
 #define memRowKvVersion(r) (*(int16_t *)POINTER_SHIFT(r, TD_MEM_ROW_TYPE_SIZE))
 #define memRowVersion(r) (isDataRow(r) ? memRowDataVersion(r) : memRowKvVersion(r))  // schema version
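The rewritten macros above pack two facts into the first byte of an SMemRow: the row kind in bit 0 (0 = data row, 1 = kv row) and a "needs conversion" marker in bit 7, which is why memRowType now masks with 0x01 and isNeedConvertRow with 0x80. The small standalone demo below exercises exactly that layout; the helper functions are local stand-ins, only the mask values come from the macros.

// Demo of the row-kind / convert-flag packing in one byte.
#include <stdint.h>
#include <stdio.h>

#define KIND_MASK    0x01U
#define CONVERT_FLAG 0x80U

static uint8_t rowKind(uint8_t flags)      { return flags & KIND_MASK; }
static int     needsConvert(uint8_t flags) { return (flags & CONVERT_FLAG) == CONVERT_FLAG; }
static uint8_t setConvert(uint8_t flags)   { return (uint8_t)((flags & 0x7FU) | CONVERT_FLAG); }

int main(void) {
  uint8_t flags = 0x00;        // data row, no conversion needed
  flags = setConvert(flags);   // mark it for data -> kv conversion later
  printf("kind=%u convert=%d\n", rowKind(flags), needsConvert(flags));  // kind=0 convert=1
  return 0;
}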
...
@@ -631,7 +661,6 @@ typedef void *SMemRow;
   }                   \
 } while (0)

-#define memRowSetType(r, t) (memRowType(r) = (t))
 #define memRowSetLen(r, l) (isDataRow(r) ? memRowDataLen(r) = (l) : memRowKvLen(r) = (l))
 #define memRowSetVersion(r, v) (isDataRow(r) ? dataRowSetVersion(memRowDataBody(r), v) : memRowSetKvVersion(r, v))
 #define memRowCpy(dst, r) memcpy((dst), (r), memRowTLen(r))
...
@@ -664,12 +693,12 @@ static FORCE_INLINE void *tdGetMemRowDataOfColEx(void *row, int16_t colId, int8_
   }
 }

-static FORCE_INLINE int tdAppendMemColVal(SMemRow row, const void *value, int16_t colId, int8_t type, int32_t offset,
-                                          int32_t *kvOffset) {
+static FORCE_INLINE int tdAppendMemRowColVal(SMemRow row, const void *value, bool isCopyVarData, int16_t colId,
+                                             int8_t type, int32_t offset) {
   if (isDataRow(row)) {
-    tdAppendColVal(memRowDataBody(row), value, type, offset);
+    tdAppendDataColVal(memRowDataBody(row), value, isCopyVarData, type, offset);
   } else {
-    tdAppendKvColVal(memRowKvBody(row), value, colId, type, kvOffset);
+    tdAppendKvColVal(memRowKvBody(row), value, isCopyVarData, colId, type, offset);
   }
   return 0;
 }
...
@@ -691,6 +720,30 @@ static FORCE_INLINE int32_t tdGetColAppendLen(uint8_t rowType, const void *value
   return len;
 }

+/**
+ * 1. calculate the delta of AllNullLen for SDataRow.
+ * 2. calculate the real len for SKVRow.
+ */
+static FORCE_INLINE void tdGetColAppendDeltaLen(const void *value, int8_t colType, int32_t *dataLen, int32_t *kvLen) {
+  switch (colType) {
+    case TSDB_DATA_TYPE_BINARY: {
+      int32_t varLen = varDataLen(value);
+      *dataLen += (varLen - CHAR_BYTES);
+      *kvLen += (varLen + sizeof(SColIdx));
+      break;
+    }
+    case TSDB_DATA_TYPE_NCHAR: {
+      int32_t varLen = varDataLen(value);
+      *dataLen += (varLen - TSDB_NCHAR_SIZE);
+      *kvLen += (varLen + sizeof(SColIdx));
+      break;
+    }
+    default: {
+      *kvLen += (TYPE_BYTES[colType] + sizeof(SColIdx));
+      break;
+    }
+  }
+}
+
 typedef struct {
   int16_t colId;
...
@@ -706,7 +759,7 @@ static FORCE_INLINE void setSColInfo(SColInfo* colInfo, int16_t colId, uint8_t c
 SMemRow mergeTwoMemRows(void *buffer, SMemRow row1, SMemRow row2, STSchema *pSchema1, STSchema *pSchema2);

+#if 0
 // ----------------- Raw payload structure for row:
 /* |<------------ Head ------------->|<----------- body of column data tuple ------------------->|
  * |                                 |<----------------- flen ------------->|<--- value part --->|
...
@@ -752,6 +805,8 @@ SMemRow mergeTwoMemRows(void *buffer, SMemRow row1, SMemRow row2, STSchema *pSch
 static FORCE_INLINE char *payloadNextCol(char *pCol) { return (char *)POINTER_SHIFT(pCol, PAYLOAD_COL_HEAD_LEN); }
+#endif

 #ifdef __cplusplus
 }
 #endif
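tdGetColAppendDeltaLen, added above, runs while a row is being parsed: dataLen accumulates how far the dense SDataRow would grow past its all-NULL baseline, while kvLen accumulates the exact sparse SKVRow size. The worked example below traces that bookkeeping for one BINARY and one BIGINT value; the byte sizes (CHAR_BYTES = 1, sizeof(SColIdx) = 4) are assumptions for the arithmetic only.

// Worked example of the dataLen / kvLen bookkeeping for one row.
#include <stdio.h>

int main(void) {
  int dataLen = 0, kvLen = 0;
  int charBytes = 1, colIdxSize = 4, typeBytesBigint = 8;

  // A 20-byte BINARY value: the dense row grows by 20 - 1; the kv row pays 20 + one index entry.
  int varLen = 20;
  dataLen += varLen - charBytes;
  kvLen   += varLen + colIdxSize;

  // A BIGINT value: its dense size is already in the all-NULL baseline; the kv row pays value + index entry.
  kvLen   += typeBytesBigint + colIdxSize;

  printf("dataLen delta = %d, kvLen = %d\n", dataLen, kvLen);  // 19 and 36
  return 0;
}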
...
src/common/src/tdataformat.c
...
@@ -19,10 +19,10 @@
 #include "wchar.h"
 #include "tarray.h"

+static void dataColSetNEleNull(SDataCol *pCol, int nEle);
 static void tdMergeTwoDataCols(SDataCols *target, SDataCols *src1, int *iter1, int limit1, SDataCols *src2, int *iter2,
                                int limit2, int tRows, bool forceSetNull);

+//TODO: change caller to use return val
 int tdAllocMemForCol(SDataCol *pCol, int maxPoints) {
   int spaceNeeded = pCol->bytes * maxPoints;
   if (IS_VAR_DATA_TYPE(pCol->type)) {
...
@@ -31,7 +31,7 @@ int tdAllocMemForCol(SDataCol *pCol, int maxPoints) {
   if (pCol->spaceSize < spaceNeeded) {
     void *ptr = realloc(pCol->pData, spaceNeeded);
     if (ptr == NULL) {
-      uDebug("malloc failure, size:%" PRId64 " failed, reason:%s", (int64_t)pCol->spaceSize,
+      uDebug("malloc failure, size:%" PRId64 " failed, reason:%s", (int64_t)spaceNeeded,
              strerror(errno));
       return -1;
     } else {
...
@@ -239,20 +239,19 @@ void dataColInit(SDataCol *pDataCol, STColumn *pCol, int maxPoints) {
   pDataCol->len = 0;
 }

 // value from timestamp should be TKEY here instead of TSKEY
-void dataColAppendVal(SDataCol *pCol, const void *value, int numOfRows, int maxPoints) {
+int dataColAppendVal(SDataCol *pCol, const void *value, int numOfRows, int maxPoints) {
   ASSERT(pCol != NULL && value != NULL);

   if (isAllRowsNull(pCol)) {
     if (isNull(value, pCol->type)) {
       // all null value yet, just return
-      return;
+      return 0;
     }

+    if (tdAllocMemForCol(pCol, maxPoints) < 0) return -1;
     if (numOfRows > 0) {
       // Find the first not null value, fill all previouse values as NULL
-      dataColSetNEleNull(pCol, numOfRows, maxPoints);
-    } else {
-      tdAllocMemForCol(pCol, maxPoints);
+      dataColSetNEleNull(pCol, numOfRows);
     }
   }
...
@@ -268,12 +267,21 @@ void dataColAppendVal(SDataCol *pCol, const void *value, int numOfRows, int maxP
     memcpy(POINTER_SHIFT(pCol->pData, pCol->len), value, pCol->bytes);
     pCol->len += pCol->bytes;
   }
+  return 0;
+}
+
+static FORCE_INLINE const void *tdGetColDataOfRowUnsafe(SDataCol *pCol, int row) {
+  if (IS_VAR_DATA_TYPE(pCol->type)) {
+    return POINTER_SHIFT(pCol->pData, pCol->dataOff[row]);
+  } else {
+    return POINTER_SHIFT(pCol->pData, TYPE_BYTES[pCol->type] * row);
+  }
 }

 bool isNEleNull(SDataCol *pCol, int nEle) {
   if (isAllRowsNull(pCol)) return true;
   for (int i = 0; i < nEle; i++) {
-    if (!isNull(tdGetColDataOfRow(pCol, i), pCol->type)) return false;
+    if (!isNull(tdGetColDataOfRowUnsafe(pCol, i), pCol->type)) return false;
   }
   return true;
 }
...
@@ -290,9 +298,7 @@ static FORCE_INLINE void dataColSetNullAt(SDataCol *pCol, int index) {
   }
 }

-void dataColSetNEleNull(SDataCol *pCol, int nEle, int maxPoints) {
-  tdAllocMemForCol(pCol, maxPoints);
-
+static void dataColSetNEleNull(SDataCol *pCol, int nEle) {
   if (IS_VAR_DATA_TYPE(pCol->type)) {
     pCol->len = 0;
     for (int i = 0; i < nEle; i++) {
...
@@ -318,7 +324,7 @@ void dataColSetOffset(SDataCol *pCol, int nEle) {
   }
 }

-SDataCols *tdNewDataCols(int maxRowSize, int maxCols, int maxRows) {
+SDataCols *tdNewDataCols(int maxCols, int maxRows) {
   SDataCols *pCols = (SDataCols *)calloc(1, sizeof(SDataCols));
   if (pCols == NULL) {
     uDebug("malloc failure, size:%" PRId64 " failed, reason:%s", (int64_t)sizeof(SDataCols), strerror(errno));
...
@@ -326,6 +332,9 @@ SDataCols *tdNewDataCols(int maxRowSize, int maxCols, int maxRows) {
   }

   pCols->maxPoints = maxRows;
+  pCols->maxCols = maxCols;
+  pCols->numOfRows = 0;
+  pCols->numOfCols = 0;

   if (maxCols > 0) {
     pCols->cols = (SDataCol *)calloc(maxCols, sizeof(SDataCol));
...
@@ -342,13 +351,8 @@ SDataCols *tdNewDataCols(int maxRowSize, int maxCols, int maxRows) {
       pCols->cols[i].pData = NULL;
       pCols->cols[i].dataOff = NULL;
     }
-
-    pCols->maxCols = maxCols;
   }

-  pCols->maxRowSize = maxRowSize;
-
   return pCols;
 }
...
@@ -357,8 +361,9 @@ int tdInitDataCols(SDataCols *pCols, STSchema *pSchema) {
   int oldMaxCols = pCols->maxCols;
   if (schemaNCols(pSchema) > oldMaxCols) {
     pCols->maxCols = schemaNCols(pSchema);
-    pCols->cols = (SDataCol *)realloc(pCols->cols, sizeof(SDataCol) * pCols->maxCols);
-    if (pCols->cols == NULL) return -1;
+    void *ptr = (SDataCol *)realloc(pCols->cols, sizeof(SDataCol) * pCols->maxCols);
+    if (ptr == NULL) return -1;
+    pCols->cols = ptr;
     for (i = oldMaxCols; i < pCols->maxCols; i++) {
       pCols->cols[i].pData = NULL;
       pCols->cols[i].dataOff = NULL;
...
@@ -366,10 +371,6 @@ int tdInitDataCols(SDataCols *pCols, STSchema *pSchema) {
     }
   }

-  if (schemaTLen(pSchema) > pCols->maxRowSize) {
-    pCols->maxRowSize = schemaTLen(pSchema);
-  }
-
   tdResetDataCols(pCols);
   pCols->numOfCols = schemaNCols(pSchema);
...
@@ -398,7 +399,7 @@ SDataCols *tdFreeDataCols(SDataCols *pCols) {
 }

 SDataCols *tdDupDataCols(SDataCols *pDataCols, bool keepData) {
-  SDataCols *pRet = tdNewDataCols(pDataCols->maxRowSize, pDataCols->maxCols, pDataCols->maxPoints);
+  SDataCols *pRet = tdNewDataCols(pDataCols->maxCols, pDataCols->maxPoints);
   if (pRet == NULL) return NULL;

   pRet->numOfCols = pDataCols->numOfCols;
...
@@ -413,7 +414,10 @@ SDataCols *tdDupDataCols(SDataCols *pDataCols, bool keepData) {
     if (keepData) {
       if (pDataCols->cols[i].len > 0) {
-        tdAllocMemForCol(&pRet->cols[i], pRet->maxPoints);
+        if (tdAllocMemForCol(&pRet->cols[i], pRet->maxPoints) < 0) {
+          tdFreeDataCols(pRet);
+          return NULL;
+        }
         pRet->cols[i].len = pDataCols->cols[i].len;
         memcpy(pRet->cols[i].pData, pDataCols->cols[i].pData, pDataCols->cols[i].len);
         if (IS_VAR_DATA_TYPE(pRet->cols[i].type)) {
...
@@ -584,9 +588,12 @@ static void tdMergeTwoDataCols(SDataCols *target, SDataCols *src1, int *iter1, i
     if ((key1 > key2) || (key1 == key2 && !TKEY_IS_DELETED(tkey2))) {
       for (int i = 0; i < src2->numOfCols; i++) {
         ASSERT(target->cols[i].type == src2->cols[i].type);
-        if (src2->cols[i].len > 0 && (forceSetNull || (!forceSetNull && !isNull(src2->cols[i].pData, src2->cols[i].type)))) {
+        if (src2->cols[i].len > 0 && !isNull(src2->cols[i].pData, src2->cols[i].type)) {
           dataColAppendVal(&(target->cols[i]), tdGetColDataOfRow(src2->cols + i, *iter2), target->numOfRows,
                            target->maxPoints);
+        } else if (!forceSetNull && key1 == key2 && src1->cols[i].len > 0) {
+          dataColAppendVal(&(target->cols[i]), tdGetColDataOfRow(src1->cols + i, *iter1), target->numOfRows,
+                           target->maxPoints);
         }
       }
       target->numOfRows++;
...
@@ -844,7 +851,8 @@ SMemRow mergeTwoMemRows(void *buffer, SMemRow row1, SMemRow row2, STSchema *pSch
     int16_t k;
     for (k = 0; k < nKvNCols; ++k) {
       SColInfo *pColInfo = taosArrayGet(stashRow, k);
-      tdAppendKvColVal(kvRow, pColInfo->colVal, pColInfo->colId, pColInfo->colType, &toffset);
+      tdAppendKvColVal(kvRow, pColInfo->colVal, true, pColInfo->colId, pColInfo->colType, toffset);
+      toffset += sizeof(SColIdx);
     }
     ASSERT(kvLen == memRowTLen(tRow));
   }
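A side effect of changing dataColAppendVal from void to int is that callers can now see when tdAllocMemForCol fails instead of continuing with a half-filled column. A hedged sketch of the calling pattern follows; the stub stands in for the real function, which is not reproduced here.

// Sketch of propagating the new dataColAppendVal status code.
#include <stdio.h>

static int dataColAppendValStub(int willFail) { return willFail ? -1 : 0; }

static int appendRow(int willFail) {
  if (dataColAppendValStub(willFail) < 0) {
    return -1;  // surface the allocation failure instead of ignoring it
  }
  return 0;
}

int main(void) {
  printf("ok=%d fail=%d\n", appendRow(0), appendRow(1));
  return 0;
}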
...
src/connector/grafanaplugin @ a44ec1ca (updated from 4a4d7909)
-Subproject commit 4a4d79099b076b8ff12d5b4fdbcba54049a6866d
+Subproject commit a44ec1ca493ad01b2bf825b6418f69e11f548206
src/inc/taosmsg.h
...
@@ -471,6 +471,7 @@ typedef struct {
   bool stableQuery;        // super table query or not
   bool topBotQuery;        // TODO used bitwise flag
+  bool interpQuery;        // interp query or not
   bool groupbyColumn;      // denote if this is a groupby normal column query
   bool hasTagResults;      // if there are tag values in final result or not
   bool timeWindowInterpo;  // if the time window start/end required interpolation
...
...
src/kit/taosdemo/taosdemo.c
...
@@ -5983,7 +5983,7 @@ static int32_t prepareStbStmtBind(
         int64_t startTime, int32_t recSeq,
         bool isColumn)
 {
-    char *bindBuffer = calloc(1, g_args.len_of_binary);
+    char *bindBuffer = calloc(1, DOUBLE_BUFF_LEN);  // g_args.len_of_binary);
     if (bindBuffer == NULL) {
         errorPrint("%s() LN%d, Failed to allocate %d bind buffer\n",
                 __func__, __LINE__, g_args.len_of_binary);
...
@@ -6319,7 +6319,7 @@ static void printStatPerThread(threadInfo *pThreadInfo)
             pThreadInfo->totalInsertRows,
             pThreadInfo->totalAffectedRows,
             (pThreadInfo->totalDelay) ?
-            (double)(pThreadInfo->totalAffectedRows / (pThreadInfo->totalDelay / 1000000.0)) :
+            (double)(pThreadInfo->totalAffectedRows / ((double)pThreadInfo->totalDelay / 1000000.0)) :
             FLT_MAX);
 }
...
@@ -6539,7 +6539,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
         verbosePrint("[%d] %s() LN%d, buffer=%s\n",
                 pThreadInfo->threadID, __func__, __LINE__, pThreadInfo->buffer);

-        startTs = taosGetTimestampMs();
+        startTs = taosGetTimestampUs();

         if (recOfBatch == 0) {
             errorPrint("[%d] %s() LN%d Failed to insert records of batch %d\n",
...
@@ -6555,10 +6555,10 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
         }

         int64_t affectedRows = execInsert(pThreadInfo, recOfBatch);

-        endTs = taosGetTimestampMs();
+        endTs = taosGetTimestampUs();
         uint64_t delay = endTs - startTs;
-        performancePrint("%s() LN%d, insert execution time is %" PRIu64 "ms\n",
-                __func__, __LINE__, delay);
+        performancePrint("%s() LN%d, insert execution time is %10.2f ms\n",
+                __func__, __LINE__, delay / 1000.0);
         verbosePrint("[%d] %s() LN%d affectedRows=%" PRId64 "\n",
                 pThreadInfo->threadID,
                 __func__, __LINE__, affectedRows);
...
@@ -6721,8 +6721,8 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
         endTs = taosGetTimestampUs();
         uint64_t delay = endTs - startTs;
-        performancePrint("%s() LN%d, insert execution time is %" PRId64 "ms\n",
-                __func__, __LINE__, delay);
+        performancePrint("%s() LN%d, insert execution time is %10.f ms\n",
+                __func__, __LINE__, delay / 1000.0);
         verbosePrint("[%d] %s() LN%d affectedRows=%d\n",
                 pThreadInfo->threadID,
                 __func__, __LINE__, affectedRows);
...
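The taosdemo change above samples the start/end timestamps in microseconds and then prints the delay as milliseconds with a float format, instead of truncating to whole milliseconds. A self-contained sketch of that measurement pattern is below; it uses gettimeofday where the real code calls taosGetTimestampUs, and the usleep only stands in for the insert call.

// Microsecond timing, printed as fractional milliseconds.
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static long long nowUs(void) {
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return (long long)tv.tv_sec * 1000000LL + tv.tv_usec;
}

int main(void) {
  long long startTs = nowUs();
  usleep(2500);                         // stand-in for execInsert()
  long long delay = nowUs() - startTs;  // microseconds
  printf("insert execution time is %10.2f ms\n", delay / 1000.0);
  return 0;
}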
...
src/plugins/http/src/httpParser.c
...
@@ -101,13 +101,17 @@ char *httpGetStatusDesc(int32_t statusCode) {
 }

 static void httpCleanupString(HttpString *str) {
-  free(str->str);
-  str->str = NULL;
-  str->pos = 0;
-  str->size = 0;
+  if (str->str) {
+    free(str->str);
+    str->str = NULL;
+    str->pos = 0;
+    str->size = 0;
+  }
 }

 static int32_t httpAppendString(HttpString *str, const char *s, int32_t len) {
+  char *new_str = NULL;
+
   if (str->size == 0) {
     str->pos = 0;
     str->size = len + 1;
...
@@ -115,7 +119,16 @@ static int32_t httpAppendString(HttpString *str, const char *s, int32_t len) {
   } else if (str->pos + len + 1 >= str->size) {
     str->size += len;
     str->size *= 4;
-    str->str = realloc(str->str, str->size);
+    new_str = realloc(str->str, str->size);
+    if (new_str == NULL && str->str) {
+      // if str->str was not NULL originally,
+      // the old allocated memory was left unchanged,
+      // see man 3 realloc
+      free(str->str);
+    }
+    str->str = new_str;
   } else {
   }
...
@@ -317,7 +330,7 @@ static int32_t httpOnParseHeaderField(HttpParser *parser, const char *key, const
 static int32_t httpOnBody(HttpParser *parser, const char *chunk, int32_t len) {
   HttpContext *pContext = parser->pContext;
   HttpString * buf = &parser->body;
   if (parser->parseCode != TSDB_CODE_SUCCESS) return -1;

   if (buf->size <= 0) {
...
@@ -326,6 +339,7 @@ static int32_t httpOnBody(HttpParser *parser, const char *chunk, int32_t len) {
   }

   int32_t newSize = buf->pos + len + 1;
+  char *newStr = NULL;
   if (newSize >= buf->size) {
     if (buf->size >= HTTP_BUFFER_SIZE) {
       httpError("context:%p, fd:%d, failed parse body, exceeding buffer size %d", pContext, pContext->fd, buf->size);
...
@@ -336,7 +350,12 @@ static int32_t httpOnBody(HttpParser *parser, const char *chunk, int32_t len) {
     newSize = MAX(newSize, HTTP_BUFFER_INIT);
     newSize *= 4;
     newSize = MIN(newSize, HTTP_BUFFER_SIZE);

-    buf->str = realloc(buf->str, newSize);
+    newStr = realloc(buf->str, newSize);
+    if (newStr == NULL && buf->str) {
+      free(buf->str);
+    }
+    buf->str = newStr;
     buf->size = newSize;

     if (buf->str == NULL) {
...
@@ -374,13 +393,20 @@ static HTTP_PARSER_STATE httpTopStack(HttpParser *parser) {
 static int32_t httpPushStack(HttpParser *parser, HTTP_PARSER_STATE state) {
   HttpStack *stack = &parser->stacks;
+  int8_t *newStacks = NULL;
   if (stack->size == 0) {
     stack->pos = 0;
     stack->size = 32;
     stack->stacks = malloc(stack->size * sizeof(int8_t));
   } else if (stack->pos + 1 > stack->size) {
     stack->size *= 2;
-    stack->stacks = realloc(stack->stacks, stack->size * sizeof(int8_t));
+    newStacks = realloc(stack->stacks, stack->size * sizeof(int8_t));
+    if (newStacks == NULL && stack->stacks) {
+      free(stack->stacks);
+    }
+    stack->stacks = newStacks;
   } else {
   }
...
...
src/plugins/http/src/httpUtil.c

@@ -188,13 +188,17 @@ bool httpMallocMultiCmds(HttpContext *pContext, int32_t cmdSize, int32_t bufferS
 
 bool httpReMallocMultiCmdsSize(HttpContext *pContext, int32_t cmdSize) {
   HttpSqlCmds *multiCmds = pContext->multiCmds;
-  if (cmdSize > HTTP_MAX_CMD_SIZE) {
+  if (cmdSize <= 0 && cmdSize > HTTP_MAX_CMD_SIZE) {
     httpError("context:%p, fd:%d, user:%s, mulitcmd size:%d large then %d", pContext, pContext->fd, pContext->user,
               cmdSize, HTTP_MAX_CMD_SIZE);
     return false;
   }
 
-  multiCmds->cmds = (HttpSqlCmd *)realloc(multiCmds->cmds, (size_t)cmdSize * sizeof(HttpSqlCmd));
+  HttpSqlCmd *new_cmds = (HttpSqlCmd *)realloc(multiCmds->cmds, (size_t)cmdSize * sizeof(HttpSqlCmd));
+  if (new_cmds == NULL && multiCmds->cmds) {
+    free(multiCmds->cmds);
+  }
+  multiCmds->cmds = new_cmds;
   if (multiCmds->cmds == NULL) {
     httpError("context:%p, fd:%d, user:%s, malloc cmds:%d error", pContext, pContext->fd, pContext->user, cmdSize);
     return false;
@@ -208,13 +212,17 @@ bool httpReMallocMultiCmdsSize(HttpContext *pContext, int32_t cmdSize) {
 
 bool httpReMallocMultiCmdsBuffer(HttpContext *pContext, int32_t bufferSize) {
   HttpSqlCmds *multiCmds = pContext->multiCmds;
-  if (bufferSize > HTTP_MAX_BUFFER_SIZE) {
+  if (bufferSize <= 0 || bufferSize > HTTP_MAX_BUFFER_SIZE) {
     httpError("context:%p, fd:%d, user:%s, mulitcmd buffer size:%d large then %d", pContext, pContext->fd,
               pContext->user, bufferSize, HTTP_MAX_BUFFER_SIZE);
     return false;
   }
 
-  multiCmds->buffer = (char *)realloc(multiCmds->buffer, (size_t)bufferSize);
+  char *new_buffer = (char *)realloc(multiCmds->buffer, (size_t)bufferSize);
+  if (new_buffer == NULL && multiCmds->buffer) {
+    free(multiCmds->buffer);
+  }
+  multiCmds->buffer = new_buffer;
   if (multiCmds->buffer == NULL) {
     httpError("context:%p, fd:%d, user:%s, malloc buffer:%d error", pContext, pContext->fd, pContext->user, bufferSize);
     return false;
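The httpParser.c and httpUtil.c hunks above all apply the same fix: keep the result of realloc() in a temporary, and if the call fails, free the original block instead of overwriting the only pointer to it. Below is a minimal standalone sketch of that pattern; the helper name safe_realloc and the demo buffer are illustrative and are not part of the TDengine source.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Grow *pbuf to new_size bytes. On failure the old block is freed and *pbuf
// is set to NULL, so the caller never keeps (or leaks) a stale allocation.
static int safe_realloc(char **pbuf, size_t new_size) {
  char *tmp = realloc(*pbuf, new_size);
  if (tmp == NULL) {
    // realloc(3) leaves the original allocation untouched on failure,
    // so it has to be released explicitly to avoid a leak.
    free(*pbuf);
    *pbuf = NULL;
    return -1;
  }
  *pbuf = tmp;
  return 0;
}

int main(void) {
  char *buf = malloc(16);
  if (buf == NULL) return 1;
  strcpy(buf, "hello");

  if (safe_realloc(&buf, 1024) != 0) {
    fprintf(stderr, "out of memory\n");
    return 1;
  }
  printf("%s\n", buf);
  free(buf);
  return 0;
}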
src/query/inc/qExecutor.h

@@ -206,7 +206,6 @@ typedef struct SQueryAttr {
   bool             stableQuery;       // super table query or not
   bool             topBotQuery;       // TODO used bitwise flag
-  bool             interpQuery;       // denote if this is an interp query
   bool             groupbyColumn;     // denote if this is a groupby normal column query
   bool             hasTagResults;     // if there are tag values in final result or not
   bool             timeWindowInterpo; // if the time window start/end required interpolation
src/query/src/qExecutor.c

@@ -631,37 +631,29 @@ static STimeWindow getActiveTimeWindow(SResultRowInfo * pResultRowInfo, int64_t
 }
 
 // get the correct time window according to the handled timestamp
-static STimeWindow getCurrentActiveTimeWindow(SResultRowInfo * pResultRowInfo, int64_t ts, SQueryAttr* pQuery) {
+static STimeWindow getCurrentActiveTimeWindow(SResultRowInfo * pResultRowInfo, int64_t ts, SQueryAttr* pQueryAttr) {
   STimeWindow w = {0};
 
-  if (pResultRowInfo->curIndex == -1) { // the first window, from the previous stored value
+#if 0
+  if (pResultRowInfo->curPos == -1) {  // the first window, from the previous stored value
     if (pResultRowInfo->prevSKey == TSKEY_INITIAL_VAL) {
-      getInitialStartTimeWindow(pQuery, ts, &w);
+      getInitialStartTimeWindow(pQueryAttr, ts, &w);
       pResultRowInfo->prevSKey = w.skey;
     } else {
       w.skey = pResultRowInfo->prevSKey;
     }
 
-    if (pQuery->interval.intervalUnit == 'n' || pQuery->interval.intervalUnit == 'y') {
-      w.ekey = taosTimeAdd(w.skey, pQuery->interval.interval, pQuery->interval.intervalUnit, pQuery->precision) - 1;
+    if (pQueryAttr->interval.intervalUnit == 'n' || pQueryAttr->interval.intervalUnit == 'y') {
+      w.ekey = taosTimeAdd(w.skey, pQueryAttr->interval.interval, pQueryAttr->interval.intervalUnit, pQueryAttr->precision) - 1;
     } else {
-      w.ekey = w.skey + pQuery->interval.interval - 1;
+      w.ekey = w.skey + pQueryAttr->interval.interval - 1;
     }
   } else {
-    int32_t slot = curTimeWindowIndex(pResultRowInfo);
-    SResultRow* pWindowRes = getResultRow(pResultRowInfo, slot);
-    w = pWindowRes->win;
+    w = getResultRow(pResultRowInfo, pResultRowInfo->curPos)->win;
   }
 
   /*
    * query border check, skey should not be bounded by the query time range, since the value skey will
    * be used as the time window index value. So we only change ekey of time window accordingly.
    */
-  if (w.ekey > pQuery->window.ekey && QUERY_IS_ASC_QUERY(pQuery)) {
-    w.ekey = pQuery->window.ekey;
+  if (w.ekey > pQueryAttr->window.ekey && QUERY_IS_ASC_QUERY(pQueryAttr)) {
+    w.ekey = pQueryAttr->window.ekey;
   }
+#endif
 
   return w;
 }

@@ -1059,7 +1051,7 @@ static int32_t getNextQualifiedWindow(SQueryAttr* pQueryAttr, STimeWindow *pNext
   }
 
   /* interp query with fill should not skip time window */
-  if (pQueryAttr->interpQuery && pQueryAttr->fillType != TSDB_FILL_NONE) {
+  if (pQueryAttr->pointInterpQuery && pQueryAttr->fillType != TSDB_FILL_NONE) {
     return startPos;
   }

@@ -1576,18 +1568,15 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
 }
 
-static void hashAllIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock, int32_t groupId) {
+static void hashAllIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock, int32_t tableGroupId) {
+  (void) getCurrentActiveTimeWindow;
+
+#if 0
   STableIntervalOperatorInfo* pInfo = (STableIntervalOperatorInfo*) pOperatorInfo->info;
   SQueryRuntimeEnv* pRuntimeEnv = pOperatorInfo->pRuntimeEnv;
   int32_t           numOfOutput = pOperatorInfo->numOfOutput;
-  SQueryAttr* pQuery = pRuntimeEnv->pQueryAttr;
-  int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
-  bool ascQuery = QUERY_IS_ASC_QUERY(pQuery);
+  SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
+  int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQueryAttr->order.order);
+  bool ascQuery = QUERY_IS_ASC_QUERY(pQueryAttr);
 
   TSKEY* tsCols = NULL;
   if (pSDataBlock->pDataBlock != NULL) {

@@ -1598,34 +1587,35 @@ static void hashAllIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRe
   }
 
   int32_t startPos = ascQuery? 0 : (pSDataBlock->info.rows - 1);
-  TSKEY ts = getStartTsKey(pQuery, &pSDataBlock->info.window, tsCols, pSDataBlock->info.rows);
+  TSKEY ts = getStartTsKey(pQueryAttr, &pSDataBlock->info.window, tsCols, pSDataBlock->info.rows);
 
-  STimeWindow win = getCurrentActiveTimeWindow(pResultRowInfo, ts, pQuery);
+  STimeWindow win = getCurrentActiveTimeWindow(pResultRowInfo, ts, pQueryAttr);
   bool masterScan = IS_MASTER_SCAN(pRuntimeEnv);
 
   SResultRow* pResult = NULL;
   int32_t forwardStep = 0;
+  int32_t ret = 0;
 
   while (1) {
     // null data, failed to allocate more memory buffer
-    int32_t code = setWindowOutputBufByKey(pRuntimeEnv, pResultRowInfo, &win, masterScan, &pResult, groupId,
-                                           pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
-    if (code != TSDB_CODE_SUCCESS || pResult == NULL) {
+    ret = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &win, masterScan, &pResult,
+                                  tableGroupId, pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
+    if (ret != TSDB_CODE_SUCCESS) {
       longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
     }
 
-    TSKEY ekey = reviseWindowEkey(pQuery, &win);
-    forwardStep = getNumOfRowsInTimeWindow(pQuery, &pSDataBlock->info, tsCols, startPos, ekey, binarySearchForKey, true);
+    TSKEY ekey = reviseWindowEkey(pQueryAttr, &win);
+    forwardStep = getNumOfRowsInTimeWindow(pRuntimeEnv, &pSDataBlock->info, tsCols, startPos, ekey, binarySearchForKey, true);
 
     // window start(end) key interpolation
     doWindowBorderInterpolation(pOperatorInfo, pSDataBlock, pInfo->pCtx, pResult, &win, startPos, forwardStep);
     doApplyFunctions(pRuntimeEnv, pInfo->pCtx, &win, startPos, forwardStep, tsCols, pSDataBlock->info.rows, numOfOutput);
 
     int32_t prevEndPos = (forwardStep - 1) * step + startPos;
-    startPos = getNextQualifiedWindow(pQuery, &win, &pSDataBlock->info, tsCols, binarySearchForKey, prevEndPos);
+    startPos = getNextQualifiedWindow(pQueryAttr, &win, &pSDataBlock->info, tsCols, binarySearchForKey, prevEndPos);
     if (startPos < 0) {
-      if (win.skey <= pQuery->window.ekey) {
-        int32_t code = setWindowOutputBufByKey(pRuntimeEnv, pResultRowInfo, &win, masterScan, &pResult, groupId,
-                                               pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
+      if (win.skey <= pQueryAttr->window.ekey) {
+        int32_t code = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &win, masterScan, &pResult,
+                                               tableGroupId, pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
         if (code != TSDB_CODE_SUCCESS || pResult == NULL) {
           longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY);

@@ -1643,13 +1633,12 @@ static void hashAllIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRe
     setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
   }
 
-  if (pQuery->timeWindowInterpo) {
+  if (pQueryAttr->timeWindowInterpo) {
     int32_t rowIndex = ascQuery? (pSDataBlock->info.rows-1) : 0;
     saveDataBlockLastRow(pRuntimeEnv, &pSDataBlock->info, pSDataBlock->pDataBlock, rowIndex);
   }
 
-  updateResultRowInfoActiveIndex(pResultRowInfo, pQuery, pQuery->current->lastKey);
+  updateResultRowInfoActiveIndex(pResultRowInfo, pQueryAttr, pRuntimeEnv->current->lastKey);
+#endif
 }

@@ -2150,6 +2139,12 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
         setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
         break;
       }
+      case OP_AllMultiTableTimeInterval: {
+        pRuntimeEnv->proot =
+            createAllMultiTableTimeIntervalOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
+        setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
+        break;
+      }
       case OP_TimeWindow: {
         pRuntimeEnv->proot =
             createTimeIntervalOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);

@@ -2159,6 +2154,15 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
         }
         break;
       }
+      case OP_AllTimeWindow: {
+        pRuntimeEnv->proot =
+            createAllTimeIntervalOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);
+        int32_t opType = pRuntimeEnv->proot->upstream[0]->operatorType;
+        if (opType != OP_DummyInput && opType != OP_Join) {
+          setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot);
+        }
+        break;
+      }
       case OP_Groupby: {
         pRuntimeEnv->proot =
             createGroupbyOperatorInfo(pRuntimeEnv, pRuntimeEnv->proot, pQueryAttr->pExpr1, pQueryAttr->numOfOutput);

@@ -3098,7 +3102,7 @@ int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTa
       TSKEY k = ascQuery? pBlock->info.window.skey : pBlock->info.window.ekey;
 
       STimeWindow win = getActiveTimeWindow(pTableScanInfo->pResultRowInfo, k, pQueryAttr);
-      if (pQueryAttr->interpQuery) {
+      if (pQueryAttr->pointInterpQuery) {
         needFilter = chkWindowOutputBufByKey(pRuntimeEnv, pTableScanInfo->pResultRowInfo, &win, masterScan, &pResult, groupId,
                                              pTableScanInfo->pCtx, pTableScanInfo->numOfOutput,
                                              pTableScanInfo->rowCellInfoOffset);

@@ -5769,6 +5773,66 @@ static SSDataBlock* doIntervalAgg(void* param, bool* newgroup) {
   return pIntervalInfo->pRes->info.rows == 0? NULL : pIntervalInfo->pRes;
 }
 
+static SSDataBlock* doAllIntervalAgg(void* param, bool* newgroup) {
+  SOperatorInfo* pOperator = (SOperatorInfo*) param;
+  if (pOperator->status == OP_EXEC_DONE) {
+    return NULL;
+  }
+
+  STableIntervalOperatorInfo* pIntervalInfo = pOperator->info;
+
+  SQueryRuntimeEnv* pRuntimeEnv = pOperator->pRuntimeEnv;
+  if (pOperator->status == OP_RES_TO_RETURN) {
+    toSSDataBlock(&pRuntimeEnv->groupResInfo, pRuntimeEnv, pIntervalInfo->pRes);
+
+    if (pIntervalInfo->pRes->info.rows == 0 || !hasRemainDataInCurrentGroup(&pRuntimeEnv->groupResInfo)) {
+      pOperator->status = OP_EXEC_DONE;
+    }
+
+    return pIntervalInfo->pRes;
+  }
+
+  SQueryAttr* pQueryAttr = pRuntimeEnv->pQueryAttr;
+  int32_t order = pQueryAttr->order.order;
+  STimeWindow win = pQueryAttr->window;
+
+  SOperatorInfo* upstream = pOperator->upstream[0];
+
+  while(1) {
+    publishOperatorProfEvent(upstream, QUERY_PROF_BEFORE_OPERATOR_EXEC);
+    SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
+    publishOperatorProfEvent(upstream, QUERY_PROF_AFTER_OPERATOR_EXEC);
+
+    if (pBlock == NULL) {
+      break;
+    }
+
+    setTagValue(pOperator, pRuntimeEnv->current->pTable, pIntervalInfo->pCtx, pOperator->numOfOutput);
+
+    // the pDataBlock are always the same one, no need to call this again
+    setInputDataBlock(pOperator, pIntervalInfo->pCtx, pBlock, pQueryAttr->order.order);
+    hashAllIntervalAgg(pOperator, &pIntervalInfo->resultRowInfo, pBlock, 0);
+  }
+
+  // restore the value
+  pQueryAttr->order.order = order;
+  pQueryAttr->window = win;
+
+  pOperator->status = OP_RES_TO_RETURN;
+  closeAllResultRows(&pIntervalInfo->resultRowInfo);
+  setQueryStatus(pRuntimeEnv, QUERY_COMPLETED);
+  finalizeQueryResult(pOperator, pIntervalInfo->pCtx, &pIntervalInfo->resultRowInfo, pIntervalInfo->rowCellInfoOffset);
+
+  initGroupResInfo(&pRuntimeEnv->groupResInfo, &pIntervalInfo->resultRowInfo);
+  toSSDataBlock(&pRuntimeEnv->groupResInfo, pRuntimeEnv, pIntervalInfo->pRes);
+
+  if (pIntervalInfo->pRes->info.rows == 0 || !hasRemainDataInCurrentGroup(&pRuntimeEnv->groupResInfo)) {
+    pOperator->status = OP_EXEC_DONE;
+  }
+
+  return pIntervalInfo->pRes->info.rows == 0? NULL : pIntervalInfo->pRes;
+}
+
 static SSDataBlock* doSTableIntervalAgg(void* param, bool* newgroup) {
   SOperatorInfo* pOperator = (SOperatorInfo*) param;
   if (pOperator->status == OP_EXEC_DONE) {

@@ -6502,6 +6566,32 @@ SOperatorInfo* createTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOp
   appendUpstream(pOperator, upstream);
   return pOperator;
 }
 
+SOperatorInfo* createAllTimeIntervalOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput) {
+  STableIntervalOperatorInfo* pInfo = calloc(1, sizeof(STableIntervalOperatorInfo));
+
+  pInfo->pCtx = createSQLFunctionCtx(pRuntimeEnv, pExpr, numOfOutput, &pInfo->rowCellInfoOffset);
+  pInfo->pRes = createOutputBuf(pExpr, numOfOutput, pRuntimeEnv->resultInfo.capacity);
+  initResultRowInfo(&pInfo->resultRowInfo, 8, TSDB_DATA_TYPE_INT);
+
+  SOperatorInfo* pOperator = calloc(1, sizeof(SOperatorInfo));
+
+  pOperator->name         = "AllTimeIntervalAggOperator";
+  pOperator->operatorType = OP_AllTimeWindow;
+  pOperator->blockingOptr = true;
+  pOperator->status       = OP_IN_EXECUTING;
+  pOperator->pExpr        = pExpr;
+  pOperator->numOfOutput  = numOfOutput;
+  pOperator->info         = pInfo;
+  pOperator->pRuntimeEnv  = pRuntimeEnv;
+  pOperator->exec         = doAllIntervalAgg;
+  pOperator->cleanup      = destroyBasicOperatorInfo;
+
+  appendUpstream(pOperator, upstream);
+  return pOperator;
+}
+
 SOperatorInfo* createStatewindowOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput) {
   SStateWindowOperatorInfo* pInfo = calloc(1, sizeof(SStateWindowOperatorInfo));
   pInfo->colIndex = -1;

@@ -6525,7 +6615,6 @@ SOperatorInfo* createStatewindowOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOpe
   appendUpstream(pOperator, upstream);
   return pOperator;
 }
 
 SOperatorInfo* createSWindowOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperatorInfo* upstream, SExprInfo* pExpr, int32_t numOfOutput) {
   SSWindowOperatorInfo* pInfo = calloc(1, sizeof(SSWindowOperatorInfo));
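The new doAllIntervalAgg added above follows the pull-based operator convention used throughout qExecutor.c: the exec callback drains blocks from upstream[0] until it returns NULL, aggregates them, then flips to a result-returning state. A simplified, self-contained sketch of that control flow is shown below, assuming invented names (Operator, sourceExec, aggExec); it is not the engine's real API, only an illustration of the state machine.

#include <stdio.h>

// Simplified pull-based operator, illustrative only.
typedef struct Operator {
  const char      *name;
  int              status;                  // 0 = executing, 1 = results ready, 2 = done
  struct Operator *upstream;
  long (*exec)(struct Operator *self);      // returns one "block" (here just a number), -1 when exhausted
  long             state;                   // operator-local state
} Operator;

// Upstream: emits the numbers 1..5 as five "blocks", then -1.
static long sourceExec(Operator *self) {
  return (self->state < 5) ? ++self->state : -1;
}

// Downstream blocking aggregate: the first call drains the upstream completely,
// later calls hand the finished result back exactly once.
static long aggExec(Operator *self) {
  if (self->status == 2) return -1;         // analogous to OP_EXEC_DONE
  if (self->status == 0) {                  // analogous to OP_IN_EXECUTING
    long block;
    while ((block = self->upstream->exec(self->upstream)) != -1) {
      self->state += block;                 // "aggregate" the block
    }
    self->status = 1;                       // analogous to OP_RES_TO_RETURN
  }
  self->status = 2;                         // one result block, then done
  return self->state;
}

int main(void) {
  Operator src = {"source", 0, NULL, sourceExec, 0};
  Operator agg = {"agg", 0, &src, aggExec, 0};

  long res;
  while ((res = agg.exec(&agg)) != -1) {
    printf("result block: %ld\n", res);     // prints 15
  }
  return 0;
}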
src/query/src/qPlan.c

@@ -567,10 +567,18 @@ SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr) {
     }
   } else if (pQueryAttr->interval.interval > 0) {
     if (pQueryAttr->stableQuery) {
-      op = OP_MultiTableTimeInterval;
+      if (pQueryAttr->pointInterpQuery) {
+        op = OP_AllMultiTableTimeInterval;
+      } else {
+        op = OP_MultiTableTimeInterval;
+      }
       taosArrayPush(plan, &op);
     } else {
-      op = OP_TimeWindow;
+      if (pQueryAttr->pointInterpQuery) {
+        op = OP_AllTimeWindow;
+      } else {
+        op = OP_TimeWindow;
+      }
       taosArrayPush(plan, &op);
 
       if (pQueryAttr->pExpr2 != NULL) {
@@ -578,7 +586,7 @@ SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr) {
         taosArrayPush(plan, &op);
       }
 
-      if (pQueryAttr->fillType != TSDB_FILL_NONE && (!pQueryAttr->pointInterpQuery)) {
+      if (pQueryAttr->fillType != TSDB_FILL_NONE) {
         op = OP_Fill;
         taosArrayPush(plan, &op);
       }
src/rpc/src/rpcTcp.c

@@ -397,7 +397,11 @@ void *taosOpenTcpClientConnection(void *shandle, void *thandle, uint32_t ip, uin
   SThreadObj *pThreadObj = pClientObj->pThreadObj[index];
 
   SOCKET fd = taosOpenTcpClientSocket(ip, port, pThreadObj->ip);
+#if defined(_TD_WINDOWS_64) || defined(_TD_WINDOWS_32)
+  if (fd == (SOCKET)-1) return NULL;
+#else
   if (fd <= 0) return NULL;
+#endif
 
   struct sockaddr_in sin;
   uint16_t localPort = 0;
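The rpcTcp.c hunk adds a Windows-only failure check because on Winsock a SOCKET is an unsigned integer type and a failed socket is reported as INVALID_SOCKET, i.e. (SOCKET)-1, so the POSIX-style "fd <= 0" test can never fire there. The sketch below shows the same per-platform guard in a standalone program; it uses only the standard Winsock/POSIX APIs and is not the rpcTcp.c code itself.

#if defined(_WIN32)
  #include <winsock2.h>            // link with ws2_32
#else
  #include <sys/socket.h>
  #include <unistd.h>
  typedef int SOCKET;
#endif
#include <stdio.h>

int main(void) {
#if defined(_WIN32)
  WSADATA wsa;
  if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;
#endif

  SOCKET fd = socket(AF_INET, SOCK_STREAM, 0);

#if defined(_WIN32)
  // SOCKET is unsigned, so compare against INVALID_SOCKET ((SOCKET)-1).
  if (fd == INVALID_SOCKET) return 1;
#else
  // POSIX sockets report failure as -1; 0 would be a valid descriptor.
  if (fd < 0) return 1;
#endif

  printf("socket ok\n");

#if defined(_WIN32)
  closesocket(fd);
  WSACleanup();
#else
  close(fd);
#endif
  return 0;
}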
src/tsdb/inc/tsdbMeta.h

@@ -24,8 +24,7 @@ typedef struct STable {
   tstr*          name;  // NOTE: there a flexible string here
   uint64_t       suid;
   struct STable* pSuper;  // super table pointer
-  uint8_t        numOfSchemas;
-  STSchema*      schema[TSDB_MAX_TABLE_SCHEMAS];
+  SArray*        schema;
   STSchema*      tagSchema;
   SKVRow         tagVal;
   SSkipList*     pIndex;  // For TSDB_SUPER_TABLE, it is the skiplist index
@@ -107,10 +106,9 @@ static FORCE_INLINE STSchema* tsdbGetTableSchemaImpl(STable* pTable, bool lock,
   if (lock) TSDB_RLOCK_TABLE(pDTable);
   if (_version < 0) {  // get the latest version of schema
-    pTSchema = pDTable->schema[pDTable->numOfSchemas - 1];
+    pTSchema = *(STSchema **)taosArrayGetLast(pDTable->schema);
   } else {  // get the schema with version
-    void* ptr = taosbsearch(&_version, pDTable->schema, pDTable->numOfSchemas, sizeof(STSchema *),
-                            tsdbCompareSchemaVersion, TD_EQ);
+    void* ptr = taosArraySearch(pDTable->schema, &_version, tsdbCompareSchemaVersion, TD_EQ);
     if (ptr == NULL) {
       terrno = TSDB_CODE_TDB_IVD_TB_SCHEMA_VERSION;
       goto _exit;
src/tsdb/src/tsdbCommit.c

@@ -722,7 +722,7 @@ static int tsdbInitCommitH(SCommitH *pCommith, STsdbRepo *pRepo) {
     return -1;
   }
 
-  pCommith->pDataCols = tdNewDataCols(0, 0, pCfg->maxRowsPerFileBlock);
+  pCommith->pDataCols = tdNewDataCols(0, pCfg->maxRowsPerFileBlock);
   if (pCommith->pDataCols == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     tsdbDestroyCommitH(pCommith);
@@ -920,7 +920,6 @@ int tsdbWriteBlockImpl(STsdbRepo *pRepo, STable *pTable, SDFile *pDFile, SDataCo
     SDataCol * pDataCol = pDataCols->cols + ncol;
     SBlockCol *pBlockCol = pBlockData->cols + nColsNotAllNull;
 
-    // if (isNEleNull(pDataCol, rowsToWrite)) {  // all data to commit are NULL, just ignore it
     if (isAllRowsNull(pDataCol)) {  // all data to commit are NULL, just ignore it
       continue;
     }
@@ -1277,6 +1276,7 @@ static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIt
     if (key1 < key2) {
       for (int i = 0; i < pDataCols->numOfCols; i++) {
+        //TODO: dataColAppendVal may fail
         dataColAppendVal(pTarget->cols + i, tdGetColDataOfRow(pDataCols->cols + i, *iter), pTarget->numOfRows,
                          pTarget->maxPoints);
       }
@@ -1308,6 +1308,7 @@ static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIt
       ASSERT(!isRowDel);
 
       for (int i = 0; i < pDataCols->numOfCols; i++) {
+        //TODO: dataColAppendVal may fail
         dataColAppendVal(pTarget->cols + i, tdGetColDataOfRow(pDataCols->cols + i, *iter), pTarget->numOfRows,
                          pTarget->maxPoints);
       }
src/tsdb/src/tsdbCompact.c

@@ -296,7 +296,7 @@ static int tsdbCompactMeta(STsdbRepo *pRepo) {
     return -1;
   }
 
-  pComph->pDataCols = tdNewDataCols(0, 0, pCfg->maxRowsPerFileBlock);
+  pComph->pDataCols = tdNewDataCols(0, pCfg->maxRowsPerFileBlock);
   if (pComph->pDataCols == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     tsdbDestroyCompactH(pComph);
src/tsdb/src/tsdbMemTable.c

@@ -702,11 +702,12 @@ static int tsdbScanAndConvertSubmitMsg(STsdbRepo *pRepo, SSubmitMsg *pMsg) {
 }
 
 //row1 has higher priority
-static SMemRow tsdbInsertDupKeyMerge(SMemRow row1, SMemRow row2, STsdbRepo* pRepo,
-                                     STSchema **ppSchema1, STSchema **ppSchema2,
-                                     STable* pTable, int32_t* pAffectedRows, int64_t* pPoints, SMemRow* pLastRow) {
+static SMemRow tsdbInsertDupKeyMerge(SMemRow row1, SMemRow row2, STsdbRepo* pRepo,
+                                     STSchema **ppSchema1, STSchema **ppSchema2,
+                                     STable* pTable, int32_t* pPoints, SMemRow* pLastRow) {
 
   //for compatiblity, duplicate key inserted when update=0 should be also calculated as affected rows!
   if(row1 == NULL && row2 == NULL && pRepo->config.update == TD_ROW_DISCARD_UPDATE) {
-    (*pAffectedRows)++;
     (*pPoints)++;
     return NULL;
   }
@@ -715,7 +716,6 @@ static SMemRow tsdbInsertDupKeyMerge(SMemRow row1, SMemRow row2, STsdbRepo* pRep
     void* pMem = tsdbAllocBytes(pRepo, memRowTLen(row1));
     if(pMem == NULL) return NULL;
     memRowCpy(pMem, row1);
-    (*pAffectedRows)++;
     (*pPoints)++;
     *pLastRow = pMem;
     return pMem;
@@ -750,18 +750,16 @@ static SMemRow tsdbInsertDupKeyMerge(SMemRow row1, SMemRow row2, STsdbRepo* pRep
     if(pMem == NULL) return NULL;
     memRowCpy(pMem, tmp);
-    (*pAffectedRows)++;
     (*pPoints)++;
     *pLastRow = pMem;
     return pMem;
   }
 }
 
 static void* tsdbInsertDupKeyMergePacked(void** args) {
-  return tsdbInsertDupKeyMerge(args[0], args[1], args[2], (STSchema**)&args[3], (STSchema**)&args[4], args[5], args[6], args[7], args[8]);
+  return tsdbInsertDupKeyMerge(args[0], args[1], args[2], (STSchema**)&args[3], (STSchema**)&args[4], args[5], args[6], args[7]);
 }
 
-static void tsdbSetupSkipListHookFns(SSkipList* pSkipList, STsdbRepo *pRepo, STable *pTable, int32_t* pAffectedRows, int64_t* pPoints, SMemRow* pLastRow) {
+static void tsdbSetupSkipListHookFns(SSkipList* pSkipList, STsdbRepo *pRepo, STable *pTable, int32_t* pPoints, SMemRow* pLastRow) {
 
   if(pSkipList->insertHandleFn == NULL) {
     tGenericSavedFunc *dupHandleSavedFunc = genericSavedFuncInit((GenericVaFunc)&tsdbInsertDupKeyMergePacked, 9);
@@ -769,17 +767,16 @@ static void tsdbSetupSkipListHookFns(SSkipList* pSkipList, STsdbRepo *pRepo, STa
     dupHandleSavedFunc->args[3] = NULL;
     dupHandleSavedFunc->args[4] = NULL;
     dupHandleSavedFunc->args[5] = pTable;
-    dupHandleSavedFunc->args[6] = pAffectedRows;
-    dupHandleSavedFunc->args[7] = pPoints;
-    dupHandleSavedFunc->args[8] = pLastRow;
     pSkipList->insertHandleFn = dupHandleSavedFunc;
   }
+  pSkipList->insertHandleFn->args[6] = pPoints;
+  pSkipList->insertHandleFn->args[7] = pLastRow;
 }
 
 static int tsdbInsertDataToTable(STsdbRepo* pRepo, SSubmitBlk* pBlock, int32_t *pAffectedRows) {
   STsdbMeta *pMeta = pRepo->tsdbMeta;
-  int64_t    points = 0;
+  int32_t    points = 0;
   STable    *pTable = NULL;
   SSubmitBlkIter blkIter = {0};
   SMemTable *pMemTable = NULL;
@@ -830,9 +827,10 @@ static int tsdbInsertDataToTable(STsdbRepo* pRepo, SSubmitBlk* pBlock, int32_t *
   SMemRow lastRow = NULL;
   int64_t osize = SL_SIZE(pTableData->pData);
-  tsdbSetupSkipListHookFns(pTableData->pData, pRepo, pTable, pAffectedRows, &points, &lastRow);
+  tsdbSetupSkipListHookFns(pTableData->pData, pRepo, pTable, &points, &lastRow);
   tSkipListPutBatchByIter(pTableData->pData, &blkIter, (iter_next_fn_t)tsdbGetSubmitBlkNext);
   int64_t dsize = SL_SIZE(pTableData->pData) - osize;
+  (*pAffectedRows) += points;
 
   if(lastRow != NULL) {
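The tsdbMemTable.c change keeps the duplicate-key merge callback but trims its packed argument list (the affected-row counter is now accumulated by the caller) and refreshes the per-call pointers args[6] and args[7] on every insert instead of only when the hook is first installed. Below is a generic sketch of this "arguments packed into a void* array, invoked through one function pointer" style; all names (SavedFunc, onDuplicateKey, demo_table) are illustrative and not TDengine's actual types.

#include <stdio.h>

// A callback that receives all of its parameters through a void* array,
// mirroring the GenericVaFunc / args[] style of the skiplist insert hook.
typedef void *(*GenericVaFunc)(void **args);

static void *onDuplicateKey(void **args) {
  const char *table  = args[0];   // fixed at setup time
  int        *points = args[1];   // refreshed before every batch
  (*points)++;                    // count the merged row
  printf("duplicate key in %s, points now %d\n", table, *points);
  return NULL;
}

typedef struct {
  GenericVaFunc fn;
  void         *args[4];
} SavedFunc;

int main(void) {
  static char tableName[] = "demo_table";

  SavedFunc hook = {onDuplicateKey, {0}};
  hook.args[0] = tableName;       // set once, like pTable

  int points = 0;
  hook.args[1] = &points;         // set per call, like pPoints / pLastRow
  hook.fn(hook.args);
  hook.fn(hook.args);

  printf("total points: %d\n", points);  // prints 2
  return 0;
}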
src/tsdb/src/tsdbMeta.c

@@ -17,7 +17,6 @@
 #define TSDB_SUPER_TABLE_SL_LEVEL 5
 #define DEFAULT_TAG_INDEX_COLUMN 0
 
-static int     tsdbCompareSchemaVersion(const void *key1, const void *key2);
 static char *  getTagIndexKey(const void *pData);
 static STable *tsdbNewTable();
 static STable *tsdbCreateTableFromCfg(STableCfg *pCfg, bool isSuper, STable *pSTable);
@@ -44,6 +43,8 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable);
 static int     tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable);
 static int     tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid);
 static int     tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema);
+static int     tsdbAddSchema(STable *pTable, STSchema *pSchema);
+static void    tsdbFreeTableSchema(STable *pTable);
 
 // ------------------ OUTER FUNCTIONS ------------------
 int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
@@ -723,17 +724,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema,
   STsdbMeta *pMeta = pRepo->tsdbMeta;
 
   STable *pCTable = (TABLE_TYPE(pTable) == TSDB_CHILD_TABLE) ? pTable->pSuper : pTable;
-  ASSERT(schemaVersion(pSchema) > schemaVersion(pCTable->schema[pCTable->numOfSchemas - 1]));
+  ASSERT(schemaVersion(pSchema) > schemaVersion(*(STSchema **)taosArrayGetLast(pCTable->schema)));
 
   TSDB_WLOCK_TABLE(pCTable);
-  if (pCTable->numOfSchemas < TSDB_MAX_TABLE_SCHEMAS) {
-    pCTable->schema[pCTable->numOfSchemas++] = pSchema;
-  } else {
-    ASSERT(pCTable->numOfSchemas == TSDB_MAX_TABLE_SCHEMAS);
-    tdFreeSchema(pCTable->schema[0]);
-    memmove(pCTable->schema, pCTable->schema + 1, sizeof(STSchema *) * (TSDB_MAX_TABLE_SCHEMAS - 1));
-    pCTable->schema[pCTable->numOfSchemas - 1] = pSchema;
-  }
+  tsdbAddSchema(pCTable, pSchema);
 
   if (schemaNCols(pSchema) > pMeta->maxCols) pMeta->maxCols = schemaNCols(pSchema);
   if (schemaTLen(pSchema) > pMeta->maxRowBytes) pMeta->maxRowBytes = schemaTLen(pSchema);
@@ -829,9 +823,7 @@ static STable *tsdbCreateTableFromCfg(STableCfg *pCfg, bool isSuper, STable *pST
     TABLE_TID(pTable) = -1;
     TABLE_SUID(pTable) = -1;
     pTable->pSuper = NULL;
-    pTable->numOfSchemas = 1;
-    pTable->schema[0] = tdDupSchema(pCfg->schema);
-    if (pTable->schema[0] == NULL) {
+    if (tsdbAddSchema(pTable, tdDupSchema(pCfg->schema)) < 0) {
       terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
       goto _err;
     }
@@ -842,7 +834,8 @@ static STable *tsdbCreateTableFromCfg(STableCfg *pCfg, bool isSuper, STable *pST
     }
 
     pTable->tagVal = NULL;
     STColumn *pCol = schemaColAt(pTable->tagSchema, DEFAULT_TAG_INDEX_COLUMN);
     pTable->pIndex = tSkipListCreate(TSDB_SUPER_TABLE_SL_LEVEL, colType(pCol), (uint8_t)(colBytes(pCol)), NULL,
                                      SL_ALLOW_DUP_KEY, getTagIndexKey);
     if (pTable->pIndex == NULL) {
       terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
       goto _err;
@@ -871,9 +864,7 @@ static STable *tsdbCreateTableFromCfg(STableCfg *pCfg, bool isSuper, STable *pST
     }
   } else {
     TABLE_SUID(pTable) = -1;
-    pTable->numOfSchemas = 1;
-    pTable->schema[0] = tdDupSchema(pCfg->schema);
-    if (pTable->schema[0] == NULL) {
+    if (tsdbAddSchema(pTable, tdDupSchema(pCfg->schema)) < 0) {
      terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
      goto _err;
     }
@@ -907,9 +898,7 @@ static void tsdbFreeTable(STable *pTable) {
              TABLE_UID(pTable));
     tfree(TABLE_NAME(pTable));
     if (TABLE_TYPE(pTable) != TSDB_CHILD_TABLE) {
-      for (int i = 0; i < TSDB_MAX_TABLE_SCHEMAS; i++) {
-        tdFreeSchema(pTable->schema[i]);
-      }
+      tsdbFreeTableSchema(pTable);
 
       if (TABLE_TYPE(pTable) == TSDB_SUPER_TABLE) {
         tdFreeSchema(pTable->tagSchema);
@@ -1261,9 +1250,10 @@ static int tsdbEncodeTable(void **buf, STable *pTable) {
     tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable));
     tlen += tdEncodeKVRow(buf, pTable->tagVal);
   } else {
-    tlen += taosEncodeFixedU8(buf, pTable->numOfSchemas);
-    for (int i = 0; i < pTable->numOfSchemas; i++) {
-      tlen += tdEncodeSchema(buf, pTable->schema[i]);
+    tlen += taosEncodeFixedU8(buf, (uint8_t)taosArrayGetSize(pTable->schema));
+    for (int i = 0; i < taosArrayGetSize(pTable->schema); i++) {
+      STSchema *pSchema = taosArrayGetP(pTable->schema, i);
+      tlen += tdEncodeSchema(buf, pSchema);
     }
 
     if (TABLE_TYPE(pTable) == TSDB_SUPER_TABLE) {
@@ -1294,9 +1284,12 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) {
     buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable));
     buf = tdDecodeKVRow(buf, &(pTable->tagVal));
   } else {
-    buf = taosDecodeFixedU8(buf, &(pTable->numOfSchemas));
-    for (int i = 0; i < pTable->numOfSchemas; i++) {
-      buf = tdDecodeSchema(buf, &(pTable->schema[i]));
+    uint8_t nSchemas;
+    buf = taosDecodeFixedU8(buf, &nSchemas);
+    for (int i = 0; i < nSchemas; i++) {
+      STSchema *pSchema;
+      buf = tdDecodeSchema(buf, &pSchema);
+      tsdbAddSchema(pTable, pSchema);
     }
 
     if (TABLE_TYPE(pTable) == TSDB_SUPER_TABLE) {
@@ -1458,3 +1451,38 @@ static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema) {
   return 0;
 }
 
+static int tsdbAddSchema(STable *pTable, STSchema *pSchema) {
+  ASSERT(TABLE_TYPE(pTable) != TSDB_CHILD_TABLE);
+
+  if (pTable->schema == NULL) {
+    pTable->schema = taosArrayInit(TSDB_MAX_TABLE_SCHEMAS, sizeof(SSchema *));
+    if (pTable->schema == NULL) {
+      terrno = TAOS_SYSTEM_ERROR(errno);
+      return -1;
+    }
+  }
+
+  ASSERT(taosArrayGetSize(pTable->schema) == 0 ||
+         schemaVersion(pSchema) > schemaVersion(*(STSchema **)taosArrayGetLast(pTable->schema)));
+
+  if (taosArrayPush(pTable->schema, &pSchema) == NULL) {
+    terrno = TAOS_SYSTEM_ERROR(errno);
+    return -1;
+  }
+
+  return 0;
+}
+
+static void tsdbFreeTableSchema(STable *pTable) {
+  ASSERT(pTable != NULL);
+
+  if (pTable->schema) {
+    for (size_t i = 0; i < taosArrayGetSize(pTable->schema); i++) {
+      STSchema *pSchema = taosArrayGetP(pTable->schema, i);
+      tdFreeSchema(pSchema);
+    }
+
+    taosArrayDestroy(pTable->schema);
+  }
+}
\ No newline at end of file
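The tsdbMeta.h/tsdbMeta.c changes above replace the fixed schema[TSDB_MAX_TABLE_SCHEMAS] array plus numOfSchemas counter with a growable array of schema pointers, appended in ascending version order and looked up by version on read. Below is a small self-contained sketch of that idea using a plain malloc'ed vector instead of TDengine's SArray; the names Schema, SchemaList, schemaListAdd and schemaListGet are invented for illustration only.

#include <stdio.h>
#include <stdlib.h>

// Illustrative stand-ins for STSchema and the growable array.
typedef struct { int version; int numOfCols; } Schema;

typedef struct {
  Schema **data;
  size_t   size;
  size_t   cap;
} SchemaList;

// Append a schema; versions are assumed to arrive in strictly increasing order.
static int schemaListAdd(SchemaList *list, Schema *s) {
  if (list->size == list->cap) {
    size_t newCap = (list->cap == 0) ? 4 : list->cap * 2;
    Schema **tmp = realloc(list->data, newCap * sizeof(Schema *));
    if (tmp == NULL) return -1;
    list->data = tmp;
    list->cap = newCap;
  }
  list->data[list->size++] = s;
  return 0;
}

// version < 0 means "latest"; otherwise binary-search for the exact version.
static Schema *schemaListGet(SchemaList *list, int version) {
  if (list->size == 0) return NULL;
  if (version < 0) return list->data[list->size - 1];

  size_t lo = 0, hi = list->size;
  while (lo < hi) {
    size_t mid = (lo + hi) / 2;
    if (list->data[mid]->version < version) lo = mid + 1;
    else hi = mid;
  }
  return (lo < list->size && list->data[lo]->version == version) ? list->data[lo] : NULL;
}

int main(void) {
  SchemaList list = {0};
  Schema v1 = {1, 3}, v2 = {2, 4}, v3 = {5, 6};
  schemaListAdd(&list, &v1);
  schemaListAdd(&list, &v2);
  schemaListAdd(&list, &v3);

  printf("latest version: %d\n", schemaListGet(&list, -1)->version);  // 5
  Schema *s = schemaListGet(&list, 2);
  printf("version 2 has %d cols\n", s ? s->numOfCols : -1);           // 4

  free(list.data);
  return 0;
}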
src/tsdb/src/tsdbRead.c

@@ -466,7 +466,7 @@ static STsdbQueryHandle* tsdbQueryTablesImpl(STsdbRepo* tsdb, STsdbQueryCond* pC
     STsdbMeta* pMeta = tsdbGetMeta(tsdb);
     assert(pMeta != NULL);
 
-    pQueryHandle->pDataCols = tdNewDataCols(pMeta->maxRowBytes, pMeta->maxCols, pQueryHandle->pTsdb->config.maxRowsPerFileBlock);
+    pQueryHandle->pDataCols = tdNewDataCols(pMeta->maxCols, pQueryHandle->pTsdb->config.maxRowsPerFileBlock);
     if (pQueryHandle->pDataCols == NULL) {
       tsdbError("%p failed to malloc buf for pDataCols, %"PRIu64, pQueryHandle, pQueryHandle->qId);
       terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
@@ -1446,7 +1446,7 @@ static int doBinarySearchKey(char* pValue, int num, TSKEY key, int order) {
   return midPos;
 }
 
-int32_t doCopyRowsFromFileBlock(STsdbQueryHandle* pQueryHandle, int32_t capacity, int32_t numOfRows, int32_t start, int32_t end) {
+static int32_t doCopyRowsFromFileBlock(STsdbQueryHandle* pQueryHandle, int32_t capacity, int32_t numOfRows, int32_t start, int32_t end) {
   char* pData = NULL;
   int32_t step = ASCENDING_TRAVERSE(pQueryHandle->order)? 1 : -1;
@@ -1481,7 +1481,7 @@ int32_t doCopyRowsFromFileBlock(STsdbQueryHandle* pQueryHandle, int32_t capacity
       pData = (char*)pColInfo->pData + (capacity - numOfRows - num) * pColInfo->info.bytes;
     }
 
-    if (pColInfo->info.colId == src->colId) {
+    if (!isAllRowsNull(src) && pColInfo->info.colId == src->colId) {
       if (pColInfo->info.type != TSDB_DATA_TYPE_BINARY && pColInfo->info.type != TSDB_DATA_TYPE_NCHAR) {
         memmove(pData, (char*)src->pData + bytes * start, bytes * num);
       } else {  // handle the var-string
src/tsdb/src/tsdbReadImpl.c

@@ -42,14 +42,14 @@ int tsdbInitReadH(SReadH *pReadh, STsdbRepo *pRepo) {
     return -1;
   }
 
-  pReadh->pDCols[0] = tdNewDataCols(0, 0, pCfg->maxRowsPerFileBlock);
+  pReadh->pDCols[0] = tdNewDataCols(0, pCfg->maxRowsPerFileBlock);
   if (pReadh->pDCols[0] == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     tsdbDestroyReadH(pReadh);
     return -1;
   }
 
-  pReadh->pDCols[1] = tdNewDataCols(0, 0, pCfg->maxRowsPerFileBlock);
+  pReadh->pDCols[1] = tdNewDataCols(0, pCfg->maxRowsPerFileBlock);
   if (pReadh->pDCols[1] == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     tsdbDestroyReadH(pReadh);
@@ -463,7 +463,7 @@ static int tsdbLoadBlockDataImpl(SReadH *pReadh, SBlock *pBlock, SDataCols *pDat
     SDataCol *pDataCol = &(pDataCols->cols[dcol]);
     if (dcol != 0 && ccol >= pBlockData->numOfCols) {
       // Set current column as NULL and forward
-      dataColSetNEleNull(pDataCol, pBlock->numOfRows, pDataCols->maxPoints);
+      dataColReset(pDataCol);
       dcol++;
       continue;
     }
@@ -503,7 +503,7 @@ static int tsdbLoadBlockDataImpl(SReadH *pReadh, SBlock *pBlock, SDataCols *pDat
       ccol++;
     } else {
       // Set current column as NULL and forward
-      dataColSetNEleNull(pDataCol, pBlock->numOfRows, pDataCols->maxPoints);
+      dataColReset(pDataCol);
       dcol++;
     }
   }
@@ -608,7 +608,7 @@ static int tsdbLoadBlockDataColsImpl(SReadH *pReadh, SBlock *pBlock, SDataCols *
     }
 
     if (pBlockCol == NULL) {
-      dataColSetNEleNull(pDataCol, pBlock->numOfRows, pDataCols->maxPoints);
+      dataColReset(pDataCol);
       continue;
     }
tests/pytest/alter/alterColMultiTimes.py
0 → 100644

###################################################################
#           Copyright (c) 2016 by TAOS Technologies, Inc.
#                     All rights reserved.
#
#  This file is proprietary and confidential to TAOS Technologies.
#  No part of this file may be reproduced, stored, transmitted,
#  disclosed or used in any form or by any means other than as
#  expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import random
import string
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def genColList(self):
        '''
            generate column list
        '''
        col_list = list()
        for i in range(1, 18):
            col_list.append(f'c{i}')
        return col_list

    def genIncreaseValue(self, input_value):
        '''
            add ', 1' to end of value every loop
        '''
        value_list = list(input_value)
        value_list.insert(-1, ", 1")
        return ''.join(value_list)

    def insertAlter(self):
        '''
            after each alter and insert, when execute 'select * from {tbname};' taosd will coredump
        '''
        tbname = ''.join(random.choice(string.ascii_letters.lower()) for i in range(7))
        input_value = '(now, 1)'
        tdSql.execute(f'create table {tbname} (ts timestamp, c0 int);')
        tdSql.execute(f'insert into {tbname} values {input_value};')
        for col in self.genColList():
            input_value = self.genIncreaseValue(input_value)
            tdSql.execute(f'alter table {tbname} add column {col} int;')
            tdSql.execute(f'insert into {tbname} values {input_value};')
        tdSql.query(f'select * from {tbname};')
        tdSql.checkRows(18)

    def run(self):
        tdSql.prepare()
        self.insertAlter()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
tests/pytest/fulltest.sh

@@ -168,11 +168,11 @@ python3 test.py -f tools/taosdemoTestInterlace.py
 python3 test.py -f tools/taosdemoTestQuery.py
 
 # nano support
-python3 test.py -f tools/taosdemoAllTest/taosdemoTestSupportNanoInsert.py
-python3 test.py -f tools/taosdemoAllTest/taosdemoTestSupportNanoQuery.py
-#python3 test.py -f tools/taosdemoAllTest/taosdemoTestSupportNanosubscribe.py
-python3 test.py -f tools/taosdemoAllTest/taosdemoTestInsertTime_step.py
-python3 test.py -f tools/taosdumpTestNanoSupport.py
+python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoInsert.py
+python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py
+python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py
+python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
+python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py
 
 # update
 python3 ./test.py -f update/allow_update.py
@@ -381,9 +381,10 @@ python3 ./test.py -f query/querySession.py
 python3 test.py -f alter/alter_create_exception.py
 python3 ./test.py -f insert/flushwhiledrop.py
 python3 ./test.py -f insert/schemalessInsert.py
+python3 ./test.py -f alter/alterColMultiTimes.py
 #======================p4-end===============
 
 python3 test.py -f tools/taosdemoAllTest/pytest.py
tests/pytest/insert/schemalessInsert.py

@@ -705,7 +705,7 @@ class TDTestCase:
         case no id when stb exist
         """
         self.cleanStb()
-        input_sql, stb_name = self.genFullTypeSql(t0="f", c0="f")
+        input_sql, stb_name = self.genFullTypeSql(tb_name="sub_table_0123456", t0="f", c0="f")
         self.resCmp(input_sql, stb_name)
         input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, id_noexist_tag=True, t0="f", c0="f")
         self.resCmp(input_sql, stb_name, condition='where tbname like "t_%"')
tests/pytest/tools/taosdemoAllTest/NanoTestCase/nano_samples.csv
0 → 100644
8.855,"binary_str0" ,1626870128248246976
8.75,"binary_str1" ,1626870128249060032
5.44,"binary_str2" ,1626870128249067968
8.45,"binary_str3" ,1626870128249072064
4.07,"binary_str4" ,1626870128249075904
6.97,"binary_str5" ,1626870128249078976
6.86,"binary_str6" ,1626870128249082048
1.585,"binary_str7" ,1626870128249085120
1.4,"binary_str8" ,1626870128249087936
5.135,"binary_str9" ,1626870128249092032
3.15,"binary_str10" ,1626870128249095104
1.765,"binary_str11" ,1626870128249097920
7.71,"binary_str12" ,1626870128249100992
3.91,"binary_str13" ,1626870128249104064
5.615,"binary_str14" ,1626870128249106880
9.495,"binary_str15" ,1626870128249109952
3.825,"binary_str16" ,1626870128249113024
1.94,"binary_str17" ,1626870128249117120
5.385,"binary_str18" ,1626870128249119936
7.075,"binary_str19" ,1626870128249123008
5.715,"binary_str20" ,1626870128249126080
1.83,"binary_str21" ,1626870128249128896
6.365,"binary_str22" ,1626870128249131968
6.55,"binary_str23" ,1626870128249135040
6.315,"binary_str24" ,1626870128249138112
3.82,"binary_str25" ,1626870128249140928
2.455,"binary_str26" ,1626870128249145024
7.795,"binary_str27" ,1626870128249148096
2.47,"binary_str28" ,1626870128249150912
1.37,"binary_str29" ,1626870128249155008
5.39,"binary_str30" ,1626870128249158080
5.13,"binary_str31" ,1626870128249160896
4.09,"binary_str32" ,1626870128249163968
5.855,"binary_str33" ,1626870128249167040
0.17,"binary_str34" ,1626870128249170112
1.955,"binary_str35" ,1626870128249173952
0.585,"binary_str36" ,1626870128249178048
0.33,"binary_str37" ,1626870128249181120
7.925,"binary_str38" ,1626870128249183936
9.685,"binary_str39" ,1626870128249187008
2.6,"binary_str40" ,1626870128249191104
5.705,"binary_str41" ,1626870128249193920
3.965,"binary_str42" ,1626870128249196992
4.43,"binary_str43" ,1626870128249200064
8.73,"binary_str44" ,1626870128249202880
3.105,"binary_str45" ,1626870128249205952
9.39,"binary_str46" ,1626870128249209024
2.825,"binary_str47" ,1626870128249212096
9.675,"binary_str48" ,1626870128249214912
9.99,"binary_str49" ,1626870128249217984
4.51,"binary_str50" ,1626870128249221056
4.94,"binary_str51" ,1626870128249223872
7.72,"binary_str52" ,1626870128249226944
4.135,"binary_str53" ,1626870128249231040
2.325,"binary_str54" ,1626870128249234112
4.585,"binary_str55" ,1626870128249236928
8.76,"binary_str56" ,1626870128249240000
4.715,"binary_str57" ,1626870128249243072
0.56,"binary_str58" ,1626870128249245888
5.35,"binary_str59" ,1626870128249249984
5.075,"binary_str60" ,1626870128249253056
6.665,"binary_str61" ,1626870128249256128
7.13,"binary_str62" ,1626870128249258944
2.775,"binary_str63" ,1626870128249262016
5.775,"binary_str64" ,1626870128249265088
1.62,"binary_str65" ,1626870128249267904
1.625,"binary_str66" ,1626870128249270976
8.15,"binary_str67" ,1626870128249274048
0.75,"binary_str68" ,1626870128249277120
3.265,"binary_str69" ,1626870128249280960
8.585,"binary_str70" ,1626870128249284032
1.88,"binary_str71" ,1626870128249287104
8.44,"binary_str72" ,1626870128249289920
5.12,"binary_str73" ,1626870128249295040
2.58,"binary_str74" ,1626870128249298112
9.42,"binary_str75" ,1626870128249300928
1.765,"binary_str76" ,1626870128249304000
2.66,"binary_str77" ,1626870128249308096
1.405,"binary_str78" ,1626870128249310912
5.595,"binary_str79" ,1626870128249315008
2.28,"binary_str80" ,1626870128249318080
9.24,"binary_str81" ,1626870128249320896
9.03,"binary_str82" ,1626870128249323968
6.055,"binary_str83" ,1626870128249327040
1.74,"binary_str84" ,1626870128249330112
5.77,"binary_str85" ,1626870128249332928
1.97,"binary_str86" ,1626870128249336000
0.3,"binary_str87" ,1626870128249339072
7.145,"binary_str88" ,1626870128249342912
0.88,"binary_str89" ,1626870128249345984
8.025,"binary_str90" ,1626870128249349056
4.81,"binary_str91" ,1626870128249351872
0.725,"binary_str92" ,1626870128249355968
3.85,"binary_str93" ,1626870128249359040
9.455,"binary_str94" ,1626870128249362112
2.265,"binary_str95" ,1626870128249364928
3.985,"binary_str96" ,1626870128249368000
9.375,"binary_str97" ,1626870128249371072
0.2,"binary_str98" ,1626870128249373888
6.95,"binary_str99" ,1626870128249377984
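Note: each sample row above appears to be a DOUBLE value, a BINARY string and an epoch timestamp in nanoseconds, which lines up with the column list declared in taosdemoTestNanoDatabasecsv.json and taosdemoTestNanoDatabaseInsertForSub.json further below. A minimal sketch of reading one row back (not part of this commit; the field meaning is assumed from those configs):

    # hypothetical helper, not in the repo: parse nano_samples.csv rows
    import csv
    from datetime import datetime, timezone

    with open("nano_samples.csv") as f:
        for value, label, ts_ns in csv.reader(f):
            ns = int(ts_ns)
            # nanosecond epoch -> datetime (sub-microsecond part kept separately)
            ts = datetime.fromtimestamp(ns / 1_000_000_000, tz=timezone.utc)
            print(float(value), label.strip(), ts, ns % 1_000_000_000)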
tests/pytest/tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv
0 → 100644
"string0",7,8.615
"string1",4,9.895
"string2",3,2.92
"string3",3,5.62
"string4",7,1.615
"string5",6,1.45
"string6",5,7.48
"string7",7,3.01
"string8",5,4.76
"string9",10,7.09
"string10",2,8.38
"string11",7,8.65
"string12",5,5.025
"string13",10,5.765
"string14",2,4.57
"string15",2,1.03
"string16",7,6.98
"string17",10,0.23
"string18",7,5.815
"string19",1,2.37
"string20",10,8.865
"string21",3,1.235
"string22",2,8.62
"string23",9,1.045
"string24",8,4.34
"string25",1,5.455
"string26",2,4.475
"string27",1,6.95
"string28",2,3.39
"string29",3,6.79
"string30",7,9.735
"string31",1,9.79
"string32",10,9.955
"string33",1,5.095
"string34",3,3.86
"string35",9,5.105
"string36",10,4.22
"string37",1,2.78
"string38",9,6.345
"string39",1,0.975
"string40",5,6.16
"string41",4,7.735
"string42",5,6.6
"string43",8,2.845
"string44",1,0.655
"string45",3,2.995
"string46",9,3.6
"string47",8,3.47
"string48",3,7.98
"string49",6,2.225
"string50",9,5.44
"string51",4,6.335
"string52",3,2.955
"string53",1,0.565
"string54",6,5.575
"string55",6,9.905
"string56",9,6.025
"string57",8,0.94
"string58",10,0.15
"string59",8,1.555
"string60",4,2.28
"string61",2,8.29
"string62",9,6.22
"string63",6,3.35
"string64",10,6.7
"string65",3,9.345
"string66",7,9.815
"string67",1,5.365
"string68",10,3.81
"string69",1,6.405
"string70",8,2.715
"string71",3,8.58
"string72",8,6.34
"string73",2,7.49
"string74",4,8.64
"string75",3,8.995
"string76",7,3.465
"string77",1,7.64
"string78",6,3.65
"string79",6,1.4
"string80",6,5.875
"string81",2,1.22
"string82",5,7.87
"string83",9,8.41
"string84",9,8.9
"string85",9,3.89
"string86",2,5.0
"string87",2,4.495
"string88",4,2.835
"string89",3,5.895
"string90",7,8.41
"string91",5,5.125
"string92",7,9.165
"string93",5,8.315
"string94",10,7.485
"string95",7,4.635
"string96",2,6.015
"string97",8,0.595
"string98",3,8.79
"string99",4,1.72
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "testdb3", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ms",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        }]
    }]
}
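Note: taosdemoInsertMSDB.json, taosdemoInsertUSDB.json and taosdemoInsertNanoDB.json differ only in the database name (testdb3/testdb2/testdb1) and the "precision" field (ms/us/ns); timestamp_step is 1000 in each, i.e. 1000 units of the database's own precision. A sketch of how taosdemoTestInsertTime_step.py below feeds them to taosdemo (not from the commit; binPath is an assumption):

    # sketch only: the test derives binPath from the build tree
    import os
    binPath = "./build/bin/"  # assumption
    for cfg in ("taosdemoInsertNanoDB.json", "taosdemoInsertUSDB.json", "taosdemoInsertMSDB.json"):
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/%s -y" % (binPath, cfg))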
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "testdb1", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ns",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "testdb2", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "us",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
0 → 100644
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert: create one or mutiple tables per sql and insert multiple rows per sql
        # check the params of taosdemo about time_step is nano
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json -y " % binPath)
        tdSql.execute("use testdb1")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.getData(9, 1)
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000099000")

        # check the params of taosdemo about time_step is us
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json -y " % binPath)
        tdSql.execute("use testdb2")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.getData(9, 1)
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.099000")

        # check the params of taosdemo about time_step is ms
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json -y " % binPath)
        tdSql.execute("use testdb3")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:01:39.000")

        os.system("rm -rf ./res.txt")
        os.system("rm -rf ./*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
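Note: the expected last(ts) values in this test follow from 100 rows per child table starting at 2021-07-01 00:00:00 with timestamp_step 1000, so the last row sits at start + 99 * 1000 units of the database precision. A small sketch of that arithmetic (not part of the commit):

    from datetime import datetime, timedelta

    start = datetime(2021, 7, 1)
    unit_seconds = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3}
    for precision, unit in unit_seconds.items():
        # ns -> 00:00:00.000099, us -> 00:00:00.099000, ms -> 00:01:39.000
        print(precision, start + timedelta(seconds=99 * 1000 * unit))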
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "nsdb", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ns",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        },
        {
            "name": "stb1", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb1_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 10, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "subnsdb", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ns",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 10, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "samples", "insert_mode": "taosc",
            "insert_rows": 10, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count": 1}, {"type": "TIMESTAMP", "count": 1}],
            "tags": [{"type": "BINARY", "len": 16, "count": 1}, {"type": "INT"}, {"type": "DOUBLE"}]
        },
        {
            "name": "stb1", "child_table_exists": "no", "childtable_count": 10, "childtable_prefix": "tb1_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "samples", "insert_mode": "taosc",
            "insert_rows": 10, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 10, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count": 1}, {"type": "TIMESTAMP", "count": 1}],
            "tags": [{"type": "BINARY", "len": 16, "count": 1}, {"type": "INT"}, {"type": "DOUBLE"}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "nsdb2", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ns",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "rand", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 10,
            "start_timestamp": "now", "sample_format": "", "sample_file": "", "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count": 3}, {"type": "BINARY", "len": 16, "count": 2},
                        {"type": "BINARY", "len": 32, "count": 2}, {"type": "TIMESTAMP"}, {"type": "BIGINT", "count": 3},
                        {"type": "FLOAT", "count": 1}, {"type": "SMALLINT", "count": 1}, {"type": "TINYINT", "count": 1},
                        {"type": "BOOL"}, {"type": "NCHAR", "len": 16}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 5}, {"type": "NCHAR", "len": 16, "count": 1}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json
0 → 100644
{
    "filetype": "insert", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "thread_count": 10, "thread_count_create_tbl": 10,
    "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0,
    "interlace_rows": 100, "num_of_records_per_req": 1000, "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {"name": "nsdbcsv", "drop": "yes", "replica": 1, "days": 10, "cache": 50, "blocks": 8, "precision": "ns",
                   "keep": 3600, "minRows": 100, "maxRows": 4096, "comp": 2, "walLevel": 1, "cachelast": 0, "quorum": 1, "fsync": 3000, "update": 0},
        "super_tables": [{
            "name": "stb0", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb0_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "samples", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 0, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count": 1}, {"type": "TIMESTAMP", "count": 1}],
            "tags": [{"type": "BINARY", "len": 16, "count": 1}, {"type": "INT"}, {"type": "DOUBLE"}]
        },
        {
            "name": "stb1", "child_table_exists": "no", "childtable_count": 100, "childtable_prefix": "tb1_",
            "auto_create_table": "no", "batch_create_tbl_num": 20, "data_source": "samples", "insert_mode": "taosc",
            "insert_rows": 100, "childtable_offset": 0, "multi_thread_write_one_tbl": "no", "insert_interval": 0,
            "max_sql_len": 1024000, "disorder_ratio": 10, "disorder_range": 1000, "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000", "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count": 1}, {"type": "TIMESTAMP", "count": 1}],
            "tags": [{"type": "BINARY", "len": 16, "count": 1}, {"type": "INT"}, {"type": "DOUBLE"}]
        }]
    }]
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoInsert.py
0 → 100644
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert: create one or mutiple tables per sql and insert multiple rows per sql
        # insert data from a special timestamp
        # check stable stb0
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json -y " % binPath)
        tdSql.execute("use nsdb")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.990000000")

        # check stable stb1 which is insert with disord
        tdSql.query("select count (tbname) from stb1")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb1_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb1")
        tdSql.checkData(0, 0, 10000)
        # check c8 is an nano timestamp
        tdSql.query("describe stb1")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        # check insert timestamp_step is nano_second
        tdSql.query("select last(ts) from stb1")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.990000000")

        # insert data from now time
        # check stable stb0
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json -y " % binPath)
        tdSql.execute("use nsdb2")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        # check c8 is an nano timestamp
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")

        # insert by csv files and timetamp is long int , strings in ts and cols
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json -y " % binPath)
        tdSql.execute("use nsdbcsv")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(3, 1, "TIMESTAMP")
        tdSql.query("select count(*) from stb0 where ts > \"2021-07-01 00:00:00.490000000\"")
        tdSql.checkData(0, 0, 5000)
        tdSql.query("select count(*) from stb0 where ts < 1626918583000000000")
        tdSql.checkData(0, 0, 10000)

        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNano*.py.sql")

        # taosdemo test insert with command and parameter , detals show taosdemo --help
        os.system("%staosdemo -u root -P taosdata -p 6030 -a 1 -m pre -n 10 -T 20 -t 60 -o res.txt -y " % binPath)
        tdSql.query("select count(*) from test.meters")
        tdSql.checkData(0, 0, 600)

        # check taosdemo -s
        sqls_ls = [
            'drop database if exists nsdbsql;',
            'create database nsdbsql precision "ns" keep 3600 days 6 update 1;',
            'use nsdbsql;',
            'CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupdId int);',
            'CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);',
            'INSERT INTO d1001 USING METERS TAGS ("Beijng.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);',
            'INSERT INTO d1001 USING METERS TAGS ("Beijng.Chaoyang", 2) VALUES (now, 85, 32, 0.76);']

        with open("./taosdemoTestNanoCreateDB.sql", mode="a") as sql_files:
            for sql in sqls_ls:
                sql_files.write(sql + "\n")
            sql_files.close()

        sleep(10)

        os.system("%staosdemo -s taosdemoTestNanoCreateDB.sql -y " % binPath)
        tdSql.query("select count(*) from nsdbsql.meters")
        tdSql.checkData(0, 0, 2)

        os.system("rm -rf ./res.txt")
        os.system("rm -rf ./*.py.sql")
        os.system("rm -rf ./taosdemoTestNanoCreateDB.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.json
0 → 100644
{
    "filetype": "query", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "confirm_parameter_prompt": "no",
    "databases": "nsdb", "query_times": 10, "query_mode": "taosc",
    "specified_table_query": {
        "query_interval": 1, "concurrent": 2,
        "sqls": [
            {"sql": "select count(*) from stb0 where ts>\"2021-07-01 00:01:00.000000000\" ;", "result": "./query_res0.txt"},
            {"sql": "select count(*) from stb0 where ts>\"2021-07-01 00:01:00.000000000\" and ts <=\"2021-07-01 00:01:10.000000000\" ;", "result": "./query_res1.txt"},
            {"sql": "select count(*) from stb0 where ts>now-20d ;", "result": "./query_res2.txt"},
            {"sql": "select max(c10) from stb0;", "result": "./query_res3.txt"},
            {"sql": "select min(c1) from stb0;", "result": "./query_res4.txt"},
            {"sql": "select avg(c1) from stb0;", "result": "./query_res5.txt"},
            {"sql": "select count(*) from stb0 group by tbname;", "result": "./query_res6.txt"}
        ]
    },
    "super_table_query": {
        "stblname": "stb0", "query_interval": 0, "threads": 4,
        "sqls": [
            {"sql": "select count(*) from xxxx where ts>\"2021-07-01 00:01:00.000000000\" ;", "result": "./query_res_tb0.txt"},
            {"sql": "select count(*) from xxxx where ts>\"2021-07-01 00:01:00.000000000\" and ts <=\"2021-07-01 00:01:10.000000000\" ;", "result": "./query_res_tb1.txt"},
            {"sql": "select first(*) from xxxx ;", "result": "./query_res_tb2.txt"},
            {"sql": "select last(*) from xxxx;", "result": "./query_res_tb3.txt"},
            {"sql": "select last_row(*) from xxxx ;", "result": "./query_res_tb4.txt"},
            {"sql": "select max(c10) from xxxx ;", "result": "./query_res_tb5.txt"},
            {"sql": "select min(c1) from xxxx ;", "result": "./query_res_tb6.txt"},
            {"sql": "select avg(c10) from xxxx ;", "result": "./query_res_tb7.txt"}
        ]
    }
}
\ No newline at end of file
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py
0 → 100644
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # query: query test for nanoSecond with where and max min groupby order
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json -y " % binPath)
        tdSql.execute("use nsdb")

        # use where to filter
        tdSql.query("select count(*) from stb0 where ts>\"2021-07-01 00:00:00.590000000\" ")
        tdSql.checkData(0, 0, 4000)
        tdSql.query("select count(*) from stb0 where ts>\"2021-07-01 00:00:00.000000000\" and ts <=\"2021-07-01 00:00:00.590000000\" ")
        tdSql.checkData(0, 0, 5900)
        tdSql.query("select count(*) from tb0_0 where ts>\"2021-07-01 00:00:00.590000000\" ;")
        tdSql.checkData(0, 0, 40)
        tdSql.query("select count(*) from tb0_0 where ts>\"2021-07-01 00:00:00.000000000\" and ts <=\"2021-07-01 00:00:00.590000000\" ")
        tdSql.checkData(0, 0, 59)

        # select max min avg from special col
        tdSql.query("select max(c10) from stb0;")
        print("select max(c10) from stb0 : ", tdSql.getData(0, 0))
        tdSql.query("select max(c10) from tb0_0;")
        print("select max(c10) from tb0_0 : ", tdSql.getData(0, 0))
        tdSql.query("select min(c1) from stb0;")
        print("select min(c1) from stb0 : ", tdSql.getData(0, 0))
        tdSql.query("select min(c1) from tb0_0;")
        print("select min(c1) from tb0_0 : ", tdSql.getData(0, 0))
        tdSql.query("select avg(c1) from stb0;")
        print("select avg(c1) from stb0 : ", tdSql.getData(0, 0))
        tdSql.query("select avg(c1) from tb0_0;")
        print("select avg(c1) from tb0_0 : ", tdSql.getData(0, 0))
        tdSql.query("select count(*) from stb0 group by tbname;")
        tdSql.checkData(0, 0, 100)
        tdSql.checkData(10, 0, 100)

        # query : query above sqls by taosdemo and continuously
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.json -y " % binPath)

        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json -y " % binPath)
        tdSql.execute("use nsdbcsv")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(3, 1, "TIMESTAMP")
        tdSql.query("select count(*) from stb0 where ts > \"2021-07-01 00:00:00.490000000\"")
        tdSql.checkData(0, 0, 5000)
        tdSql.query("select count(*) from stb0 where ts <now -1d-1h-3s")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("select count(*) from stb0 where ts < 1626918583000000000")
        tdSql.checkData(0, 0, 10000)
        tdSql.execute('select count(*) from stb0 where c2 > 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 < 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 = 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 != 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 <> 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 > "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 < "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 = "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 != "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 <> "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where ts between "2021-07-01 00:00:00.000000000" and "2021-07-01 00:00:00.990000000"')
        tdSql.execute('select count(*) from stb0 where ts between 1625068800000000000 and 1625068801000000000')
        tdSql.query('select avg(c0) from stb0 interval(5000000000b)')
        tdSql.checkRows(1)
        tdSql.query('select avg(c0) from stb0 interval(100000000b)')
        tdSql.checkRows(10)
        tdSql.error('select avg(c0) from stb0 interval(1b)')
        tdSql.error('select avg(c0) from stb0 interval(999b)')
        tdSql.query('select avg(c0) from stb0 interval(1000b)')
        tdSql.checkRows(100)
        tdSql.query('select avg(c0) from stb0 interval(1u)')
        tdSql.checkRows(100)
        tdSql.query('select avg(c0) from stb0 interval(100000000b) sliding (100000000b)')
        tdSql.checkRows(10)

        # query : query above sqls by taosdemo and continuously
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuerycsv.json -y " % binPath)

        os.system("rm -rf ./query_res*.txt*")
        os.system("rm -rf tools/taosdemoAllTest/NanoTestCase/*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuerycsv.json
0 → 100644
{
    "filetype": "query", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "confirm_parameter_prompt": "no",
    "databases": "nsdbcsv", "query_times": 10, "query_mode": "taosc",
    "specified_table_query": {
        "query_interval": 1, "concurrent": 2,
        "sqls": [
            {"sql": "select count(*) from stb0 where ts>\"2021-07-01 00:00:00.490000000\" ;", "result": "./query_res0.txt"},
            {"sql": "select count(*) from stb0 where ts < now -22d-1h-3s ;", "result": "./query_res1.txt"},
            {"sql": "select count(*) from stb0 where ts < 1626918583000000000 ;", "result": "./query_res2.txt"},
            {"sql": "select count(*) from stb0 where c2 <> 162687012800000000;", "result": "./query_res3.txt"},
            {"sql": "select count(*) from stb0 where c2 != \"2021-07-21 20:22:08.248246976\";", "result": "./query_res4.txt"},
            {"sql": "select count(*) from stb0 where ts between \"2021-07-01 00:00:00.000000000\" and \"2021-07-01 00:00:00.990000000\";", "result": "./query_res5.txt"},
            {"sql": "select count(*) from stb0 group by tbname;", "result": "./query_res6.txt"},
            {"sql": "select count(*) from stb0 where ts between 1625068800000000000 and 1625068801000000000;", "result": "./query_res7.txt"},
            {"sql": "select avg(c0) from stb0 interval(5000000000b);", "result": "./query_res8.txt"},
            {"sql": "select avg(c0) from stb0 interval(100000000b) sliding (100000000b);", "result": "./query_res9.txt"}
        ]
    },
    "super_table_query": {
        "stblname": "stb0", "query_interval": 0, "threads": 4,
        "sqls": [
            {"sql": "select count(*) from xxxx where ts > \"2021-07-01 00:00:00.490000000\" ;", "result": "./query_res_tb0.txt"},
            {"sql": "select count(*) from xxxx where ts between \"2021-07-01 00:00:00.000000000\" and \"2021-07-01 00:00:00.990000000\" ;", "result": "./query_res_tb1.txt"},
            {"sql": "select first(*) from xxxx ;", "result": "./query_res_tb2.txt"},
            {"sql": "select last(*) from xxxx;", "result": "./query_res_tb3.txt"},
            {"sql": "select last_row(*) from xxxx ;", "result": "./query_res_tb4.txt"},
            {"sql": "select max(c0) from xxxx ;", "result": "./query_res_tb5.txt"},
            {"sql": "select min(c0) from xxxx ;", "result": "./query_res_tb6.txt"},
            {"sql": "select avg(c0) from xxxx ;", "result": "./query_res_tb7.txt"},
            {"sql": "select avg(c0) from xxxx interval(100000000b) sliding (100000000b) ;", "result": "./query_res_tb8.txt"}
        ]
    }
}
\ No newline at end of file
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json
0 → 100644
{
    "filetype": "subscribe", "cfgdir": "/etc/taos", "host": "127.0.0.1", "port": 6030,
    "user": "root", "password": "taosdata", "databases": "subnsdb", "confirm_parameter_prompt": "no",
    "specified_table_query": {
        "concurrent": 2, "mode": "sync", "interval": 10000, "restart": "yes", "keepProgress": "yes",
        "sqls": [
            {"sql": "select * from stb0 where ts>= \"2021-07-01 00:00:00.000000000\" ;", "result": "./subscribe_res0.txt"},
            {"sql": "select * from stb0 where ts < now -2d-1h-3s ;", "result": "./subscribe_res1.txt"},
            {"sql": "select * from stb0 where ts < 1626918583000000000 ;", "result": "./subscribe_res2.txt"}
        ]
    }
}
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py
0 → 100644
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import time
from datetime import datetime
import subprocess


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    # get the number of subscriptions
    def subTimes(self, filename):
        self.filename = filename
        command = 'cat %s |wc -l' % filename
        times = int(subprocess.getstatusoutput(command)[1])
        return times

    # assert results
    def assertCheck(self, filename, subResult, expectResult):
        self.filename = filename
        self.subResult = subResult
        self.expectResult = expectResult
        args0 = (filename, subResult, expectResult)
        assert subResult == expectResult, "Queryfile:%s ,result is %s != expect: %s" % args0

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # clear env
        os.system("ps -ef |grep 'taosdemoAllTest/taosdemoTestSupportNanoSubscribe.json' |grep -v 'grep' |awk '{print $2}'|xargs kill -9")
        os.system("rm -rf ./subscribe_res*")
        os.system("rm -rf ./all_subscribe_res*")

        # insert data
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json" % binPath)
        os.system("nohup %staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json &" % binPath)
        query_pid = int(subprocess.getstatusoutput('ps aux|grep "taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json" |grep -v "grep"|awk \'{print $2}\'')[1])

        # merge result files
        sleep(5)
        os.system("cat subscribe_res0.txt* > all_subscribe_res0.txt")
        os.system("cat subscribe_res1.txt* > all_subscribe_res1.txt")
        os.system("cat subscribe_res2.txt* > all_subscribe_res2.txt")

        # correct subscribeTimes testcase
        subTimes0 = self.subTimes("all_subscribe_res0.txt")
        self.assertCheck("all_subscribe_res0.txt", subTimes0, 200)
        subTimes1 = self.subTimes("all_subscribe_res1.txt")
        self.assertCheck("all_subscribe_res1.txt", subTimes1, 200)
        subTimes2 = self.subTimes("all_subscribe_res2.txt")
        self.assertCheck("all_subscribe_res2.txt", subTimes2, 200)

        # insert extral data
        tdSql.execute("use subnsdb")
        tdSql.execute("insert into tb0_0 values(now,100.1000,'subtest1',now-1s)")
        sleep(15)

        os.system("cat subscribe_res0.txt* > all_subscribe_res0.txt")
        subTimes0 = self.subTimes("all_subscribe_res0.txt")
        print("pass")
        self.assertCheck("all_subscribe_res0.txt", subTimes0, 202)

        # correct data testcase
        os.system("kill -9 %d" % query_pid)
        sleep(3)
        os.system("rm -rf ./subscribe_res*")
        os.system("rm -rf ./all_subscribe*")
        os.system("rm -rf ./*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
tests/pytest/tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py
0 → 100644
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

        self.ts = 1625068800000000000  # this is timestamp "2021-07-01 00:00:00"
        self.numberOfTables = 10
        self.numberOfRecords = 100

    def checkCommunity(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            return False
        else:
            return True

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        for root, dirs, files in os.walk(projPath):
            if ("taosdump" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def createdb(self, precision="ns"):
        tb_nums = self.numberOfTables
        per_tb_rows = self.numberOfRecords

        def build_db(precision, start_time):
            tdSql.execute("drop database if exists timedb1")
            tdSql.execute("create database timedb1 days 10 keep 365 blocks 8 precision " + "\"" + precision + "\"")
            tdSql.execute("use timedb1")
            tdSql.execute("create stable st(ts timestamp, c1 int, c2 nchar(10),c3 timestamp) tags(t1 int, t2 binary(10))")
            for tb in range(tb_nums):
                tbname = "t" + str(tb)
                tdSql.execute("create table " + tbname + " using st tags(1, 'beijing')")
                sql = "insert into " + tbname + " values"
                currts = start_time
                if precision == "ns":
                    ts_seed = 1000000000
                elif precision == "us":
                    ts_seed = 1000000
                else:
                    ts_seed = 1000
                for i in range(per_tb_rows):
                    # currts +1000ms (1000000000ns)
                    sql += "(%d, %d, 'nchar%d',%d)" % (currts + i * ts_seed, i % 100, i % 100, currts + i * 100)
                tdSql.execute(sql)

        if precision == "ns":
            start_time = 1625068800000000000
            build_db(precision, start_time)
        elif precision == "us":
            start_time = 1625068800000000
            build_db(precision, start_time)
        elif precision == "ms":
            start_time = 1625068800000
            build_db(precision, start_time)
        else:
            print("other time precision not valid , please check! ")

    def run(self):
        # clear envs
        os.system("rm -rf ./taosdumptest/")
        tdSql.execute("drop database if exists dumptmp1")
        tdSql.execute("drop database if exists dumptmp2")
        tdSql.execute("drop database if exists dumptmp3")

        if not os.path.exists("./taosdumptest/tmp1"):
            os.makedirs("./taosdumptest/dumptmp1")
        else:
            print("path exist!")
        if not os.path.exists("./taosdumptest/dumptmp2"):
            os.makedirs("./taosdumptest/dumptmp2")
        if not os.path.exists("./taosdumptest/dumptmp3"):
            os.makedirs("./taosdumptest/dumptmp3")

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosdump not found!")
        else:
            tdLog.info("taosdump found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # create nano second database
        self.createdb(precision="ns")

        # dump all data
        os.system("%staosdump --databases timedb1 -o ./taosdumptest/dumptmp1" % binPath)

        # dump part data with -S -E
        os.system('%staosdump --databases timedb1 -S 1625068810000000000 -E 1625068860000000000 -C ns -o ./taosdumptest/dumptmp2 ' % binPath)
        os.system('%staosdump --databases timedb1 -S 1625068810000000000 -o ./taosdumptest/dumptmp3 ' % binPath)

        # replace strings to dump in databases
        os.system("sed -i \"s/timedb1/dumptmp1/g\" `grep timedb1 -rl ./taosdumptest/dumptmp1`")
        os.system("sed -i \"s/timedb1/dumptmp2/g\" `grep timedb1 -rl ./taosdumptest/dumptmp2`")
        os.system("sed -i \"s/timedb1/dumptmp3/g\" `grep timedb1 -rl ./taosdumptest/dumptmp3`")

        os.system("%staosdump -i ./taosdumptest/dumptmp1" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp2" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp3" % binPath)

        # dump data and check for taosdump
        tdSql.query("select count(*) from dumptmp1.st")
        tdSql.checkData(0, 0, 1000)
        tdSql.query("select count(*) from dumptmp2.st")
        tdSql.checkData(0, 0, 510)
        tdSql.query("select count(*) from dumptmp3.st")
        tdSql.checkData(0, 0, 900)

        # check data
        origin_res = tdSql.getResult("select * from timedb1.st")
        dump_res = tdSql.getResult("select * from dumptmp1.st")
        if origin_res == dump_res:
            tdLog.info("test nano second : dump check data pass for all data!")
        else:
            tdLog.info("test nano second : dump check data failed for all data!")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000000000 and ts <= 1625068860000000000")
        dump_res = tdSql.getResult("select * from dumptmp2.st")
        if origin_res == dump_res:
            tdLog.info(" test nano second : dump check data pass for data! ")
        else:
            tdLog.info(" test nano second : dump check data failed for data !")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000000000 ")
        dump_res = tdSql.getResult("select * from dumptmp3.st")
        if origin_res == dump_res:
            tdLog.info(" test nano second : dump check data pass for data! ")
        else:
            tdLog.info(" test nano second : dump check data failed for data !")

        # us second support test case
        os.system("rm -rf ./taosdumptest/")
        tdSql.execute("drop database if exists dumptmp1")
        tdSql.execute("drop database if exists dumptmp2")
        tdSql.execute("drop database if exists dumptmp3")

        if not os.path.exists("./taosdumptest/tmp1"):
            os.makedirs("./taosdumptest/dumptmp1")
        else:
            print("path exits!")
        if not os.path.exists("./taosdumptest/dumptmp2"):
            os.makedirs("./taosdumptest/dumptmp2")
        if not os.path.exists("./taosdumptest/dumptmp3"):
            os.makedirs("./taosdumptest/dumptmp3")

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosdump not found!")
        else:
            tdLog.info("taosdump found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        self.createdb(precision="us")

        os.system("%staosdump --databases timedb1 -o ./taosdumptest/dumptmp1" % binPath)
        os.system('%staosdump --databases timedb1 -S 1625068810000000 -E 1625068860000000 -C us -o ./taosdumptest/dumptmp2 ' % binPath)
        os.system('%staosdump --databases timedb1 -S 1625068810000000 -o ./taosdumptest/dumptmp3 ' % binPath)

        os.system("sed -i \"s/timedb1/dumptmp1/g\" `grep timedb1 -rl ./taosdumptest/dumptmp1`")
        os.system("sed -i \"s/timedb1/dumptmp2/g\" `grep timedb1 -rl ./taosdumptest/dumptmp2`")
        os.system("sed -i \"s/timedb1/dumptmp3/g\" `grep timedb1 -rl ./taosdumptest/dumptmp3`")

        os.system("%staosdump -i ./taosdumptest/dumptmp1" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp2" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp3" % binPath)

        tdSql.query("select count(*) from dumptmp1.st")
        tdSql.checkData(0, 0, 1000)
        tdSql.query("select count(*) from dumptmp2.st")
        tdSql.checkData(0, 0, 510)
        tdSql.query("select count(*) from dumptmp3.st")
        tdSql.checkData(0, 0, 900)

        origin_res = tdSql.getResult("select * from timedb1.st")
        dump_res = tdSql.getResult("select * from dumptmp1.st")
        if origin_res == dump_res:
            tdLog.info("test us second : dump check data pass for all data!")
        else:
            tdLog.info("test us second : dump check data failed for all data!")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000000 and ts <= 1625068860000000")
        dump_res = tdSql.getResult("select * from dumptmp2.st")
        if origin_res == dump_res:
            tdLog.info(" test us second : dump check data pass for data! ")
        else:
            tdLog.info(" test us second : dump check data failed for data!")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000000 ")
        dump_res = tdSql.getResult("select * from dumptmp3.st")
        if origin_res == dump_res:
            tdLog.info(" test us second : dump check data pass for data! ")
        else:
            tdLog.info(" test us second : dump check data failed for data! ")

        # ms second support test case
        os.system("rm -rf ./taosdumptest/")
        tdSql.execute("drop database if exists dumptmp1")
        tdSql.execute("drop database if exists dumptmp2")
        tdSql.execute("drop database if exists dumptmp3")

        if not os.path.exists("./taosdumptest/tmp1"):
            os.makedirs("./taosdumptest/dumptmp1")
        else:
            print("path exits!")
        if not os.path.exists("./taosdumptest/dumptmp2"):
            os.makedirs("./taosdumptest/dumptmp2")
        if not os.path.exists("./taosdumptest/dumptmp3"):
            os.makedirs("./taosdumptest/dumptmp3")

        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosdump not found!")
        else:
            tdLog.info("taosdump found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        self.createdb(precision="ms")

        os.system("%staosdump --databases timedb1 -o ./taosdumptest/dumptmp1" % binPath)
        os.system('%staosdump --databases timedb1 -S 1625068810000 -E 1625068860000 -C ms -o ./taosdumptest/dumptmp2 ' % binPath)
        os.system('%staosdump --databases timedb1 -S 1625068810000 -o ./taosdumptest/dumptmp3 ' % binPath)

        os.system("sed -i \"s/timedb1/dumptmp1/g\" `grep timedb1 -rl ./taosdumptest/dumptmp1`")
        os.system("sed -i \"s/timedb1/dumptmp2/g\" `grep timedb1 -rl ./taosdumptest/dumptmp2`")
        os.system("sed -i \"s/timedb1/dumptmp3/g\" `grep timedb1 -rl ./taosdumptest/dumptmp3`")

        os.system("%staosdump -i ./taosdumptest/dumptmp1" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp2" % binPath)
        os.system("%staosdump -i ./taosdumptest/dumptmp3" % binPath)

        tdSql.query("select count(*) from dumptmp1.st")
        tdSql.checkData(0, 0, 1000)
        tdSql.query("select count(*) from dumptmp2.st")
        tdSql.checkData(0, 0, 510)
        tdSql.query("select count(*) from dumptmp3.st")
        tdSql.checkData(0, 0, 900)

        origin_res = tdSql.getResult("select * from timedb1.st")
        dump_res = tdSql.getResult("select * from dumptmp1.st")
        if origin_res == dump_res:
            tdLog.info("test ms second : dump check data pass for all data!")
        else:
            tdLog.info("test ms second : dump check data failed for all data!")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000 and ts <= 1625068860000")
        dump_res = tdSql.getResult("select * from dumptmp2.st")
        if origin_res == dump_res:
            tdLog.info(" test ms second : dump check data pass for data! ")
        else:
            tdLog.info(" test ms second : dump check data failed for data!")

        origin_res = tdSql.getResult("select * from timedb1.st where ts >=1625068810000 ")
        dump_res = tdSql.getResult("select * from dumptmp3.st")
        if origin_res == dump_res:
            tdLog.info(" test ms second : dump check data pass for data! ")
        else:
            tdLog.info(" test ms second : dump check data failed for data! ")

        os.system("rm -rf ./taosdumptest/")
        os.system("rm -rf ./dump_result.txt")
        os.system("rm -rf *.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
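Note: the 1000 / 510 / 900 row counts checked above follow from createdb(): 10 tables with 100 rows each, one row per second starting at 2021-07-01 00:00:00 (epoch 1625068800 in the database's precision). A small sketch of the -S/-E window arithmetic (not part of the commit), shown for the nanosecond case in plain seconds:

    # why -S 1625068810... -E 1625068860... restores 510 rows and -S alone restores 900
    tables, rows, start = 10, 100, 1625068800
    ts = [start + i for i in range(rows)]  # per-table timestamps, one per second
    in_window = [t for t in ts if 1625068810 <= t <= 1625068860]
    after_start = [t for t in ts if t >= 1625068810]
    print(tables * len(in_window), tables * len(after_start))  # 510 900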
tests/script/jenkins/basic.txt
...
@@ -74,7 +74,7 @@ cd ../../../debug; make
./test.sh -f general/parser/where.sim
./test.sh -f general/parser/slimit.sim
./test.sh -f general/parser/select_with_tags.sim
-#./test.sh -f general/parser/interp.sim
+./test.sh -f general/parser/interp.sim
./test.sh -f general/parser/tags_dynamically_specifiy.sim
./test.sh -f general/parser/groupby.sim
./test.sh -f general/parser/set_tag_vals.sim
...