慢慢CG / TDengine (forked from taosdata / TDengine)
Commit 524a1329, authored May 31, 2021 by Haojun Liao
[td-255] merge develop.
Parents: c284d848, 184f3976
Showing 44 changed files with 3,876 additions and 587 deletions (+3876, -587).
documentation20/cn/02.getting-started/docs.md (+8, -7)
documentation20/cn/11.administrator/docs.md (+1, -1)
documentation20/cn/12.taos-sql/01.error-code/docs.md (+1, -1)
documentation20/cn/12.taos-sql/docs.md (+8, -3)
src/client/inc/tscUtil.h (+2, -0)
src/client/jni/com_taosdata_jdbc_TSDBJNIConnector.h (+9, -1)
src/client/src/TSDBJNIConnector.c (+80, -22)
src/client/src/tscPrepare.c (+209, -25)
src/client/src/tscUtil.c (+61, -0)
src/common/inc/tdataformat.h (+1, -0)
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBJNIConnector.java (+10, -0)
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBPreparedStatement.java (+219, -3)
src/dnode/src/dnodeMain.c (+33, -6)
src/inc/taos.h (+1, -0)
src/inc/taosdef.h (+3, -1)
src/inc/taoserror.h (+1, -1)
src/inc/tsdb.h (+11, -1)
src/kit/taosdemo/taosdemo.c (+641, -410)
src/mnode/src/mnodeMain.c (+1, -1)
src/mnode/src/mnodeSdb.c (+4, -2)
src/mnode/src/mnodeTable.c (+2, -2)
src/os/inc/osDir.h (+1, -0)
src/os/src/detail/osDir.c (+4, -0)
src/os/src/detail/osTime.c (+2, -2)
src/plugins/http/src/httpUtil.c (+5, -0)
src/query/src/qExecutor.c (+36, -1)
src/tsdb/inc/tsdbCommit.h (+1, -0)
src/tsdb/inc/tsdbMeta.h (+11, -0)
src/tsdb/inc/tsdbint.h (+5, -0)
src/tsdb/src/tsdbCommit.c (+4, -2)
src/tsdb/src/tsdbCommitQueue.c (+16, -6)
src/tsdb/src/tsdbFS.c (+77, -4)
src/tsdb/src/tsdbMain.c (+333, -33)
src/tsdb/src/tsdbMemTable.c (+49, -3)
src/tsdb/src/tsdbMeta.c (+132, -1)
src/tsdb/src/tsdbRead.c (+218, -10)
src/tsdb/src/tsdbSync.c (+33, -8)
src/wal/src/walWrite.c (+2, -0)
tests/pytest/fulltest.sh (+2, -0)
tests/pytest/stable/insert.py (+10, -0)
tests/script/api/batchprepare.c (+1140, -30)
tests/script/general/parser/last_cache.sim (+71, -0)
tests/script/general/parser/last_cache_query.sim (+416, -0)
tests/script/general/parser/testSuite.sim (+2, -0)
documentation20/cn/02.getting-started/docs.md

@@ -24,7 +24,7 @@ and @@ -35,21 +35,22 @@ reformat the "Getting Started" page (spaces are added around Latin terms such as TDengine, systemctl, FQDN, hostname, taos.cfg, telemetryReporting, DNS and hosts in the Chinese text) and add one new note. The affected passage, translated:

After a successful installation, the TDengine service process can be started with `systemctl`:

```bash
$ systemctl start taosd
```

```bash
$ systemctl status taosd
```

If the TDengine service is running normally, you can use the TDengine command-line program `taos` to access and try out TDengine.

**Note:**

- The systemctl command requires _root_ privileges; if you are not _root_, prefix the command with sudo.
- To collect feedback and improve the product, TDengine gathers basic usage information. You can disable this by setting the telemetryReporting parameter in the system configuration file taos.cfg to 0.
- TDengine uses the FQDN (normally the hostname) as a node's ID. For correct operation, configure the hostname of the server running taosd, and configure DNS or the hosts file on the machines running client applications so that the FQDN can be resolved.
- (new note added by this commit) `systemctl stop taosd` does not stop the TDengine service immediately; it waits until the necessary flush-to-disk work has finished. With a large amount of data this can take a long time.

* TDengine can be installed on Linux systems that use [`systemd`](https://en.wikipedia.org/wiki/Systemd) for service management; use `which systemctl` to check whether the `systemd` package is present:

```bash
$ which systemctl
```

If systemd is not supported on the system, the TDengine service can also be started manually by running /usr/local/taos/bin/taosd.

## <a class="anchor" id="console"></a>TDengine command-line program
documentation20/cn/11.administrator/docs.md

@@ -129,7 +129,7 @@ taosd -C (translated):

- blocks: number of cache-sized memory blocks per VNODE (TSDB); a VNODE therefore uses roughly (cache * blocks) memory. Unit: blocks; default: 4. (Can be changed with ALTER DATABASE.)
- replica: number of replicas, range 1-3. Default: 1. (Can be changed with ALTER DATABASE.)
- precision: timestamp precision, ms for milliseconds or us for microseconds. Default: ms.
- cacheLast: the description is changed from "whether to cache each sub-table's last_row in memory; 0: off, 1: on; default 0 (can be changed with ALTER DATABASE; supported since 2.0.11)" to "whether to cache each sub-table's most recent data in memory; 0: off, 1: cache the sub-table's last row, 2: cache the latest non-NULL value of every column of the sub-table, 3: both; default 0 (can be changed with ALTER DATABASE; supported since 2.0.11)".

For a given application scenario, data with different characteristics may coexist. The best design is to put tables with the same data characteristics into the same database, so one application uses several databases, each configured with its own storage parameters, giving the system optimal performance. TDengine lets an application specify the above storage parameters when creating a database; if specified, they override the corresponding system configuration parameters. For example, given the following SQL:
documentation20/cn/12.taos-sql/01.error-code/docs.md

@@ -26,7 +26,7 @@ (error-code table: TSDB_CODE_TSC_INVALID_SQL is renamed to TSDB_CODE_TSC_INVALID_OPERATION)

 | TSDB_CODE_COM_OUT_OF_MEMORY | 0 | 0x0102 | "Out of memory" | -2147483390 |
 | TSDB_CODE_COM_INVALID_CFG_MSG | 0 | 0x0103 | "Invalid config message" | -2147483389 |
 | TSDB_CODE_COM_FILE_CORRUPTED | 0 | 0x0104 | "Data file corrupted" | -2147483388 |
-| TSDB_CODE_TSC_INVALID_SQL | 0 | 0x0200 | "Invalid SQL statement" | -2147483136 |
+| TSDB_CODE_TSC_INVALID_OPERATION | 0 | 0x0200 | "Invalid SQL statement" | -2147483136 |
 | TSDB_CODE_TSC_INVALID_QHANDLE | 0 | 0x0201 | "Invalid qhandle" | -2147483135 |
 | TSDB_CODE_TSC_INVALID_TIME_STAMP | 0 | 0x0202 | "Invalid combination of client/service time" | -2147483134 |
 | TSDB_CODE_TSC_INVALID_VALUE | 0 | 0x0203 | "Invalid value in client" | -2147483133 |
documentation20/cn/12.taos-sql/docs.md

@@ -76,7 +76,7 @@ Item 5) now links to the correct chapter (translated): "5) Databases have more storage-related configuration parameters; see the [server-side configuration](https://www.taosdata.com/cn/documentation/administrator#config) chapter." The link target was previously https://www.taosdata.com/cn/documentation/taos-sql#management. (Surrounding context: "4) A single SQL statement may be at most 65480 characters long;" and the following "**Show current system parameters**" item.)

@@ -126,7 +126,7 @@ The note under

```mysql
ALTER DATABASE db_name CACHELAST 0;
```

is updated. The CACHELAST parameter controls whether the last_row of data sub-tables is cached in memory; default 0, range [0, 1], where 0 disables and 1 enables it. The trailing remark changes from "(supported since 2.0.11; a change requires a server restart to take effect)" to "(supported since 2.0.11.0; since 2.1.1.0, changing this parameter takes effect without restarting the server)". ("**Tips**: after changing any of the parameters above, use SHOW DATABASES to confirm the change succeeded.")

@@ -399,7 +399,12 @@ The automatic-table-creation INSERT section gains a new form. Context:

INSERT INTO tb1_name (tb1_field1_name, ...) [USING stb1_name TAGS (tag_value1, ...)] VALUES (field1_value1, ...) (field1_value2, ...) ...
            tb2_name (tb2_field1_name, ...) [USING stb2_name TAGS (tag_value2, ...)] VALUES (field1_value1, ...) (field1_value2, ...) ...;

This inserts multiple records, by column, into tables tb1_name and tb2_name at the same time, creating the tables automatically. Added text (translated): "Since version 2.0.20.5, the sub-table's column list no longer has to follow the sub-table name; it may instead be placed between TAGS and VALUES, for example:"

```mysql
INSERT INTO tb1_name [USING stb1_name TAGS (tag_value1, ...)] (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...;
```

"Note: although both forms are accepted, they cannot be mixed within one SQL statement, otherwise a syntax error is reported."

(**Writing historical records**: either IMPORT or INSERT can be used; IMPORT has exactly the same syntax and behavior as INSERT.)
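The two INSERT forms described above can be exercised directly through the C client's taos_query(). The sketch below is illustrative only: the connection parameters, the super table ("meters" with a single tag) and the columns are assumptions made up for the example, not part of this commit, and error handling is reduced to a message.

```c
// Minimal sketch: issuing both documented INSERT forms through the public C API.
#include <stdio.h>
#include <taos.h>

static void run(TAOS *conn, const char *sql) {
  TAOS_RES *res = taos_query(conn, sql);          // execute one statement
  if (taos_errno(res) != 0) {
    fprintf(stderr, "failed: %s -> %s\n", sql, taos_errstr(res));
  }
  taos_free_result(res);
}

int main() {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", "test", 0);
  if (conn == NULL) return 1;

  // classic form: column list right after the sub-table name
  run(conn, "INSERT INTO d21001 (ts, current) USING meters TAGS (2) "
            "VALUES ('2021-07-13 14:06:32.272', 10.2)");

  // form documented since 2.0.20.5: column list between TAGS and VALUES
  run(conn, "INSERT INTO d21001 USING meters TAGS (2) (ts, current) "
            "VALUES ('2021-07-13 14:06:34.255', 10.15)");

  taos_close(conn);
  return 0;
}
```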
src/client/inc/tscUtil.h

@@ -94,6 +94,8 @@ typedef struct SVgroupTableInfo {
   SArray    *itemList;   // SArray<STableIdInfo>
 } SVgroupTableInfo;

+int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *len);
+
 int32_t tscCreateDataBlock(size_t initialSize, int32_t rowSize, int32_t startOffset, SName *name, STableMeta *pTableMeta, STableDataBlocks **dataBlocks);
 void    tscDestroyDataBlock(STableDataBlocks *pDataBlock, bool removeMeta);
 void    tscSortRemoveDataBlockDupRows(STableDataBlocks *dataBuf);
src/client/jni/com_taosdata_jdbc_TSDBJNIConnector.h

@@ -218,11 +218,19 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_executeBatchImp(J...
 /*
  * Class:     com_taosdata_jdbc_TSDBJNIConnector
- * Method:    executeBatchImp
+ * Method:    closeStmt
  * Signature: (JJ)I
  */
 JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_closeStmt(JNIEnv *env, jobject jobj, jlong stmt, jlong con);

+/**
+ * Class:     com_taosdata_jdbc_TSDBJNIConnector
+ * Method:    setTableNameTagsImp
+ * Signature: (JLjava/lang/String;I[B[B[B[BJ)I
+ */
+JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setTableNameTagsImp(JNIEnv *, jobject, jlong, jstring, jint, jbyteArray, jbyteArray, jbyteArray, jbyteArray, jlong);
+
 #ifdef __cplusplus
 }
 #endif
src/client/src/TSDBJNIConnector.c

@@ -749,7 +749,6 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setBindTableNameImp(J...
   }
   jniDebug("jobj:%p, conn:%p, set stmt bind table name:%s", jobj, tsconn, name);
-
   (*env)->ReleaseStringUTFChars(env, jname, name);
   return JNI_SUCCESS;
 }

@@ -762,7 +761,7 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_bindColDataImp(J...
     return JNI_CONNECTION_NULL;
   }
-  TAOS_STMT *pStmt = (TAOS_STMT *)stmt;
+  TAOS_STMT *pStmt = (TAOS_STMT *)stmt;
   if (pStmt == NULL) {
     jniError("jobj:%p, conn:%p, invalid stmt", jobj, tscon);
     return JNI_SQL_NULL;

@@ -777,14 +776,14 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_bindColDataImp(J...
   }

   len = (*env)->GetArrayLength(env, lengthList);
-  char *lengthArray = (char *)calloc(1, len);
-  (*env)->GetByteArrayRegion(env, lengthList, 0, len, (jbyte *)lengthArray);
+  char *lengthArray = (char *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, lengthList, 0, len, (jbyte *)lengthArray);
   if ((*env)->ExceptionCheck(env)) {
   }

   len = (*env)->GetArrayLength(env, nullList);
-  char *nullArray = (char *)calloc(1, len);
-  (*env)->GetByteArrayRegion(env, nullList, 0, len, (jbyte *)nullArray);
+  char *nullArray = (char *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, nullList, 0, len, (jbyte *)nullArray);
   if ((*env)->ExceptionCheck(env)) {
   }

@@ -799,22 +798,10 @@ JNIEXPORT jlong JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_bindColDataImp(J...
   b->length = (int32_t *)lengthArray;

   // set the length and is_null array
-  switch (dataType) {
-    case TSDB_DATA_TYPE_INT:
-    case TSDB_DATA_TYPE_TINYINT:
-    case TSDB_DATA_TYPE_SMALLINT:
-    case TSDB_DATA_TYPE_TIMESTAMP:
-    case TSDB_DATA_TYPE_BIGINT: {
-      int32_t bytes = tDataTypes[dataType].bytes;
-      for (int32_t i = 0; i < numOfRows; ++i) {
-        b->length[i] = bytes;
-      }
-      break;
-    }
-    case TSDB_DATA_TYPE_NCHAR:
-    case TSDB_DATA_TYPE_BINARY: {
-      // do nothing
+  if (!IS_VAR_DATA_TYPE(dataType)) {
+    int32_t bytes = tDataTypes[dataType].bytes;
+    for (int32_t i = 0; i < numOfRows; ++i) {
+      b->length[i] = bytes;
+    }
+  }

@@ -878,3 +865,74 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_closeStmt(JNIEnv ...
   jniDebug("jobj:%p, conn:%p, stmt closed", jobj, tscon);
   return JNI_SUCCESS;
 }

+JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_setTableNameTagsImp(JNIEnv *env, jobject jobj, jlong stmt, jstring tableName, jint numOfTags,
+    jbyteArray tags, jbyteArray typeList, jbyteArray lengthList, jbyteArray nullList, jlong conn) {
+  TAOS *tsconn = (TAOS *)conn;
+  if (tsconn == NULL) {
+    jniError("jobj:%p, connection already closed", jobj);
+    return JNI_CONNECTION_NULL;
+  }
+
+  TAOS_STMT *pStmt = (TAOS_STMT *)stmt;
+  if (pStmt == NULL) {
+    jniError("jobj:%p, conn:%p, invalid stmt handle", jobj, tsconn);
+    return JNI_SQL_NULL;
+  }
+
+  jsize len = (*env)->GetArrayLength(env, tags);
+  char *tagsData = (char *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, tags, 0, len, (jbyte *)tagsData);
+  if ((*env)->ExceptionCheck(env)) {
+    // todo handle error
+  }
+
+  len = (*env)->GetArrayLength(env, lengthList);
+  int64_t *lengthArray = (int64_t *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, lengthList, 0, len, (jbyte *)lengthArray);
+  if ((*env)->ExceptionCheck(env)) {
+  }
+
+  len = (*env)->GetArrayLength(env, typeList);
+  char *typeArray = (char *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, typeList, 0, len, (jbyte *)typeArray);
+  if ((*env)->ExceptionCheck(env)) {
+  }
+
+  len = (*env)->GetArrayLength(env, nullList);
+  int32_t *nullArray = (int32_t *)calloc(1, len);
+  (*env)->GetByteArrayRegion(env, nullList, 0, len, (jbyte *)nullArray);
+  if ((*env)->ExceptionCheck(env)) {
+  }
+
+  const char *name = (*env)->GetStringUTFChars(env, tableName, NULL);
+  char *curTags = tagsData;
+
+  TAOS_BIND *tagsBind = calloc(numOfTags, sizeof(TAOS_BIND));
+  for (int32_t i = 0; i < numOfTags; ++i) {
+    tagsBind[i].buffer_type = typeArray[i];
+    tagsBind[i].buffer = curTags;
+    tagsBind[i].is_null = &nullArray[i];
+    tagsBind[i].length = (uintptr_t *)&lengthArray[i];
+
+    curTags += lengthArray[i];
+  }
+
+  int32_t code = taos_stmt_set_tbname_tags((void *)stmt, name, tagsBind);
+
+  int32_t nTags = (int32_t)numOfTags;
+  jniDebug("jobj:%p, conn:%p, set table name:%s, numOfTags:%d", jobj, tsconn, name, nTags);
+
+  tfree(tagsData);
+  tfree(lengthArray);
+  tfree(typeArray);
+  tfree(nullArray);
+  (*env)->ReleaseStringUTFChars(env, tableName, name);
+
+  if (code != TSDB_CODE_SUCCESS) {
+    jniError("jobj:%p, conn:%p, code:%s", jobj, tsconn, tstrerror(code));
+    return JNI_TDENGINE_ERROR;
+  }
+
+  return JNI_SUCCESS;
+}
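For reference, a minimal stand-alone sketch of the byte layout this JNI entry point expects from the JDBC driver: one type byte per tag, one 8-byte length per tag, one 4-byte null flag per tag, and the raw tag values concatenated in binding order. The two sample tags (an INT and a 4-byte BINARY) and the numeric type IDs noted in the comments are assumptions for illustration, not something defined by this commit.

```c
// Sketch of the packed tag buffers and the walk setTableNameTagsImp performs over them.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main() {
  // packed "tags" payload: an int32 value 7, then the 4 bytes "beij"
  char    data[8];
  int32_t groupId = 7;
  memcpy(data, &groupId, 4);
  memcpy(data + 4, "beij", 4);

  char    types[2]   = {4 /* assumed TSDB_DATA_TYPE_INT */, 8 /* assumed TSDB_DATA_TYPE_BINARY */};
  int64_t lengths[2] = {4, 4};   // 8 bytes per tag on the wire
  int32_t nulls[2]   = {0, 0};   // 4 bytes per tag on the wire

  // walk the payload the same way the loop above builds one TAOS_BIND per tag
  char *cur = data;
  for (int i = 0; i < 2; ++i) {
    printf("tag %d: type=%d, len=%lld, null=%d, first byte=0x%02x\n",
           i, types[i], (long long)lengths[i], nulls[i], (unsigned char)cur[0]);
    cur += lengths[i];           // advance by this tag's length
  }
  return 0;
}
```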
src/client/src/tscPrepare.c

@@ -46,9 +46,13 @@ typedef struct SNormalStmt {
 typedef struct SMultiTbStmt {
   bool      nameSet;
+  bool      tagSet;
   uint64_t  currentUid;
   uint32_t  tbNum;
   SStrToken tbname;
+  SStrToken stbname;
+  SStrToken values;
+  SArray   *tags;
   SHashObj *pTableHash;
   SHashObj *pTableBlockHashList;  // data block for each table
 } SMultiTbStmt;

@@ -1199,6 +1203,184 @@ static int insertBatchStmtExecute(STscStmt* pStmt) {
   return pStmt->pSql->res.code;
 }

+int stmtParseInsertTbTags(SSqlObj* pSql, STscStmt* pStmt) {
+  SSqlCmd *pCmd = &pSql->cmd;
+  int32_t  ret  = TSDB_CODE_SUCCESS;
+
+  if ((ret = tsInsertInitialCheck(pSql)) != TSDB_CODE_SUCCESS) {
+    return ret;
+  }
+
+  int32_t index = 0;
+  SStrToken sToken = tStrGetToken(pCmd->curSql, &index, false);
+  if (sToken.n == 0) {
+    return TSDB_CODE_TSC_INVALID_OPERATION;
+  }
+
+  if (sToken.n == 1 && sToken.type == TK_QUESTION) {
+    pStmt->multiTbInsert = true;
+    pStmt->mtb.tbname = sToken;
+    pStmt->mtb.nameSet = false;
+    if (pStmt->mtb.pTableHash == NULL) {
+      pStmt->mtb.pTableHash = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, false);
+    }
+    if (pStmt->mtb.pTableBlockHashList == NULL) {
+      pStmt->mtb.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
+    }
+
+    pStmt->mtb.tagSet = true;
+
+    sToken = tStrGetToken(pCmd->curSql, &index, false);
+    if (sToken.n > 0 && sToken.type == TK_VALUES) {
+      return TSDB_CODE_SUCCESS;
+    }
+
+    if (sToken.n <= 0 || sToken.type != TK_USING) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+
+    sToken = tStrGetToken(pCmd->curSql, &index, false);
+    if (sToken.n <= 0 || ((sToken.type != TK_ID) && (sToken.type != TK_STRING))) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+    pStmt->mtb.stbname = sToken;
+
+    sToken = tStrGetToken(pCmd->curSql, &index, false);
+    if (sToken.n <= 0 || sToken.type != TK_TAGS) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+
+    sToken = tStrGetToken(pCmd->curSql, &index, false);
+    if (sToken.n <= 0 || sToken.type != TK_LP) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+
+    pStmt->mtb.tags = taosArrayInit(4, sizeof(SStrToken));
+
+    int32_t loopCont = 1;
+    while (loopCont) {
+      sToken = tStrGetToken(pCmd->curSql, &index, false);
+      if (sToken.n <= 0) {
+        return TSDB_CODE_TSC_INVALID_OPERATION;
+      }
+
+      switch (sToken.type) {
+        case TK_RP:
+          loopCont = 0;
+          break;
+        case TK_VALUES:
+          return TSDB_CODE_TSC_INVALID_OPERATION;
+        case TK_QUESTION:
+          pStmt->mtb.tagSet = false;  //continue
+        default:
+          taosArrayPush(pStmt->mtb.tags, &sToken);
+          break;
+      }
+    }
+
+    if (taosArrayGetSize(pStmt->mtb.tags) <= 0) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+
+    sToken = tStrGetToken(pCmd->curSql, &index, false);
+    if (sToken.n <= 0 || sToken.type != TK_VALUES) {
+      return TSDB_CODE_TSC_INVALID_OPERATION;
+    }
+
+    pStmt->mtb.values = sToken;
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
+
+int stmtGenInsertStatement(SSqlObj* pSql, STscStmt* pStmt, const char* name, TAOS_BIND* tags) {
+  size_t  tagNum = taosArrayGetSize(pStmt->mtb.tags);
+  size_t  size = 1048576;
+  char   *str = calloc(1, size);
+  size_t  len = 0;
+  int32_t ret = 0;
+  int32_t j = 0;
+
+  while (1) {
+    len = (size_t)snprintf(str, size - 1, "insert into %s using %.*s tags(", name, pStmt->mtb.stbname.n, pStmt->mtb.stbname.z);
+    if (len >= (size - 1)) {
+      size *= 2;
+      free(str);
+      str = calloc(1, size);
+      continue;
+    }
+
+    j = 0;
+
+    for (size_t i = 0; i < tagNum && len < (size - 1); ++i) {
+      SStrToken *t = taosArrayGet(pStmt->mtb.tags, i);
+      if (t->type == TK_QUESTION) {
+        int32_t l = 0;
+        if (i > 0) {
+          str[len++] = ',';
+        }
+
+        if (tags[j].is_null && (*tags[j].is_null)) {
+          ret = converToStr(str + len, TSDB_DATA_TYPE_NULL, NULL, -1, &l);
+        } else {
+          if (tags[j].buffer == NULL) {
+            free(str);
+            tscError("empty");
+            return TSDB_CODE_TSC_APP_ERROR;
+          }
+
+          ret = converToStr(str + len, tags[j].buffer_type, tags[j].buffer, tags[j].length ? (int32_t)*tags[j].length : -1, &l);
+        }
+
+        ++j;
+
+        if (ret) {
+          free(str);
+          return ret;
+        }
+
+        len += l;
+      } else {
+        len += (size_t)snprintf(str + len, size - len - 1, i > 0 ? ",%.*s" : "%.*s", t->n, t->z);
+      }
+    }
+
+    if (len >= (size - 1)) {
+      size *= 2;
+      free(str);
+      str = calloc(1, size);
+      continue;
+    }
+
+    strcat(str, ") ");
+    len += 2;
+
+    if ((len + strlen(pStmt->mtb.values.z)) >= (size - 1)) {
+      size *= 2;
+      free(str);
+      str = calloc(1, size);
+      continue;
+    }
+
+    strcat(str, pStmt->mtb.values.z);
+
+    break;
+  }
+
+  free(pSql->sqlstr);
+  pSql->sqlstr = str;
+
+  return TSDB_CODE_SUCCESS;
+}
+
 ////////////////////////////////////////////////////////////////////////////////
 // interface functions

@@ -1291,34 +1473,15 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
     registerSqlObj(pSql);

-    int32_t ret = TSDB_CODE_SUCCESS;
-    if ((ret = tsInsertInitialCheck(pSql)) != TSDB_CODE_SUCCESS) {
+    int32_t ret = stmtParseInsertTbTags(pSql, pStmt);
+    if (ret != TSDB_CODE_SUCCESS) {
       return ret;
     }

-    int32_t index = 0;
-    SStrToken sToken = tStrGetToken(pCmd->insertParam.sql, &index, false);
-    if (sToken.n == 0) {
-      return TSDB_CODE_TSC_INVALID_OPERATION;
-    }
-
-    if (sToken.n == 1 && sToken.type == TK_QUESTION) {
-      pStmt->multiTbInsert = true;
-      pStmt->mtb.tbname = sToken;
-      pStmt->mtb.nameSet = false;
-      if (pStmt->mtb.pTableHash == NULL) {
-        pStmt->mtb.pTableHash = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, false);
-      }
-      if (pStmt->mtb.pTableBlockHashList == NULL) {
-        pStmt->mtb.pTableBlockHashList = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, false);
-      }
-
+    if (pStmt->multiTbInsert) {
       return TSDB_CODE_SUCCESS;
     }

     pStmt->multiTbInsert = false;
     memset(&pStmt->mtb, 0, sizeof(pStmt->mtb));

     int32_t code = tsParseSql(pSql, true);

@@ -1335,7 +1498,7 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
   return normalStmtPrepare(pStmt);
 }

-int taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name) {
+int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags) {
   STscStmt* pStmt = (STscStmt*)stmt;
   SSqlObj* pSql = pStmt->pSql;
   SSqlCmd* pCmd = &pSql->cmd;

@@ -1383,8 +1546,22 @@
       return TSDB_CODE_SUCCESS;
     }

-    pStmt->mtb.tbname = tscReplaceStrToken(&pSql->sqlstr, &pStmt->mtb.tbname, name);
+    if (pStmt->mtb.tagSet) {
+      pStmt->mtb.tbname = tscReplaceStrToken(&pSql->sqlstr, &pStmt->mtb.tbname, name);
+    } else {
+      if (tags == NULL) {
+        tscError("No tags set");
+        return TSDB_CODE_TSC_APP_ERROR;
+      }
+
+      int32_t ret = stmtGenInsertStatement(pSql, pStmt, name, tags);
+      if (ret != TSDB_CODE_SUCCESS) {
+        return ret;
+      }
+    }
+
     pStmt->mtb.nameSet = true;
+    pStmt->mtb.tagSet = true;

     tscDebug("0x%" PRIx64 " SQL: %s", pSql->self, pSql->sqlstr);

@@ -1432,6 +1609,12 @@
   return code;
 }

+int taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name) {
+  return taos_stmt_set_tbname_tags(stmt, name, NULL);
+}
+
 int taos_stmt_close(TAOS_STMT* stmt) {
   STscStmt* pStmt = (STscStmt*)stmt;
   if (!pStmt->isInsert) {

@@ -1449,7 +1632,8 @@ int taos_stmt_close(TAOS_STMT* stmt) {
       taosHashCleanup(pStmt->mtb.pTableHash);
       pStmt->mtb.pTableBlockHashList = tscDestroyBlockHashTable(pStmt->mtb.pTableBlockHashList, true);
       taosHashCleanup(pStmt->pSql->cmd.insertParam.pTableBlockHashList);
       pStmt->pSql->cmd.insertParam.pTableNameList = NULL;
       pStmt->pSql->cmd.insertParam.pTableBlockHashList = NULL;
+      taosArrayDestroy(pStmt->mtb.tags);
     }
   }
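A minimal usage sketch of the client API wired up above, taos_stmt_set_tbname_tags: prepare an insert whose table name and tag values are both `?` placeholders, then bind the sub-table name, its tags, and one row of column values. The super table "meters", its columns, the connection parameters, and the data values are assumptions chosen for illustration, and error handling is trimmed; this is a sketch of the calling pattern, not part of the commit.

```c
// Sketch: multi-table parameter binding with the new taos_stmt_set_tbname_tags().
#include <stdio.h>
#include <string.h>
#include <taos.h>

int main() {
  taos_init();
  TAOS *conn = taos_connect("localhost", "root", "taosdata", "test", 0);
  if (conn == NULL) { fprintf(stderr, "connect failed\n"); return 1; }

  TAOS_STMT *stmt = taos_stmt_init(conn);
  const char *sql = "insert into ? using meters tags(?) values(?, ?)";
  taos_stmt_prepare(stmt, sql, (unsigned long)strlen(sql));

  // one tag column, assumed to be groupId INT
  int       group_id = 7;
  uintptr_t tag_len  = sizeof(group_id);
  TAOS_BIND tags[1]  = {0};
  tags[0].buffer_type = TSDB_DATA_TYPE_INT;
  tags[0].buffer      = &group_id;
  tags[0].length      = &tag_len;

  // resolve the "?" table name and the "?" tag placeholders in one call
  taos_stmt_set_tbname_tags(stmt, "d1001", tags);

  // bind one row, assumed columns (ts TIMESTAMP, current INT)
  int64_t   ts = 1622419200000;                       // millisecond precision
  int       current = 12;
  uintptr_t ts_len = sizeof(ts), cur_len = sizeof(current);
  TAOS_BIND cols[2] = {0};
  cols[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP; cols[0].buffer = &ts;      cols[0].length = &ts_len;
  cols[1].buffer_type = TSDB_DATA_TYPE_INT;       cols[1].buffer = &current; cols[1].length = &cur_len;

  taos_stmt_bind_param(stmt, cols);
  taos_stmt_add_batch(stmt);
  if (taos_stmt_execute(stmt) != 0) {
    fprintf(stderr, "execute failed\n");
  }

  taos_stmt_close(stmt);
  taos_close(conn);
  return 0;
}
```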
src/client/src/tscUtil.c

@@ -32,6 +32,67 @@
 static void freeQueryInfoImpl(SQueryInfo* pQueryInfo);

+int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *len) {
+  int32_t n = 0;
+
+  switch (type) {
+    case TSDB_DATA_TYPE_NULL:
+      n = sprintf(str, "null");
+      break;
+
+    case TSDB_DATA_TYPE_BOOL:
+      n = sprintf(str, (*(int8_t*)buf) ? "true" : "false");
+      break;
+
+    case TSDB_DATA_TYPE_TINYINT:
+      n = sprintf(str, "%d", *(int8_t*)buf);
+      break;
+
+    case TSDB_DATA_TYPE_SMALLINT:
+      n = sprintf(str, "%d", *(int16_t*)buf);
+      break;
+
+    case TSDB_DATA_TYPE_INT:
+      n = sprintf(str, "%d", *(int32_t*)buf);
+      break;
+
+    case TSDB_DATA_TYPE_BIGINT:
+    case TSDB_DATA_TYPE_TIMESTAMP:
+      n = sprintf(str, "%" PRId64, *(int64_t*)buf);
+      break;
+
+    case TSDB_DATA_TYPE_FLOAT:
+      n = sprintf(str, "%f", GET_FLOAT_VAL(buf));
+      break;
+
+    case TSDB_DATA_TYPE_DOUBLE:
+      n = sprintf(str, "%f", GET_DOUBLE_VAL(buf));
+      break;
+
+    case TSDB_DATA_TYPE_BINARY:
+    case TSDB_DATA_TYPE_NCHAR:
+      if (bufSize < 0) {
+        tscError("invalid buf size");
+        return TSDB_CODE_TSC_INVALID_VALUE;
+      }
+
+      *str = '"';
+      memcpy(str + 1, buf, bufSize);
+      *(str + bufSize + 1) = '"';
+      n = bufSize + 2;
+      break;
+
+    default:
+      tscError("unsupported type:%d", type);
+      return TSDB_CODE_TSC_INVALID_VALUE;
+  }
+
+  *len = n;
+
+  return TSDB_CODE_SUCCESS;
+}
+
 static void tscStrToLower(char *str, int32_t n) {
   if (str == NULL || n <= 0) { return; }
   for (int32_t i = 0; i < n; i++) {
src/common/inc/tdataformat.h

@@ -234,6 +234,7 @@ typedef struct SDataCol {
   int             len;      // column data length
   VarDataOffsetT *dataOff;  // For binary and nchar data, the offset in the data column
   void           *pData;    // Actual data pointer
+  TSKEY           ts;       // only used in last NULL column
 } SDataCol;

 static FORCE_INLINE void dataColReset(SDataCol *pDataCol) { pDataCol->len = 0; }
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBJNIConnector.java

@@ -310,6 +310,16 @@ public class TSDBJNIConnector {
     private native int setBindTableNameImp(long stmt, String name, long conn);

+    public void setBindTableNameAndTags(long stmt, String tableName, int numOfTags, ByteBuffer tags, ByteBuffer typeList, ByteBuffer lengthList, ByteBuffer nullList) throws SQLException {
+        int code = setTableNameTagsImp(stmt, tableName, numOfTags, tags.array(), typeList.array(), lengthList.array(), nullList.array(), this.taos);
+        if (code != TSDBConstants.JNI_SUCCESS) {
+            throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, "failed to bind table name and corresponding tags");
+        }
+    }
+
+    private native int setTableNameTagsImp(long stmt, String name, int numOfTags, byte[] tags, byte[] typeList, byte[] lengthList, byte[] nullList, long conn);
+
     public void bindColumnDataArray(long stmt, ByteBuffer colDataList, ByteBuffer lengthList, ByteBuffer isNullList, int type, int bytes, int numOfRows, int columnIndex) throws SQLException {
         int code = bindColDataImp(stmt, colDataList.array(), lengthList.array(), isNullList.array(), type, bytes, numOfRows, columnIndex, this.taos);
         if (code != TSDBConstants.JNI_SUCCESS) {
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBPreparedStatement.java

@@ -41,6 +41,9 @@ public class TSDBPreparedStatement extends TSDBStatement implements PreparedStat...
     private boolean isPrepared;

     private ArrayList<ColumnInfo> colData;
+    private ArrayList<TableTagInfo> tableTags;
+    private int tagValueLength;
+
     private String tableName;
     private long nativeStmtHandle = 0;

@@ -63,8 +66,8 @@
         if (parameterCnt > 1) {
             // the table name is also a parameter, so ignore it.
-            this.colData = new ArrayList<ColumnInfo>(parameterCnt - 1);
-            this.colData.addAll(Collections.nCopies(parameterCnt - 1, null));
+            this.colData = new ArrayList<ColumnInfo>();
+            this.tableTags = new ArrayList<TableTagInfo>();
         }
     }

@@ -562,11 +565,109 @@
     };

+    private static class TableTagInfo {
+        private boolean isNull;
+        private Object value;
+        private int type;
+
+        public TableTagInfo(Object value, int type) {
+            this.value = value;
+            this.type = type;
+        }
+
+        public static TableTagInfo createNullTag(int type) {
+            TableTagInfo info = new TableTagInfo(null, type);
+            info.isNull = true;
+            return info;
+        }
+    };
+
+    public void setTableName(String name) {
+        this.tableName = name;
+    }
+
+    private void ensureTagCapacity(int index) {
+        if (this.tableTags.size() < index + 1) {
+            int delta = index + 1 - this.tableTags.size();
+            this.tableTags.addAll(Collections.nCopies(delta, null));
+        }
+    }
+
+    public void setTagNull(int index, int type) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, TableTagInfo.createNullTag(type));
+    }
+
+    public void setTagBoolean(int index, boolean value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_BOOL));
+        this.tagValueLength += Byte.BYTES;
+    }
+
+    public void setTagInt(int index, int value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_INT));
+        this.tagValueLength += Integer.BYTES;
+    }
+
+    public void setTagByte(int index, byte value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_TINYINT));
+        this.tagValueLength += Byte.BYTES;
+    }
+
+    public void setTagShort(int index, short value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_SMALLINT));
+        this.tagValueLength += Short.BYTES;
+    }
+
+    public void setTagLong(int index, long value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_BIGINT));
+        this.tagValueLength += Long.BYTES;
+    }
+
+    public void setTagTimestamp(int index, long value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP));
+        this.tagValueLength += Long.BYTES;
+    }
+
+    public void setTagFloat(int index, float value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_FLOAT));
+        this.tagValueLength += Float.BYTES;
+    }
+
+    public void setTagDouble(int index, double value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_DOUBLE));
+        this.tagValueLength += Double.BYTES;
+    }
+
+    public void setTagString(int index, String value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_BINARY));
+        this.tagValueLength += value.getBytes().length;
+    }
+
+    public void setTagNString(int index, String value) {
+        ensureTagCapacity(index);
+        this.tableTags.set(index, new TableTagInfo(value, TSDBConstants.TSDB_DATA_TYPE_NCHAR));
+
+        String charset = TaosGlobalConfig.getCharset();
+        try {
+            this.tagValueLength += value.getBytes(charset).length;
+        } catch (UnsupportedEncodingException e) {
+            e.printStackTrace();
+        }
+    }
+
     public <T> void setValueImpl(int columnIndex, ArrayList<T> list, int type, int bytes) throws SQLException {
         if (this.colData.size() == 0) {
             this.colData.addAll(Collections.nCopies(this.parameters.length - 1 - this.tableTags.size(), null));
         }
         ColumnInfo col = (ColumnInfo) this.colData.get(columnIndex);
         if (col == null) {
             ColumnInfo p = new ColumnInfo();

@@ -641,7 +742,122 @@
         TSDBJNIConnector connector = ((TSDBConnection) this.getConnection()).getConnector();
         this.nativeStmtHandle = connector.prepareStmt(rawSql);
-        connector.setBindTableName(this.nativeStmtHandle, this.tableName);
+        if (this.tableTags == null) {
+            connector.setBindTableName(this.nativeStmtHandle, this.tableName);
+        } else {
+            int num = this.tableTags.size();
+            ByteBuffer tagDataList = ByteBuffer.allocate(this.tagValueLength);
+            tagDataList.order(ByteOrder.LITTLE_ENDIAN);
+
+            ByteBuffer typeList = ByteBuffer.allocate(num);
+            typeList.order(ByteOrder.LITTLE_ENDIAN);
+
+            ByteBuffer lengthList = ByteBuffer.allocate(num * Long.BYTES);
+            lengthList.order(ByteOrder.LITTLE_ENDIAN);
+
+            ByteBuffer isNullList = ByteBuffer.allocate(num * Integer.BYTES);
+            isNullList.order(ByteOrder.LITTLE_ENDIAN);
+
+            for (int i = 0; i < num; ++i) {
+                TableTagInfo tag = this.tableTags.get(i);
+                if (tag.isNull) {
+                    typeList.put((byte) tag.type);
+                    isNullList.putInt(1);
+                    lengthList.putLong(0);
+                    continue;
+                }
+
+                switch (tag.type) {
+                    case TSDBConstants.TSDB_DATA_TYPE_INT: {
+                        Integer val = (Integer) tag.value;
+                        tagDataList.putInt(val);
+                        lengthList.putLong(Integer.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_TINYINT: {
+                        Byte val = (Byte) tag.value;
+                        tagDataList.put(val);
+                        lengthList.putLong(Byte.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_BOOL: {
+                        Boolean val = (Boolean) tag.value;
+                        tagDataList.put((byte) (val ? 1 : 0));
+                        lengthList.putLong(Byte.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_SMALLINT: {
+                        Short val = (Short) tag.value;
+                        tagDataList.putShort(val);
+                        lengthList.putLong(Short.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP:
+                    case TSDBConstants.TSDB_DATA_TYPE_BIGINT: {
+                        Long val = (Long) tag.value;
+                        tagDataList.putLong(val == null ? 0 : val);
+                        lengthList.putLong(Long.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_FLOAT: {
+                        Float val = (Float) tag.value;
+                        tagDataList.putFloat(val == null ? 0 : val);
+                        lengthList.putLong(Float.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_DOUBLE: {
+                        Double val = (Double) tag.value;
+                        tagDataList.putDouble(val == null ? 0 : val);
+                        lengthList.putLong(Double.BYTES);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_NCHAR:
+                    case TSDBConstants.TSDB_DATA_TYPE_BINARY: {
+                        String charset = TaosGlobalConfig.getCharset();
+                        String val = (String) tag.value;
+                        byte[] b = null;
+                        try {
+                            if (tag.type == TSDBConstants.TSDB_DATA_TYPE_BINARY) {
+                                b = val.getBytes();
+                            } else {
+                                b = val.getBytes(charset);
+                            }
+                        } catch (UnsupportedEncodingException e) {
+                            e.printStackTrace();
+                        }
+                        tagDataList.put(b);
+                        lengthList.putLong(b.length);
+                        break;
+                    }
+                    case TSDBConstants.TSDB_DATA_TYPE_UTINYINT:
+                    case TSDBConstants.TSDB_DATA_TYPE_USMALLINT:
+                    case TSDBConstants.TSDB_DATA_TYPE_UINT:
+                    case TSDBConstants.TSDB_DATA_TYPE_UBIGINT: {
+                        throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, "not support data types");
+                    }
+                }
+
+                typeList.put((byte) tag.type);
+                isNullList.putInt(tag.isNull ? 1 : 0);
+            }
+
+            connector.setBindTableNameAndTags(this.nativeStmtHandle, this.tableName, this.tableTags.size(), tagDataList, typeList, lengthList, isNullList);
+        }
+
         ColumnInfo colInfo = (ColumnInfo) this.colData.get(0);
         if (colInfo == null) {
src/dnode/src/dnodeMain.c

@@ -86,6 +86,17 @@ static SStep tsDnodeSteps[] = {
   {"dnode-telemetry", dnodeInitTelemetry, dnodeCleanupTelemetry},
 };

+static SStep tsDnodeCompactSteps[] = {
+  {"dnode-tfile",   tfInit,           tfCleanup},
+  {"dnode-storage", dnodeInitStorage, dnodeCleanupStorage},
+  {"dnode-eps",     dnodeInitEps,     dnodeCleanupEps},
+  {"dnode-wal",     walInit,          walCleanUp},
+  {"dnode-mread",   dnodeInitMRead,   NULL},
+  {"dnode-mwrite",  dnodeInitMWrite,  NULL},
+  {"dnode-mpeer",   dnodeInitMPeer,   NULL},
+  {"dnode-modules", dnodeInitModules, dnodeCleanupModules},
+};
+
 static int dnodeCreateDir(const char *dir) {
   if (mkdir(dir, 0755) != 0 && errno != EEXIST) {
     return -1;

@@ -95,13 +106,23 @@ static int dnodeCreateDir(const char *dir) {
 }

 static void dnodeCleanupComponents() {
-  int32_t stepSize = sizeof(tsDnodeSteps) / sizeof(SStep);
-  dnodeStepCleanup(tsDnodeSteps, stepSize);
+  if (!tsCompactMnodeWal) {
+    int32_t stepSize = sizeof(tsDnodeSteps) / sizeof(SStep);
+    dnodeStepCleanup(tsDnodeSteps, stepSize);
+  } else {
+    int32_t stepSize = sizeof(tsDnodeCompactSteps) / sizeof(SStep);
+    dnodeStepCleanup(tsDnodeCompactSteps, stepSize);
+  }
 }

 static int32_t dnodeInitComponents() {
-  int32_t stepSize = sizeof(tsDnodeSteps) / sizeof(SStep);
-  return dnodeStepInit(tsDnodeSteps, stepSize);
+  if (!tsCompactMnodeWal) {
+    int32_t stepSize = sizeof(tsDnodeSteps) / sizeof(SStep);
+    return dnodeStepInit(tsDnodeSteps, stepSize);
+  } else {
+    int32_t stepSize = sizeof(tsDnodeCompactSteps) / sizeof(SStep);
+    return dnodeStepInit(tsDnodeCompactSteps, stepSize);
+  }
 }

 static int32_t dnodeInitTmr() {

@@ -219,14 +240,20 @@ static int32_t dnodeInitStorage() {
   if (tsCompactMnodeWal == 1) {
     sprintf(tsMnodeTmpDir, "%s/mnode_tmp", tsDataDir);
-    tfsRmdir(tsMnodeTmpDir);
+    if (taosDirExist(tsMnodeTmpDir)) {
+      dError("mnode_tmp dir already exist in %s,quit compact job", tsMnodeTmpDir);
+      return -1;
+    }
     if (dnodeCreateDir(tsMnodeTmpDir) < 0) {
       dError("failed to create dir: %s, reason: %s", tsMnodeTmpDir, strerror(errno));
       return -1;
     }

     sprintf(tsMnodeBakDir, "%s/mnode_bak", tsDataDir);
     //tfsRmdir(tsMnodeBakDir);
+    if (taosDirExist(tsMnodeBakDir)) {
+      dError("mnode_bak dir already exist in %s,quit compact job", tsMnodeBakDir);
+      return -1;
+    }
   }
   //TODO(dengyihao): no need to init here
   if (dnodeCreateDir(tsMnodeDir) < 0) {
src/inc/taos.h

@@ -112,6 +112,7 @@ typedef struct TAOS_MULTI_BIND {
 TAOS_STMT *taos_stmt_init(TAOS *taos);
 int        taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length);
+int        taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags);
 int        taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name);
 int        taos_stmt_is_insert(TAOS_STMT *stmt, int *insert);
 int        taos_stmt_num_params(TAOS_STMT *stmt, int *nums);
src/inc/taosdef.h

@@ -33,6 +33,8 @@ extern "C" {
 #endif

 #define TSWINDOW_INITIALIZER      ((STimeWindow) {INT64_MIN, INT64_MAX})
+#define TSWINDOW_DESC_INITIALIZER ((STimeWindow) {INT64_MAX, INT64_MIN})
+
 #define TSKEY_INITIAL_VAL INT64_MIN

 // Bytes for each type.

@@ -298,7 +300,7 @@
 #define TSDB_DEFAULT_DB_UPDATE_OPTION 0

 #define TSDB_MIN_DB_CACHE_LAST_ROW 0
-#define TSDB_MAX_DB_CACHE_LAST_ROW 1
+#define TSDB_MAX_DB_CACHE_LAST_ROW 3
 #define TSDB_DEFAULT_CACHE_LAST_ROW 0

 #define TSDB_MIN_FSYNC_PERIOD 0
src/inc/taoserror.h

@@ -74,7 +74,7 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_REF_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x010A) //"Ref is not there")

 //client
-#define TSDB_CODE_TSC_INVALID_OPERATION  TAOS_DEF_ERROR_CODE(0, 0x0200) //"Invalid SQL statement")
+#define TSDB_CODE_TSC_INVALID_OPERATION  TAOS_DEF_ERROR_CODE(0, 0x0200) //"Invalid Operation")
 #define TSDB_CODE_TSC_INVALID_QHANDLE    TAOS_DEF_ERROR_CODE(0, 0x0201) //"Invalid qhandle")
 #define TSDB_CODE_TSC_INVALID_TIME_STAMP TAOS_DEF_ERROR_CODE(0, 0x0202) //"Invalid combination of client/service time")
 #define TSDB_CODE_TSC_INVALID_VALUE      TAOS_DEF_ERROR_CODE(0, 0x0203) //"Invalid value in client")
src/inc/tsdb.h

@@ -69,9 +69,13 @@ typedef struct {
   int8_t precision;
   int8_t compression;
   int8_t update;
-  int8_t cacheLastRow;
+  int8_t cacheLastRow;  // 0:no cache, 1: cache last row, 2: cache last NULL column 3: 1&2
 } STsdbCfg;

+#define CACHE_NO_LAST(c)          ((c)->cacheLastRow == 0)
+#define CACHE_LAST_ROW(c)         (((c)->cacheLastRow & 1) > 0)
+#define CACHE_LAST_NULL_COLUMN(c) (((c)->cacheLastRow & 2) > 0)
+
 // --------- TSDB REPOSITORY USAGE STATISTICS
 typedef struct {
   int64_t totalStorage;  // total bytes occupie

@@ -261,6 +265,12 @@
 TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *tableInfo, uint64_t qId, SMemRef *pRef);

+TsdbQueryHandleT tsdbQueryCacheLast(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef *pMemRef);
+
+bool isTsdbCacheLastRow(TsdbQueryHandleT* pQueryHandle);
+
 /**
  * get the queried table object list
  * @param pHandle
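The three macros above treat cacheLastRow as a small bit field. Below is a self-contained sketch of that semantics: the macros are copied verbatim from the header, while the one-field stand-in struct is an assumption that replaces the real STsdbCfg only so the snippet compiles on its own.

```c
// Illustration of the cacheLastRow bit flags (values 0-3).
#include <stdio.h>
#include <stdint.h>

typedef struct { int8_t cacheLastRow; } SCfgStub;   // stand-in for STsdbCfg

#define CACHE_NO_LAST(c)          ((c)->cacheLastRow == 0)
#define CACHE_LAST_ROW(c)         (((c)->cacheLastRow & 1) > 0)
#define CACHE_LAST_NULL_COLUMN(c) (((c)->cacheLastRow & 2) > 0)

int main() {
  for (int8_t v = 0; v <= 3; ++v) {
    SCfgStub cfg = { .cacheLastRow = v };
    printf("cacheLastRow=%d -> no cache:%d, cache last row:%d, cache last non-NULL columns:%d\n",
           v, CACHE_NO_LAST(&cfg), CACHE_LAST_ROW(&cfg), CACHE_LAST_NULL_COLUMN(&cfg));
  }
  return 0;
}
```

Values 0 through 3 therefore map to: no caching, last row only, last non-NULL column values only, and both together, matching the comment on the cacheLastRow field.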
src/kit/taosdemo/taosdemo.c

(The diff of this file is collapsed in this view and not shown.)
src/mnode/src/mnodeMain.c

@@ -121,7 +121,7 @@ int32_t mnodeStartSystem() {
 int32_t mnodeInitSystem() {
   mnodeInitTimer();
-  if (mnodeNeedStart()) {
+  if (mnodeNeedStart() || tsCompactMnodeWal) {
     return mnodeStartSystem();
   }
   return 0;
src/mnode/src/mnodeSdb.c

@@ -690,7 +690,7 @@ static int32_t sdbProcessWrite(void *wparam, void *hparam, int32_t qtype, void *...
   pthread_mutex_unlock(&tsSdbMgmt.mutex);

   // from app, row is created
-  if (pRow != NULL) {
+  if (pRow != NULL && tsCompactMnodeWal != 1) {
     // forward to peers
     pRow->processedCount = 0;
     int32_t syncCode = syncForwardToPeer(tsSdbMgmt.sync, pHead, pRow, TAOS_QTYPE_RPC, false);

@@ -713,7 +713,9 @@
            actStr[action], sdbGetKeyStr(pTable, pHead->cont), pHead->version);

   // even it is WAL/FWD, it shall be called to update version in sync
-  syncForwardToPeer(tsSdbMgmt.sync, pHead, pRow, TAOS_QTYPE_RPC, false);
+  if (tsCompactMnodeWal != 1) {
+    syncForwardToPeer(tsSdbMgmt.sync, pHead, pRow, TAOS_QTYPE_RPC, false);
+  }

   // from wal or forward msg, row not created, should add into hash
   if (action == SDB_ACTION_INSERT) {
src/mnode/src/mnodeTable.c

@@ -3375,7 +3375,7 @@ static int32_t mnodeCompactSuperTables() {
       .rowSize = sizeof(SSTableObj) + schemaSize,
     };

-    mInfo("compact super %" PRIu64, pTable->uid);
+    //mInfo("compact super %" PRIu64, pTable->uid);

     sdbInsertCompactRow(&row);
   }

@@ -3401,7 +3401,7 @@ static int32_t mnodeCompactChildTables() {
       .pTable = tsChildTableSdb,
     };

-    mInfo("compact child %" PRIu64 ":%d", pTable->uid, pTable->tid);
+    //mInfo("compact child %" PRIu64 ":%d", pTable->uid, pTable->tid);

     sdbInsertCompactRow(&row);
   }
src/os/inc/osDir.h

@@ -21,6 +21,7 @@ extern "C" {
 #endif

 void    taosRemoveDir(char *rootDir);
+bool    taosDirExist(const char* dirname);
 int32_t taosMkDir(const char *pathname, mode_t mode);
 void    taosRemoveOldLogFiles(char *rootDir, int32_t keepDays);
 int32_t taosRename(char *oldName, char *newName);
src/os/src/detail/osDir.c

@@ -45,6 +45,10 @@ void taosRemoveDir(char *rootDir) {
   uInfo("dir:%s is removed", rootDir);
 }

+bool taosDirExist(const char* dirname) {
+  return access(dirname, F_OK) == 0;
+}
+
 int taosMkDir(const char *path, mode_t mode) {
   int code = mkdir(path, 0755);
   if (code < 0 && errno == EEXIST) code = 0;
src/os/src/detail/osTime.c

@@ -87,12 +87,12 @@ static int32_t (*parseLocaltimeFp[]) (char* timestr, int64_t* time, int32_t time...
 int32_t taosGetTimestampSec() { return (int32_t)time(NULL); }

-int32_t taosParseTime(char* timestr, int64_t* time, int32_t len, int32_t timePrec, int8_t daylight) {
+int32_t taosParseTime(char* timestr, int64_t* time, int32_t len, int32_t timePrec, int8_t day_light) {
   /* parse datatime string in with tz */
   if (strnchr(timestr, 'T', len, false) != NULL) {
     return parseTimeWithTz(timestr, time, timePrec);
   } else {
-    return (*parseLocaltimeFp[daylight])(timestr, time, timePrec);
+    return (*parseLocaltimeFp[day_light])(timestr, time, timePrec);
   }
 }
src/plugins/http/src/httpUtil.c

@@ -237,6 +237,11 @@ void httpFreeMultiCmds(HttpContext *pContext) {
 JsonBuf *httpMallocJsonBuf(HttpContext *pContext) {
   if (pContext->jsonBuf == NULL) {
     pContext->jsonBuf = (JsonBuf *)malloc(sizeof(JsonBuf));
+    if (pContext->jsonBuf == NULL) {
+      return NULL;
+    }
+    memset(pContext->jsonBuf, 0, sizeof(JsonBuf));
   }

   if (!pContext->jsonBuf->pContext) {
src/query/src/qExecutor.c

@@ -33,6 +33,8 @@
 #define SET_MASTER_SCAN_FLAG(runtime)  ((runtime)->scanFlag = MASTER_SCAN)
 #define SET_REVERSE_SCAN_FLAG(runtime) ((runtime)->scanFlag = REVERSE_SCAN)

+#define TSWINDOW_IS_EQUAL(t1, t2) (((t1).skey == (t2).skey) && ((t1).ekey == (t2).ekey))
+
 #define SWITCH_ORDER(n) (((n) = ((n) == TSDB_ORDER_ASC) ? TSDB_ORDER_DESC : TSDB_ORDER_ASC))

 #define SDATA_BLOCK_INITIALIZER (SDataBlockInfo) {{0}, 0}

@@ -1997,6 +1999,37 @@ static bool isFirstLastRowQuery(SQueryAttr *pQueryAttr) {
   return false;
 }

+static bool isCachedLastQuery(SQueryAttr *pQueryAttr) {
+  for (int32_t i = 0; i < pQueryAttr->numOfOutput; ++i) {
+    int32_t functionID = pQueryAttr->pExpr1[i].base.functionId;
+    if (functionID == TSDB_FUNC_LAST || functionID == TSDB_FUNC_LAST_DST) {
+      continue;
+    }
+
+    return false;
+  }
+
+  if (pQueryAttr->order.order != TSDB_ORDER_DESC || !TSWINDOW_IS_EQUAL(pQueryAttr->window, TSWINDOW_DESC_INITIALIZER)) {
+    return false;
+  }
+
+  if (pQueryAttr->groupbyColumn) {
+    return false;
+  }
+
+  if (pQueryAttr->interval.interval > 0) {
+    return false;
+  }
+
+  if (pQueryAttr->numOfFilterCols > 0 || pQueryAttr->havingNum > 0) {
+    return false;
+  }
+
+  return true;
+}
+
 /**
  * The following 4 kinds of query are treated as the tags query
  * tagprj, tid_tag query, count(tbname), 'abc' (user defined constant value column) query

@@ -4022,6 +4055,8 @@ static int32_t setupQueryHandle(void* tsdb, SQueryRuntimeEnv* pRuntimeEnv, int64...
       }
     }
   }
+  } else if (isCachedLastQuery(pQueryAttr)) {
+    pRuntimeEnv->pQueryHandle = tsdbQueryCacheLast(tsdb, &cond, &pQueryAttr->tableGroupInfo, qId, &pQueryAttr->memRef);
   } else if (pQueryAttr->pointInterpQuery) {
     pRuntimeEnv->pQueryHandle = tsdbQueryRowsInExternalWindow(tsdb, &cond, &pQueryAttr->tableGroupInfo, qId, &pQueryAttr->memRef);
   } else {

@@ -4267,7 +4302,7 @@ static SSDataBlock* doTableScan(void* param, bool *newgroup) {
   }

   if (++pTableScanInfo->current >= pTableScanInfo->times) {
-    if (pTableScanInfo->reverseTimes <= 0) {
+    if (pTableScanInfo->reverseTimes <= 0 || isTsdbCacheLastRow(pTableScanInfo->pQueryHandle)) {
       return NULL;
     } else {
       break;
src/tsdb/inc/tsdbCommit.h

@@ -33,6 +33,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn);
 int   tsdbEncodeKVRecord(void **buf, SKVRecord *pRecord);
 void *tsdbDecodeKVRecord(void *buf, SKVRecord *pRecord);
 void *tsdbCommitData(STsdbRepo *pRepo);
+int   tsdbApplyRtn(STsdbRepo *pRepo);

 static FORCE_INLINE int tsdbGetFidLevel(int fid, SRtn *pRtn) {
   if (fid >= pRtn->maxFid) {
src/tsdb/inc/tsdbMeta.h

@@ -36,6 +36,12 @@ typedef struct STable {
   char     *sql;
   void     *cqhandle;
   SRWLatch  latch;  // TODO: implementa latch functions
+  SDataCol *lastCols;
+  int16_t   maxColNum;
+  int16_t   restoreColumnNum;
+  bool      hasRestoreLastColumn;
+  int       lastColSVersion;
   T_REF_DECLARE()
 } STable;

@@ -78,6 +84,11 @@
 void tsdbUnRefTable(STable* pTable);
 void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema, bool insertAct);
 int  tsdbRestoreTable(STsdbRepo *pRepo, void *cont, int contLen);
 void tsdbOrgMeta(STsdbRepo *pRepo);
+int  tsdbInitColIdCacheWithSchema(STable* pTable, STSchema* pSchema);
+int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId);
+int  tsdbUpdateLastColSchema(STable *pTable, STSchema *pNewSchema);
+STSchema* tsdbGetTableLatestSchema(STable *pTable);
+void tsdbFreeLastColumns(STable* pTable);

 static FORCE_INLINE int tsdbCompareSchemaVersion(const void *key1, const void *key2) {
   if (*(int16_t *)key1 < schemaVersion(*(STSchema **)key2)) {
src/tsdb/inc/tsdbint.h

@@ -75,6 +75,9 @@ struct STsdbRepo {
   STsdbCfg        save_config;   // save apply config
   bool            config_changed;  // config changed flag
   pthread_mutex_t save_mutex;    // protect save config

+  uint8_t hasCachedLastRow;
+  uint8_t hasCachedLastColumn;
+
   STsdbAppH appH;
   STsdbStat stat;

@@ -83,6 +86,7 @@
   SMemTable *mem;
   SMemTable *imem;
   STsdbFS   *fs;
+  SRtn       rtn;
   tsem_t     readyToCommit;
   pthread_mutex_t mutex;
   bool       repoLocked;

@@ -100,6 +104,7 @@ int tsdbUnlockRepo(STsdbRepo* pRepo);
 STsdbMeta* tsdbGetMeta(STsdbRepo* pRepo);
 int  tsdbCheckCommit(STsdbRepo* pRepo);
 int  tsdbRestoreInfo(STsdbRepo* pRepo);
+int  tsdbCacheLastData(STsdbRepo *pRepo, STsdbCfg* oldCfg);
 void tsdbGetRootDir(int repoid, char dirName[]);
 void tsdbGetDataDir(int repoid, char dirName[]);
src/tsdb/src/tsdbCommit.c

@@ -86,10 +86,12 @@ static void tsdbCloseCommitFile(SCommitH *pCommith, bool hasError);
 static bool tsdbCanAddSubBlock(SCommitH *pCommith, SBlock *pBlock, SMergeInfo *pInfo);
 static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget, TSKEY maxKey, int maxRows, int8_t update);
-static int  tsdbApplyRtn(STsdbRepo *pRepo);
 static int  tsdbApplyRtnOnFSet(STsdbRepo *pRepo, SDFileSet *pSet, SRtn *pRtn);

 void *tsdbCommitData(STsdbRepo *pRepo) {
+  if (pRepo->imem == NULL) {
+    return NULL;
+  }
   tsdbStartCommit(pRepo);

   // Commit to update meta file

@@ -1428,7 +1430,7 @@ static bool tsdbCanAddSubBlock(SCommitH *pCommith, SBlock *pBlock, SMergeInfo *p...
   return false;
 }

-static int tsdbApplyRtn(STsdbRepo *pRepo) {
+int tsdbApplyRtn(STsdbRepo *pRepo) {
   SRtn       rtn;
   SFSIter    fsiter;
   STsdbFS   *pfs = REPO_FS(pRepo);
src/tsdb/src/tsdbCommitQueue.c

@@ -113,11 +113,15 @@ int tsdbScheduleCommit(STsdbRepo *pRepo) {
 }

 static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
+  pthread_mutex_lock(&pRepo->save_mutex);
+
   pRepo->config_changed = false;
   STsdbCfg * pSaveCfg = &pRepo->save_config;
+  STsdbCfg oldCfg;
   int32_t oldTotalBlocks = pRepo->config.totalBlocks;
+
+  memcpy(&oldCfg, &(pRepo->config), sizeof(STsdbCfg));

   pRepo->config.compression = pRepo->save_config.compression;
   pRepo->config.keep = pRepo->save_config.keep;
   pRepo->config.keep1 = pRepo->save_config.keep1;

@@ -125,10 +129,12 @@ static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
   pRepo->config.cacheLastRow = pRepo->save_config.cacheLastRow;
   pRepo->config.totalBlocks = pRepo->save_config.totalBlocks;

-  tsdbInfo("vgId:%d apply new config: compression(%d), keep(%d,%d,%d), totalBlocks(%d), cacheLastRow(%d),totalBlocks(%d)",
+  pthread_mutex_unlock(&pRepo->save_mutex);
+
+  tsdbInfo("vgId:%d apply new config: compression(%d), keep(%d,%d,%d), totalBlocks(%d), cacheLastRow(%d->%d),totalBlocks(%d->%d)",
     REPO_ID(pRepo),
     pSaveCfg->compression, pSaveCfg->keep, pSaveCfg->keep1, pSaveCfg->keep2,
-    pSaveCfg->totalBlocks, pSaveCfg->cacheLastRow, pSaveCfg->totalBlocks);
+    pSaveCfg->totalBlocks, oldCfg.cacheLastRow, pSaveCfg->cacheLastRow, oldTotalBlocks, pSaveCfg->totalBlocks);

   int err = tsdbExpendPool(pRepo, oldTotalBlocks);
   if (!TAOS_SUCCEEDED(err)) {

@@ -136,6 +142,12 @@ static void tsdbApplyRepoConfig(STsdbRepo *pRepo) {
     REPO_ID(pRepo), oldTotalBlocks, pSaveCfg->totalBlocks, tstrerror(err));
   }

+  if (oldCfg.cacheLastRow != pRepo->config.cacheLastRow) {
+    if (tsdbLockRepo(pRepo) < 0) return;
+    tsdbCacheLastData(pRepo, &oldCfg);
+    tsdbUnlockRepo(pRepo);
+  }
 }

 static void *tsdbLoopCommit(void *arg) {

@@ -165,10 +177,8 @@ static void *tsdbLoopCommit(void *arg) {
     pRepo = ((SCommitReq *)pNode->data)->pRepo;

     // check if need to apply new config
     if (pRepo->config_changed) {
-      pthread_mutex_lock(&pRepo->save_mutex);
-      if (pRepo->config_changed) {
       tsdbApplyRepoConfig(pRepo);
-      pthread_mutex_unlock(&pRepo->save_mutex);
-      }
     }

     tsdbCommitData(pRepo);
src/tsdb/src/tsdbFS.c

@@ -33,7 +33,9 @@ static int tsdbScanDataDir(STsdbRepo *pRepo);
 static bool tsdbIsTFileInFS(STsdbFS *pfs, const TFILE *pf);
 static int  tsdbRestoreCurrent(STsdbRepo *pRepo);
 static int  tsdbComparTFILE(const void *arg1, const void *arg2);
-static void tsdbScanAndTryFixDFilesHeader(STsdbRepo *pRepo);
+static void tsdbScanAndTryFixDFilesHeader(STsdbRepo *pRepo, int32_t *nExpired);
+static int  tsdbProcessExpiredFS(STsdbRepo *pRepo);
+static int  tsdbCreateMeta(STsdbRepo *pRepo);

 // ================== CURRENT file header info
 static int tsdbEncodeFSHeader(void **buf, SFSHeader *pHeader) {

@@ -212,6 +214,8 @@ STsdbFS *tsdbNewFS(STsdbCfg *pCfg) {
     return NULL;
   }

+  pfs->intxn = false;
+
   pfs->nstatus = tsdbNewFSStatus(maxFSet);
   if (pfs->nstatus == NULL) {
     tsdbFreeFS(pfs);

@@ -234,22 +238,84 @@ void *tsdbFreeFS(STsdbFS *pfs) {
   return NULL;
 }

+static int tsdbProcessExpiredFS(STsdbRepo *pRepo) {
+  tsdbStartFSTxn(pRepo, 0, 0);
+  if (tsdbCreateMeta(pRepo) < 0) {
+    tsdbError("vgId:%d failed to create meta since %s", REPO_ID(pRepo), tstrerror(terrno));
+    return -1;
+  }
+
+  if (tsdbApplyRtn(pRepo) < 0) {
+    tsdbEndFSTxnWithError(REPO_FS(pRepo));
+    tsdbError("vgId:%d failed to apply rtn since %s", REPO_ID(pRepo), tstrerror(terrno));
+    return -1;
+  }
+  if (tsdbEndFSTxn(pRepo) < 0) {
+    tsdbError("vgId:%d failed to end fs txn since %s", REPO_ID(pRepo), tstrerror(terrno));
+    return -1;
+  }
+  return 0;
+}
+
+static int tsdbCreateMeta(STsdbRepo *pRepo) {
+  STsdbFS *pfs = REPO_FS(pRepo);
+  SMFile * pOMFile = pfs->cstatus->pmf;
+  SMFile   mf;
+  SDiskID  did;
+
+  if (pOMFile != NULL) {
+    // keep the old meta file
+    tsdbUpdateMFile(pfs, pOMFile);
+    return 0;
+  }
+
+  // Create a new meta file
+  did.level = TFS_PRIMARY_LEVEL;
+  did.id = TFS_PRIMARY_ID;
+  tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
+
+  if (tsdbCreateMFile(&mf, true) < 0) {
+    tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
+    return -1;
+  }
+
+  tsdbInfo("vgId:%d meta file %s is created", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
+
+  if (tsdbUpdateMFileHeader(&mf) < 0) {
+    tsdbError("vgId:%d failed to update META file header since %s, revert it", REPO_ID(pRepo), tstrerror(terrno));
+    tsdbApplyMFileChange(&mf, pOMFile);
+    return -1;
+  }
+
+  TSDB_FILE_FSYNC(&mf);
+  tsdbCloseMFile(&mf);
+  tsdbUpdateMFile(pfs, &mf);
+
+  return 0;
+}
+
 int tsdbOpenFS(STsdbRepo *pRepo) {
   STsdbFS *pfs = REPO_FS(pRepo);
   char current[TSDB_FILENAME_LEN] = "\0";
+  int nExpired = 0;

   ASSERT(pfs != NULL);

   tsdbGetTxnFname(REPO_ID(pRepo), TSDB_TXN_CURR_FILE, current);

+  tsdbGetRtnSnap(pRepo, &pRepo->rtn);
   if (access(current, F_OK) == 0) {
     if (tsdbOpenFSFromCurrent(pRepo) < 0) {
       tsdbError("vgId:%d failed to open FS since %s", REPO_ID(pRepo), tstrerror(terrno));
       return -1;
     }

-    tsdbScanAndTryFixDFilesHeader(pRepo);
+    tsdbScanAndTryFixDFilesHeader(pRepo, &nExpired);
+    if (nExpired > 0) {
+      tsdbProcessExpiredFS(pRepo);
+    }
   } else {
+    // should skip expired fileset inside of the function
     if (tsdbRestoreCurrent(pRepo) < 0) {
       tsdbError("vgId:%d failed to restore current file since %s", REPO_ID(pRepo), tstrerror(terrno));
       return -1;

@@ -1110,6 +1176,11 @@ static int tsdbRestoreDFileSet(STsdbRepo *pRepo) {
     ASSERT(tvid == REPO_ID(pRepo));

+    if (tfid < pRepo->rtn.minFid) {
+      // skip file expired
+      ++index;
+      continue;
+    }
+
     if (ftype == 0) {
       fset.fid = tfid;
     } else {

@@ -1206,7 +1277,7 @@ static int tsdbComparTFILE(const void *arg1, const void *arg2) {
   }
 }

-static void tsdbScanAndTryFixDFilesHeader(STsdbRepo *pRepo) {
+static void tsdbScanAndTryFixDFilesHeader(STsdbRepo *pRepo, int32_t *nExpired) {
   STsdbFS *  pfs = REPO_FS(pRepo);
   SFSStatus *pStatus = pfs->cstatus;
   SDFInfo    info;

@@ -1214,7 +1285,9 @@
   for (size_t i = 0; i < taosArrayGetSize(pStatus->df); i++) {
     SDFileSet fset;
     tsdbInitDFileSetEx(&fset, (SDFileSet *)taosArrayGet(pStatus->df, i));
+    if (fset.fid < pRepo->rtn.minFid) {
+      ++*nExpired;
+    }
     tsdbDebug("vgId:%d scan DFileSet %d header", REPO_ID(pRepo), fset.fid);

     if (tsdbOpenDFileSet(&fset, O_RDWR) < 0) {
src/tsdb/src/tsdbMain.c
浏览文件 @
524a1329
...
...
@@ -26,6 +26,8 @@ static STsdbRepo *tsdbNewRepo(STsdbCfg *pCfg, STsdbAppH *pAppH);
static
void
tsdbFreeRepo
(
STsdbRepo
*
pRepo
);
static
void
tsdbStartStream
(
STsdbRepo
*
pRepo
);
static
void
tsdbStopStream
(
STsdbRepo
*
pRepo
);
static
int
tsdbRestoreLastColumns
(
STsdbRepo
*
pRepo
,
STable
*
pTable
,
SReadH
*
pReadh
);
static
int
tsdbRestoreLastRow
(
STsdbRepo
*
pRepo
,
STable
*
pTable
,
SReadH
*
pReadh
,
SBlockIdx
*
pIdx
);
// Function declaration
int32_t
tsdbCreateRepo
(
int
repoid
)
{
...
...
@@ -267,6 +269,10 @@ int32_t tsdbConfigRepo(STsdbRepo *repo, STsdbCfg *pCfg) {
repo
->
config_changed
=
true
;
pthread_mutex_unlock
(
&
repo
->
save_mutex
);
// schedule a commit msg then the new config will be applied immediatly
tsdbAsyncCommit
(
repo
);
return
0
;
#if 0
STsdbRepo *pRepo = (STsdbRepo *)repo;
...
...
@@ -511,8 +517,10 @@ static int32_t tsdbCheckAndSetDefaultCfg(STsdbCfg *pCfg) {
if
(
pCfg
->
update
!=
0
)
pCfg
->
update
=
1
;
// update cacheLastRow
if
(
pCfg
->
cacheLastRow
!=
0
)
pCfg
->
cacheLastRow
=
1
;
if
(
pCfg
->
cacheLastRow
!=
0
)
{
if
(
pCfg
->
cacheLastRow
>
3
)
pCfg
->
cacheLastRow
=
1
;
}
return
0
;
}
...
...
@@ -545,6 +553,8 @@ static STsdbRepo *tsdbNewRepo(STsdbCfg *pCfg, STsdbAppH *pAppH) {
    return NULL;
  }
  pRepo->config_changed = false;
  atomic_store_8(&pRepo->hasCachedLastRow, 0);
  atomic_store_8(&pRepo->hasCachedLastColumn, 0);

  code = tsem_init(&(pRepo->readyToCommit), 0, 1);
  if (code != 0) {
...
...
@@ -614,13 +624,180 @@ static void tsdbStopStream(STsdbRepo *pRepo) {
  }
}

static int tsdbRestoreLastColumns(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh) {
  //tsdbInfo("tsdbRestoreLastColumns of table %s", pTable->name->data);

  STSchema *pSchema = tsdbGetTableLatestSchema(pTable);
  if (pSchema == NULL) {
    tsdbError("tsdbGetTableLatestSchema of table %s fail", pTable->name->data);
    return 0;
  }

  SBlock* pBlock;
  int numColumns;
  int32_t blockIdx;
  SDataStatis* pBlockStatis = NULL;
  SDataRow row = NULL;
  // restore last column data with last schema

  int err = 0;

  numColumns = schemaNCols(pSchema);
  if (numColumns <= pTable->restoreColumnNum) {
    pTable->hasRestoreLastColumn = true;
    return 0;
  }
  if (pTable->lastColSVersion != schemaVersion(pSchema)) {
    if (tsdbInitColIdCacheWithSchema(pTable, pSchema) < 0) {
      return -1;
    }
  }

  row = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
  if (row == NULL) {
    terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
    err = -1;
    goto out;
  }
  tdInitDataRow(row, pSchema);

  // first load block index info
  if (tsdbLoadBlockInfo(pReadh, NULL) < 0) {
    err = -1;
    goto out;
  }

  pBlockStatis = calloc(numColumns, sizeof(SDataStatis));
  if (pBlockStatis == NULL) {
    terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
    err = -1;
    goto out;
  }
  memset(pBlockStatis, 0, numColumns * sizeof(SDataStatis));
  for (int32_t i = 0; i < numColumns; ++i) {
    STColumn *pCol = schemaColAt(pSchema, i);
    pBlockStatis[i].colId = pCol->colId;
  }

  // load block from backward
  SBlockIdx *pIdx = pReadh->pBlkIdx;
  blockIdx = (int32_t)(pIdx->numOfBlocks - 1);

  while (numColumns > pTable->restoreColumnNum && blockIdx >= 0) {
    bool loadStatisData = false;
    pBlock = pReadh->pBlkInfo->blocks + blockIdx;
    blockIdx -= 1;

    // load block data
    if (tsdbLoadBlockData(pReadh, pBlock, NULL) < 0) {
      err = -1;
      goto out;
    }

    // file block with sub-blocks has no statistics data
    if (pBlock->numOfSubBlocks <= 1) {
      tsdbLoadBlockStatis(pReadh, pBlock);
      tsdbGetBlockStatis(pReadh, pBlockStatis, (int)numColumns);
      loadStatisData = true;
    }

    for (int16_t i = 0; i < numColumns && numColumns > pTable->restoreColumnNum; ++i) {
      STColumn *pCol = schemaColAt(pSchema, i);
      // ignore loaded columns
      if (pTable->lastCols[i].bytes != 0) {
        continue;
      }

      // ignore block which has no not-null colId column
      if (loadStatisData && pBlockStatis[i].numOfNull == pBlock->numOfRows) {
        continue;
      }

      // OK,let's load row from backward to get not-null column
      for (int32_t rowId = pBlock->numOfRows - 1; rowId >= 0; rowId--) {
        SDataCol *pDataCol = pReadh->pDCols[0]->cols + i;
        tdAppendColVal(row, tdGetColDataOfRow(pDataCol, rowId), pCol->type, pCol->bytes, pCol->offset);
        //SDataCol *pDataCol = readh.pDCols[0]->cols + j;
        void* value = tdGetRowDataOfCol(row, (int8_t)pCol->type, TD_DATA_ROW_HEAD_SIZE + pCol->offset);
        if (isNull(value, pCol->type)) {
          continue;
        }

        int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pCol->colId);
        if (idx == -1) {
          tsdbError("tsdbRestoreLastColumns restore vgId:%d,table:%s cache column %d fail", REPO_ID(pRepo), pTable->name->data, pCol->colId);
          continue;
        }
        // save not-null column
        SDataCol *pLastCol = &(pTable->lastCols[idx]);
        pLastCol->pData = malloc(pCol->bytes);
        pLastCol->bytes = pCol->bytes;
        pLastCol->colId = pCol->colId;
        memcpy(pLastCol->pData, value, pCol->bytes);

        // save row ts(in column 0)
        pDataCol = pReadh->pDCols[0]->cols + 0;
        pCol = schemaColAt(pSchema, 0);
        tdAppendColVal(row, tdGetColDataOfRow(pDataCol, rowId), pCol->type, pCol->bytes, pCol->offset);
        pLastCol->ts = dataRowKey(row);

        pTable->restoreColumnNum += 1;

        tsdbDebug("tsdbRestoreLastColumns restore vgId:%d,table:%s cache column %d, %" PRId64, REPO_ID(pRepo), pTable->name->data, pLastCol->colId, pLastCol->ts);
        break;
      }
    }
  }

out:
  taosTZfree(row);
  tfree(pBlockStatis);

  if (err == 0 && numColumns <= pTable->restoreColumnNum) {
    pTable->hasRestoreLastColumn = true;
  }

  return err;
}

static int tsdbRestoreLastRow(STsdbRepo *pRepo, STable *pTable, SReadH* pReadh, SBlockIdx *pIdx) {
  ASSERT(pTable->lastRow == NULL);
  if (tsdbLoadBlockInfo(pReadh, NULL) < 0) {
    return -1;
  }

  SBlock* pBlock = pReadh->pBlkInfo->blocks + pIdx->numOfBlocks - 1;

  if (tsdbLoadBlockData(pReadh, pBlock, NULL) < 0) {
    return -1;
  }

  // Get the data in row
  STSchema *pSchema = tsdbGetTableSchema(pTable);
  pTable->lastRow = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
  if (pTable->lastRow == NULL) {
    terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
    return -1;
  }

  tdInitDataRow(pTable->lastRow, pSchema);
  for (int icol = 0; icol < schemaNCols(pSchema); icol++) {
    STColumn *pCol = schemaColAt(pSchema, icol);
    SDataCol *pDataCol = pReadh->pDCols[0]->cols + icol;
    tdAppendColVal(pTable->lastRow, tdGetColDataOfRow(pDataCol, pBlock->numOfRows - 1), pCol->type, pCol->bytes,
                   pCol->offset);
  }

  return 0;
}

int tsdbRestoreInfo(STsdbRepo *pRepo) {
  SFSIter    fsiter;
  SReadH     readh;
  SDFileSet *pSet;
  STsdbMeta *pMeta = pRepo->tsdbMeta;
  STsdbCfg * pCfg = REPO_CFG(pRepo);
  SBlock *   pBlock;

  if (tsdbInitReadH(&readh, pRepo) < 0) {
    return -1;
...
...
@@ -628,6 +805,14 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
  tsdbFSIterInit(&fsiter, REPO_FS(pRepo), TSDB_FS_ITER_BACKWARD);

  if (CACHE_LAST_NULL_COLUMN(pCfg)) {
    for (int i = 1; i < pMeta->maxTables; i++) {
      STable *pTable = pMeta->tables[i];
      if (pTable == NULL) continue;
      pTable->restoreColumnNum = 0;
    }
  }

  while ((pSet = tsdbFSIterNext(&fsiter)) != NULL) {
    if (tsdbSetAndOpenReadFSet(&readh, pSet) < 0) {
      tsdbDestroyReadH(&readh);
...
...
@@ -643,6 +828,8 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
      STable *pTable = pMeta->tables[i];
      if (pTable == NULL) continue;

      //tsdbInfo("tsdbRestoreInfo restore vgId:%d,table:%s", REPO_ID(pRepo), pTable->name->data);

      if (tsdbSetReadTable(&readh, pTable) < 0) {
        tsdbDestroyReadH(&readh);
        return -1;
...
...
@@ -653,42 +840,155 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
      if (pIdx && lastKey < pIdx->maxKey) {
        pTable->lastKey = pIdx->maxKey;

        if (pCfg->cacheLastRow) {
          if (tsdbLoadBlockInfo(&readh, NULL) < 0) {
            tsdbDestroyReadH(&readh);
            return -1;
          }

          pBlock = readh.pBlkInfo->blocks + pIdx->numOfBlocks - 1;

          if (tsdbLoadBlockData(&readh, pBlock, NULL) < 0) {
            tsdbDestroyReadH(&readh);
            return -1;
          }

          // Get the data in row
          ASSERT(pTable->lastRow == NULL);
          STSchema *pSchema = tsdbGetTableSchema(pTable);
          pTable->lastRow = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
          if (pTable->lastRow == NULL) {
            terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
            tsdbDestroyReadH(&readh);
            return -1;
          }

          tdInitDataRow(pTable->lastRow, pSchema);
          for (int icol = 0; icol < schemaNCols(pSchema); icol++) {
            STColumn *pCol = schemaColAt(pSchema, icol);
            SDataCol *pDataCol = readh.pDCols[0]->cols + icol;
            tdAppendColVal(pTable->lastRow, tdGetColDataOfRow(pDataCol, pBlock->numOfRows - 1), pCol->type,
                           pCol->bytes, pCol->offset);
          }
        if (CACHE_LAST_ROW(pCfg) && tsdbRestoreLastRow(pRepo, pTable, &readh, pIdx) != 0) {
          tsdbDestroyReadH(&readh);
          return -1;
        }

        // restore NULL columns
        if (pIdx && CACHE_LAST_NULL_COLUMN(pCfg) && !pTable->hasRestoreLastColumn) {
          if (tsdbRestoreLastColumns(pRepo, pTable, &readh) != 0) {
            tsdbDestroyReadH(&readh);
            return -1;
          }
        }
      }
    }
  }

  tsdbDestroyReadH(&readh);

  if (CACHE_LAST_ROW(pCfg)) {
    atomic_store_8(&pRepo->hasCachedLastRow, 1);
  }
  if (CACHE_LAST_NULL_COLUMN(pCfg)) {
    atomic_store_8(&pRepo->hasCachedLastColumn, 1);
  }

  return 0;
}

int tsdbCacheLastData(STsdbRepo *pRepo, STsdbCfg* oldCfg) {
  bool cacheLastRow = false, cacheLastCol = false;
  SFSIter    fsiter;
  SReadH     readh;
  SDFileSet *pSet;
  STsdbMeta *pMeta = pRepo->tsdbMeta;
  int tableNum = 0;
  int maxTableIdx = 0;
  int cacheLastRowTableNum = 0;
  int cacheLastColTableNum = 0;

  bool need_free_last_row = CACHE_LAST_ROW(oldCfg) && !CACHE_LAST_ROW(&(pRepo->config));
  bool need_free_last_col = CACHE_LAST_NULL_COLUMN(oldCfg) && !CACHE_LAST_NULL_COLUMN(&(pRepo->config));

  if (CACHE_LAST_ROW(&(pRepo->config)) || CACHE_LAST_NULL_COLUMN(&(pRepo->config))) {
    tsdbInfo("tsdbCacheLastData cache last data since cacheLast option changed");
    cacheLastRow = !CACHE_LAST_ROW(oldCfg) && CACHE_LAST_ROW(&(pRepo->config));
    cacheLastCol = !CACHE_LAST_NULL_COLUMN(oldCfg) && CACHE_LAST_NULL_COLUMN(&(pRepo->config));
  }

  // calc max table idx and table num
  for (int i = 1; i < pMeta->maxTables; i++) {
    STable *pTable = pMeta->tables[i];
    if (pTable == NULL) continue;

    tableNum += 1;
    maxTableIdx = i;
    if (cacheLastCol) {
      pTable->restoreColumnNum = 0;
    }
  }

  // if close last option,need to free data
  if (need_free_last_row || need_free_last_col) {
    if (need_free_last_row) {
      atomic_store_8(&pRepo->hasCachedLastRow, 0);
    }
    if (need_free_last_col) {
      atomic_store_8(&pRepo->hasCachedLastColumn, 0);
    }
    tsdbInfo("free cache last data since cacheLast option changed");
    for (int i = 1; i < maxTableIdx; i++) {
      STable *pTable = pMeta->tables[i];
      if (pTable == NULL) continue;

      if (need_free_last_row) {
        taosTZfree(pTable->lastRow);
        pTable->lastRow = NULL;
        pTable->lastKey = TSKEY_INITIAL_VAL;
      }
      if (need_free_last_col) {
        tsdbFreeLastColumns(pTable);
      }
    }
  }

  if (!cacheLastRow && !cacheLastCol) {
    return 0;
  }

  cacheLastRowTableNum = cacheLastRow ? tableNum : 0;
  cacheLastColTableNum = cacheLastCol ? tableNum : 0;

  if (tsdbInitReadH(&readh, pRepo) < 0) {
    return -1;
  }

  tsdbFSIterInit(&fsiter, REPO_FS(pRepo), TSDB_FS_ITER_BACKWARD);

  while ((pSet = tsdbFSIterNext(&fsiter)) != NULL && (cacheLastRowTableNum > 0 || cacheLastColTableNum > 0)) {
    if (tsdbSetAndOpenReadFSet(&readh, pSet) < 0) {
      tsdbDestroyReadH(&readh);
      return -1;
    }

    if (tsdbLoadBlockIdx(&readh) < 0) {
      tsdbDestroyReadH(&readh);
      return -1;
    }

    for (int i = 1; i <= maxTableIdx; i++) {
      STable *pTable = pMeta->tables[i];
      if (pTable == NULL) continue;

      //tsdbInfo("tsdbRestoreInfo restore vgId:%d,table:%s", REPO_ID(pRepo), pTable->name->data);

      if (tsdbSetReadTable(&readh, pTable) < 0) {
        tsdbDestroyReadH(&readh);
        return -1;
      }

      SBlockIdx *pIdx = readh.pBlkIdx;

      if (pIdx && cacheLastRowTableNum > 0 && pTable->lastRow == NULL) {
        pTable->lastKey = pIdx->maxKey;

        if (tsdbRestoreLastRow(pRepo, pTable, &readh, pIdx) != 0) {
          tsdbDestroyReadH(&readh);
          return -1;
        }
        cacheLastRowTableNum -= 1;
      }

      // restore NULL columns
      if (pIdx && cacheLastColTableNum > 0 && !pTable->hasRestoreLastColumn) {
        if (tsdbRestoreLastColumns(pRepo, pTable, &readh) != 0) {
          tsdbDestroyReadH(&readh);
          return -1;
        }
        if (pTable->hasRestoreLastColumn) {
          cacheLastColTableNum -= 1;
        }
      }
    }
  }

  tsdbDestroyReadH(&readh);

  if (cacheLastRow) {
    atomic_store_8(&pRepo->hasCachedLastRow, 1);
  }
  if (cacheLastCol) {
    atomic_store_8(&pRepo->hasCachedLastColumn, 1);
  }

  return 0;
}
\ No newline at end of file
src/tsdb/src/tsdbMemTable.c
...
...
@@ -274,7 +274,7 @@ void *tsdbAllocBytes(STsdbRepo *pRepo, int bytes) {
int tsdbAsyncCommit(STsdbRepo *pRepo) {
  tsem_wait(&(pRepo->readyToCommit));

  ASSERT(pRepo->imem == NULL);
  //ASSERT(pRepo->imem == NULL);
  if (pRepo->mem == NULL) {
    tsem_post(&(pRepo->readyToCommit));
    return 0;
...
...
@@ -964,6 +964,49 @@ static void tsdbFreeRows(STsdbRepo *pRepo, void **rows, int rowCounter) {
  }
}

static void updateTableLatestColumn(STsdbRepo *pRepo, STable *pTable, SDataRow row) {
  tsdbDebug("vgId:%d updateTableLatestColumn, %s row version:%d", REPO_ID(pRepo), pTable->name->data,
            dataRowVersion(row));

  STSchema* pSchema = tsdbGetTableLatestSchema(pTable);
  if (tsdbUpdateLastColSchema(pTable, pSchema) < 0) {
    return;
  }

  pSchema = tsdbGetTableSchemaByVersion(pTable, dataRowVersion(row));
  if (pSchema == NULL) {
    return;
  }

  SDataCol *pLatestCols = pTable->lastCols;

  for (int16_t j = 0; j < schemaNCols(pSchema); j++) {
    STColumn *pTCol = schemaColAt(pSchema, j);
    // ignore not exist colId
    int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pTCol->colId);
    if (idx == -1) {
      continue;
    }

    void* value = tdGetRowDataOfCol(row, (int8_t)pTCol->type, TD_DATA_ROW_HEAD_SIZE + pSchema->columns[j].offset);
    if (isNull(value, pTCol->type)) {
      continue;
    }

    SDataCol *pDataCol = &(pLatestCols[idx]);
    if (pDataCol->pData == NULL) {
      pDataCol->pData = malloc(pSchema->columns[j].bytes);
      pDataCol->bytes = pSchema->columns[j].bytes;
    } else if (pDataCol->bytes < pSchema->columns[j].bytes) {
      pDataCol->pData = realloc(pDataCol->pData, pSchema->columns[j].bytes);
      pDataCol->bytes = pSchema->columns[j].bytes;
    }

    memcpy(pDataCol->pData, value, pDataCol->bytes);
    //tsdbInfo("updateTableLatestColumn vgId:%d cache column %d for %d,%s", REPO_ID(pRepo), j, pDataCol->bytes, (char*)pDataCol->pData);
    pDataCol->ts = dataRowKey(row);
  }
}

static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow row) {
  STsdbCfg *pCfg = &pRepo->config;
...
...
@@ -977,7 +1020,7 @@ static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow
  }

  if (tsdbGetTableLastKeyImpl(pTable) < dataRowKey(row)) {
    if (pCfg->cacheLastRow || pTable->lastRow != NULL) {
    if (CACHE_LAST_ROW(pCfg) || pTable->lastRow != NULL) {
      SDataRow nrow = pTable->lastRow;
      if (taosTSizeof(nrow) < dataRowLen(row)) {
        SDataRow orow = nrow;
...
...
@@ -1002,7 +1045,10 @@ static int tsdbUpdateTableLatestInfo(STsdbRepo *pRepo, STable *pTable, SDataRow
      } else {
        pTable->lastKey = dataRowKey(row);
      }
    }
    if (CACHE_LAST_NULL_COLUMN(pCfg)) {
      updateTableLatestColumn(pRepo, pTable, row);
    }
  }
  return 0;
}
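updateTableLatestColumn above grows each cached column buffer on demand before copying in the newest non-NULL value. Below is a self-contained sketch of that allocate-or-grow-then-copy pattern in isolation; the struct and function names are hypothetical simplifications, not the real SDataCol layout.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-in for one cached "last value" slot. */
typedef struct {
  int16_t  colId;
  int32_t  bytes;   /* capacity of pData; 0 means nothing cached yet */
  int64_t  ts;      /* timestamp of the cached value */
  void    *pData;
} SLastValSlot;

/* Copy `value` (valueBytes long, observed at time `ts`) into the slot,
 * (re)allocating the buffer only when the current capacity is too small. */
static int cacheLastValue(SLastValSlot *slot, const void *value, int32_t valueBytes, int64_t ts) {
  if (slot->pData == NULL) {
    slot->pData = malloc(valueBytes);
    if (slot->pData == NULL) return -1;
    slot->bytes = valueBytes;
  } else if (slot->bytes < valueBytes) {
    void *p = realloc(slot->pData, valueBytes);
    if (p == NULL) return -1;
    slot->pData = p;
    slot->bytes = valueBytes;
  }
  memcpy(slot->pData, value, valueBytes);
  slot->ts = ts;
  return 0;
}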
src/tsdb/src/tsdbMeta.c
...
...
@@ -589,6 +589,131 @@ void tsdbUnRefTable(STable *pTable) {
  }
}

void tsdbFreeLastColumns(STable* pTable) {
  if (pTable->lastCols == NULL) {
    return;
  }

  for (int i = 0; i < pTable->maxColNum; ++i) {
    if (pTable->lastCols[i].bytes == 0) {
      continue;
    }
    tfree(pTable->lastCols[i].pData);
    pTable->lastCols[i].bytes = 0;
    pTable->lastCols[i].pData = NULL;
  }
  tfree(pTable->lastCols);
  pTable->lastCols = NULL;
  pTable->maxColNum = 0;
  pTable->lastColSVersion = -1;
  pTable->restoreColumnNum = 0;
}

int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
  if (pTable->lastCols == NULL) {
    return -1;
  }
  for (int16_t i = 0; i < pTable->maxColNum; ++i) {
    if (pTable->lastCols[i].colId == colId) {
      return i;
    }
  }

  return -1;
}

int tsdbInitColIdCacheWithSchema(STable* pTable, STSchema* pSchema) {
  ASSERT(pTable->lastCols == NULL);

  int16_t numOfColumn = pSchema->numOfCols;

  pTable->lastCols = (SDataCol*)malloc(numOfColumn * sizeof(SDataCol));
  if (pTable->lastCols == NULL) {
    return -1;
  }

  for (int16_t i = 0; i < numOfColumn; ++i) {
    STColumn *pCol = schemaColAt(pSchema, i);
    SDataCol* pDataCol = &(pTable->lastCols[i]);
    pDataCol->bytes = 0;
    pDataCol->pData = NULL;
    pDataCol->colId = pCol->colId;
  }

  pTable->lastColSVersion = schemaVersion(pSchema);
  pTable->maxColNum = numOfColumn;
  pTable->restoreColumnNum = 0;
  return 0;
}

STSchema* tsdbGetTableLatestSchema(STable *pTable) {
  return tsdbGetTableSchemaByVersion(pTable, -1);
}

int tsdbUpdateLastColSchema(STable *pTable, STSchema *pNewSchema) {
  if (pTable->lastColSVersion == schemaVersion(pNewSchema)) {
    return 0;
  }

  tsdbInfo("tsdbUpdateLastColSchema:%s,%d->%d", pTable->name->data, pTable->lastColSVersion, schemaVersion(pNewSchema));

  int16_t numOfCols = pNewSchema->numOfCols;
  SDataCol *lastCols = (SDataCol*)malloc(numOfCols * sizeof(SDataCol));
  if (lastCols == NULL) {
    return -1;
  }

  TSDB_WLOCK_TABLE(pTable);

  for (int16_t i = 0; i < numOfCols; ++i) {
    STColumn *pCol = schemaColAt(pNewSchema, i);
    int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pCol->colId);

    SDataCol* pDataCol = &(lastCols[i]);
    if (idx != -1) {
      // move col data to new last column array
      SDataCol* pOldDataCol = &(pTable->lastCols[idx]);
      memcpy(pDataCol, pOldDataCol, sizeof(SDataCol));
    } else {
      // init new colid data
      pDataCol->colId = pCol->colId;
      pDataCol->bytes = 0;
      pDataCol->pData = NULL;
    }
  }

  SDataCol *oldLastCols = pTable->lastCols;
  int16_t oldLastColNum = pTable->maxColNum;

  pTable->lastColSVersion = schemaVersion(pNewSchema);
  pTable->lastCols = lastCols;
  pTable->maxColNum = numOfCols;

  if (oldLastCols == NULL) {
    TSDB_WUNLOCK_TABLE(pTable);
    return 0;
  }

  // free old schema last column datas
  for (int16_t i = 0; i < oldLastColNum; ++i) {
    SDataCol* pDataCol = &(oldLastCols[i]);
    if (pDataCol->bytes == 0) {
      continue;
    }
    int16_t idx = tsdbGetLastColumnsIndexByColId(pTable, pDataCol->colId);
    if (idx != -1) {
      continue;
    }

    // free not exist column data
    tfree(pDataCol->pData);
  }
  TSDB_WUNLOCK_TABLE(pTable);
  tfree(oldLastCols);

  return 0;
}

void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema, bool insertAct) {
  ASSERT(TABLE_TYPE(pTable) != TSDB_STREAM_TABLE && TABLE_TYPE(pTable) != TSDB_SUPER_TABLE);
  STsdbMeta *pMeta = pRepo->tsdbMeta;
...
...
@@ -672,6 +797,10 @@ static STable *tsdbNewTable() {
  pTable->lastKey = TSKEY_INITIAL_VAL;
  pTable->lastCols = NULL;
  pTable->restoreColumnNum = 0;
  pTable->maxColNum = 0;
  pTable->lastColSVersion = -1;

  return pTable;
}
...
...
@@ -785,8 +914,10 @@ static void tsdbFreeTable(STable *pTable) {
    kvRowFree(pTable->tagVal);

    tSkipListDestroy(pTable->pIndex);
    taosTZfree(pTable->lastRow);
    taosTZfree(pTable->lastRow);
    tfree(pTable->sql);

    tsdbFreeLastColumns(pTable);
    free(pTable);
  }
}
...
...
src/tsdb/src/tsdbRead.c
...
...
@@ -62,6 +62,7 @@ typedef struct SLoadCompBlockInfo {
  int32_t fileId;
} SLoadCompBlockInfo;

typedef struct STableCheckInfo {
  STableId   tableId;
  TSKEY      lastKey;
...
...
@@ -107,7 +108,7 @@ typedef struct STsdbQueryHandle {
  SArray        *pTableCheckInfo;          // SArray<STableCheckInfo>
  int32_t        activeIndex;
  bool           checkFiles;               // check file stage
  bool           cachelastrow;             // check if last row cached
  int8_t         cachelastrow;             // check if last row cached
  bool           loadExternalRow;          // load time window external data rows
  bool           currentLoadExternalRows;  // current load external rows
  int32_t        loadType;                 // block load type
...
...
@@ -117,7 +118,6 @@ typedef struct STsdbQueryHandle {
  SFSIter            fileIter;
  SReadH             rhelper;
  STableBlockInfo   *pDataBlockInfo;
  SDataCols         *pDataCols;        // in order to hold current file data block
  int32_t            allocSize;        // allocated data block size
  SMemRef           *pMemRef;
...
...
@@ -138,6 +138,7 @@ typedef struct STableGroupSupporter {
static STimeWindow updateLastrowForEachGroup(STableGroupInfo *groupList);
static int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *groupList);
static int32_t checkForCachedLast(STsdbQueryHandle* pQueryHandle);
static int32_t tsdbGetCachedLastRow(STable* pTable, SDataRow* pRes, TSKEY* lastKey);
static void    changeQueryHandleForInterpQuery(TsdbQueryHandleT pHandle);
...
...
@@ -512,6 +513,8 @@ void tsdbResetQueryHandleForNewTable(TsdbQueryHandleT queryHandle, STsdbQueryCon
  pQueryHandle->next = doFreeColumnInfoData(pQueryHandle->next);
}

TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef* pMemRef) {
  pCond->twindow = updateLastrowForEachGroup(groupList);
...
...
@@ -528,10 +531,30 @@ TsdbQueryHandleT tsdbQueryLastRow(STsdbRepo *tsdb, STsdbQueryCond *pCond, STable
  }

  assert(pCond->order == TSDB_ORDER_ASC && pCond->twindow.skey <= pCond->twindow.ekey);
  pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
  if (pQueryHandle->cachelastrow) {
    pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
  }

  return pQueryHandle;
}

TsdbQueryHandleT tsdbQueryCacheLast(STsdbRepo *tsdb, STsdbQueryCond *pCond, STableGroupInfo *groupList, uint64_t qId, SMemRef* pMemRef) {
  STsdbQueryHandle *pQueryHandle = (STsdbQueryHandle*) tsdbQueryTables(tsdb, pCond, groupList, qId, pMemRef);
  int32_t code = checkForCachedLast(pQueryHandle);
  if (code != TSDB_CODE_SUCCESS) { // set the numOfTables to be 0
    terrno = code;
    return NULL;
  }

  if (pQueryHandle->cachelastrow) {
    pQueryHandle->type = TSDB_QUERY_TYPE_LAST;
  }

  return pQueryHandle;
}

SArray* tsdbGetQueriedTableList(TsdbQueryHandleT *pHandle) {
  assert(pHandle != NULL);
...
...
@@ -2460,6 +2483,159 @@ static bool loadCachedLastRow(STsdbQueryHandle* pQueryHandle) {
  return false;
}

static bool loadCachedLast(STsdbQueryHandle* pQueryHandle) {
  // the last row is cached in buffer, return it directly.
  // here note that the pQueryHandle->window must be the TS_INITIALIZER
  int32_t tgNumOfCols = (int32_t)QH_GET_NUM_OF_COLS(pQueryHandle);
  size_t  numOfTables = taosArrayGetSize(pQueryHandle->pTableCheckInfo);
  int32_t numOfRows = 0;
  assert(numOfTables > 0 && tgNumOfCols > 0);
  SQueryFilePos* cur = &pQueryHandle->cur;
  TSKEY priKey = TSKEY_INITIAL_VAL;
  int32_t priIdx = -1;
  SColumnInfoData* pColInfo = NULL;

  while (++pQueryHandle->activeIndex < numOfTables) {
    STableCheckInfo* pCheckInfo = taosArrayGet(pQueryHandle->pTableCheckInfo, pQueryHandle->activeIndex);
    STable* pTable = pCheckInfo->pTableObj;
    char* pData = NULL;

    int32_t numOfCols = pTable->maxColNum;

    if (pTable->lastCols == NULL || pTable->maxColNum <= 0) {
      tsdbWarn("no last cached for table, uid:%" PRIu64 ",tid:%d", pTable->tableId.uid, pTable->tableId.tid);
      continue;
    }

    int32_t i = 0, j = 0;
    while (i < tgNumOfCols && j < numOfCols) {
      pColInfo = taosArrayGet(pQueryHandle->pColumns, i);
      if (pTable->lastCols[j].colId < pColInfo->info.colId) {
        j++;
        continue;
      } else if (pTable->lastCols[j].colId > pColInfo->info.colId) {
        i++;
        continue;
      }

      pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;

      if (pTable->lastCols[j].bytes > 0) {
        void* value = pTable->lastCols[j].pData;
        switch (pColInfo->info.type) {
          case TSDB_DATA_TYPE_BINARY:
          case TSDB_DATA_TYPE_NCHAR:
            memcpy(pData, value, varDataTLen(value));
            break;
          case TSDB_DATA_TYPE_NULL:
          case TSDB_DATA_TYPE_BOOL:
          case TSDB_DATA_TYPE_TINYINT:
          case TSDB_DATA_TYPE_UTINYINT:
            *(uint8_t *)pData = *(uint8_t *)value;
            break;
          case TSDB_DATA_TYPE_SMALLINT:
          case TSDB_DATA_TYPE_USMALLINT:
            *(uint16_t *)pData = *(uint16_t *)value;
            break;
          case TSDB_DATA_TYPE_INT:
          case TSDB_DATA_TYPE_UINT:
            *(uint32_t *)pData = *(uint32_t *)value;
            break;
          case TSDB_DATA_TYPE_BIGINT:
          case TSDB_DATA_TYPE_UBIGINT:
            *(uint64_t *)pData = *(uint64_t *)value;
            break;
          case TSDB_DATA_TYPE_FLOAT:
            SET_FLOAT_PTR(pData, value);
            break;
          case TSDB_DATA_TYPE_DOUBLE:
            SET_DOUBLE_PTR(pData, value);
            break;
          case TSDB_DATA_TYPE_TIMESTAMP:
            if (pColInfo->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
              priKey = tdGetKey(*(TKEY *)value);
              priIdx = i;

              i++;
              j++;
              continue;
            } else {
              *(TSKEY *)pData = *(TSKEY *)value;
            }
            break;
          default:
            memcpy(pData, value, pColInfo->info.bytes);
        }

        for (int32_t n = 0; n < tgNumOfCols; ++n) {
          if (n == i) {
            continue;
          }

          pColInfo = taosArrayGet(pQueryHandle->pColumns, n);
          pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;;

          if (pColInfo->info.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
            *(TSKEY *)pData = pTable->lastCols[j].ts;
            continue;
          }

          if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) {
            setVardataNull(pData, pColInfo->info.type);
          } else {
            setNull(pData, pColInfo->info.type, pColInfo->info.bytes);
          }
        }

        numOfRows++;
        assert(numOfRows < pQueryHandle->outputCapacity);
      }

      i++;
      j++;
    }

    // leave the real ts column as the last row, because last function only (not stable) use the last row as res
    if (priKey != TSKEY_INITIAL_VAL) {
      pColInfo = taosArrayGet(pQueryHandle->pColumns, priIdx);
      pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;

      *(TSKEY *)pData = priKey;

      for (int32_t n = 0; n < tgNumOfCols; ++n) {
        if (n == priIdx) {
          continue;
        }

        pColInfo = taosArrayGet(pQueryHandle->pColumns, n);
        pData = (char*)pColInfo->pData + numOfRows * pColInfo->info.bytes;;

        assert(pColInfo->info.colId != PRIMARYKEY_TIMESTAMP_COL_INDEX);

        if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) {
          setVardataNull(pData, pColInfo->info.type);
        } else {
          setNull(pData, pColInfo->info.type, pColInfo->info.bytes);
        }
      }

      numOfRows++;
    }

    if (numOfRows > 0) {
      cur->rows = numOfRows;
      cur->mixBlock = true;

      return true;
    }
  }

  return false;
}
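loadCachedLast above walks the query's output columns and the table's cached columns in lockstep, advancing whichever side holds the smaller colId and emitting a value only on a match. The same two-pointer merge is shown in isolation below; the arrays and function name are illustrative only, not from the source.

#include <stdint.h>
#include <stdio.h>

/* Both lists must be sorted by colId for the lockstep walk to terminate correctly. */
static void mergeByColId(const int16_t *queryCols, int nQuery,
                         const int16_t *cachedCols, int nCached) {
  int i = 0, j = 0;
  while (i < nQuery && j < nCached) {
    if (cachedCols[j] < queryCols[i]) {
      j++;   /* cached column not requested by the query: skip it */
    } else if (cachedCols[j] > queryCols[i]) {
      i++;   /* requested column has no cached value: leave it to be NULL-filled */
    } else {
      printf("colId %d: fill output column %d from cache slot %d\n", queryCols[i], i, j);
      i++;
      j++;
    }
  }
}

int main(void) {
  int16_t query[]  = {1, 3, 4, 7};
  int16_t cached[] = {1, 2, 4, 5, 7};
  mergeByColId(query, 4, cached, 5);
  return 0;
}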
static bool loadDataBlockFromTableSeq(STsdbQueryHandle* pQueryHandle) {
  size_t numOfTables = taosArrayGetSize(pQueryHandle->pTableCheckInfo);
  assert(numOfTables > 0);
...
...
@@ -2496,8 +2672,12 @@ bool tsdbNextDataBlock(TsdbQueryHandleT pHandle) {
  int64_t stime = taosGetTimestampUs();
  int64_t elapsedTime = stime;

  if (pQueryHandle->type == TSDB_QUERY_TYPE_LAST && pQueryHandle->cachelastrow) {
    return loadCachedLastRow(pQueryHandle);
  if (pQueryHandle->type == TSDB_QUERY_TYPE_LAST) {
    if (pQueryHandle->cachelastrow == 1) {
      return loadCachedLastRow(pQueryHandle);
    } else if (pQueryHandle->cachelastrow == 2) {
      return loadCachedLast(pQueryHandle);
    }
  }

  if (pQueryHandle->loadType == BLOCK_LOAD_TABLE_SEQ_ORDER) {
...
...
@@ -2695,6 +2875,10 @@ int32_t tsdbGetCachedLastRow(STable* pTable, SDataRow* pRes, TSKEY* lastKey) {
  return TSDB_CODE_SUCCESS;
}

bool isTsdbCacheLastRow(TsdbQueryHandleT* pQueryHandle) {
  return ((STsdbQueryHandle *)pQueryHandle)->cachelastrow > 0;
}

int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *groupList) {
  assert(pQueryHandle != NULL && groupList != NULL);
...
...
@@ -2706,11 +2890,15 @@ int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *g
  STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(group, 0);

  int32_t code = tsdbGetCachedLastRow(pInfo->pTable, &pRow, &key);
  if (code != TSDB_CODE_SUCCESS) {
    pQueryHandle->cachelastrow = false;
  } else {
    pQueryHandle->cachelastrow = (pRow != NULL);
  int32_t code = 0;

  if (((STable*)pInfo->pTable)->lastRow) {
    code = tsdbGetCachedLastRow(pInfo->pTable, &pRow, &key);
    if (code != TSDB_CODE_SUCCESS) {
      pQueryHandle->cachelastrow = 0;
    } else {
      pQueryHandle->cachelastrow = 1;
    }
  }

  // update the tsdb query time range
...
...
@@ -2724,6 +2912,26 @@ int32_t checkForCachedLastRow(STsdbQueryHandle* pQueryHandle, STableGroupInfo *g
  return code;
}

int32_t checkForCachedLast(STsdbQueryHandle* pQueryHandle) {
  assert(pQueryHandle != NULL);

  int32_t code = 0;

  if (pQueryHandle->pTsdb && atomic_load_8(&pQueryHandle->pTsdb->hasCachedLastColumn)) {
    pQueryHandle->cachelastrow = 2;
  }

  // update the tsdb query time range
  if (pQueryHandle->cachelastrow) {
    pQueryHandle->window      = TSWINDOW_INITIALIZER;
    pQueryHandle->checkFiles  = false;
    pQueryHandle->activeIndex = -1;  // start from -1
  }

  return code;
}
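With the field widened from bool to int8_t, cachelastrow now reads as a three-state selector that tsdbNextDataBlock dispatches on. A small compilable sketch of that interpretation follows; the enum and function names are mine, not identifiers from the source.

#include <stdio.h>

/* Assumed meaning of the int8_t cachelastrow values seen in this diff. */
typedef enum {
  CACHED_NOTHING   = 0,  /* fall through to the normal file/mem block scan        */
  CACHED_LAST_ROW  = 1,  /* answer from the per-table cached last row             */
  CACHED_LAST_COLS = 2   /* answer from the per-column cached last non-NULL values */
} ECachedLastKind;

static const char *describe(ECachedLastKind kind) {
  switch (kind) {
    case CACHED_LAST_ROW:  return "serve query from the cached last row";
    case CACHED_LAST_COLS: return "serve query from the cached last columns";
    default:               return "scan data blocks as usual";
  }
}

int main(void) {
  for (int v = 0; v <= 2; ++v) {
    printf("cachelastrow=%d: %s\n", v, describe((ECachedLastKind)v));
  }
  return 0;
}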
STimeWindow updateLastrowForEachGroup(STableGroupInfo *groupList) {
  STimeWindow window = {INT64_MAX, INT64_MIN};
...
...
src/tsdb/src/tsdbSync.c
...
...
@@ -424,24 +424,42 @@ static int32_t tsdbSyncRecvDFileSetArray(SSyncH *pSynch) {
      }

      if (tsdbSendDecision(pSynch, false) < 0) {
        tsdbError("vgId:%d, filed to send decision since %s", REPO_ID(pRepo), tstrerror(terrno));
        tsdbError("vgId:%d, failed to send decision since %s", REPO_ID(pRepo), tstrerror(terrno));
        return -1;
      }
    } else {
      // Need to copy from remote
      tsdbInfo("vgId:%d, fileset:%d will be received", REPO_ID(pRepo), pSynch->pdf->fid);

      // Notify remote to send there file here
      if (tsdbSendDecision(pSynch, true) < 0) {
        tsdbError("vgId:%d, failed to send decision since %s", REPO_ID(pRepo), tstrerror(terrno));
        return -1;
      int fidLevel = tsdbGetFidLevel(pSynch->pdf->fid, &(pSynch->rtn));
      if (fidLevel < 0) {  // expired fileset
        tsdbInfo("vgId:%d, fileset:%d will be skipped as expired", REPO_ID(pRepo), pSynch->pdf->fid);

        if (tsdbSendDecision(pSynch, false) < 0) {
          tsdbError("vgId:%d, failed to send decision since %s", REPO_ID(pRepo), tstrerror(terrno));
          return -1;
        }

        // Move forward
        if (tsdbRecvDFileSetInfo(pSynch) < 0) {
          tsdbError("vgId:%d, failed to recv fileset since %s", REPO_ID(pRepo), tstrerror(terrno));
          return -1;
        }

        if (pLSet) {
          pLSet = tsdbFSIterNext(&fsiter);
        }

        // Next loop
        continue;
      } else {
        tsdbInfo("vgId:%d, fileset:%d will be received", REPO_ID(pRepo), pSynch->pdf->fid);

        // Notify remote to send there file here
        if (tsdbSendDecision(pSynch, true) < 0) {
          tsdbError("vgId:%d, failed to send decision since %s", REPO_ID(pRepo), tstrerror(terrno));
          return -1;
        }
      }

      // Create local files and copy from remote
      SDiskID   did;
      SDFileSet fset;

      tfsAllocDisk(tsdbGetFidLevel(pSynch->pdf->fid, &(pSynch->rtn)), &(did.level), &(did.id));
      tfsAllocDisk(fidLevel, &(did.level), &(did.id));
      if (did.level == TFS_UNDECIDED_LEVEL) {
        terrno = TSDB_CODE_TDB_NO_AVAIL_DISK;
        tsdbError("vgId:%d, failed allc disk since %s", REPO_ID(pRepo), tstrerror(terrno));
...
...
@@ -548,6 +566,13 @@ static int32_t tsdbSyncSendDFileSet(SSyncH *pSynch, SDFileSet *pSet) {
  STsdbRepo *pRepo = pSynch->pRepo;
  bool toSend = false;

  // skip expired fileset
  if (pSet && tsdbGetFidLevel(pSet->fid, &(pSynch->rtn)) < 0) {
    tsdbInfo("vgId:%d, don't sync send since fileset:%d smaller than minFid:%d", REPO_ID(pRepo), pSet->fid,
             pSynch->rtn.minFid);
    return 0;
  }

  if (tsdbSendDFileSetInfo(pSynch, pSet) < 0) {
    tsdbError("vgId:%d, failed to send fileset:%d info since %s", REPO_ID(pRepo), pSet ? pSet->fid : -1, tstrerror(terrno));
    return -1;
...
...
src/wal/src/walWrite.c
...
...
@@ -430,6 +430,8 @@ static int32_t walRestoreWalFile(SWal *pWal, void *pVnode, FWalWrite writeFp, ch
               pWal->vgId, fileId, pHead->version, pWal->version, pHead->len, offset);

    pWal->version = pHead->version;

    //wInfo("writeFp: %ld", offset);
    (*writeFp)(pVnode, pHead, TAOS_QTYPE_WAL, NULL);
  }
...
...
tests/pytest/fulltest.sh
...
...
@@ -38,6 +38,8 @@ python3 ./test.py -f table/boundary.py
python3 ./test.py -f table/create.py
python3 ./test.py -f table/del_stable.py

#stable
python3 ./test.py -f stable/insert.py

# tag
python3 ./test.py -f tag_lite/filter.py
...
...
tests/pytest/stable/insert.py
...
...
@@ -72,6 +72,9 @@ class TDTestCase:
        tdSql.query("describe db.stb")
        tdSql.checkRows(3)

        tdSql.error("drop stable if exists db.dev_01")
        tdSql.error("drop stable if exists db.dev_02")

        tdSql.execute("alter stable db.stb add tag t1 int")
        tdSql.query("describe db.stb")
        tdSql.checkRows(4)
...
...
@@ -80,6 +83,13 @@ class TDTestCase:
        tdSql.query("show stables")
        tdSql.checkRows(1)

        tdSql.error("drop stable if exists db.dev_001")
        tdSql.error("drop stable if exists db.dev_002")

        for i in range(10):
            tdSql.execute("drop stable if exists db.stb")
            tdSql.query("show stables")
            tdSql.checkRows(1)

    def stop(self):
        tdSql.close()
...
...
tests/script/api/batchprepare.c
This diff is collapsed and not shown here.
tests/script/general/parser/last_cache.sim
0 → 100644
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 0
system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
sleep 100
sql connect
print ======================== dnode1 start
$db = testdb
sql create database $db cachelast 2
sql use $db
sql create stable st2 (ts timestamp, f1 int, f2 double, f3 binary(10), f4 timestamp) tags (id int)
sql create table tb1 using st2 tags (1);
sql create table tb2 using st2 tags (2);
sql create table tb3 using st2 tags (3);
sql create table tb4 using st2 tags (4);
sql create table tb5 using st2 tags (1);
sql create table tb6 using st2 tags (2);
sql create table tb7 using st2 tags (3);
sql create table tb8 using st2 tags (4);
sql create table tb9 using st2 tags (5);
sql create table tba using st2 tags (5);
sql create table tbb using st2 tags (5);
sql create table tbc using st2 tags (5);
sql create table tbd using st2 tags (5);
sql create table tbe using st2 tags (5);
sql insert into tb1 values ("2021-05-09 10:10:10", 1, 2.0, '3', -1000)
sql insert into tb1 values ("2021-05-10 10:10:11", 4, 5.0, NULL, -2000)
sql insert into tb1 values ("2021-05-12 10:10:12", 6,NULL, NULL, -3000)
sql insert into tb2 values ("2021-05-09 10:11:13",-1,-2.0,'-3', -1001)
sql insert into tb2 values ("2021-05-10 10:11:14",-4,-5.0, NULL, -2001)
sql insert into tb2 values ("2021-05-11 10:11:15",-6, -7, '-8', -3001)
sql insert into tb3 values ("2021-05-09 10:12:17", 7, 8.0, '9' , -1002)
sql insert into tb3 values ("2021-05-09 10:12:17",10,11.0, NULL, -2002)
sql insert into tb3 values ("2021-05-09 10:12:18",12,NULL, NULL, -3002)
sql insert into tb4 values ("2021-05-09 10:12:19",13,14.0,'15' , -1003)
sql insert into tb4 values ("2021-05-10 10:12:20",16,17.0, NULL, -2003)
sql insert into tb4 values ("2021-05-11 10:12:21",18,NULL, NULL, -3003)
sql insert into tb5 values ("2021-05-09 10:12:22",19, 20, '21', -1004)
sql insert into tb6 values ("2021-05-11 10:12:23",22, 23, NULL, -2004)
sql insert into tb7 values ("2021-05-10 10:12:24",24,NULL, '25', -3004)
sql insert into tb8 values ("2021-05-11 10:12:25",26,NULL, '27', -4004)
sql insert into tb9 values ("2021-05-09 10:12:26",28, 29, '30', -1005)
sql insert into tba values ("2021-05-10 10:12:27",31, 32, NULL, -2005)
sql insert into tbb values ("2021-05-10 10:12:28",33,NULL, '35', -3005)
sql insert into tbc values ("2021-05-11 10:12:29",36, 37, NULL, -4005)
sql insert into tbd values ("2021-05-11 10:12:29",NULL,NULL,NULL,NULL )
run general/parser/last_cache_query.sim
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s start
run general/parser/last_cache_query.sim
system sh/exec.sh -n dnode1 -s stop -x SIGINT
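For reference, the cachelast scenario this sim script sets up can also be driven from the C client. The sketch below is an outline only, not part of the test suite: it assumes a local taosd with the default root/taosdata credentials and uses the long-standing taos.h entry points (taos_connect, taos_query, taos_errno, taos_errstr, taos_free_result, taos_close); adjust names and connection details to your environment.

#include <stdio.h>
#include <taos.h>

// Run one statement and report failures (helper for this sketch only).
static int run(TAOS *conn, const char *sql) {
  TAOS_RES *res = taos_query(conn, sql);
  int code = taos_errno(res);
  if (code != 0) fprintf(stderr, "'%s' failed: %s\n", sql, taos_errstr(res));
  taos_free_result(res);
  return code;
}

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) return 1;

  // cachelast 2 asks each vnode to keep the last non-NULL value of every column in memory.
  run(conn, "create database if not exists testdb cachelast 2");
  run(conn, "use testdb");
  run(conn, "create table if not exists tbn (ts timestamp, f1 int, f2 double)");
  run(conn, "insert into tbn values (now, 1, 2.0) (now + 1s, NULL, 3.0)");
  run(conn, "select last(*) from tbn");  // expected to be answered from the last-value cache

  taos_close(conn);
  return 0;
}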
tests/script/general/parser/last_cache_query.sim
0 → 100644
sleep 100
sql connect
$db = testdb
sql use $db
print "test tb1"
sql select last(ts) from tb1
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
sql select last(f1) from tb1
if $rows != 1 then
return -1
endi
if $data00 != 6 then
print $data00
return -1
endi
sql select last(*) from tb1
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 5.000000000 then
print $data02
return -1
endi
if $data03 != 3 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
sql select last(tb1.*,ts,f4) from tb1
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 5.000000000 then
print $data02
return -1
endi
if $data03 != 3 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
if $data05 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data06 != @70-01-01 07:59:57.000@ then
return -1
endi
print "test tb2"
sql select last(ts) from tb2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
print $data00
return -1
endi
sql select last(f1) from tb2
if $rows != 1 then
return -1
endi
if $data00 != -6 then
print $data00
return -1
endi
sql select last(*) from tb2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
print $data00
return -1
endi
if $data01 != -6 then
return -1
endi
if $data02 != -7.000000000 then
print $data02
return -1
endi
if $data03 != -8 then
return -1
endi
if $data04 != @70-01-01 07:59:56.999@ then
if $data04 != @70-01-01 07:59:57.-01@ then
return -1
endi
endi
sql select last(tb2.*,ts,f4) from tb2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-11 10:11:15.000@ then
print $data00
return -1
endi
if $data01 != -6 then
return -1
endi
if $data02 != -7.000000000 then
print $data02
return -1
endi
if $data03 != -8 then
return -1
endi
if $data04 != @70-01-01 07:59:56.999@ then
if $data04 != @70-01-01 07:59:57.-01@ then
return -1
endi
endi
if $data05 != @21-05-11 10:11:15.000@ then
print $data00
return -1
endi
if $data06 != @70-01-01 07:59:56.999@ then
if $data04 != @70-01-01 07:59:57.-01@ then
return -1
endi
endi
print "test tbd"
sql select last(*) from tbd
if $rows != 1 then
return -1
endi
if $data00 != @21-05-11 10:12:29.000@ then
print $data00
return -1
endi
if $data01 != NULL then
return -1
endi
if $data02 != NULL then
print $data02
return -1
endi
if $data03 != NULL then
return -1
endi
if $data04 != NULL then
return -1
endi
print "test tbe"
sql select last(*) from tbe
if $rows != 0 then
return -1
endi
print "test stable"
sql select last(ts) from st2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
sql select last(f1) from st2
if $rows != 1 then
return -1
endi
if $data00 != 6 then
print $data00
return -1
endi
sql select last(*) from st2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 37.000000000 then
print $data02
return -1
endi
if $data03 != 27 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
sql select last(st2.*,ts,f4) from st2
if $rows != 1 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 37.000000000 then
print $data02
return -1
endi
if $data03 != 27 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
if $data05 != @21-05-12 10:10:12.000@ then
print $data00
return -1
endi
if $data06 != @70-01-01 07:59:57.000@ then
return -1
endi
sql select last(*) from st2 group by id
if $rows != 5 then
return -1
endi
if $data00 != @21-05-12 10:10:12.000@ then
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 5.000000000 then
print $data02
return -1
endi
if $data03 != 21 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
if $data05 != 1 then
return -1
endi
if $data10 != @21-05-11 10:12:23.000@ then
return -1
endi
if $data11 != 22 then
return -1
endi
if $data12 != 23.000000000 then
print $data02
return -1
endi
if $data13 != -8 then
return -1
endi
if $data14 != @70-01-01 07:59:57.996@ then
if $data14 != @70-01-01 07:59:58.-04@ then
print $data14
return -1
endi
endi
if $data15 != 2 then
return -1
endi
if $data20 != @21-05-10 10:12:24.000@ then
return -1
endi
if $data21 != 24 then
return -1
endi
if $data22 != 8.000000000 then
print $data02
return -1
endi
if $data23 != 25 then
return -1
endi
if $data24 != @70-01-01 07:59:56.996@ then
if $data24 != @70-01-01 07:59:57.-04@ then
return -1
endi
endi
if $data25 != 3 then
return -1
endi
if $data30 != @21-05-11 10:12:25.000@ then
return -1
endi
if $data31 != 26 then
return -1
endi
if $data32 != 17.000000000 then
print $data02
return -1
endi
if $data33 != 27 then
return -1
endi
if $data34 != @70-01-01 07:59:55.996@ then
if $data34 != @70-01-01 07:59:56.-04@ then
return -1
endi
endi
if $data35 != 4 then
return -1
endi
if $data40 != @21-05-11 10:12:29.000@ then
return -1
endi
if $data41 != 36 then
return -1
endi
if $data42 != 37.000000000 then
print $data02
return -1
endi
if $data43 != 35 then
return -1
endi
if $data44 != @70-01-01 07:59:55.995@ then
if $data44 != @70-01-01 07:59:56.-05@ then
return -1
endi
endi
if $data45 != 5 then
return -1
endi
print "test tbn"
sql create table tbn (ts timestamp, f1 int, f2 double, f3 binary(10), f4 timestamp)
sql insert into tbn values ("2021-05-09 10:10:10", 1, 2.0, '3', -1000)
sql insert into tbn values ("2021-05-10 10:10:11", 4, 5.0, NULL, -2000)
sql insert into tbn values ("2021-05-12 10:10:12", 6,NULL, NULL, -3000)
sql insert into tbn values ("2021-05-13 10:10:12", NULL,NULL, NULL,NULL)
sql select last(*) from tbn;
if $rows != 1 then
return -1
endi
if $data00 != @21-05-13 10:10:12.000@ then
print $data00
return -1
endi
if $data01 != 6 then
return -1
endi
if $data02 != 5.000000000 then
print $data02
return -1
endi
if $data03 != 3 then
return -1
endi
if $data04 != @70-01-01 07:59:57.000@ then
return -1
endi
tests/script/general/parser/testSuite.sim
...
...
@@ -58,4 +58,6 @@ run general/parser/having.sim
run general/parser/having_child.sim
run general/parser/slimit_alter_tags.sim
run general/parser/binary_escapeCharacter.sim
run general/parser/between_and.sim
run general/parser/last_cache.sim