Commit 95e75e86 authored by H Haojun Liao

Merge branch 'develop' into feature/query

......@@ -146,7 +146,7 @@ matrix:
branch_pattern: coverity_scan
- os: linux
dist: xenial
dist: trusty
language: c
git:
- depth: 1
......@@ -156,8 +156,9 @@ matrix:
packages:
- build-essential
- cmake
- binutils-2.26
env:
- DESC="xenial build"
- DESC="trusty/gcc-4.8/bintuils-2.26 build"
before_script:
- export TZ=Asia/Harbin
......@@ -168,7 +169,7 @@ matrix:
script:
- cmake .. > /dev/null
- make
- export PATH=/usr/lib/binutils-2.26/bin:$PATH && make
- os: linux
dist: bionic
......@@ -200,7 +201,7 @@ matrix:
dist: bionic
language: c
compiler: clang
env: DESC="linux/clang build"
env: DESC="arm64 linux/clang build"
git:
- depth: 1
......@@ -238,7 +239,7 @@ matrix:
- build-essential
- cmake
env:
- DESC="xenial build"
- DESC="arm64 xenial build"
before_script:
- export TZ=Asia/Harbin
......
......@@ -33,11 +33,17 @@ To build TDengine, use [CMake](https://cmake.org/) 3.5 or higher versions in the
## Install tools
### Ubuntu & Debian:
### Ubuntu 16.04 and above & Debian:
```bash
sudo apt-get install -y gcc cmake build-essential git
```
### Ubuntu 14.04:
```bash
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
export PATH=/usr/lib/binutils-2.26/bin:$PATH
```
To compile and package the JDBC driver source code, you should have Java JDK 8 or higher and Apache Maven 2.7 or higher installed.
To install openjdk-8:
```bash
......
......@@ -148,7 +148,7 @@ TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMic
SHOW TABLES [LIKE tb_name_wildcard];
```
Lists all tables in the current database. Note: wildcards can be used in the LIKE clause for name matching. Wildcard matching: 1) ’%’ (percent sign) matches zero or more characters; 2) ’_’ (underscore) matches exactly one character.
Lists all tables in the current database. Note: wildcards can be used in the LIKE clause for name matching. Wildcard matching: 1) “%” (percent sign) matches zero or more characters; 2) “\_” (underscore) matches exactly one character.
- **Change the displayed character width online**
......@@ -263,33 +263,33 @@ TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMic
- **Insert a record into specified columns**
```mysql
INSERT INTO tb_name (field1_name, ...) VALUES(field1_value, ...)
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value, ...)
```
Insert a record into table tb_name, with values mapped to the specified columns. Columns not listed in the SQL statement are automatically filled with NULL. The primary key (timestamp) must not be NULL.
- **Insert multiple records**
```mysql
INSERT INTO tb_name VALUES (field1_value1, ...) (field1_value2, ...)...;
INSERT INTO tb_name VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into table tb_name.
- **Insert multiple records into specified columns**
```mysql
INSERT INTO tb_name (field1_name, ...) VALUES(field1_value1, ...) (field1_value2, ...)
INSERT INTO tb_name (field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into table tb_name, with values mapped to the specified columns.
- **Insert multiple records into multiple tables**
```mysql
INSERT INTO tb1_name VALUES (field1_value1, ...)(field1_value2, ...)...
tb2_name VALUES (field1_value1, ...)(field1_value2, ...)...;
INSERT INTO tb1_name VALUES (field1_value1, ...) (field1_value2, ...) ...
tb2_name VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into tables tb1_name and tb2_name at the same time.
- **Insert multiple records into specified columns of multiple tables at the same time**
```mysql
INSERT INTO tb1_name (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...)
tb2_name (tb2_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...);
INSERT INTO tb1_name (tb1_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...
tb2_name (tb2_field1_name, ...) VALUES (field1_value1, ...) (field1_value2, ...) ...;
```
Insert multiple records into the specified columns of tables tb1_name and tb2_name at the same time.
......@@ -318,23 +318,23 @@ SELECT select_expr [, select_expr ...]
```
Note: for INSERT statements a streaming parsing strategy is used, so the valid leading part of the SQL is still executed before a later error is detected. In the SQL below, the INSERT statement is invalid, but the table d1001 is still created.
```mysql
taos> create table meters(ts timestamp, current float, voltage int, phase float) tags(location binary(30), groupId int);
taos> CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
Query OK, 0 row(s) affected (0.008245s)
taos> show stables;
taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
meters | 2020-08-06 17:50:27.831 | 4 | 2 | 0 |
Query OK, 1 row(s) in set (0.001029s)
taos> show tables;
taos> SHOW TABLES;
Query OK, 0 row(s) in set (0.000946s)
taos> insert into d1001 using meters tags('Beijing.Chaoyang', 2);
taos> INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2);
DB error: invalid SQL: keyword VALUES or FILE required
taos> show tables;
taos> SHOW TABLES;
table_name | created_time | columns | stable_name |
======================================================================================================
d1001 | 2020-08-06 17:52:02.097 | 4 | meters |
......@@ -397,27 +397,41 @@ Query OK, 1 row(s) in set (0.020443s)
When SQL functions are used in a query, some of them support the wildcard. The difference is:
the ```count(\*)``` function returns only one column, while the ```first```, ```last```, and ```last_row``` functions return all columns.
```
taos> select count(*) from d1001;
```mysql
taos> SELECT COUNT(*) FROM d1001;
count(*) |
========================
3 |
Query OK, 1 row(s) in set (0.001035s)
```
```
taos> select first(*) from d1001;
```mysql
taos> SELECT FIRST(*) FROM d1001;
first(ts) | first(current) | first(voltage) | first(phase) |
=========================================================================================
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
Query OK, 1 row(s) in set (0.000849s)
```
##### Tag columns
Starting from version 2.0.14, _tag columns_ can be specified in queries on regular tables, and their values are returned together with the data of the regular columns.
```mysql
taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
location | groupid | current |
======================================================================
Beijing.Chaoyang | 2 | 10.30000 |
Beijing.Chaoyang | 2 | 12.60000 |
Query OK, 2 row(s) in set (0.003112s)
```
Note: on a regular table the wildcard * does not include _tag columns_.
#### Result set column names
In the ```SELECT``` clause, if no column names are specified for the result set, the expression names in the ```SELECT``` clause are used as the default column names. ```AS``` can also be used to rename columns in the result set. For example:
```
taos> select ts, ts as primary_key_ts from d1001;
```mysql
taos> SELECT ts, ts AS primary_key_ts FROM d1001;
ts | primary_key_ts |
====================================================
2018-10-03 14:38:05.000 | 2018-10-03 14:38:05.000 |
......@@ -434,53 +448,53 @@ Query OK, 3 row(s) in set (0.001191s)
The FROM keyword can be followed by a list of tables (or supertables), or by the result of a subquery.
If the current database is not specified, a table name can be prefixed with its database name to indicate which database it belongs to, e.g. ```power.d1001``` to use a table across databases.
```
```mysql
SELECT * FROM power.d1001;
------------------------------
use power;
USE power;
SELECT * FROM d1001;
```
#### Special functions
Some special query functions can be executed without a FROM clause. Get the current database with database():
```
taos> SELECT database();
```mysql
taos> SELECT DATABASE();
database() |
=================================
power |
Query OK, 1 row(s) in set (0.000079s)
```
If no default database was specified at login and the ```use``` command has not been issued to switch databases, NULL is returned.
```
taos> SELECT database();
```mysql
taos> SELECT DATABASE();
database() |
=================================
NULL |
Query OK, 1 row(s) in set (0.000184s)
```
Get the server and client version numbers:
```
taos> SELECT client_version();
```mysql
taos> SELECT CLIENT_VERSION();
client_version() |
===================
2.0.0.0 |
Query OK, 1 row(s) in set (0.000070s)
taos> SELECT server_version();
taos> SELECT SERVER_VERSION();
server_version() |
===================
2.0.0.0 |
Query OK, 1 row(s) in set (0.000077s)
```
Server status check statement. If the server is healthy, a number (for example 1) is returned; if it is not, an error code is returned. This SQL syntax is compatible with the status checks that connection pools and third-party tools perform against TDengine, and it avoids connection-pool connections being dropped because of an ill-chosen heartbeat SQL statement.
```
taos> SELECT server_status();
```mysql
taos> SELECT SERVER_STATUS();
server_status() |
==================
1 |
Query OK, 1 row(s) in set (0.000074s)
taos> SELECT server_status() as status;
taos> SELECT SERVER_STATUS() AS status;
status |
==============
1 |
......@@ -493,15 +507,15 @@ Query OK, 1 row(s) in set (0.000081s)
#### Tips
Get the names of all subtables of a supertable together with their tag values:
```
```mysql
SELECT TBNAME, location FROM meters;
```
Count the number of subtables under a supertable:
```
```mysql
SELECT COUNT(TBNAME) FROM meters;
```
Both of the queries above only support tag (TAGS) filter conditions in the WHERE clause. For example:
```
```mysql
taos> SELECT TBNAME, location FROM meters;
tbname | location |
==================================================================
......@@ -511,7 +525,7 @@ taos> SELECT TBNAME, location FROM meters;
d1001 | Beijing.Chaoyang |
Query OK, 4 row(s) in set (0.000881s)
taos> SELECT count(tbname) FROM meters WHERE groupId > 2;
taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
count(tbname) |
========================
2 |
......@@ -545,7 +559,7 @@ Query OK, 1 row(s) in set (0.001091s)
- For the examples below, table tb1 is created with the following statement:
```mysql
CREATE TABLE tb1 (ts timestamp, col1 int, col2 float, col3 binary(50));
CREATE TABLE tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50));
```
- Query all records of tb1 from the past hour:
......@@ -563,7 +577,7 @@ Query OK, 1 row(s) in set (0.001091s)
- Query the sum of col1 and col2, named complex, for records with a timestamp later than 2018-06-01 08:00:00.000 and col2 greater than 1.2, outputting only 10 records starting from the 5th:
```mysql
SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' and col2 > 1.2 LIMIT 10 OFFSET 5;
SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
```
- Query records from the past 10 minutes where col2 is greater than 3.14, and write the results to the file `/home/testoutpu.csv`.
......@@ -590,13 +604,13 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
Example:
```mysql
taos> SELECT COUNT(*), COUNT(VOLTAGE) FROM meters;
taos> SELECT COUNT(*), COUNT(voltage) FROM meters;
count(*) | count(voltage) |
================================================
9 | 9 |
Query OK, 1 row(s) in set (0.004475s)
taos> SELECT COUNT(*), COUNT(VOLTAGE) FROM d1001;
taos> SELECT COUNT(*), COUNT(voltage) FROM d1001;
count(*) | count(voltage) |
================================================
3 | 3 |
......@@ -620,7 +634,7 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
11.466666751 | 220.444444444 | 0.293333333 |
Query OK, 1 row(s) in set (0.004135s)
taos> SELECT AVG(current), AVG(voltage), AVG(phase) from d1001;
taos> SELECT AVG(current), AVG(voltage), AVG(phase) FROM d1001;
avg(current) | avg(voltage) | avg(phase) |
====================================================================================
11.733333588 | 219.333333333 | 0.316666673 |
......@@ -648,13 +662,13 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
Example:
```mysql
taos> SELECT SUM(current), SUM(voltage), SUM(phase) from meters;
taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM meters;
sum(current) | sum(voltage) | sum(phase) |
================================================================================
103.200000763 | 1984 | 2.640000001 |
Query OK, 1 row(s) in set (0.001702s)
taos> SELECT SUM(current), SUM(voltage), SUM(phase) from d1001;
taos> SELECT SUM(current), SUM(voltage), SUM(phase) FROM d1001;
sum(current) | sum(voltage) | sum(phase) |
================================================================================
35.200000763 | 658 | 0.950000018 |
......@@ -753,7 +767,7 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
Description: returns the first non-NULL value written to a column of a table/supertable.
Return type: same as the column it is applied to.
Applicable columns: all columns.
Notes: 1) to return the first (smallest timestamp) non-NULL value of every column, use FIRST(*); 2) if a column in the result set contains only NULL values, the result for that column is also NULL; 3) if every column in the result set contains only NULL values, no result is returned.
Notes: 1) to return the first (smallest timestamp) non-NULL value of every column, use FIRST(\*); 2) if a column in the result set contains only NULL values, the result for that column is also NULL; 3) if every column in the result set contains only NULL values, no result is returned.
Example:
```mysql
......@@ -777,7 +791,7 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
Description: returns the last non-NULL value written to a column of a table/supertable.
Return type: same as the column it is applied to.
Applicable columns: all columns.
Notes: 1) to return the last (largest timestamp) non-NULL value of every column, use LAST(*); 2) if a column in the result set contains only NULL values, the result for that column is also NULL; if every column in the result set contains only NULL values, no result is returned.
Notes: 1) to return the last (largest timestamp) non-NULL value of every column, use LAST(\*); 2) if a column in the result set contains only NULL values, the result for that column is also NULL; if every column in the result set contains only NULL values, no result is returned.
Example:
```mysql
......@@ -1004,15 +1018,15 @@ SELECT function_list FROM stb_name
**Example:** the smart meter table is created with the following statement:
```mysql
CREATE TABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
```
For the data collected by the smart meters, compute in 10-minute windows over the past 24 hours the average, maximum, and median of the current, as well as the straight line fitted to the current over time. If a window has no computed value, fill it with the previous non-NULL value.
The query used is as follows:
```mysql
SELECT AVG(current),MAX(current),LEASTSQUARES(current, start_val, step_val), PERCENTILE(current, 50) FROM meters
WHERE TS>=NOW-1d
SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), PERCENTILE(current, 50) FROM meters
WHERE ts>=NOW-1d
INTERVAL(10m)
FILL(PREV);
```
......
......@@ -95,7 +95,7 @@ void taos_query_a(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *pa
return;
}
nPrintTsc(sqlstr);
nPrintTsc("%s", sqlstr);
SSqlObj *pSql = (SSqlObj *)calloc(1, sizeof(SSqlObj));
if (pSql == NULL) {
......@@ -365,7 +365,7 @@ static void tscProcessAsyncError(SSchedMsg *pMsg) {
void (*fp)() = pMsg->ahandle;
terrno = *(int32_t*) pMsg->msg;
tfree(pMsg->msg);
(*fp)(pMsg->thandle, NULL, *(int32_t*)pMsg->msg);
(*fp)(pMsg->thandle, NULL, terrno);
}
void tscQueueAsyncError(void(*fp), void *param, int32_t code) {
......
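The hunk above fixes a use-after-free: the original callback read the error code out of `pMsg->msg` after `tfree(pMsg->msg)` had already released it, so the value handed to the callback could be garbage; the fix passes `terrno`, which was saved before the free. A minimal sketch of the corrected pattern, using hypothetical names rather than the real `SSchedMsg` layout:

```c
#include <stdlib.h>

typedef void (*error_cb_t)(void *handle, void *res, int code);

/* Hypothetical stand-in for SSchedMsg; only the fields used here. */
struct sched_msg {
  void *ahandle;   /* callback function pointer */
  void *thandle;   /* user parameter            */
  void *msg;       /* heap-allocated error code */
};

static void process_async_error(struct sched_msg *m) {
  error_cb_t fp = (error_cb_t)m->ahandle;
  int code = *(int *)m->msg;    /* copy the code out first ...      */
  free(m->msg);                 /* ... then release the buffer ...  */
  m->msg = NULL;
  fp(m->thandle, NULL, code);   /* ... and never touch m->msg again */
}
```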
......@@ -327,7 +327,7 @@ TAOS_RES* taos_query_c(TAOS *taos, const char *sqlstr, uint32_t sqlLen, int64_t*
return NULL;
}
nPrintTsc(sqlstr);
nPrintTsc("%s", sqlstr);
SSqlObj* pSql = calloc(1, sizeof(SSqlObj));
if (pSql == NULL) {
......
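The two `nPrintTsc` changes above (and the matching `nPrintHttp` changes further down) close a format-string hole: the SQL text was previously passed as the format argument of a printf-style trace macro, so any `%` sequence inside user-supplied SQL would be interpreted as a conversion specifier. A minimal sketch of the hazard, with a hypothetical `trace_log` macro standing in for the real one:

```c
#include <stdio.h>

/* Hypothetical trace macro; the real nPrintTsc/nPrintHttp are assumed to be
 * printf-style as well. */
#define trace_log(...) fprintf(stderr, __VA_ARGS__)

int main(void) {
  /* User-controlled text that happens to contain a conversion specifier. */
  const char *sql = "insert into t values ('50%s off')";

  /* Unsafe: the SQL itself becomes the format string, so "%s" makes the
   * macro read an argument that was never passed. */
  /* trace_log(sql); */

  /* Safe: a fixed format string; the SQL is treated purely as data. */
  trace_log("%s\n", sql);
  return 0;
}
```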
......@@ -188,8 +188,10 @@ static int64_t doTSBlockIntersect(SSqlObj* pSql, SJoinSupporter* pSupporter1, SJ
tsBufFlush(output2);
tsBufDestroy(pSupporter1->pTSBuf);
pSupporter1->pTSBuf = NULL;
tsBufDestroy(pSupporter2->pTSBuf);
pSupporter2->pTSBuf = NULL;
TSKEY et = taosGetTimestampUs();
tscDebug("%p input1:%" PRId64 ", input2:%" PRId64 ", final:%" PRId64 " in %d vnodes for secondary query after ts blocks "
"intersecting, skey:%" PRId64 ", ekey:%" PRId64 ", numOfVnode:%d, elapsed time:%" PRId64 " us",
......@@ -219,12 +221,9 @@ SJoinSupporter* tscCreateJoinSupporter(SSqlObj* pSql, int32_t index) {
assert (pSupporter->uid != 0);
taosGetTmpfilePath("join-", pSupporter->path);
pSupporter->f = fopen(pSupporter->path, "w");
// todo handle error
if (pSupporter->f == NULL) {
tscError("%p failed to create tmp file:%s, reason:%s", pSql, pSupporter->path, strerror(errno));
}
// do NOT create file here to reduce crash generated file left issue
pSupporter->f = NULL;
return pSupporter;
}
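With this change `tscCreateJoinSupporter` no longer opens the temporary file eagerly: the handle stays NULL, and the file is only created when the first block of compressed timestamps actually has to be written (see the `tsCompRetrieveCallback` hunk below), so an early crash leaves no empty `join-` file behind. A minimal sketch of the lazy-open pattern, with hypothetical names:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical supporter holding only the fields relevant to the sketch. */
struct join_supporter {
  char  path[256];
  FILE *f;          /* stays NULL until the first write */
};

/* Open the temp file on first use, then append one block of data. */
static int supporter_write_block(struct join_supporter *s,
                                 const void *data, size_t len) {
  if (s->f == NULL) {
    s->f = fopen(s->path, "w");
    if (s->f == NULL) {
      fprintf(stderr, "failed to create tmp file:%s, reason:%s\n",
              s->path, strerror(errno));
      return -1;      /* caller aborts the subquery and surfaces the error */
    }
  }
  if (fwrite(data, len, 1, s->f) != 1) {
    return -1;
  }
  return 0;
}
```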
......@@ -244,12 +243,19 @@ static void tscDestroyJoinSupporter(SJoinSupporter* pSupporter) {
tscFieldInfoClear(&pSupporter->fieldsInfo);
if (pSupporter->pTSBuf != NULL) {
tsBufDestroy(pSupporter->pTSBuf);
pSupporter->pTSBuf = NULL;
}
unlink(pSupporter->path);
if (pSupporter->f != NULL) {
fclose(pSupporter->f);
unlink(pSupporter->path);
pSupporter->f = NULL;
}
if (pSupporter->pVgroupTables != NULL) {
taosArrayDestroy(pSupporter->pVgroupTables);
pSupporter->pVgroupTables = NULL;
......@@ -526,6 +532,8 @@ static void quitAllSubquery(SSqlObj* pSqlObj, SJoinSupporter* pSupporter) {
tscError("%p all subquery return and query failed, global code:%s", pSqlObj, tstrerror(pSqlObj->res.code));
freeJoinSubqueryObj(pSqlObj);
}
tscDestroyJoinSupporter(pSupporter);
}
// update the query time range according to the join results on timestamp
......@@ -921,6 +929,22 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
}
if (numOfRows > 0) { // write the compressed timestamp to disk file
if(pSupporter->f == NULL) {
pSupporter->f = fopen(pSupporter->path, "w");
if (pSupporter->f == NULL) {
tscError("%p failed to create tmp file:%s, reason:%s", pSql, pSupporter->path, strerror(errno));
pParentSql->res.code = TAOS_SYSTEM_ERROR(errno);
quitAllSubquery(pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
}
}
fwrite(pRes->data, (size_t)pRes->numOfRows, 1, pSupporter->f);
fclose(pSupporter->f);
pSupporter->f = NULL;
......@@ -930,6 +954,9 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
tscError("%p invalid ts comp file from vnode, abort subquery, file size:%d", pSql, numOfRows);
pParentSql->res.code = TAOS_SYSTEM_ERROR(errno);
quitAllSubquery(pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......
......@@ -181,7 +181,7 @@ void httpProcessMultiSql(HttpContext *pContext) {
char *sql = httpGetCmdsString(pContext, cmd->sql);
httpTraceL("context:%p, fd:%d, user:%s, process pos:%d, start query, sql:%s", pContext, pContext->fd, pContext->user,
multiCmds->pos, sql);
nPrintHttp(sql);
nPrintHttp("%s", sql);
taos_query_a(pContext->session->taos, sql, httpProcessMultiSqlCallBack, (void *)pContext);
}
......@@ -329,7 +329,7 @@ void httpProcessSingleSqlCmd(HttpContext *pContext) {
}
httpTraceL("context:%p, fd:%d, user:%s, start query, sql:%s", pContext, pContext->fd, pContext->user, sql);
nPrintHttp(sql);
nPrintHttp("%s", sql);
taos_query_a(pSession->taos, sql, httpProcessSingleSqlCallBack, (void *)pContext);
}
......
......@@ -27,7 +27,7 @@ void vnodeCleanupRead(void);
int32_t vnodeWriteToRQueue(void *pVnode, void *pCont, int32_t contLen, int8_t qtype, void *rparam);
void vnodeFreeFromRQueue(void *pVnode, SVReadMsg *pRead);
int32_t vnodeProcessRead(void *pVnode, SVReadMsg *pRead);
void vnodeWaitReadCompleted(void *pVnode);
void vnodeWaitReadCompleted(SVnodeObj *pVnode);
#ifdef __cplusplus
}
......
......@@ -27,7 +27,7 @@ void vnodeCleanupWrite(void);
int32_t vnodeWriteToWQueue(void *pVnode, void *pHead, int32_t qtype, void *pRpcMsg);
void vnodeFreeFromWQueue(void *pVnode, SVWriteMsg *pWrite);
int32_t vnodeProcessWrite(void *pVnode, void *pHead, int32_t qtype, void *pRspRet);
void vnodeWaitWriteCompleted(void *pVnode);
void vnodeWaitWriteCompleted(SVnodeObj *pVnode);
#ifdef __cplusplus
}
......
......@@ -420,9 +420,6 @@ void vnodeCleanUp(SVnodeObj *pVnode) {
vnodeSetClosingStatus(pVnode);
// release local resources only after cutting off outside connections
qQueryMgmtNotifyClosed(pVnode->qMgmt);
// stop replication module
if (pVnode->sync > 0) {
int64_t sync = pVnode->sync;
......
......@@ -436,4 +436,9 @@ int32_t vnodeNotifyCurrentQhandle(void *handle, void *qhandle, int32_t vgId) {
return rpcReportProgress(handle, (char *)pMsg, sizeof(SRetrieveTableMsg));
}
void vnodeWaitReadCompleted(void *pVnode) {}
\ No newline at end of file
void vnodeWaitReadCompleted(SVnodeObj *pVnode) {
while (pVnode->queuedRMsg > 0) {
vTrace("vgId:%d, queued rmsg num:%d", pVnode->vgId, pVnode->queuedRMsg);
taosMsleep(10);
}
}
\ No newline at end of file
......@@ -18,6 +18,8 @@
#include "taosmsg.h"
#include "query.h"
#include "vnodeStatus.h"
#include "vnodeRead.h"
#include "vnodeWrite.h"
char* vnodeStatus[] = {
"init",
......@@ -56,7 +58,7 @@ static bool vnodeSetClosingStatusImp(SVnodeObj* pVnode) {
bool set = false;
pthread_mutex_lock(&pVnode->statusMutex);
if (pVnode->status == TAOS_VN_STATUS_READY) {
if (pVnode->status == TAOS_VN_STATUS_READY || pVnode->status == TAOS_VN_STATUS_INIT) {
pVnode->status = TAOS_VN_STATUS_CLOSING;
set = true;
} else {
......@@ -68,16 +70,18 @@ static bool vnodeSetClosingStatusImp(SVnodeObj* pVnode) {
}
bool vnodeSetClosingStatus(SVnodeObj* pVnode) {
if (!vnodeInInitStatus(pVnode)) {
// it may be in updating or reset state, then it shall wait
int32_t i = 0;
while (!vnodeSetClosingStatusImp(pVnode)) {
if (++i % 1000 == 0) {
sched_yield();
}
int32_t i = 0;
while (!vnodeSetClosingStatusImp(pVnode)) {
if (++i % 1000 == 0) {
sched_yield();
}
}
// release local resources only after cutting off outside connections
qQueryMgmtNotifyClosed(pVnode->qMgmt);
vnodeWaitReadCompleted(pVnode);
vnodeWaitWriteCompleted(pVnode);
return true;
}
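The reworked close path above drops the special-casing of the INIT state: the caller now spins (yielding every 1000 attempts) until the READY/INIT to CLOSING transition succeeds, and only after outside connections are cut off via `qQueryMgmtNotifyClosed` does it wait for the queued read and write messages to drain. A compact sketch of that sequence, with hypothetical field names in place of the real `SVnodeObj`:

```c
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

enum vn_status { VN_INIT, VN_READY, VN_CLOSING };

/* Hypothetical vnode with only the fields this sketch needs. */
struct vnode {
  pthread_mutex_t status_mutex;
  enum vn_status  status;
  volatile int    queued_rmsg;
  volatile int    queued_wmsg;
};

/* One attempt at the READY/INIT -> CLOSING transition, under the lock. */
static bool try_set_closing(struct vnode *v) {
  bool set = false;
  pthread_mutex_lock(&v->status_mutex);
  if (v->status == VN_READY || v->status == VN_INIT) {
    v->status = VN_CLOSING;
    set = true;
  }
  pthread_mutex_unlock(&v->status_mutex);
  return set;
}

static void close_vnode(struct vnode *v) {
  /* The vnode may be updating or resetting; keep retrying until it can close. */
  for (int i = 1; !try_set_closing(v); ++i) {
    if (i % 1000 == 0) sched_yield();
  }
  /* Outside connections are now cut off; wait for in-flight messages to drain. */
  while (v->queued_rmsg > 0 || v->queued_wmsg > 0) {
    sched_yield();
  }
}
```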
......@@ -96,11 +100,11 @@ bool vnodeSetUpdatingStatus(SVnodeObj* pVnode) {
return set;
}
bool vnodeSetResetStatus(SVnodeObj* pVnode) {
static bool vnodeSetResetStatusImp(SVnodeObj* pVnode) {
bool set = false;
pthread_mutex_lock(&pVnode->statusMutex);
if (pVnode->status != TAOS_VN_STATUS_CLOSING && pVnode->status != TAOS_VN_STATUS_INIT) {
if (pVnode->status == TAOS_VN_STATUS_READY || pVnode->status == TAOS_VN_STATUS_INIT) {
pVnode->status = TAOS_VN_STATUS_RESET;
set = true;
} else {
......@@ -111,6 +115,22 @@ bool vnodeSetResetStatus(SVnodeObj* pVnode) {
return set;
}
bool vnodeSetResetStatus(SVnodeObj* pVnode) {
int32_t i = 0;
while (!vnodeSetResetStatusImp(pVnode)) {
if (++i % 1000 == 0) {
sched_yield();
}
}
// release local resources only after cutting off outside connections
qQueryMgmtNotifyClosed(pVnode->qMgmt);
vnodeWaitReadCompleted(pVnode);
vnodeWaitWriteCompleted(pVnode);
return true;
}
bool vnodeInInitStatus(SVnodeObj* pVnode) {
bool in = false;
pthread_mutex_lock(&pVnode->statusMutex);
......
......@@ -345,4 +345,9 @@ static int32_t vnodePerformFlowCtrl(SVWriteMsg *pWrite) {
}
}
void vnodeWaitWriteCompleted(void *pVnode) {}
\ No newline at end of file
void vnodeWaitWriteCompleted(SVnodeObj *pVnode) {
while (pVnode->queuedWMsg > 0) {
vTrace("vgId:%d, queued wmsg num:%d", pVnode->vgId, pVnode->queuedWMsg);
taosMsleep(10);
}
}
......@@ -98,7 +98,15 @@ pipeline {
sh '''
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
mvn clean package assembly:single -DskipTests >/dev/null
java -jar target/jdbcChecker-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
java -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/src/connector/jdbc
mvn clean package -Dmaven.test.skip=true >/dev/null
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
java --class-path=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
......
......@@ -73,8 +73,8 @@ class TDTestCase:
tdSql.query("select spread(c1) from st")
tdSql.checkRows(0)
# tdSql.query("select stddev(c1) from st")
# tdSql.checkRows(0)
tdSql.query("select stddev(c1) from st")
tdSql.checkRows(0)
tdSql.query("select sum(c1) from st")
tdSql.checkRows(0)
......
......@@ -144,8 +144,10 @@ class TDTestCase:
print("apercentile result: %s" % tdSql.getData(0, 0))
# Test case for: https://jira.taosdata.com:18080/browse/TD-2609
# modified for : https://jira.taosdata.com:18080/browse/TD-2627
tdSql.execute("create table st(ts timestamp, k int)")
tdSql.execute("insert into st values(now, -100)")
tdSql.execute("insert into st values(now, -100)(now+1a,-99)")
tdSql.query("select apercentile(k, 20) from st")
tdSql.checkData(0, 0, -100.00)
......
......@@ -44,6 +44,20 @@ class TDTestCase:
tdSql.query("select * from db.st where ts='2020-05-13 10:00:00.000'")
tdSql.checkRows(1)
tdSql.query("select tbname, dev from dev_001")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.query("select tbname, dev, tagtype from dev_001")
tdSql.checkRows(2)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.checkData(0, 2, 1)
tdSql.checkData(1, 0, 'dev_001')
tdSql.checkData(1, 1, 'dev_01')
tdSql.checkData(1, 2, 1)
## test case for https://jira.taosdata.com:18080/browse/TD-2488
tdSql.execute("create table m1(ts timestamp, k int) tags(a int)")
tdSql.execute("create table t1 using m1 tags(1)")
......@@ -63,6 +77,8 @@ class TDTestCase:
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
......
......@@ -100,7 +100,23 @@ class TDTestCase:
tdSql.checkData(1, 1, None)
tdSql.checkData(2, 1, None)
# test case for https://jira.taosdata.com:18080/browse/TD-2659, https://jira.taosdata.com:18080/browse/TD-2660
tdSql.execute("create database test3")
tdSql.execute("use test3")
tdSql.execute("create table tb(ts timestamp, c int)")
tdSql.execute("insert into tb values('2020-10-30 18:11:56.680', -111)")
tdSql.execute("insert into tb values('2020-11-19 18:11:45.773', null)")
tdSql.execute("insert into tb values('2020-12-09 18:11:17.098', null)")
tdSql.execute("insert into tb values('2020-12-29 11:00:49.412', 1)")
tdSql.execute("insert into tb values('2020-12-29 11:00:50.412', 2)")
tdSql.execute("insert into tb values('2020-12-29 11:00:52.412', 3)")
tdSql.query("select first(ts),twa(c) from tb interval(14a)")
tdSql.checkRows(6)
tdSql.query("select twa(c) from tb group by c")
tdSql.checkRows(4)
def stop(self):
tdSql.close()
......
......@@ -48,12 +48,16 @@ class TDTestCase:
tdSql.execute("insert into car3 values('2019-01-01 00:00:01.389', 1)")
tdSql.execute("insert into car4 values('2019-01-01 00:00:01.829', 1)")
tdSql.error("create table strm as select count(*) from cars")
tdSql.execute("create table strm as select count(*) from cars interval(4s)")
tdSql.waitedQuery("select * from strm", 2, 100)
tdSql.checkData(0, 1, 11)
tdSql.checkData(1, 1, 2)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
......
......@@ -49,7 +49,7 @@ class TDTestCase:
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath+ "/build/bin/"
os.system("yes | %staosdemo -t %d -n %d" % (binPath,self.numberOfTables, self.numberOfRecords))
os.system("yes | %staosdemo -t %d -n %d -x" % (binPath,self.numberOfTables, self.numberOfRecords))
tdSql.execute("use test")
tdSql.query("select count(*) from meters")
......@@ -61,6 +61,8 @@ class TDTestCase:
tdSql.query("select apercentile(f1, 1) from test.meters interval(10s)")
tdSql.checkRows(11)
tdSql.error("select loc, count(loc) from test.meters")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
......