Commit a341e295 authored by S Shengliang Guan

Merge remote-tracking branch 'origin/feature/m1' into hotfix/TD-6272

......@@ -4,7 +4,7 @@ PROJECT(TDengine)
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "2.1.7.1")
SET(TD_VER_NUMBER "2.1.7.2")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -2,18 +2,18 @@
## <a class="anchor" id="intro"></a>TDengine Introduction
TDengine is an innovative big data processing product launched by TAOS Data to meet the fast-growing IoT big data market and its technical challenges. It does not depend on any third-party software, nor is it an optimized or repackaged open-source database or stream-computing product; it was developed in-house after absorbing the strengths of traditional relational databases, NoSQL databases, stream-computing engines, message queues, and similar software, and it has distinct advantages of its own in time-series and spatio-temporal big data processing.
One of TDengine's modules is a time-series database. Beyond that, to reduce development complexity and maintenance effort, TDengine also provides caching, message queuing, subscription, and stream computing, offering a full-stack solution for IoT and Industrial Internet big data and making it an efficient, easy-to-use IoT big data platform. Compared with typical big data platforms such as Hadoop, it has the following distinctive characteristics:
* __10x+ performance gains__: With an innovative storage engine design, a single core can handle at least 20,000 requests per second, insert millions of data points, and read out more than ten million data points — over ten times faster than existing general-purpose databases.
* __Hardware or cloud cost cut to 1/5__: Thanks to its performance, the computing resources needed are less than 1/5 of a general-purpose big data solution; with columnar storage and advanced compression, storage takes less than 1/10 of the space of a general-purpose database.
* __Full-stack time-series data engine__: Database, message queue, cache, and stream computing are integrated into one product, so applications no longer need to integrate Kafka/Redis/HBase/Spark/HDFS, greatly reducing development and maintenance complexity and cost.
* __Powerful analytics__: Whether the data is ten years or one second old, it can be queried by simply specifying a time range. Data can be aggregated along the time axis or across multiple devices, and ad-hoc queries can be issued at any time from the Shell, Python, R, or MATLAB (see the brief example below).
* __Seamless integration with third-party tools__: Integrates with Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R, and more without a single line of code. OPC, Hadoop, Spark, and others will be supported later, and BI tools will also connect seamlessly.
* __Zero operations cost, zero learning cost__: Cluster installation is quick and simple, there is no need for manual sharding, and backup is real-time. It offers SQL-like syntax, supports RESTful access, and provides Python/Java/C/C++/C#/Go/Node.js connectors; being similar to MySQL, it has essentially zero learning cost.
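As a rough illustration of such an ad-hoc query, here is a minimal sketch using the `taos` Python connector's DB-API style interface; the `power` database and the `meters` super table (with a `current` column and a `location` tag) are hypothetical placeholders, not a shipped schema:

```python
import taos

# placeholder connection parameters; adjust to the actual deployment
conn = taos.connect(host='127.0.0.1', user='root',
                    password='taosdata', database='power')
cursor = conn.cursor()

# hypothetical super table: meters(ts TIMESTAMP, current FLOAT) TAGS (location NCHAR(64))
# average current per location over the last day, aggregated into 1-hour windows
cursor.execute(
    "SELECT AVG(current) FROM meters "
    "WHERE ts > now - 1d "
    "INTERVAL(1h) GROUP BY location"
)
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
```

The same statement can equally be issued from the taos shell or over the RESTful interface.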
Adopting TDengine can sharply reduce the total cost of ownership of a typical IoT, connected-vehicle, or Industrial Internet big data platform. It should be noted, however, that because it fully exploits the characteristics of IoT time-series data, TDengine is not suited to general-purpose data such as web crawling, Weibo, WeChat, e-commerce, ERP, or CRM data.
![TDengine technology ecosystem](page://images/eco_system.png)
<center>Figure 1. TDengine technology ecosystem</center>
......@@ -21,42 +21,47 @@ One of TDengine's modules is a time-series database. Beyond that, to reduce development
## <a class="anchor" id="scenes"></a>Overall Applicable Scenarios for TDengine
As an IoT big data platform, TDengine's typical use cases are within the IoT domain, with users who have a meaningful volume of data. The rest of this document focuses on systems within that scope; systems outside it, such as CRM and ERP, are not covered here.
### Data Source Characteristics and Requirements
From the data source perspective, designers can assess how well TDengine fits the target application system along the following dimensions.
|Data source characteristics and requirements|Not suitable|Possibly suitable|Highly suitable|Notes|
|---|---|---|---|---|
|Very large overall data volume| | | √ | TDengine offers excellent horizontal scalability in capacity, paired with a highly compressed storage layout, achieving best-in-class storage efficiency.|
|Occasionally or continuously very high ingest rates| | | √ | TDengine's performance far exceeds comparable products; it can keep processing heavy ingest on the same hardware, and it ships a performance evaluation tool that is easy to run in the user's own environment.|
|Very large number of data sources| | | √ | TDengine's design includes optimizations specifically for large numbers of data sources, covering both writes and queries, and it is particularly good at handling massive numbers (tens of millions or more) of data sources efficiently.|
### System Architecture Requirements
|System architecture requirements|Not suitable|Possibly suitable|Highly suitable|Notes|
|---|---|---|---|---|
|A simple, reliable architecture is required| | | √ | TDengine's architecture is very simple and reliable, with built-in message queuing, caching, stream computing, and monitoring, so no additional third-party products need to be integrated.|
|Fault tolerance and high reliability are required| | | √ | TDengine's cluster feature automatically provides fault tolerance, disaster recovery, and other high-reliability capabilities.|
|Standards compliance| | | √ | TDengine provides its main functionality through standard SQL and follows established standards.|
### System Functionality Requirements
|System functionality requirements|Not suitable|Possibly suitable|Highly suitable|Notes|
|---|---|---|---|---|
|A complete set of built-in data processing algorithms is required| | √ | | TDengine implements common data processing algorithms, but it cannot yet cover every requirement of every industry, so specialized processing still has to be handled at the application layer.|
|Heavy cross-table query workloads| | √ | |Such workloads are better handled by a relational database, or by combining TDengine with a relational database.|
### System Performance Requirements
|System performance requirements|Not suitable|Possibly suitable|Highly suitable|Notes|
|---|---|---|---|---|
|Large overall processing capacity is required| | | √ | TDengine's cluster feature makes it easy to combine multiple servers to scale up processing capacity.|
|High-speed data processing is required| | | √ | TDengine's storage and data processing design, optimized specifically for IoT, typically delivers several times the processing speed of comparable products.|
|Fast processing of fine-grained data is required| | | √ |In this respect TDengine's performance fully matches relational and NoSQL data processing systems.|
### System Maintenance Requirements
|System maintenance requirements|Not suitable|Possibly suitable|Highly suitable|Notes|
|---|---|---|---|---|
|Reliable operation is required| | | √ | TDengine's architecture is very stable and reliable, routine maintenance is simple and convenient, and the demands on operators are clear and minimal, eliminating human error and accidents as far as possible.|
|Operations learning cost must stay manageable| | | √ |Same as above.|
|A large pool of experienced staff in the job market is required| √ | | | TDengine is a new-generation product, so experienced staff are still limited in the job market. However, the learning cost is low, and as the vendor we also provide operations training and support services.|
......@@ -142,6 +142,7 @@ function install_bin() {
if [ "$osType" != "Darwin" ]; then
${csudo} rm -f ${bin_link_dir}/taosd || :
${csudo} rm -f ${bin_link_dir}/taosdemo || :
${csudo} rm -f ${bin_link_dir}/perfMonitor || :
${csudo} rm -f ${bin_link_dir}/taosdump || :
${csudo} rm -f ${bin_link_dir}/set_core || :
fi
......@@ -167,6 +168,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/taosd ] && ${csudo} ln -s ${install_main_dir}/bin/taosd ${bin_link_dir}/taosd || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo} ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo} ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/perfMonitor ] && ${csudo} ln -s ${install_main_dir}/bin/perfMonitor ${bin_link_dir}/perfMonitor || :
[ -x ${install_main_dir}/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
fi
......
name: tdengine
base: core18
version: '2.1.7.1'
version: '2.1.7.2'
icon: snap/gui/t-dengine.svg
summary: an open-source big data platform designed and optimized for IoT.
description: |
......@@ -72,7 +72,7 @@ parts:
- usr/bin/taosd
- usr/bin/taos
- usr/bin/taosdemo
- usr/lib/libtaos.so.2.1.7.1
- usr/lib/libtaos.so.2.1.7.2
- usr/lib/libtaos.so.1
- usr/lib/libtaos.so
......
......@@ -190,6 +190,7 @@ void tscFieldInfoClear(SFieldInfo* pFieldInfo);
void tscFieldInfoCopy(SFieldInfo* pFieldInfo, const SFieldInfo* pSrc, const SArray* pExprList);
static FORCE_INLINE int32_t tscNumOfFields(SQueryInfo* pQueryInfo) { return pQueryInfo->fieldsInfo.numOfOutput; }
int32_t tscGetFirstInvisibleFieldPos(SQueryInfo* pQueryInfo);
int32_t tscFieldInfoCompare(const SFieldInfo* pFieldInfo1, const SFieldInfo* pFieldInfo2, int32_t *diffSize);
void tscInsertPrimaryTsSourceColumn(SQueryInfo* pQueryInfo, uint64_t uid);
......
......@@ -168,6 +168,9 @@ static void tscProcessAsyncRetrieveImpl(void *param, TAOS_RES *tres, int numOfRo
} else {
pRes->code = numOfRows;
}
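// make sure a real error code reaches the error callback below; fall back to
// TSDB_CODE_TSC_INVALID_QHANDLE when no error has been recorded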
if (pRes->code == TSDB_CODE_SUCCESS) {
pRes->code = TSDB_CODE_TSC_INVALID_QHANDLE;
}
tscAsyncResultOnError(pSql);
return;
......
......@@ -643,7 +643,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
for(int32_t j = 0; j < numOfExpr; ++j) {
pCtx[j].pOutput += (pCtx[j].outputBytes * numOfRows);
if (pCtx[j].functionId == TSDB_FUNC_TOP || pCtx[j].functionId == TSDB_FUNC_BOTTOM) {
pCtx[j].ptsOutputBuf = pCtx[0].pOutput;
if(j>0) pCtx[j].ptsOutputBuf = pCtx[j-1].pOutput;
}
}
......
......@@ -1757,6 +1757,7 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRow
pSql->res.numOfRows = 0;
code = doPackSendDataBlock(pSql, pInsertParam, pTableMeta, count, pTableDataBlock);
if (code != TSDB_CODE_SUCCESS) {
pParentSql->res.code = code;
goto _error;
}
......
......@@ -206,6 +206,8 @@ static int normalStmtPrepare(STscStmt* stmt) {
return code;
}
start = i + token.n;
} else if (token.type == TK_ILLEGAL) {
return invalidOperationMsg(tscGetErrorMsgPayload(&stmt->pSql->cmd), "invalid sql");
}
i += token.n;
......
......@@ -422,7 +422,6 @@ int32_t readFromFile(char *name, uint32_t *len, void **buf) {
return TSDB_CODE_TSC_APP_ERROR;
}
close(fd);
tfree(*buf);
return TSDB_CODE_SUCCESS;
}
......@@ -888,6 +887,7 @@ int32_t tscValidateSqlInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
}
case TSDB_SQL_SELECT: {
const char * msg1 = "no nested query supported in union clause";
code = loadAllTableMeta(pSql, pInfo);
if (code != TSDB_CODE_SUCCESS) {
return code;
......@@ -901,6 +901,10 @@ int32_t tscValidateSqlInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
tscTrace("0x%"PRIx64" start to parse the %dth subclause, total:%"PRIzu, pSql->self, i, size);
if (size > 1 && pSqlNode->from && pSqlNode->from->type == SQL_NODE_FROM_SUBQUERY) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
}
// normalizeSqlNode(pSqlNode); // normalize the column name in each function
if ((code = validateSqlNode(pSql, pSqlNode, pQueryInfo)) != TSDB_CODE_SUCCESS) {
return code;
......@@ -2603,13 +2607,12 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
// set the first column ts for diff query
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
colIndex += 1;
SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0};
SExprInfo* pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
TSDB_KEYSIZE, getNewResColId(pCmd), TSDB_KEYSIZE, false);
SColumnList ids = createColumnList(1, 0, 0);
insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
}
SExprInfo* pExpr = tscExprAppend(pQueryInfo, functionId, &index, resultType, resultSize, getNewResColId(pCmd), intermediateResSize, false);
......@@ -2882,7 +2885,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
const int32_t TS_COLUMN_INDEX = PRIMARYKEY_TIMESTAMP_COL_INDEX;
SColumnList ids = createColumnList(1, index.tableIndex, TS_COLUMN_INDEX);
insertResultField(pQueryInfo, TS_COLUMN_INDEX, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
aAggs[TSDB_FUNC_TS].name, pExpr);
colIndex += 1; // the first column is ts
......@@ -4884,10 +4887,6 @@ static void cleanQueryExpr(SCondExpr* pCondExpr) {
tSqlExprDestroy(pCondExpr->pTableCond);
}
if (pCondExpr->pTagCond) {
tSqlExprDestroy(pCondExpr->pTagCond);
}
if (pCondExpr->pColumnCond) {
tSqlExprDestroy(pCondExpr->pColumnCond);
}
......@@ -6943,9 +6942,7 @@ static int32_t doAddGroupbyColumnsOnDemand(SSqlCmd* pCmd, SQueryInfo* pQueryInfo
s = &pSchema[colIndex];
}
}
size_t size = tscNumOfExprs(pQueryInfo);
if (TSDB_COL_IS_TAG(pColIndex->flag)) {
int32_t f = TSDB_FUNC_TAG;
......@@ -6953,8 +6950,10 @@ static int32_t doAddGroupbyColumnsOnDemand(SSqlCmd* pCmd, SQueryInfo* pQueryInfo
f = TSDB_FUNC_TAGPRJ;
}
int32_t pos = tscGetFirstInvisibleFieldPos(pQueryInfo);
SColumnIndex index = {.tableIndex = pQueryInfo->groupbyExpr.tableIndex, .columnIndex = colIndex};
SExprInfo* pExpr = tscExprAppend(pQueryInfo, f, &index, s->type, s->bytes, getNewResColId(pCmd), s->bytes, true);
SExprInfo* pExpr = tscExprInsert(pQueryInfo, pos, f, &index, s->type, s->bytes, getNewResColId(pCmd), s->bytes, true);
memset(pExpr->base.aliasName, 0, sizeof(pExpr->base.aliasName));
tstrncpy(pExpr->base.aliasName, s->name, sizeof(pExpr->base.aliasName));
......@@ -6964,13 +6963,15 @@ static int32_t doAddGroupbyColumnsOnDemand(SSqlCmd* pCmd, SQueryInfo* pQueryInfo
// NOTE: tag column does not add to source column list
SColumnList ids = createColumnList(1, 0, pColIndex->colIndex);
insertResultField(pQueryInfo, (int32_t)size, &ids, s->bytes, (int8_t)s->type, s->name, pExpr);
insertResultField(pQueryInfo, pos, &ids, s->bytes, (int8_t)s->type, s->name, pExpr);
} else {
// if this query is "group by" normal column, time window query is not allowed
if (isTimeWindowQuery(pQueryInfo)) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
}
size_t size = tscNumOfExprs(pQueryInfo);
bool hasGroupColumn = false;
for (int32_t j = 0; j < size; ++j) {
SExprInfo* pExpr = tscExprGet(pQueryInfo, j);
......
......@@ -841,6 +841,11 @@ static int32_t tscEstimateQueryMsgSize(SSqlObj *pSql) {
tableSerialize = totalTables * sizeof(STableIdInfo);
}
SCond* pCond = &pQueryInfo->tagCond.tbnameCond;
if (pCond->len > 0) {
srcColListSize += pCond->len;
}
return MIN_QUERY_MSG_PKT_SIZE + minMsgSize() + sizeof(SQueryTableMsg) + srcColListSize + srcColFilterSize + srcTagFilterSize +
exprSize + tsBufSize + tableSerialize + sqlLen + 4096 + pQueryInfo->bufLen;
}
......
......@@ -15,8 +15,9 @@
#define _GNU_SOURCE
#include "os.h"
#include "texpr.h"
#include "tsched.h"
#include "qTsbuf.h"
#include "tcompare.h"
#include "tscLog.h"
......@@ -2264,7 +2265,7 @@ void tscFirstRoundCallback(void* param, TAOS_RES* tres, int code) {
destroySup(pSup);
taos_free_result(pSql);
parent->res.code = code;
parent->res.code = c;
tscAsyncResultOnError(parent);
return;
}
......@@ -2425,6 +2426,26 @@ int32_t tscHandleFirstRoundStableQuery(SSqlObj *pSql) {
return terrno;
}
typedef struct SPair {
int32_t first;
int32_t second;
} SPair;
static void doSendQueryReqs(SSchedMsg* pSchedMsg) {
SSqlObj* pSql = pSchedMsg->ahandle;
SPair* p = pSchedMsg->msg;
for(int32_t i = p->first; i < p->second; ++i) {
SSqlObj* pSub = pSql->pSubs[i];
SRetrieveSupport* pSupport = pSub->param;
tscDebug("0x%"PRIx64" sub:0x%"PRIx64" launch subquery, orderOfSub:%d.", pSql->self, pSub->self, pSupport->subqueryIndex);
tscBuildAndSendRequest(pSub, NULL);
}
tfree(p);
}
int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
SSqlRes *pRes = &pSql->res;
SSqlCmd *pCmd = &pSql->cmd;
......@@ -2547,13 +2568,33 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
doCleanupSubqueries(pSql, i);
return pRes->code;
}
for(int32_t j = 0; j < pState->numOfSub; ++j) {
SSqlObj* pSub = pSql->pSubs[j];
SRetrieveSupport* pSupport = pSub->param;
tscDebug("0x%"PRIx64" sub:0x%"PRIx64" launch subquery, orderOfSub:%d.", pSql->self, pSub->self, pSupport->subqueryIndex);
tscBuildAndSendRequest(pSub, NULL);
// concurrently send the query requests.
const int32_t MAX_REQUEST_PER_TASK = 8;
int32_t numOfTasks = (pState->numOfSub + MAX_REQUEST_PER_TASK - 1)/MAX_REQUEST_PER_TASK;
assert(numOfTasks >= 1);
int32_t num = (pState->numOfSub/numOfTasks) + 1;
tscDebug("0x%"PRIx64 " query will be sent by %d threads", pSql->self, numOfTasks);
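// e.g. (illustrative numbers) with 20 subqueries and MAX_REQUEST_PER_TASK = 8:
// numOfTasks = 3 and num = 7, so the scheduled tasks cover the subquery index
// ranges [0,7), [7,14) and [14,20); the last task picks up the remainder.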
for(int32_t j = 0; j < numOfTasks; ++j) {
SSchedMsg schedMsg = {0};
schedMsg.fp = doSendQueryReqs;
schedMsg.ahandle = (void*)pSql;
schedMsg.thandle = NULL;
SPair* p = calloc(1, sizeof(SPair));
p->first = j * num;
if (j == numOfTasks - 1) {
p->second = pState->numOfSub;
} else {
p->second = (j + 1) * num;
}
schedMsg.msg = p;
taosScheduleTask(tscQhandle, &schedMsg);
}
return TSDB_CODE_SUCCESS;
......
......@@ -2093,6 +2093,22 @@ TAOS_FIELD tscCreateField(int8_t type, const char* name, int16_t bytes) {
return f;
}
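// tscGetFirstInvisibleFieldPos: returns the index of the first invisible output field,
// or numOfOutput when every field is visible; doAddGroupbyColumnsOnDemand uses this
// position to insert the group-by tag expression ahead of any hidden fields.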
int32_t tscGetFirstInvisibleFieldPos(SQueryInfo* pQueryInfo) {
if (pQueryInfo->fieldsInfo.numOfOutput <= 0 || pQueryInfo->fieldsInfo.internalField == NULL) {
return 0;
}
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
SInternalField* pField = taosArrayGet(pQueryInfo->fieldsInfo.internalField, i);
if (!pField->visible) {
return i;
}
}
return pQueryInfo->fieldsInfo.numOfOutput;
}
SInternalField* tscFieldInfoAppend(SFieldInfo* pFieldInfo, TAOS_FIELD* pField) {
assert(pFieldInfo != NULL);
pFieldInfo->numOfOutput++;
......@@ -3778,6 +3794,8 @@ static void tscSubqueryCompleteCallback(void* param, TAOS_RES* tres, int code) {
tscDebug("0x%"PRIx64" all subquery response received, retry", pParentSql->self);
if (code && !((code == TSDB_CODE_TDB_INVALID_TABLE_ID || code == TSDB_CODE_VND_INVALID_VGROUP_ID) && pParentSql->retry < pParentSql->maxRetry)) {
pParentSql->res.code = code;
tscAsyncResultOnError(pParentSql);
return;
}
......@@ -3858,6 +3876,7 @@ void executeQuery(SSqlObj* pSql, SQueryInfo* pQueryInfo) {
pNew->signature = pNew;
pNew->sqlstr = strdup(pSql->sqlstr);
pNew->fp = tscSubqueryCompleteCallback;
pNew->fetchFp = tscSubqueryCompleteCallback;
pNew->maxRetry = pSql->maxRetry;
pNew->cmd.resColumnId = TSDB_RES_COL_ID;
......
......@@ -31,12 +31,12 @@ void tVariantCreate(tVariant *pVar, SStrToken *token) {
switch (token->type) {
case TSDB_DATA_TYPE_BOOL: {
int32_t k = strncasecmp(token->z, "true", 4);
if (k == 0) {
if (strncasecmp(token->z, "true", 4) == 0) {
pVar->i64 = TSDB_TRUE;
} else {
assert(strncasecmp(token->z, "false", 5) == 0);
} else if (strncasecmp(token->z, "false", 5) == 0) {
pVar->i64 = TSDB_FALSE;
} else {
return;
}
break;
......
......@@ -18,7 +18,7 @@ public class RestfulConnection extends AbstractConnection {
private final String url;
private final String database;
private final String token;
/******************************************************/
private boolean isClosed;
private final DatabaseMetaData metadata;
......
......@@ -88,17 +88,24 @@ public class RestfulStatement extends AbstractStatement {
}
private String getUrl() throws SQLException {
String dbname = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_DBNAME);
if (dbname == null || dbname.trim().isEmpty()) {
dbname = "";
} else {
dbname = "/" + dbname.toLowerCase();
}
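// the database name (when present) is appended to the REST endpoint chosen below,
// e.g. (illustrative) jdbc:TAOS-RS://127.0.0.1:6041/db1 -> http://127.0.0.1:6041/rest/sql/db1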
TimestampFormat timestampFormat = TimestampFormat.valueOf(conn.getClientInfo(TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT).trim().toUpperCase());
String url;
switch (timestampFormat) {
case TIMESTAMP:
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sqlt";
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sqlt" + dbname;
break;
case UTC:
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sqlutc";
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sqlutc" + dbname;
break;
default:
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sql";
url = "http://" + conn.getHost() + ":" + conn.getPort() + "/rest/sql" + dbname;
}
return url;
}
......
package com.taosdata.jdbc.cases;
import org.junit.Before;
import org.junit.Test;
import java.sql.*;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
public class MultiConnectionWithDifferentDbTest {
private static String host = "127.0.0.1";
private static String db1 = "db1";
private static String db2 = "db2";
private long ts;
@Test
public void test() {
List<Thread> threads = IntStream.range(1, 3).mapToObj(i -> new Thread(new Runnable() {
@Override
public void run() {
for (int j = 0; j < 10; j++) {
queryDb();
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
private void queryDb() {
String url = "jdbc:TAOS-RS://" + host + ":6041/db" + i + "?user=root&password=taosdata";
try (Connection connection = DriverManager.getConnection(url)) {
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery("select * from weather");
assertNotNull(rs);
rs.next();
long actual = rs.getTimestamp("ts").getTime();
assertEquals(ts, actual);
int f1 = rs.getInt("f1");
assertEquals(i, f1);
String loc = i == 1 ? "beijing" : "shanghai";
String loc_actual = rs.getString("loc");
assertEquals(loc, loc_actual);
stmt.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}, "thread-" + i)).collect(Collectors.toList());
threads.forEach(Thread::start);
for (Thread t : threads) {
try {
t.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
@Before
public void before() {
ts = System.currentTimeMillis();
try {
Connection conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists " + db1);
stmt.execute("create database if not exists " + db1);
stmt.execute("use " + db1);
stmt.execute("create table weather(ts timestamp, f1 int) tags(loc nchar(10))");
stmt.execute("insert into t1 using weather tags('beijing') values(" + ts + ", 1)");
stmt.execute("drop database if exists " + db2);
stmt.execute("create database if not exists " + db2);
stmt.execute("use " + db2);
stmt.execute("create table weather(ts timestamp, f1 int) tags(loc nchar(10))");
stmt.execute("insert into t1 using weather tags('shanghai') values(" + ts + ", 2)");
conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
package com.taosdata.jdbc.rs;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.sql.*;
import static org.junit.Assert.*;
public class DatabaseSpecifiedTest {
private static String host = "127.0.0.1";
private static String dbname = "test_db_spec";
private Connection connection;
private long ts;
@Test
public void test() throws SQLException {
// when
connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/" + dbname + "?user=root&password=taosdata");
try (Statement stmt = connection.createStatement();) {
ResultSet rs = stmt.executeQuery("select * from weather");
//then
assertNotNull(rs);
rs.next();
long now = rs.getTimestamp("ts").getTime();
assertEquals(ts, now);
int f1 = rs.getInt(2);
assertEquals(1, f1);
String loc = rs.getString("loc");
assertEquals("beijing", loc);
}
connection.close();
}
@Before
public void before() {
ts = System.currentTimeMillis();
try {
Connection connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
Statement stmt = connection.createStatement();
stmt.execute("drop database if exists " + dbname);
stmt.execute("create database if not exists " + dbname);
stmt.execute("use " + dbname);
stmt.execute("create table weather(ts timestamp, f1 int) tags(loc nchar(10))");
stmt.execute("insert into t1 using weather tags('beijing') values( " + ts + ", 1)");
stmt.close();
connection.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
@After
public void after() {
try {
if (connection != null)
connection.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
......@@ -2,7 +2,7 @@ import taos
conn = taos.connect(host='127.0.0.1',
user='root',
passworkd='taodata',
password='taosdata',
database='log')
cursor = conn.cursor()
......
......@@ -268,7 +268,7 @@ def _load_taos():
try:
return load_func[platform.system()]()
except:
sys.exit('unsupported platform to TDengine connector')
raise InterfaceError('unsupported platform or failed to load taos client library')
class CTaosInterface(object):
......
......@@ -87,6 +87,8 @@ extern const int32_t TYPE_BYTES[15];
#define TSDB_DEFAULT_PASS "taosdata"
#endif
#define SHELL_MAX_PASSWORD_LEN 20
#define TSDB_TRUE 1
#define TSDB_FALSE 0
#define TSDB_OK 0
......
......@@ -25,7 +25,6 @@
#define MAX_USERNAME_SIZE 64
#define MAX_DBNAME_SIZE 64
#define MAX_IP_SIZE 20
#define MAX_PASSWORD_SIZE 20
#define MAX_HISTORY_SIZE 1000
#define MAX_COMMAND_SIZE 1048586
#define HISTORY_FILE ".taos_history"
......@@ -56,6 +55,8 @@ typedef struct SShellArguments {
int abort;
int port;
int pktLen;
int pktNum;
char* pktType;
char* netTestRole;
} SShellArguments;
......
......@@ -66,7 +66,7 @@ void printHelp() {
char DARWINCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
"Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];
void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
wordexp_t full_path;
......@@ -81,19 +81,25 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
}
}
// for password
else if (strncmp(argv[i], "-p", 2) == 0) {
else if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
strcpy(tsOsName, "Darwin");
printf(DARWINCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) {
if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%s", g_password) > 1) {
fprintf(stderr, "password read error\n");
}
taosSetConsoleEcho(true);
getchar();
} else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
}
arguments->password = g_password;
strcpy(argv[i], "");
argc -= 1;
}
// for management port
else if (strcmp(argv[i], "-P") == 0) {
......
......@@ -47,9 +47,11 @@ static struct argp_option options[] = {
{"thread", 'T', "THREADNUM", 0, "Number of threads when using multi-thread to import data."},
{"check", 'k', "CHECK", 0, "Check tables."},
{"database", 'd', "DATABASE", 0, "Database to use when connecting to the server."},
{"timezone", 't', "TIMEZONE", 0, "Time zone of the shell, default is local."},
{"netrole", 'n', "NETROLE", 0, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync."},
{"timezone", 'z', "TIMEZONE", 0, "Time zone of the shell, default is local."},
{"netrole", 'n', "NETROLE", 0, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync|speen|fqdn."},
{"pktlen", 'l', "PKTLEN", 0, "Packet length used for net test, default is 1000 bytes."},
{"pktnum", 'N', "PKTNUM", 0, "Packet numbers used for net test, default is 100."},
{"pkttype", 'S', "PKTTYPE", 0, "Packet type used for net test, default is TCP."},
{0}};
static error_t parse_opt(int key, char *arg, struct argp_state *state) {
......@@ -74,7 +76,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
}
break;
case 't':
case 'z':
arguments->timezone = arg;
break;
case 'u':
......@@ -106,7 +108,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
arguments->is_raw_time = true;
break;
case 'f':
if (wordexp(arg, &full_path, 0) != 0) {
if ((0 == strlen(arg)) || (wordexp(arg, &full_path, 0) != 0)) {
fprintf(stderr, "Invalid path %s\n", arg);
return -1;
}
......@@ -146,6 +148,17 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
return -1;
}
break;
case 'N':
if (arg) {
arguments->pktNum = atoi(arg);
} else {
fprintf(stderr, "Invalid packet number\n");
return -1;
}
break;
case 'S':
arguments->pktType = arg;
break;
case OPT_ABORT:
arguments->abort = 1;
break;
......@@ -160,22 +173,29 @@ static struct argp argp = {options, parse_opt, args_doc, doc};
char LINUXCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
"Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];
static void parse_password(
static void parse_args(
int argc, char *argv[], SShellArguments *arguments) {
for (int i = 1; i < argc; i++) {
if (strncmp(argv[i], "-p", 2) == 0) {
if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
strcpy(tsOsName, "Linux");
printf(LINUXCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) {
if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%20s", g_password) > 1) {
fprintf(stderr, "password reading error\n");
}
getchar();
taosSetConsoleEcho(true);
if (EOF == getchar()) {
fprintf(stderr, "getchar() return EOF\n");
}
} else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
strcpy(argv[i], "-p");
}
arguments->password = g_password;
arguments->is_use_passwd = true;
......@@ -190,7 +210,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
argp_program_version = verType;
if (argc > 1) {
parse_password(argc, argv, arguments);
parse_args(argc, argv, arguments);
}
argp_parse(&argp, argc, argv, 0, 0, arguments);
......
......@@ -85,6 +85,8 @@ SShellArguments args = {
.threadNum = 5,
.commands = NULL,
.pktLen = 1000,
.pktNum = 100,
.pktType = "TCP",
.netTestRole = NULL
};
......@@ -118,7 +120,7 @@ int main(int argc, char* argv[]) {
printf("Failed to init taos");
exit(EXIT_FAILURE);
}
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen);
taosNetTest(args.netTestRole, args.host, args.port, args.pktLen, args.pktNum, args.pktType);
exit(0);
}
......
......@@ -55,16 +55,20 @@ void printHelp() {
printf("%s%s\n", indent, "-t");
printf("%s%s%s\n", indent, indent, "Time zone of the shell, default is local.");
printf("%s%s\n", indent, "-n");
printf("%s%s%s\n", indent, indent, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync.");
printf("%s%s%s\n", indent, indent, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync|speed|fqdn.");
printf("%s%s\n", indent, "-l");
printf("%s%s%s\n", indent, indent, "Packet length used for net test, default is 1000 bytes.");
printf("%s%s\n", indent, "-N");
printf("%s%s%s\n", indent, indent, "Packet numbers used for net test, default is 100.");
printf("%s%s\n", indent, "-S");
printf("%s%s%s\n", indent, indent, "Packet type used for net test, default is TCP.");
printf("%s%s\n", indent, "-V");
printf("%s%s%s\n", indent, indent, "Print program version.");
exit(EXIT_SUCCESS);
}
char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];
void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
for (int i = 1; i < argc; i++) {
......@@ -78,20 +82,26 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
}
}
// for password
else if (strncmp(argv[i], "-p", 2) == 0) {
else if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
arguments->is_use_passwd = true;
strcpy(tsOsName, "Windows");
printf(WINCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) {
if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%s", g_password) > 1) {
fprintf(stderr, "password read error!\n");
}
taosSetConsoleEcho(true);
getchar();
} else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
}
arguments->password = g_password;
strcpy(argv[i], "");
argc -= 1;
}
// for management port
else if (strcmp(argv[i], "-P") == 0) {
......
......@@ -53,14 +53,6 @@
#include "taoserror.h"
#include "tutil.h"
#define STMT_IFACE_ENABLED 1
#define NANO_SECOND_ENABLED 1
#define SET_THREADNAME_ENABLED 1
#if SET_THREADNAME_ENABLED == 0
#define setThreadName(name)
#endif
#define REQ_EXTRA_BUF_LEN 1024
#define RESP_BUF_LEN 4096
......@@ -77,7 +69,6 @@ extern char configDir[];
#define COL_BUFFER_LEN ((TSDB_COL_NAME_LEN + 15) * TSDB_MAX_COLUMNS)
#define MAX_USERNAME_SIZE 64
#define MAX_PASSWORD_SIZE 16
#define MAX_HOSTNAME_SIZE 253 // https://man7.org/linux/man-pages/man7/hostname.7.html
#define MAX_TB_NAME_SIZE 64
#define MAX_DATA_SIZE (16*TSDB_MAX_COLUMNS)+20 // max record len: 16*MAX_COLUMNS, timestamp string and ,('') need extra space
......@@ -216,7 +207,7 @@ typedef struct SArguments_S {
uint16_t port;
uint16_t iface;
char * user;
char password[MAX_PASSWORD_SIZE];
char password[SHELL_MAX_PASSWORD_LEN];
char * database;
int replica;
char * tb_prefix;
......@@ -295,9 +286,7 @@ typedef struct SSuperTable_S {
uint64_t lenOfTagOfOneRow;
char* sampleDataBuf;
#if STMT_IFACE_ENABLED == 1
char* sampleBindArray;
#endif
//int sampleRowCount;
//int sampleUsePos;
......@@ -366,7 +355,7 @@ typedef struct SDbs_S {
uint16_t port;
char user[MAX_USERNAME_SIZE];
char password[MAX_PASSWORD_SIZE];
char password[SHELL_MAX_PASSWORD_LEN];
char resultFile[MAX_FILE_NAME_LEN];
bool use_metric;
bool insert_only;
......@@ -432,7 +421,7 @@ typedef struct SQueryMetaInfo_S {
uint16_t port;
struct sockaddr_in serv_addr;
char user[MAX_USERNAME_SIZE];
char password[MAX_PASSWORD_SIZE];
char password[SHELL_MAX_PASSWORD_LEN];
char dbName[TSDB_DB_NAME_LEN];
char queryMode[SMALL_BUFF_LEN]; // taosc, rest
......@@ -454,6 +443,7 @@ typedef struct SThreadInfo_S {
uint64_t start_table_from;
uint64_t end_table_to;
int64_t ntables;
int64_t tables_created;
uint64_t data_of_rate;
int64_t start_time;
char* cols;
......@@ -650,6 +640,7 @@ SArguments g_args = {
static SDbs g_Dbs;
static int64_t g_totalChildTables = 0;
static int64_t g_actualChildTables = 0;
static SQueryMetaInfo g_queryInfo;
static FILE * g_fpOfInsertResult = NULL;
......@@ -670,7 +661,28 @@ static FILE * g_fpOfInsertResult = NULL;
fprintf(stderr, "PERF: "fmt, __VA_ARGS__); } while(0)
#define errorPrint(fmt, ...) \
do { fprintf(stderr, " \033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, " \033[0m"); } while(0)
do {\
fprintf(stderr, " \033[31m");\
fprintf(stderr, "ERROR: "fmt, __VA_ARGS__);\
fprintf(stderr, " \033[0m");\
} while(0)
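// errorPrint2 behaves like errorPrint, but first prints a timestamp and the calling
// thread id so that messages from concurrent threads can be told apart.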
#define errorPrint2(fmt, ...) \
do {\
struct tm Tm, *ptm;\
struct timeval timeSecs; \
time_t curTime;\
gettimeofday(&timeSecs, NULL); \
curTime = timeSecs.tv_sec;\
ptm = localtime_r(&curTime, &Tm);\
fprintf(stderr, " \033[31m");\
fprintf(stderr, "%02d/%02d %02d:%02d:%02d.%06d %08" PRId64 " ",\
ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_hour,\
ptm->tm_min, ptm->tm_sec, (int32_t)timeSecs.tv_usec,\
taosGetSelfPthreadId());\
fprintf(stderr, " \033[0m");\
errorPrint(fmt, __VA_ARGS__);\
} while(0)
// for strncpy buffer overflow
#define min(a, b) (((a) < (b)) ? (a) : (b))
......@@ -733,11 +745,7 @@ static void printHelp() {
printf("%s%s%s%s\n", indent, "-P", indent,
"The TCP/IP port number to use for the connection. Default is 0.");
printf("%s%s%s%s\n", indent, "-I", indent,
#if STMT_IFACE_ENABLED == 1
"The interface (taosc, rest, and stmt) taosdemo uses. Default is 'taosc'.");
#else
"The interface (taosc, rest) taosdemo uses. Default is 'taosc'.");
#endif
printf("%s%s%s%s\n", indent, "-d", indent,
"Destination database. Default is 'test'.");
printf("%s%s%s%s\n", indent, "-a", indent,
......@@ -752,12 +760,13 @@ static void printHelp() {
"Query mode -- 0: SYNC, 1: ASYNC. Default is SYNC.");
printf("%s%s%s%s\n", indent, "-b", indent,
"The data_type of columns, default: FLOAT, INT, FLOAT.");
printf("%s%s%s%s\n", indent, "-w", indent,
"The length of data_type 'BINARY' or 'NCHAR'. Default is 16");
printf("%s%s%s%s%d\n", indent, "-w", indent,
"The length of data_type 'BINARY' or 'NCHAR'. Default is ",
g_args.len_of_binary);
printf("%s%s%s%s%d%s%d\n", indent, "-l", indent,
"The number of columns per record. Default is ",
"The number of columns per record. Demo mode by default is ",
DEFAULT_DATATYPE_NUM,
". Max values is ",
" (float, int, float). Max values is ",
MAX_NUM_COLUMNS);
printf("%s%s%s%s\n", indent, indent, indent,
"All of the new column(s) type is INT. If use -b to specify column type, -l will be ignored.");
......@@ -815,6 +824,12 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) {
if (strcmp(argv[i], "-f") == 0) {
arguments->demo_mode = false;
if (NULL == argv[i+1]) {
printHelp();
errorPrint("%s", "\n\t-f need a valid json file following!\n");
exit(EXIT_FAILURE);
}
arguments->metaFile = argv[++i];
} else if (strcmp(argv[i], "-c") == 0) {
if (argc == i+1) {
......@@ -849,10 +864,8 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
arguments->iface = TAOSC_IFACE;
} else if (0 == strcasecmp(argv[i], "rest")) {
arguments->iface = REST_IFACE;
#if STMT_IFACE_ENABLED == 1
} else if (0 == strcasecmp(argv[i], "stmt")) {
arguments->iface = STMT_IFACE;
#endif
} else {
errorPrint("%s", "\n\t-I need a valid string following!\n");
exit(EXIT_FAILURE);
......@@ -873,7 +886,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
}
taosSetConsoleEcho(true);
} else {
tstrncpy(arguments->password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
tstrncpy(arguments->password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
}
} else if (strcmp(argv[i], "-o") == 0) {
if (argc == i+1) {
......@@ -953,6 +966,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
exit(EXIT_FAILURE);
}
arguments->num_of_tables = atoi(argv[++i]);
g_totalChildTables = arguments->num_of_tables;
} else if (strcmp(argv[i], "-n") == 0) {
if ((argc == i+1) ||
(!isStringNumber(argv[i+1]))) {
......@@ -1123,7 +1137,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
exit(EXIT_FAILURE);
}
} else if ((strcmp(argv[i], "--version") == 0) ||
(strcmp(argv[i], "-V") == 0)){
(strcmp(argv[i], "-V") == 0)) {
printVersion();
exit(0);
} else if (strcmp(argv[i], "--help") == 0) {
......@@ -1229,7 +1243,7 @@ static int queryDbExec(TAOS *taos, char *command, QUERY_TYPE type, bool quiet) {
verbosePrint("%s() LN%d - command: %s\n", __func__, __LINE__, command);
if (code != 0) {
if (!quiet) {
errorPrint("Failed to execute %s, reason: %s\n",
errorPrint2("Failed to execute %s, reason: %s\n",
command, taos_errstr(res));
}
taos_free_result(res);
......@@ -1251,7 +1265,7 @@ static void appendResultBufToFile(char *resultBuf, threadInfo *pThreadInfo)
{
pThreadInfo->fp = fopen(pThreadInfo->filePath, "at");
if (pThreadInfo->fp == NULL) {
errorPrint(
errorPrint2(
"%s() LN%d, failed to open result file: %s, result will not save to file\n",
__func__, __LINE__, pThreadInfo->filePath);
return;
......@@ -1270,7 +1284,7 @@ static void fetchResult(TAOS_RES *res, threadInfo* pThreadInfo) {
char* databuf = (char*) calloc(1, 100*1024*1024);
if (databuf == NULL) {
errorPrint("%s() LN%d, failed to malloc, warning: save result to file slowly!\n",
errorPrint2("%s() LN%d, failed to malloc, warning: save result to file slowly!\n",
__func__, __LINE__);
return ;
}
......@@ -1310,7 +1324,7 @@ static void selectAndGetResult(
if (0 == strncasecmp(g_queryInfo.queryMode, "taosc", strlen("taosc"))) {
TAOS_RES *res = taos_query(pThreadInfo->taos, command);
if (res == NULL || taos_errno(res) != 0) {
errorPrint("%s() LN%d, failed to execute sql:%s, reason:%s\n",
errorPrint2("%s() LN%d, failed to execute sql:%s, reason:%s\n",
__func__, __LINE__, command, taos_errstr(res));
taos_free_result(res);
return;
......@@ -1329,19 +1343,19 @@ static void selectAndGetResult(
}
} else {
errorPrint("%s() LN%d, unknown query mode: %s\n",
errorPrint2("%s() LN%d, unknown query mode: %s\n",
__func__, __LINE__, g_queryInfo.queryMode);
}
}
static char *rand_bool_str(){
static char *rand_bool_str() {
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randbool_buff + (cursor * BOOL_BUFF_LEN);
return g_randbool_buff + ((cursor % MAX_PREPARED_RAND) * BOOL_BUFF_LEN);
}
static int32_t rand_bool(){
static int32_t rand_bool() {
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
......@@ -1353,7 +1367,8 @@ static char *rand_tinyint_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randtinyint_buff + (cursor * TINYINT_BUFF_LEN);
return g_randtinyint_buff +
((cursor % MAX_PREPARED_RAND) * TINYINT_BUFF_LEN);
}
static int32_t rand_tinyint()
......@@ -1369,7 +1384,8 @@ static char *rand_smallint_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randsmallint_buff + (cursor * SMALLINT_BUFF_LEN);
return g_randsmallint_buff +
((cursor % MAX_PREPARED_RAND) * SMALLINT_BUFF_LEN);
}
static int32_t rand_smallint()
......@@ -1385,7 +1401,7 @@ static char *rand_int_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randint_buff + (cursor * INT_BUFF_LEN);
return g_randint_buff + ((cursor % MAX_PREPARED_RAND) * INT_BUFF_LEN);
}
static int32_t rand_int()
......@@ -1401,7 +1417,8 @@ static char *rand_bigint_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randbigint_buff + (cursor * BIGINT_BUFF_LEN);
return g_randbigint_buff +
((cursor % MAX_PREPARED_RAND) * BIGINT_BUFF_LEN);
}
static int64_t rand_bigint()
......@@ -1417,7 +1434,7 @@ static char *rand_float_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_randfloat_buff + (cursor * FLOAT_BUFF_LEN);
return g_randfloat_buff + ((cursor % MAX_PREPARED_RAND) * FLOAT_BUFF_LEN);
}
......@@ -1434,7 +1451,8 @@ static char *demo_current_float_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_rand_current_buff + (cursor * FLOAT_BUFF_LEN);
return g_rand_current_buff +
((cursor % MAX_PREPARED_RAND) * FLOAT_BUFF_LEN);
}
static float UNUSED_FUNC demo_current_float()
......@@ -1451,7 +1469,8 @@ static char *demo_voltage_int_str()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_rand_voltage_buff + (cursor * INT_BUFF_LEN);
return g_rand_voltage_buff +
((cursor % MAX_PREPARED_RAND) * INT_BUFF_LEN);
}
static int32_t UNUSED_FUNC demo_voltage_int()
......@@ -1466,10 +1485,10 @@ static char *demo_phase_float_str() {
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
return g_rand_phase_buff + (cursor * FLOAT_BUFF_LEN);
return g_rand_phase_buff + ((cursor % MAX_PREPARED_RAND) * FLOAT_BUFF_LEN);
}
static float UNUSED_FUNC demo_phase_float(){
static float UNUSED_FUNC demo_phase_float() {
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
......@@ -1548,7 +1567,7 @@ static void init_rand_data() {
g_randdouble_buff = calloc(1, DOUBLE_BUFF_LEN * MAX_PREPARED_RAND);
assert(g_randdouble_buff);
for (int i = 0; i < MAX_PREPARED_RAND; i++){
for (int i = 0; i < MAX_PREPARED_RAND; i++) {
g_randint[i] = (int)(taosRandom() % 65535);
sprintf(g_randint_buff + i * INT_BUFF_LEN, "%d",
g_randint[i]);
......@@ -1694,9 +1713,7 @@ static int printfInsertMeta() {
}
if (g_Dbs.db[i].dbCfg.precision[0] != 0) {
if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2))
#if NANO_SECOND_ENABLED == 1
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))
#endif
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ns", 2))) {
printf(" precision: \033[33m%s\033[0m\n",
g_Dbs.db[i].dbCfg.precision);
......@@ -1887,9 +1904,7 @@ static void printfInsertMetaToFile(FILE* fp) {
}
if (g_Dbs.db[i].dbCfg.precision[0] != 0) {
if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2))
#if NANO_SECOND_ENABLED == 1
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ns", 2))
#endif
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "us", 2))) {
fprintf(fp, " precision: %s\n",
g_Dbs.db[i].dbCfg.precision);
......@@ -2092,10 +2107,8 @@ static char* formatTimestamp(char* buf, int64_t val, int precision) {
time_t tt;
if (precision == TSDB_TIME_PRECISION_MICRO) {
tt = (time_t)(val / 1000000);
#if NANO_SECOND_ENABLED == 1
} else if (precision == TSDB_TIME_PRECISION_NANO) {
tt = (time_t)(val / 1000000000);
#endif
} else {
tt = (time_t)(val / 1000);
}
......@@ -2117,10 +2130,8 @@ static char* formatTimestamp(char* buf, int64_t val, int precision) {
if (precision == TSDB_TIME_PRECISION_MICRO) {
sprintf(buf + pos, ".%06d", (int)(val % 1000000));
#if NANO_SECOND_ENABLED == 1
} else if (precision == TSDB_TIME_PRECISION_NANO) {
sprintf(buf + pos, ".%09d", (int)(val % 1000000000));
#endif
} else {
sprintf(buf + pos, ".%03d", (int)(val % 1000));
}
......@@ -2182,7 +2193,7 @@ static int xDumpResultToFile(const char* fname, TAOS_RES* tres) {
FILE* fp = fopen(fname, "at");
if (fp == NULL) {
errorPrint("%s() LN%d, failed to open file: %s\n",
errorPrint2("%s() LN%d, failed to open file: %s\n",
__func__, __LINE__, fname);
return -1;
}
......@@ -2229,7 +2240,7 @@ static int getDbFromServer(TAOS * taos, SDbInfo** dbInfos) {
int32_t code = taos_errno(res);
if (code != 0) {
errorPrint( "failed to run <show databases>, reason: %s\n",
errorPrint2("failed to run <show databases>, reason: %s\n",
taos_errstr(res));
return -1;
}
......@@ -2245,7 +2256,7 @@ static int getDbFromServer(TAOS * taos, SDbInfo** dbInfos) {
dbInfos[count] = (SDbInfo *)calloc(1, sizeof(SDbInfo));
if (dbInfos[count] == NULL) {
errorPrint( "failed to allocate memory for some dbInfo[%d]\n", count);
errorPrint2("failed to allocate memory for some dbInfo[%d]\n", count);
return -1;
}
......@@ -2398,7 +2409,7 @@ static int postProceSql(char *host, struct sockaddr_in *pServAddr, uint16_t port
request_buf = malloc(req_buf_len);
if (NULL == request_buf) {
errorPrint("%s", "ERROR, cannot allocate memory.\n");
errorPrint("%s", "cannot allocate memory.\n");
exit(EXIT_FAILURE);
}
......@@ -2537,7 +2548,7 @@ static int postProceSql(char *host, struct sockaddr_in *pServAddr, uint16_t port
static char* getTagValueFromTagSample(SSuperTable* stbInfo, int tagUsePos) {
char* dataBuf = (char*)calloc(TSDB_MAX_SQL_LEN+1, 1);
if (NULL == dataBuf) {
errorPrint("%s() LN%d, calloc failed! size:%d\n",
errorPrint2("%s() LN%d, calloc failed! size:%d\n",
__func__, __LINE__, TSDB_MAX_SQL_LEN+1);
return NULL;
}
......@@ -2637,7 +2648,7 @@ static char* generateTagValuesForStb(SSuperTable* stbInfo, int64_t tableSeq) {
dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
"%"PRId64",", rand_bigint());
} else {
errorPrint("No support data type: %s\n", stbInfo->tags[i].dataType);
errorPrint2("No support data type: %s\n", stbInfo->tags[i].dataType);
tmfree(dataBuf);
return NULL;
}
......@@ -2676,7 +2687,7 @@ static int calcRowLen(SSuperTable* superTbls) {
} else if (strcasecmp(dataType, "TIMESTAMP") == 0) {
lenOfOneRow += TIMESTAMP_BUFF_LEN;
} else {
errorPrint("get error data type : %s\n", dataType);
errorPrint2("get error data type : %s\n", dataType);
exit(EXIT_FAILURE);
}
}
......@@ -2707,7 +2718,7 @@ static int calcRowLen(SSuperTable* superTbls) {
} else if (strcasecmp(dataType, "DOUBLE") == 0) {
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + DOUBLE_BUFF_LEN;
} else {
errorPrint("get error tag type : %s\n", dataType);
errorPrint2("get error tag type : %s\n", dataType);
exit(EXIT_FAILURE);
}
}
......@@ -2744,7 +2755,7 @@ static int getChildNameOfSuperTableWithLimitAndOffset(TAOS * taos,
if (code != 0) {
taos_free_result(res);
taos_close(taos);
errorPrint("%s() LN%d, failed to run command %s\n",
errorPrint2("%s() LN%d, failed to run command %s\n",
__func__, __LINE__, command);
exit(EXIT_FAILURE);
}
......@@ -2756,7 +2767,7 @@ static int getChildNameOfSuperTableWithLimitAndOffset(TAOS * taos,
if (NULL == childTblName) {
taos_free_result(res);
taos_close(taos);
errorPrint("%s() LN%d, failed to allocate memory!\n", __func__, __LINE__);
errorPrint2("%s() LN%d, failed to allocate memory!\n", __func__, __LINE__);
exit(EXIT_FAILURE);
}
}
......@@ -2766,7 +2777,7 @@ static int getChildNameOfSuperTableWithLimitAndOffset(TAOS * taos,
int32_t* len = taos_fetch_lengths(res);
if (0 == strlen((char *)row[0])) {
errorPrint("%s() LN%d, No.%"PRId64" table return empty name\n",
errorPrint2("%s() LN%d, No.%"PRId64" table return empty name\n",
__func__, __LINE__, count);
exit(EXIT_FAILURE);
}
......@@ -2787,7 +2798,7 @@ static int getChildNameOfSuperTableWithLimitAndOffset(TAOS * taos,
tmfree(childTblName);
taos_free_result(res);
taos_close(taos);
errorPrint("%s() LN%d, realloc fail for save child table name of %s.%s\n",
errorPrint2("%s() LN%d, realloc fail for save child table name of %s.%s\n",
__func__, __LINE__, dbName, sTblName);
exit(EXIT_FAILURE);
}
......@@ -2884,7 +2895,7 @@ static int getSuperTableFromServer(TAOS * taos, char* dbName,
int childTblCount = 10000;
superTbls->childTblName = (char*)calloc(1, childTblCount * TSDB_TABLE_NAME_LEN);
if (superTbls->childTblName == NULL) {
errorPrint("%s() LN%d, alloc memory failed!\n", __func__, __LINE__);
errorPrint2("%s() LN%d, alloc memory failed!\n", __func__, __LINE__);
return -1;
}
getAllChildNameOfSuperTable(taos, dbName,
......@@ -2910,7 +2921,7 @@ static int createSuperTable(
int lenOfOneRow = 0;
if (superTbl->columnCount == 0) {
errorPrint("%s() LN%d, super table column count is %d\n",
errorPrint2("%s() LN%d, super table column count is %d\n",
__func__, __LINE__, superTbl->columnCount);
free(command);
return -1;
......@@ -2974,7 +2985,7 @@ static int createSuperTable(
} else {
taos_close(taos);
free(command);
errorPrint("%s() LN%d, config error data type : %s\n",
errorPrint2("%s() LN%d, config error data type : %s\n",
__func__, __LINE__, dataType);
exit(EXIT_FAILURE);
}
......@@ -2987,7 +2998,7 @@ static int createSuperTable(
if (NULL == superTbl->colsOfCreateChildTable) {
taos_close(taos);
free(command);
errorPrint("%s() LN%d, Failed when calloc, size:%d",
errorPrint2("%s() LN%d, Failed when calloc, size:%d",
__func__, __LINE__, len+1);
exit(EXIT_FAILURE);
}
......@@ -2997,7 +3008,7 @@ static int createSuperTable(
__func__, __LINE__, superTbl->colsOfCreateChildTable);
if (superTbl->tagCount == 0) {
errorPrint("%s() LN%d, super table tag count is %d\n",
errorPrint2("%s() LN%d, super table tag count is %d\n",
__func__, __LINE__, superTbl->tagCount);
free(command);
return -1;
......@@ -3064,7 +3075,7 @@ static int createSuperTable(
} else {
taos_close(taos);
free(command);
errorPrint("%s() LN%d, config error tag type : %s\n",
errorPrint2("%s() LN%d, config error tag type : %s\n",
__func__, __LINE__, dataType);
exit(EXIT_FAILURE);
}
......@@ -3079,7 +3090,7 @@ static int createSuperTable(
"create table if not exists %s.%s (ts timestamp%s) tags %s",
dbName, superTbl->sTblName, cols, tags);
if (0 != queryDbExec(taos, command, NO_INSERT_TYPE, false)) {
errorPrint( "create supertable %s failed!\n\n",
errorPrint2("create supertable %s failed!\n\n",
superTbl->sTblName);
free(command);
return -1;
......@@ -3095,7 +3106,7 @@ int createDatabasesAndStables(char *command) {
int ret = 0;
taos = taos_connect(g_Dbs.host, g_Dbs.user, g_Dbs.password, NULL, g_Dbs.port);
if (taos == NULL) {
errorPrint( "Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));
errorPrint2("Failed to connect to TDengine, reason:%s\n", taos_errstr(NULL));
return -1;
}
......@@ -3181,10 +3192,8 @@ int createDatabasesAndStables(char *command) {
" fsync %d", g_Dbs.db[i].dbCfg.fsync);
}
if ((0 == strncasecmp(g_Dbs.db[i].dbCfg.precision, "ms", 2))
#if NANO_SECOND_ENABLED == 1
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision,
"ns", 2))
#endif
|| (0 == strncasecmp(g_Dbs.db[i].dbCfg.precision,
"us", 2))) {
dataLen += snprintf(command + dataLen, BUFFER_SIZE - dataLen,
......@@ -3193,7 +3202,7 @@ int createDatabasesAndStables(char *command) {
if (0 != queryDbExec(taos, command, NO_INSERT_TYPE, false)) {
taos_close(taos);
errorPrint( "\ncreate database %s failed!\n\n",
errorPrint("\ncreate database %s failed!\n\n",
g_Dbs.db[i].dbName);
return -1;
}
......@@ -3223,7 +3232,7 @@ int createDatabasesAndStables(char *command) {
ret = getSuperTableFromServer(taos, g_Dbs.db[i].dbName,
&g_Dbs.db[i].superTbls[j]);
if (0 != ret) {
errorPrint("\nget super table %s.%s info failed!\n\n",
errorPrint2("\nget super table %s.%s info failed!\n\n",
g_Dbs.db[i].dbName, g_Dbs.db[i].superTbls[j].sTblName);
continue;
}
......@@ -3251,7 +3260,7 @@ static void* createTable(void *sarg)
pThreadInfo->buffer = calloc(buff_len, 1);
if (pThreadInfo->buffer == NULL) {
errorPrint("%s() LN%d, Memory allocated failed!\n", __func__, __LINE__);
errorPrint2("%s() LN%d, Memory allocated failed!\n", __func__, __LINE__);
exit(EXIT_FAILURE);
}
......@@ -3270,10 +3279,11 @@ static void* createTable(void *sarg)
pThreadInfo->db_name,
g_args.tb_prefix, i,
pThreadInfo->cols);
batchNum ++;
} else {
if (stbInfo == NULL) {
free(pThreadInfo->buffer);
errorPrint("%s() LN%d, use metric, but super table info is NULL\n",
errorPrint2("%s() LN%d, use metric, but super table info is NULL\n",
__func__, __LINE__);
exit(EXIT_FAILURE);
} else {
......@@ -3319,13 +3329,14 @@ static void* createTable(void *sarg)
len = 0;
if (0 != queryDbExec(pThreadInfo->taos, pThreadInfo->buffer,
NO_INSERT_TYPE, false)){
errorPrint( "queryDbExec() failed. buffer:\n%s\n", pThreadInfo->buffer);
NO_INSERT_TYPE, false)) {
errorPrint2("queryDbExec() failed. buffer:\n%s\n", pThreadInfo->buffer);
free(pThreadInfo->buffer);
return NULL;
}
pThreadInfo->tables_created += batchNum;
uint64_t currentPrintTime = taosGetTimestampMs();
uint64_t currentPrintTime = taosGetTimestampMs();
if (currentPrintTime - lastPrintTime > 30*1000) {
printf("thread[%d] already create %"PRIu64" - %"PRIu64" tables\n",
pThreadInfo->threadID, pThreadInfo->start_table_from, i);
......@@ -3336,7 +3347,7 @@ static void* createTable(void *sarg)
if (0 != len) {
if (0 != queryDbExec(pThreadInfo->taos, pThreadInfo->buffer,
NO_INSERT_TYPE, false)) {
errorPrint( "queryDbExec() failed. buffer:\n%s\n", pThreadInfo->buffer);
errorPrint2("queryDbExec() failed. buffer:\n%s\n", pThreadInfo->buffer);
}
}
......@@ -3381,7 +3392,7 @@ static int startMultiThreadCreateChildTable(
db_name,
g_Dbs.port);
if (pThreadInfo->taos == NULL) {
errorPrint( "%s() LN%d, Failed to connect to TDengine, reason:%s\n",
errorPrint2("%s() LN%d, Failed to connect to TDengine, reason:%s\n",
__func__, __LINE__, taos_errstr(NULL));
free(pids);
free(infos);
......@@ -3395,6 +3406,7 @@ static int startMultiThreadCreateChildTable(
pThreadInfo->use_metric = true;
pThreadInfo->cols = cols;
pThreadInfo->minDelay = UINT64_MAX;
pThreadInfo->tables_created = 0;
pthread_create(pids + i, NULL, createTable, pThreadInfo);
}
......@@ -3405,6 +3417,8 @@ static int startMultiThreadCreateChildTable(
for (int i = 0; i < threads; i++) {
threadInfo *pThreadInfo = infos + i;
taos_close(pThreadInfo->taos);
g_actualChildTables += pThreadInfo->tables_created;
}
free(pids);
......@@ -3431,7 +3445,6 @@ static void createChildTables() {
verbosePrint("%s() LN%d: %s\n", __func__, __LINE__,
g_Dbs.db[i].superTbls[j].colsOfCreateChildTable);
uint64_t startFrom = 0;
g_totalChildTables += g_Dbs.db[i].superTbls[j].childTblCount;
verbosePrint("%s() LN%d: create %"PRId64" child tables from %"PRIu64"\n",
__func__, __LINE__, g_totalChildTables, startFrom);
......@@ -3556,7 +3569,7 @@ static int readSampleFromCsvFileToMem(
FILE* fp = fopen(stbInfo->sampleFile, "r");
if (fp == NULL) {
errorPrint( "Failed to open sample file: %s, reason:%s\n",
errorPrint("Failed to open sample file: %s, reason:%s\n",
stbInfo->sampleFile, strerror(errno));
return -1;
}
......@@ -3568,7 +3581,7 @@ static int readSampleFromCsvFileToMem(
readLen = tgetline(&line, &n, fp);
if (-1 == readLen) {
if(0 != fseek(fp, 0, SEEK_SET)) {
errorPrint( "Failed to fseek file: %s, reason:%s\n",
errorPrint("Failed to fseek file: %s, reason:%s\n",
stbInfo->sampleFile, strerror(errno));
fclose(fp);
return -1;
......@@ -3611,7 +3624,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
// columns
cJSON *columns = cJSON_GetObjectItem(stbInfo, "columns");
if (columns && columns->type != cJSON_Array) {
printf("ERROR: failed to read json, columns not found\n");
errorPrint("%s", "failed to read json, columns not found\n");
goto PARSE_OVER;
} else if (NULL == columns) {
superTbls->columnCount = 0;
......@@ -3621,8 +3634,8 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
int columnSize = cJSON_GetArraySize(columns);
if ((columnSize + 1/* ts */) > TSDB_MAX_COLUMNS) {
errorPrint("%s() LN%d, failed to read json, column size overflow, max column size is %d\n",
__func__, __LINE__, TSDB_MAX_COLUMNS);
errorPrint("failed to read json, column size overflow, max column size is %d\n",
TSDB_MAX_COLUMNS);
goto PARSE_OVER;
}
......@@ -3640,8 +3653,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
if (countObj && countObj->type == cJSON_Number) {
count = countObj->valueint;
} else if (countObj && countObj->type != cJSON_Number) {
errorPrint("%s() LN%d, failed to read json, column count not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, column count not found\n");
goto PARSE_OVER;
} else {
count = 1;
......@@ -3652,8 +3664,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
cJSON *dataType = cJSON_GetObjectItem(column, "type");
if (!dataType || dataType->type != cJSON_String
|| dataType->valuestring == NULL) {
errorPrint("%s() LN%d: failed to read json, column type not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, column type not found\n");
goto PARSE_OVER;
}
//tstrncpy(superTbls->columns[k].dataType, dataType->valuestring, DATATYPE_BUFF_LEN);
......@@ -3681,8 +3692,8 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
}
if ((index + 1 /* ts */) > MAX_NUM_COLUMNS) {
errorPrint("%s() LN%d, failed to read json, column size overflow, allowed max column size is %d\n",
__func__, __LINE__, MAX_NUM_COLUMNS);
errorPrint("failed to read json, column size overflow, allowed max column size is %d\n",
MAX_NUM_COLUMNS);
goto PARSE_OVER;
}
......@@ -3693,15 +3704,14 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
// tags
cJSON *tags = cJSON_GetObjectItem(stbInfo, "tags");
if (!tags || tags->type != cJSON_Array) {
errorPrint("%s() LN%d, failed to read json, tags not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, tags not found\n");
goto PARSE_OVER;
}
int tagSize = cJSON_GetArraySize(tags);
if (tagSize > TSDB_MAX_TAGS) {
errorPrint("%s() LN%d, failed to read json, tags size overflow, max tag size is %d\n",
__func__, __LINE__, TSDB_MAX_TAGS);
errorPrint("failed to read json, tags size overflow, max tag size is %d\n",
TSDB_MAX_TAGS);
goto PARSE_OVER;
}
......@@ -3715,7 +3725,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
if (countObj && countObj->type == cJSON_Number) {
count = countObj->valueint;
} else if (countObj && countObj->type != cJSON_Number) {
printf("ERROR: failed to read json, column count not found\n");
errorPrint("%s", "failed to read json, column count not found\n");
goto PARSE_OVER;
} else {
count = 1;
......@@ -3726,8 +3736,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
cJSON *dataType = cJSON_GetObjectItem(tag, "type");
if (!dataType || dataType->type != cJSON_String
|| dataType->valuestring == NULL) {
errorPrint("%s() LN%d, failed to read json, tag type not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, tag type not found\n");
goto PARSE_OVER;
}
tstrncpy(columnCase.dataType, dataType->valuestring,
......@@ -3737,8 +3746,7 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
if (dataLen && dataLen->type == cJSON_Number) {
columnCase.dataLen = dataLen->valueint;
} else if (dataLen && dataLen->type != cJSON_Number) {
errorPrint("%s() LN%d, failed to read json, column len not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, column len not found\n");
goto PARSE_OVER;
} else {
columnCase.dataLen = 0;
......@@ -3753,16 +3761,16 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
}
if (index > TSDB_MAX_TAGS) {
errorPrint("%s() LN%d, failed to read json, tags size overflow, allowed max tag count is %d\n",
__func__, __LINE__, TSDB_MAX_TAGS);
errorPrint("failed to read json, tags size overflow, allowed max tag count is %d\n",
TSDB_MAX_TAGS);
goto PARSE_OVER;
}
superTbls->tagCount = index;
if ((superTbls->columnCount + superTbls->tagCount + 1 /* ts */) > TSDB_MAX_COLUMNS) {
errorPrint("%s() LN%d, columns + tags is more than allowed max columns count: %d\n",
__func__, __LINE__, TSDB_MAX_COLUMNS);
errorPrint("columns + tags is more than allowed max columns count: %d\n",
TSDB_MAX_COLUMNS);
goto PARSE_OVER;
}
ret = true;
......@@ -3785,7 +3793,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!host) {
tstrncpy(g_Dbs.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
} else {
printf("ERROR: failed to read json, host not found\n");
errorPrint("%s", "failed to read json, host not found\n");
goto PARSE_OVER;
}
......@@ -3805,9 +3813,9 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON* password = cJSON_GetObjectItem(root, "password");
if (password && password->type == cJSON_String && password->valuestring != NULL) {
tstrncpy(g_Dbs.password, password->valuestring, MAX_PASSWORD_SIZE);
tstrncpy(g_Dbs.password, password->valuestring, SHELL_MAX_PASSWORD_LEN);
} else if (!password) {
tstrncpy(g_Dbs.password, "taosdata", MAX_PASSWORD_SIZE);
tstrncpy(g_Dbs.password, "taosdata", SHELL_MAX_PASSWORD_LEN);
}
cJSON* resultfile = cJSON_GetObjectItem(root, "result_file");
......@@ -3823,7 +3831,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!threads) {
g_Dbs.threadCount = 1;
} else {
printf("ERROR: failed to read json, threads not found\n");
errorPrint("%s", "failed to read json, threads not found\n");
goto PARSE_OVER;
}
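Editor's note: the password handling a few hunks above now copies into a SHELL_MAX_PASSWORD_LEN-sized buffer with tstrncpy, TDengine's bounded copy that always NUL-terminates. A hedged sketch with a local stand-in helper; the real tstrncpy and the constant's actual value come from the TDengine headers.

```c
/* Local stand-in for TDengine's tstrncpy(); the real helper and the
 * SHELL_MAX_PASSWORD_LEN constant come from the TDengine headers. */
#include <stdio.h>
#include <string.h>

#define SHELL_MAX_PASSWORD_LEN 20   /* assumed value, for illustration only */

static void tstrncpy_sketch(char *dst, const char *src, size_t size) {
    if (size == 0) return;
    strncpy(dst, src, size - 1);    /* copy at most size-1 bytes ... */
    dst[size - 1] = '\0';           /* ... and always NUL-terminate */
}

int main(void) {
    char password[SHELL_MAX_PASSWORD_LEN];
    tstrncpy_sketch(password, "taosdata", sizeof(password));
    printf("%s\n", password);
    return 0;
}
```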
......@@ -3833,32 +3841,28 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!threads2) {
g_Dbs.threadCountByCreateTbl = 1;
} else {
errorPrint("%s() LN%d, failed to read json, threads2 not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, threads2 not found\n");
goto PARSE_OVER;
}
cJSON* gInsertInterval = cJSON_GetObjectItem(root, "insert_interval");
if (gInsertInterval && gInsertInterval->type == cJSON_Number) {
if (gInsertInterval->valueint <0) {
errorPrint("%s() LN%d, failed to read json, insert interval input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert interval input mistake\n");
goto PARSE_OVER;
}
g_args.insert_interval = gInsertInterval->valueint;
} else if (!gInsertInterval) {
g_args.insert_interval = 0;
} else {
errorPrint("%s() LN%d, failed to read json, insert_interval input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert_interval input mistake\n");
goto PARSE_OVER;
}
cJSON* interlaceRows = cJSON_GetObjectItem(root, "interlace_rows");
if (interlaceRows && interlaceRows->type == cJSON_Number) {
if (interlaceRows->valueint < 0) {
errorPrint("%s() LN%d, failed to read json, interlace_rows input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, interlace_rows input mistake\n");
goto PARSE_OVER;
}
......@@ -3866,8 +3870,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!interlaceRows) {
g_args.interlace_rows = 0; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
} else {
errorPrint("%s() LN%d, failed to read json, interlace_rows input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, interlace_rows input mistake\n");
goto PARSE_OVER;
}
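Editor's note: nearly every field parsed in getMetaFromInsertJsonFile follows the same three-way shape: present with the right type, absent (fall back to a default), or present with the wrong type (report and goto PARSE_OVER). A small helper capturing that shape; the helper name and the cJSON header path are assumptions.

```c
/* Sketch of the recurring cJSON pattern; the header path, the helper name,
 * and the exact type of valueint depend on the bundled cJSON version. */
#include <stdbool.h>
#include <stdint.h>
#include "cJSON.h"

/* Returns false only when the key exists but has the wrong type,
 * which is the case the parser answers with "goto PARSE_OVER". */
static bool readNumberOrDefault(cJSON *root, const char *key,
                                int64_t defaultVal, int64_t *out) {
    cJSON *item = cJSON_GetObjectItem(root, key);
    if (item && item->type == cJSON_Number) {
        *out = item->valueint;      /* value supplied in the JSON config */
        return true;
    }
    if (!item) {
        *out = defaultVal;          /* key omitted: keep the default */
        return true;
    }
    return false;                   /* wrong type: caller reports an error */
}
```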
......@@ -3940,14 +3943,14 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON* dbs = cJSON_GetObjectItem(root, "databases");
if (!dbs || dbs->type != cJSON_Array) {
printf("ERROR: failed to read json, databases not found\n");
errorPrint("%s", "failed to read json, databases not found\n");
goto PARSE_OVER;
}
int dbSize = cJSON_GetArraySize(dbs);
if (dbSize > MAX_DB_COUNT) {
errorPrint(
"ERROR: failed to read json, databases size overflow, max database is %d\n",
"failed to read json, databases size overflow, max database is %d\n",
MAX_DB_COUNT);
goto PARSE_OVER;
}
......@@ -3960,13 +3963,13 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
// dbinfo
cJSON *dbinfo = cJSON_GetObjectItem(dbinfos, "dbinfo");
if (!dbinfo || dbinfo->type != cJSON_Object) {
printf("ERROR: failed to read json, dbinfo not found\n");
errorPrint("%s", "failed to read json, dbinfo not found\n");
goto PARSE_OVER;
}
cJSON *dbName = cJSON_GetObjectItem(dbinfo, "name");
if (!dbName || dbName->type != cJSON_String || dbName->valuestring == NULL) {
printf("ERROR: failed to read json, db name not found\n");
errorPrint("%s", "failed to read json, db name not found\n");
goto PARSE_OVER;
}
tstrncpy(g_Dbs.db[i].dbName, dbName->valuestring, TSDB_DB_NAME_LEN);
......@@ -3981,8 +3984,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!drop) {
g_Dbs.db[i].drop = g_args.drop_database;
} else {
errorPrint("%s() LN%d, failed to read json, drop input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, drop input mistake\n");
goto PARSE_OVER;
}
......@@ -3994,7 +3996,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!precision) {
memset(g_Dbs.db[i].dbCfg.precision, 0, SMALL_BUFF_LEN);
} else {
printf("ERROR: failed to read json, precision not found\n");
errorPrint("%s", "failed to read json, precision not found\n");
goto PARSE_OVER;
}
......@@ -4004,7 +4006,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!update) {
g_Dbs.db[i].dbCfg.update = -1;
} else {
printf("ERROR: failed to read json, update not found\n");
errorPrint("%s", "failed to read json, update not found\n");
goto PARSE_OVER;
}
......@@ -4014,7 +4016,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!replica) {
g_Dbs.db[i].dbCfg.replica = -1;
} else {
printf("ERROR: failed to read json, replica not found\n");
errorPrint("%s", "failed to read json, replica not found\n");
goto PARSE_OVER;
}
......@@ -4024,7 +4026,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!keep) {
g_Dbs.db[i].dbCfg.keep = -1;
} else {
printf("ERROR: failed to read json, keep not found\n");
errorPrint("%s", "failed to read json, keep not found\n");
goto PARSE_OVER;
}
......@@ -4034,7 +4036,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!days) {
g_Dbs.db[i].dbCfg.days = -1;
} else {
printf("ERROR: failed to read json, days not found\n");
errorPrint("%s", "failed to read json, days not found\n");
goto PARSE_OVER;
}
......@@ -4044,7 +4046,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!cache) {
g_Dbs.db[i].dbCfg.cache = -1;
} else {
printf("ERROR: failed to read json, cache not found\n");
errorPrint("%s", "failed to read json, cache not found\n");
goto PARSE_OVER;
}
......@@ -4054,7 +4056,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!blocks) {
g_Dbs.db[i].dbCfg.blocks = -1;
} else {
printf("ERROR: failed to read json, block not found\n");
errorPrint("%s", "failed to read json, block not found\n");
goto PARSE_OVER;
}
......@@ -4074,7 +4076,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!minRows) {
g_Dbs.db[i].dbCfg.minRows = 0; // 0 means default
} else {
printf("ERROR: failed to read json, minRows not found\n");
errorPrint("%s", "failed to read json, minRows not found\n");
goto PARSE_OVER;
}
......@@ -4084,7 +4086,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!maxRows) {
g_Dbs.db[i].dbCfg.maxRows = 0; // 0 means default
} else {
printf("ERROR: failed to read json, maxRows not found\n");
errorPrint("%s", "failed to read json, maxRows not found\n");
goto PARSE_OVER;
}
......@@ -4094,7 +4096,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!comp) {
g_Dbs.db[i].dbCfg.comp = -1;
} else {
printf("ERROR: failed to read json, comp not found\n");
errorPrint("%s", "failed to read json, comp not found\n");
goto PARSE_OVER;
}
......@@ -4104,7 +4106,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!walLevel) {
g_Dbs.db[i].dbCfg.walLevel = -1;
} else {
printf("ERROR: failed to read json, walLevel not found\n");
errorPrint("%s", "failed to read json, walLevel not found\n");
goto PARSE_OVER;
}
......@@ -4114,7 +4116,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!cacheLast) {
g_Dbs.db[i].dbCfg.cacheLast = -1;
} else {
printf("ERROR: failed to read json, cacheLast not found\n");
errorPrint("%s", "failed to read json, cacheLast not found\n");
goto PARSE_OVER;
}
......@@ -4134,24 +4136,22 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!fsync) {
g_Dbs.db[i].dbCfg.fsync = -1;
} else {
errorPrint("%s() LN%d, failed to read json, fsync input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, fsync input mistake\n");
goto PARSE_OVER;
}
// super_talbes
cJSON *stables = cJSON_GetObjectItem(dbinfos, "super_tables");
if (!stables || stables->type != cJSON_Array) {
errorPrint("%s() LN%d, failed to read json, super_tables not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, super_tables not found\n");
goto PARSE_OVER;
}
int stbSize = cJSON_GetArraySize(stables);
if (stbSize > MAX_SUPER_TABLE_COUNT) {
errorPrint(
"%s() LN%d, failed to read json, supertable size overflow, max supertable is %d\n",
__func__, __LINE__, MAX_SUPER_TABLE_COUNT);
"failed to read json, supertable size overflow, max supertable is %d\n",
MAX_SUPER_TABLE_COUNT);
goto PARSE_OVER;
}
......@@ -4164,8 +4164,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON *stbName = cJSON_GetObjectItem(stbInfo, "name");
if (!stbName || stbName->type != cJSON_String
|| stbName->valuestring == NULL) {
errorPrint("%s() LN%d, failed to read json, stb name not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, stb name not found\n");
goto PARSE_OVER;
}
tstrncpy(g_Dbs.db[i].superTbls[j].sTblName, stbName->valuestring,
......@@ -4173,7 +4172,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON *prefix = cJSON_GetObjectItem(stbInfo, "childtable_prefix");
if (!prefix || prefix->type != cJSON_String || prefix->valuestring == NULL) {
printf("ERROR: failed to read json, childtable_prefix not found\n");
errorPrint("%s", "failed to read json, childtable_prefix not found\n");
goto PARSE_OVER;
}
tstrncpy(g_Dbs.db[i].superTbls[j].childTblPrefix, prefix->valuestring,
......@@ -4194,7 +4193,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!autoCreateTbl) {
g_Dbs.db[i].superTbls[j].autoCreateTable = PRE_CREATE_SUBTBL;
} else {
printf("ERROR: failed to read json, auto_create_table not found\n");
errorPrint("%s", "failed to read json, auto_create_table not found\n");
goto PARSE_OVER;
}
......@@ -4204,7 +4203,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!batchCreateTbl) {
g_Dbs.db[i].superTbls[j].batchCreateTableNum = 1000;
} else {
printf("ERROR: failed to read json, batch_create_tbl_num not found\n");
errorPrint("%s", "failed to read json, batch_create_tbl_num not found\n");
goto PARSE_OVER;
}
......@@ -4224,8 +4223,8 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!childTblExists) {
g_Dbs.db[i].superTbls[j].childTblExists = TBL_NO_EXISTS;
} else {
errorPrint("%s() LN%d, failed to read json, child_table_exists not found\n",
__func__, __LINE__);
errorPrint("%s",
"failed to read json, child_table_exists not found\n");
goto PARSE_OVER;
}
......@@ -4235,11 +4234,12 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON* count = cJSON_GetObjectItem(stbInfo, "childtable_count");
if (!count || count->type != cJSON_Number || 0 >= count->valueint) {
errorPrint("%s() LN%d, failed to read json, childtable_count input mistake\n",
__func__, __LINE__);
errorPrint("%s",
"failed to read json, childtable_count input mistake\n");
goto PARSE_OVER;
}
g_Dbs.db[i].superTbls[j].childTblCount = count->valueint;
g_totalChildTables += g_Dbs.db[i].superTbls[j].childTblCount;
cJSON *dataSource = cJSON_GetObjectItem(stbInfo, "data_source");
if (dataSource && dataSource->type == cJSON_String
......@@ -4251,8 +4251,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
tstrncpy(g_Dbs.db[i].superTbls[j].dataSource, "rand",
min(SMALL_BUFF_LEN, strlen("rand") + 1));
} else {
errorPrint("%s() LN%d, failed to read json, data_source not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, data_source not found\n");
goto PARSE_OVER;
}
......@@ -4263,13 +4262,11 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
g_Dbs.db[i].superTbls[j].iface= TAOSC_IFACE;
} else if (0 == strcasecmp(stbIface->valuestring, "rest")) {
g_Dbs.db[i].superTbls[j].iface= REST_IFACE;
#if STMT_IFACE_ENABLED == 1
} else if (0 == strcasecmp(stbIface->valuestring, "stmt")) {
g_Dbs.db[i].superTbls[j].iface= STMT_IFACE;
#endif
} else {
errorPrint("%s() LN%d, failed to read json, insert_mode %s not recognized\n",
__func__, __LINE__, stbIface->valuestring);
errorPrint("failed to read json, insert_mode %s not recognized\n",
stbIface->valuestring);
goto PARSE_OVER;
}
} else if (!stbIface) {
......@@ -4283,7 +4280,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
if ((childTbl_limit) && (g_Dbs.db[i].drop != true)
&& (g_Dbs.db[i].superTbls[j].childTblExists == TBL_ALREADY_EXISTS)) {
if (childTbl_limit->type != cJSON_Number) {
printf("ERROR: failed to read json, childtable_limit\n");
errorPrint("%s", "failed to read json, childtable_limit\n");
goto PARSE_OVER;
}
g_Dbs.db[i].superTbls[j].childTblLimit = childTbl_limit->valueint;
......@@ -4296,7 +4293,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
&& (g_Dbs.db[i].superTbls[j].childTblExists == TBL_ALREADY_EXISTS)) {
if ((childTbl_offset->type != cJSON_Number)
|| (0 > childTbl_offset->valueint)) {
printf("ERROR: failed to read json, childtable_offset\n");
errorPrint("%s", "failed to read json, childtable_offset\n");
goto PARSE_OVER;
}
g_Dbs.db[i].superTbls[j].childTblOffset = childTbl_offset->valueint;
......@@ -4312,7 +4309,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
tstrncpy(g_Dbs.db[i].superTbls[j].startTimestamp,
"now", TSDB_DB_NAME_LEN);
} else {
printf("ERROR: failed to read json, start_timestamp not found\n");
errorPrint("%s", "failed to read json, start_timestamp not found\n");
goto PARSE_OVER;
}
......@@ -4322,7 +4319,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!timestampStep) {
g_Dbs.db[i].superTbls[j].timeStampStep = g_args.timestamp_step;
} else {
printf("ERROR: failed to read json, timestamp_step not found\n");
errorPrint("%s", "failed to read json, timestamp_step not found\n");
goto PARSE_OVER;
}
......@@ -4337,7 +4334,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
tstrncpy(g_Dbs.db[i].superTbls[j].sampleFormat, "csv",
SMALL_BUFF_LEN);
} else {
printf("ERROR: failed to read json, sample_format not found\n");
errorPrint("%s", "failed to read json, sample_format not found\n");
goto PARSE_OVER;
}
......@@ -4352,7 +4349,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
memset(g_Dbs.db[i].superTbls[j].sampleFile, 0,
MAX_FILE_NAME_LEN);
} else {
printf("ERROR: failed to read json, sample_file not found\n");
errorPrint("%s", "failed to read json, sample_file not found\n");
goto PARSE_OVER;
}
......@@ -4370,7 +4367,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
memset(g_Dbs.db[i].superTbls[j].tagsFile, 0, MAX_FILE_NAME_LEN);
g_Dbs.db[i].superTbls[j].tagSource = 0;
} else {
printf("ERROR: failed to read json, tags_file not found\n");
errorPrint("%s", "failed to read json, tags_file not found\n");
goto PARSE_OVER;
}
......@@ -4386,8 +4383,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!maxSqlLen) {
g_Dbs.db[i].superTbls[j].maxSqlLen = g_args.max_sql_len;
} else {
errorPrint("%s() LN%d, failed to read json, stbMaxSqlLen input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, stbMaxSqlLen input mistake\n");
goto PARSE_OVER;
}
/*
......@@ -4404,31 +4400,28 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!multiThreadWriteOneTbl) {
g_Dbs.db[i].superTbls[j].multiThreadWriteOneTbl = 0;
} else {
printf("ERROR: failed to read json, multiThreadWriteOneTbl not found\n");
errorPrint("%s", "failed to read json, multiThreadWriteOneTbl not found\n");
goto PARSE_OVER;
}
*/
cJSON* insertRows = cJSON_GetObjectItem(stbInfo, "insert_rows");
if (insertRows && insertRows->type == cJSON_Number) {
if (insertRows->valueint < 0) {
errorPrint("%s() LN%d, failed to read json, insert_rows input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert_rows input mistake\n");
goto PARSE_OVER;
}
g_Dbs.db[i].superTbls[j].insertRows = insertRows->valueint;
} else if (!insertRows) {
g_Dbs.db[i].superTbls[j].insertRows = 0x7FFFFFFFFFFFFFFF;
} else {
errorPrint("%s() LN%d, failed to read json, insert_rows input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert_rows input mistake\n");
goto PARSE_OVER;
}
cJSON* stbInterlaceRows = cJSON_GetObjectItem(stbInfo, "interlace_rows");
if (stbInterlaceRows && stbInterlaceRows->type == cJSON_Number) {
if (stbInterlaceRows->valueint < 0) {
errorPrint("%s() LN%d, failed to read json, interlace rows input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, interlace rows input mistake\n");
goto PARSE_OVER;
}
g_Dbs.db[i].superTbls[j].interlaceRows = stbInterlaceRows->valueint;
......@@ -4446,8 +4439,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
g_Dbs.db[i].superTbls[j].interlaceRows = 0; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
} else {
errorPrint(
"%s() LN%d, failed to read json, interlace rows input mistake\n",
__func__, __LINE__);
"%s", "failed to read json, interlace rows input mistake\n");
goto PARSE_OVER;
}
......@@ -4463,7 +4455,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!disorderRatio) {
g_Dbs.db[i].superTbls[j].disorderRatio = 0;
} else {
printf("ERROR: failed to read json, disorderRatio not found\n");
errorPrint("%s", "failed to read json, disorderRatio not found\n");
goto PARSE_OVER;
}
......@@ -4473,7 +4465,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
} else if (!disorderRange) {
g_Dbs.db[i].superTbls[j].disorderRange = 1000;
} else {
printf("ERROR: failed to read json, disorderRange not found\n");
errorPrint("%s", "failed to read json, disorderRange not found\n");
goto PARSE_OVER;
}
......@@ -4481,8 +4473,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
if (insertInterval && insertInterval->type == cJSON_Number) {
g_Dbs.db[i].superTbls[j].insertInterval = insertInterval->valueint;
if (insertInterval->valueint < 0) {
errorPrint("%s() LN%d, failed to read json, insert_interval input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert_interval input mistake\n");
goto PARSE_OVER;
}
} else if (!insertInterval) {
......@@ -4490,8 +4481,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
__func__, __LINE__, g_args.insert_interval);
g_Dbs.db[i].superTbls[j].insertInterval = g_args.insert_interval;
} else {
errorPrint("%s() LN%d, failed to read json, insert_interval input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, insert_interval input mistake\n");
goto PARSE_OVER;
}
......@@ -4523,7 +4513,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (!host) {
tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
} else {
printf("ERROR: failed to read json, host not found\n");
errorPrint("%s", "failed to read json, host not found\n");
goto PARSE_OVER;
}
......@@ -4543,9 +4533,9 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON* password = cJSON_GetObjectItem(root, "password");
if (password && password->type == cJSON_String && password->valuestring != NULL) {
tstrncpy(g_queryInfo.password, password->valuestring, MAX_PASSWORD_SIZE);
tstrncpy(g_queryInfo.password, password->valuestring, SHELL_MAX_PASSWORD_LEN);
} else if (!password) {
tstrncpy(g_queryInfo.password, "taosdata", MAX_PASSWORD_SIZE);;
tstrncpy(g_queryInfo.password, "taosdata", SHELL_MAX_PASSWORD_LEN);;
}
cJSON *answerPrompt = cJSON_GetObjectItem(root, "confirm_parameter_prompt"); // yes, no,
......@@ -4561,23 +4551,21 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (!answerPrompt) {
g_args.answer_yes = false;
} else {
printf("ERROR: failed to read json, confirm_parameter_prompt not found\n");
errorPrint("%s", "failed to read json, confirm_parameter_prompt not found\n");
goto PARSE_OVER;
}
cJSON* gQueryTimes = cJSON_GetObjectItem(root, "query_times");
if (gQueryTimes && gQueryTimes->type == cJSON_Number) {
if (gQueryTimes->valueint <= 0) {
errorPrint("%s() LN%d, failed to read json, query_times input mistake\n",
__func__, __LINE__);
errorPrint("%s()", "failed to read json, query_times input mistake\n");
goto PARSE_OVER;
}
g_args.query_times = gQueryTimes->valueint;
} else if (!gQueryTimes) {
g_args.query_times = 1;
} else {
errorPrint("%s() LN%d, failed to read json, query_times input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, query_times input mistake\n");
goto PARSE_OVER;
}
......@@ -4585,7 +4573,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
if (dbs && dbs->type == cJSON_String && dbs->valuestring != NULL) {
tstrncpy(g_queryInfo.dbName, dbs->valuestring, TSDB_DB_NAME_LEN);
} else if (!dbs) {
printf("ERROR: failed to read json, databases not found\n");
errorPrint("%s", "failed to read json, databases not found\n");
goto PARSE_OVER;
}
......@@ -4599,7 +4587,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
tstrncpy(g_queryInfo.queryMode, "taosc",
min(SMALL_BUFF_LEN, strlen("taosc") + 1));
} else {
printf("ERROR: failed to read json, query_mode not found\n");
errorPrint("%s", "failed to read json, query_mode not found\n");
goto PARSE_OVER;
}
......@@ -4609,7 +4597,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
g_queryInfo.specifiedQueryInfo.concurrent = 1;
g_queryInfo.specifiedQueryInfo.sqlCount = 0;
} else if (specifiedQuery->type != cJSON_Object) {
printf("ERROR: failed to read json, super_table_query not found\n");
errorPrint("%s", "failed to read json, super_table_query not found\n");
goto PARSE_OVER;
} else {
cJSON* queryInterval = cJSON_GetObjectItem(specifiedQuery, "query_interval");
......@@ -4624,8 +4612,8 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
if (specifiedQueryTimes && specifiedQueryTimes->type == cJSON_Number) {
if (specifiedQueryTimes->valueint <= 0) {
errorPrint(
"%s() LN%d, failed to read json, query_times: %"PRId64", need be a valid (>0) number\n",
__func__, __LINE__, specifiedQueryTimes->valueint);
"failed to read json, query_times: %"PRId64", need be a valid (>0) number\n",
specifiedQueryTimes->valueint);
goto PARSE_OVER;
}
......@@ -4642,8 +4630,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
if (concurrent && concurrent->type == cJSON_Number) {
if (concurrent->valueint <= 0) {
errorPrint(
"%s() LN%d, query sqlCount %d or concurrent %d is not correct.\n",
__func__, __LINE__,
"query sqlCount %d or concurrent %d is not correct.\n",
g_queryInfo.specifiedQueryInfo.sqlCount,
g_queryInfo.specifiedQueryInfo.concurrent);
goto PARSE_OVER;
......@@ -4661,8 +4648,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("async", specifiedAsyncMode->valuestring)) {
g_queryInfo.specifiedQueryInfo.asyncMode = ASYNC_MODE;
} else {
errorPrint("%s() LN%d, failed to read json, async mode input error\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, async mode input error\n");
goto PARSE_OVER;
}
} else {
......@@ -4685,7 +4671,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("no", restart->valuestring)) {
g_queryInfo.specifiedQueryInfo.subscribeRestart = false;
} else {
printf("ERROR: failed to read json, subscribe restart error\n");
errorPrint("%s", "failed to read json, subscribe restart error\n");
goto PARSE_OVER;
}
} else {
......@@ -4701,7 +4687,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("no", keepProgress->valuestring)) {
g_queryInfo.specifiedQueryInfo.subscribeKeepProgress = 0;
} else {
printf("ERROR: failed to read json, subscribe keepProgress error\n");
errorPrint("%s", "failed to read json, subscribe keepProgress error\n");
goto PARSE_OVER;
}
} else {
......@@ -4713,15 +4699,13 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
if (!specifiedSqls) {
g_queryInfo.specifiedQueryInfo.sqlCount = 0;
} else if (specifiedSqls->type != cJSON_Array) {
errorPrint("%s() LN%d, failed to read json, super sqls not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, super sqls not found\n");
goto PARSE_OVER;
} else {
int superSqlSize = cJSON_GetArraySize(specifiedSqls);
if (superSqlSize * g_queryInfo.specifiedQueryInfo.concurrent
> MAX_QUERY_SQL_COUNT) {
errorPrint("%s() LN%d, failed to read json, query sql(%d) * concurrent(%d) overflow, max is %d\n",
__func__, __LINE__,
errorPrint("failed to read json, query sql(%d) * concurrent(%d) overflow, max is %d\n",
superSqlSize,
g_queryInfo.specifiedQueryInfo.concurrent,
MAX_QUERY_SQL_COUNT);
......@@ -4735,7 +4719,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON *sqlStr = cJSON_GetObjectItem(sql, "sql");
if (!sqlStr || sqlStr->type != cJSON_String || sqlStr->valuestring == NULL) {
printf("ERROR: failed to read json, sql not found\n");
errorPrint("%s", "failed to read json, sql not found\n");
goto PARSE_OVER;
}
tstrncpy(g_queryInfo.specifiedQueryInfo.sql[j],
......@@ -4775,7 +4759,8 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
memset(g_queryInfo.specifiedQueryInfo.result[j],
0, MAX_FILE_NAME_LEN);
} else {
printf("ERROR: failed to read json, super query result file not found\n");
errorPrint("%s",
"failed to read json, super query result file not found\n");
goto PARSE_OVER;
}
}
......@@ -4788,7 +4773,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
g_queryInfo.superQueryInfo.threadCnt = 1;
g_queryInfo.superQueryInfo.sqlCount = 0;
} else if (superQuery->type != cJSON_Object) {
printf("ERROR: failed to read json, sub_table_query not found\n");
errorPrint("%s", "failed to read json, sub_table_query not found\n");
ret = true;
goto PARSE_OVER;
} else {
......@@ -4802,24 +4787,22 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON* superQueryTimes = cJSON_GetObjectItem(superQuery, "query_times");
if (superQueryTimes && superQueryTimes->type == cJSON_Number) {
if (superQueryTimes->valueint <= 0) {
errorPrint("%s() LN%d, failed to read json, query_times: %"PRId64", need be a valid (>0) number\n",
__func__, __LINE__, superQueryTimes->valueint);
errorPrint("failed to read json, query_times: %"PRId64", need be a valid (>0) number\n",
superQueryTimes->valueint);
goto PARSE_OVER;
}
g_queryInfo.superQueryInfo.queryTimes = superQueryTimes->valueint;
} else if (!superQueryTimes) {
g_queryInfo.superQueryInfo.queryTimes = g_args.query_times;
} else {
errorPrint("%s() LN%d, failed to read json, query_times input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, query_times input mistake\n");
goto PARSE_OVER;
}
cJSON* threads = cJSON_GetObjectItem(superQuery, "threads");
if (threads && threads->type == cJSON_Number) {
if (threads->valueint <= 0) {
errorPrint("%s() LN%d, failed to read json, threads input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, threads input mistake\n");
goto PARSE_OVER;
}
......@@ -4841,8 +4824,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
tstrncpy(g_queryInfo.superQueryInfo.sTblName, stblname->valuestring,
TSDB_TABLE_NAME_LEN);
} else {
errorPrint("%s() LN%d, failed to read json, super table name input error\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, super table name input error\n");
goto PARSE_OVER;
}
......@@ -4854,8 +4836,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("async", superAsyncMode->valuestring)) {
g_queryInfo.superQueryInfo.asyncMode = ASYNC_MODE;
} else {
errorPrint("%s() LN%d, failed to read json, async mode input error\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, async mode input error\n");
goto PARSE_OVER;
}
} else {
......@@ -4865,8 +4846,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON* superInterval = cJSON_GetObjectItem(superQuery, "interval");
if (superInterval && superInterval->type == cJSON_Number) {
if (superInterval->valueint < 0) {
errorPrint("%s() LN%d, failed to read json, interval input mistake\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, interval input mistake\n");
goto PARSE_OVER;
}
g_queryInfo.superQueryInfo.subscribeInterval = superInterval->valueint;
......@@ -4884,7 +4864,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("no", subrestart->valuestring)) {
g_queryInfo.superQueryInfo.subscribeRestart = false;
} else {
printf("ERROR: failed to read json, subscribe restart error\n");
errorPrint("%s", "failed to read json, subscribe restart error\n");
goto PARSE_OVER;
}
} else {
......@@ -4900,7 +4880,8 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
} else if (0 == strcmp("no", superkeepProgress->valuestring)) {
g_queryInfo.superQueryInfo.subscribeKeepProgress = 0;
} else {
printf("ERROR: failed to read json, subscribe super table keepProgress error\n");
errorPrint("%s",
"failed to read json, subscribe super table keepProgress error\n");
goto PARSE_OVER;
}
} else {
......@@ -4937,14 +4918,13 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
if (!superSqls) {
g_queryInfo.superQueryInfo.sqlCount = 0;
} else if (superSqls->type != cJSON_Array) {
errorPrint("%s() LN%d: failed to read json, super sqls not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, super sqls not found\n");
goto PARSE_OVER;
} else {
int superSqlSize = cJSON_GetArraySize(superSqls);
if (superSqlSize > MAX_QUERY_SQL_COUNT) {
errorPrint("%s() LN%d, failed to read json, query sql size overflow, max is %d\n",
__func__, __LINE__, MAX_QUERY_SQL_COUNT);
errorPrint("failed to read json, query sql size overflow, max is %d\n",
MAX_QUERY_SQL_COUNT);
goto PARSE_OVER;
}
......@@ -4956,8 +4936,7 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON *sqlStr = cJSON_GetObjectItem(sql, "sql");
if (!sqlStr || sqlStr->type != cJSON_String
|| sqlStr->valuestring == NULL) {
errorPrint("%s() LN%d, failed to read json, sql not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, sql not found\n");
goto PARSE_OVER;
}
tstrncpy(g_queryInfo.superQueryInfo.sql[j], sqlStr->valuestring,
......@@ -4965,14 +4944,13 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON *result = cJSON_GetObjectItem(sql, "result");
if (result != NULL && result->type == cJSON_String
&& result->valuestring != NULL){
&& result->valuestring != NULL) {
tstrncpy(g_queryInfo.superQueryInfo.result[j],
result->valuestring, MAX_FILE_NAME_LEN);
} else if (NULL == result) {
memset(g_queryInfo.superQueryInfo.result[j], 0, MAX_FILE_NAME_LEN);
} else {
errorPrint("%s() LN%d, failed to read json, sub query result file not found\n",
__func__, __LINE__);
errorPrint("%s", "failed to read json, sub query result file not found\n");
goto PARSE_OVER;
}
}
......@@ -4990,7 +4968,7 @@ static bool getInfoFromJsonFile(char* file) {
FILE *fp = fopen(file, "r");
if (!fp) {
printf("failed to read %s, reason:%s\n", file, strerror(errno));
errorPrint("failed to read %s, reason:%s\n", file, strerror(errno));
return false;
}
......@@ -5001,14 +4979,14 @@ static bool getInfoFromJsonFile(char* file) {
if (len <= 0) {
free(content);
fclose(fp);
printf("failed to read %s, content is null", file);
errorPrint("failed to read %s, content is null", file);
return false;
}
content[len] = 0;
cJSON* root = cJSON_Parse(content);
if (root == NULL) {
printf("ERROR: failed to cjson parse %s, invalid json format\n", file);
errorPrint("failed to cjson parse %s, invalid json format\n", file);
goto PARSE_OVER;
}
......@@ -5021,13 +4999,13 @@ static bool getInfoFromJsonFile(char* file) {
} else if (0 == strcasecmp("subscribe", filetype->valuestring)) {
g_args.test_mode = SUBSCRIBE_TEST;
} else {
printf("ERROR: failed to read json, filetype not support\n");
errorPrint("%s", "failed to read json, filetype not support\n");
goto PARSE_OVER;
}
} else if (!filetype) {
g_args.test_mode = INSERT_TEST;
} else {
printf("ERROR: failed to read json, filetype not found\n");
errorPrint("%s", "failed to read json, filetype not found\n");
goto PARSE_OVER;
}
......@@ -5037,8 +5015,8 @@ static bool getInfoFromJsonFile(char* file) {
|| (SUBSCRIBE_TEST == g_args.test_mode)) {
ret = getMetaFromQueryJsonFile(root);
} else {
errorPrint("%s() LN%d, input json file type error! please input correct file type: insert or query or subscribe\n",
__func__, __LINE__);
errorPrint("%s",
"input json file type error! please input correct file type: insert or query or subscribe\n");
goto PARSE_OVER;
}
......@@ -5075,7 +5053,6 @@ static void postFreeResource() {
free(g_Dbs.db[i].superTbls[j].sampleDataBuf);
g_Dbs.db[i].superTbls[j].sampleDataBuf = NULL;
}
#if STMT_IFACE_ENABLED == 1
if (g_Dbs.db[i].superTbls[j].sampleBindArray) {
for (int k = 0; k < MAX_SAMPLES_ONCE_FROM_FILE; k++) {
uintptr_t *tmp = (uintptr_t *)(*(uintptr_t *)(
......@@ -5090,7 +5067,6 @@ static void postFreeResource() {
}
}
tmfree((char *)g_Dbs.db[i].superTbls[j].sampleBindArray);
#endif
if (0 != g_Dbs.db[i].superTbls[j].tagDataBuf) {
free(g_Dbs.db[i].superTbls[j].tagDataBuf);
......@@ -5158,7 +5134,7 @@ static int64_t generateStbRowData(
|| (0 == strncasecmp(stbInfo->columns[i].dataType,
"NCHAR", 5))) {
if (stbInfo->columns[i].dataLen > TSDB_MAX_BINARY_LEN) {
errorPrint( "binary or nchar length overflow, max size:%u\n",
errorPrint2("binary or nchar length overflow, max size:%u\n",
(uint32_t)TSDB_MAX_BINARY_LEN);
return -1;
}
......@@ -5170,7 +5146,7 @@ static int64_t generateStbRowData(
}
char* buf = (char*)calloc(stbInfo->columns[i].dataLen+1, 1);
if (NULL == buf) {
errorPrint( "calloc failed! size:%d\n", stbInfo->columns[i].dataLen);
errorPrint2("calloc failed! size:%d\n", stbInfo->columns[i].dataLen);
return -1;
}
rand_string(buf, stbInfo->columns[i].dataLen);
......@@ -5215,7 +5191,8 @@ static int64_t generateStbRowData(
"SMALLINT", 8)) {
tmp = rand_smallint_str();
tmpLen = strlen(tmp);
tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, SMALLINT_BUFF_LEN));
tstrncpy(pstr + dataLen, tmp,
min(tmpLen + 1, SMALLINT_BUFF_LEN));
} else if (0 == strncasecmp(stbInfo->columns[i].dataType,
"TINYINT", 7)) {
tmp = rand_tinyint_str();
......@@ -5228,11 +5205,12 @@ static int64_t generateStbRowData(
tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, BOOL_BUFF_LEN));
} else if (0 == strncasecmp(stbInfo->columns[i].dataType,
"TIMESTAMP", 9)) {
tmp = rand_int_str();
tmp = rand_bigint_str();
tmpLen = strlen(tmp);
tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, INT_BUFF_LEN));
tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, BIGINT_BUFF_LEN));
} else {
errorPrint( "Not support data type: %s\n", stbInfo->columns[i].dataType);
errorPrint2("Not support data type: %s\n",
stbInfo->columns[i].dataType);
return -1;
}
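Editor's note: the hunk above switches randomly generated TIMESTAMP values from rand_int_str()/INT_BUFF_LEN to rand_bigint_str()/BIGINT_BUFF_LEN, since TDengine timestamps are 64-bit epoch values. A tiny sketch of why an int-sized buffer is not enough; the buffer sizes used here are assumed values.

```c
/* Why TIMESTAMP needs the bigint path: epoch milliseconds no longer fit in 32 bits. */
#include <inttypes.h>
#include <stdio.h>

#define INT_BUFF_LEN    12   /* assumed: enough for a 32-bit decimal + NUL */
#define BIGINT_BUFF_LEN 21   /* assumed: enough for a 64-bit decimal + NUL */

int main(void) {
    int64_t now_ms = 1629969600000LL;        /* roughly August 2021 in epoch milliseconds */
    char small[INT_BUFF_LEN], big[BIGINT_BUFF_LEN];

    /* snprintf reports the length it wanted; 13 digits overflow the 12-byte buffer */
    int need = snprintf(small, sizeof(small), "%" PRId64, now_ms);
    printf("needed %d bytes, had %zu -> truncated: %s\n", need, sizeof(small), small);

    snprintf(big, sizeof(big), "%" PRId64, now_ms);
    printf("bigint-sized buffer holds it: %s\n", big);
    return 0;
}
```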
......@@ -5245,8 +5223,7 @@ static int64_t generateStbRowData(
return 0;
}
dataLen -= 1;
dataLen += snprintf(pstr + dataLen, maxLen - dataLen, ")");
tstrncpy(pstr + dataLen - 1, ")", 2);
verbosePrint("%s() LN%d, dataLen:%"PRId64"\n", __func__, __LINE__, dataLen);
verbosePrint("%s() LN%d, recBuf:\n\t%s\n", __func__, __LINE__, recBuf);
......@@ -5284,7 +5261,7 @@ static int64_t generateData(char *recBuf, char **data_type,
} else if (strcasecmp(data_type[i % columnCount], "BINARY") == 0) {
char *s = malloc(lenOfBinary + 1);
if (s == NULL) {
errorPrint("%s() LN%d, memory allocation %d bytes failed\n",
errorPrint2("%s() LN%d, memory allocation %d bytes failed\n",
__func__, __LINE__, lenOfBinary + 1);
exit(EXIT_FAILURE);
}
......@@ -5294,7 +5271,7 @@ static int64_t generateData(char *recBuf, char **data_type,
} else if (strcasecmp(data_type[i % columnCount], "NCHAR") == 0) {
char *s = malloc(lenOfBinary + 1);
if (s == NULL) {
errorPrint("%s() LN%d, memory allocation %d bytes failed\n",
errorPrint2("%s() LN%d, memory allocation %d bytes failed\n",
__func__, __LINE__, lenOfBinary + 1);
exit(EXIT_FAILURE);
}
......@@ -5321,7 +5298,7 @@ static int prepareSampleDataForSTable(SSuperTable *stbInfo) {
sampleDataBuf = calloc(
stbInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE, 1);
if (sampleDataBuf == NULL) {
errorPrint("%s() LN%d, Failed to calloc %"PRIu64" Bytes, reason:%s\n",
errorPrint2("%s() LN%d, Failed to calloc %"PRIu64" Bytes, reason:%s\n",
__func__, __LINE__,
stbInfo->lenOfOneRow * MAX_SAMPLES_ONCE_FROM_FILE,
strerror(errno));
......@@ -5332,7 +5309,7 @@ static int prepareSampleDataForSTable(SSuperTable *stbInfo) {
int ret = readSampleFromCsvFileToMem(stbInfo);
if (0 != ret) {
errorPrint("%s() LN%d, read sample from csv file failed.\n",
errorPrint2("%s() LN%d, read sample from csv file failed.\n",
__func__, __LINE__);
tmfree(sampleDataBuf);
stbInfo->sampleDataBuf = NULL;
......@@ -5383,12 +5360,11 @@ static int32_t execInsert(threadInfo *pThreadInfo, uint32_t k)
}
break;
#if STMT_IFACE_ENABLED == 1
case STMT_IFACE:
debugPrint("%s() LN%d, stmt=%p",
__func__, __LINE__, pThreadInfo->stmt);
if (0 != taos_stmt_execute(pThreadInfo->stmt)) {
errorPrint("%s() LN%d, failied to execute insert statement. reason: %s\n",
errorPrint2("%s() LN%d, failied to execute insert statement. reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(pThreadInfo->stmt));
fprintf(stderr, "\n\033[31m === Please reduce batch number if WAL size exceeds limit. ===\033[0m\n\n");
......@@ -5396,10 +5372,9 @@ static int32_t execInsert(threadInfo *pThreadInfo, uint32_t k)
}
affectedRows = k;
break;
#endif
default:
errorPrint("%s() LN%d: unknown insert mode: %d\n",
errorPrint2("%s() LN%d: unknown insert mode: %d\n",
__func__, __LINE__, stbInfo->iface);
affectedRows = 0;
}
......@@ -5627,7 +5602,7 @@ static int generateStbSQLHead(
tableSeq % stbInfo->tagSampleCount);
}
if (NULL == tagsValBuf) {
errorPrint("%s() LN%d, tag buf failed to allocate memory\n",
errorPrint2("%s() LN%d, tag buf failed to allocate memory\n",
__func__, __LINE__);
return -1;
}
......@@ -5769,7 +5744,6 @@ static int64_t generateInterlaceDataWithoutStb(
return k;
}
#if STMT_IFACE_ENABLED == 1
static int32_t prepareStmtBindArrayByType(
TAOS_BIND *bind,
char *dataType, int32_t dataLen,
......@@ -5779,7 +5753,7 @@ static int32_t prepareStmtBindArrayByType(
if (0 == strncasecmp(dataType,
"BINARY", strlen("BINARY"))) {
if (dataLen > TSDB_MAX_BINARY_LEN) {
errorPrint( "binary length overflow, max size:%u\n",
errorPrint2("binary length overflow, max size:%u\n",
(uint32_t)TSDB_MAX_BINARY_LEN);
return -1;
}
......@@ -5802,7 +5776,7 @@ static int32_t prepareStmtBindArrayByType(
} else if (0 == strncasecmp(dataType,
"NCHAR", strlen("NCHAR"))) {
if (dataLen > TSDB_MAX_BINARY_LEN) {
errorPrint( "nchar length overflow, max size:%u\n",
errorPrint2("nchar length overflow, max size:%u\n",
(uint32_t)TSDB_MAX_BINARY_LEN);
return -1;
}
......@@ -5950,7 +5924,7 @@ static int32_t prepareStmtBindArrayByType(
value, &tmpEpoch, strlen(value),
timePrec, 0)) {
free(bind_ts2);
errorPrint("Input %s, time format error!\n", value);
errorPrint2("Input %s, time format error!\n", value);
return -1;
}
*bind_ts2 = tmpEpoch;
......@@ -5966,7 +5940,7 @@ static int32_t prepareStmtBindArrayByType(
bind->length = &bind->buffer_length;
bind->is_null = NULL;
} else {
errorPrint( "No support data type: %s\n", dataType);
errorPrint2("Not support data type: %s\n", dataType);
return -1;
}
......@@ -5983,7 +5957,7 @@ static int32_t prepareStmtBindArrayByTypeForRand(
if (0 == strncasecmp(dataType,
"BINARY", strlen("BINARY"))) {
if (dataLen > TSDB_MAX_BINARY_LEN) {
errorPrint( "binary length overflow, max size:%u\n",
errorPrint2("binary length overflow, max size:%u\n",
(uint32_t)TSDB_MAX_BINARY_LEN);
return -1;
}
......@@ -6006,7 +5980,7 @@ static int32_t prepareStmtBindArrayByTypeForRand(
} else if (0 == strncasecmp(dataType,
"NCHAR", strlen("NCHAR"))) {
if (dataLen > TSDB_MAX_BINARY_LEN) {
errorPrint( "nchar length overflow, max size:%u\n",
errorPrint2("nchar length overflow, max size: %u\n",
(uint32_t)TSDB_MAX_BINARY_LEN);
return -1;
}
......@@ -6158,7 +6132,7 @@ static int32_t prepareStmtBindArrayByTypeForRand(
if (TSDB_CODE_SUCCESS != taosParseTime(
value, &tmpEpoch, strlen(value),
timePrec, 0)) {
errorPrint("Input %s, time format error!\n", value);
errorPrint2("Input %s, time format error!\n", value);
return -1;
}
*bind_ts2 = tmpEpoch;
......@@ -6176,7 +6150,7 @@ static int32_t prepareStmtBindArrayByTypeForRand(
*ptr += bind->buffer_length;
} else {
errorPrint( "No support data type: %s\n", dataType);
errorPrint2("No support data type: %s\n", dataType);
return -1;
}
......@@ -6194,7 +6168,7 @@ static int32_t prepareStmtWithoutStb(
TAOS_STMT *stmt = pThreadInfo->stmt;
int ret = taos_stmt_set_tbname(stmt, tableName);
if (ret != 0) {
errorPrint("failed to execute taos_stmt_set_tbname(%s). return 0x%x. reason: %s\n",
errorPrint2("failed to execute taos_stmt_set_tbname(%s). return 0x%x. reason: %s\n",
tableName, ret, taos_stmt_errstr(stmt));
return ret;
}
......@@ -6203,7 +6177,7 @@ static int32_t prepareStmtWithoutStb(
char *bindArray = malloc(sizeof(TAOS_BIND) * (g_args.num_of_CPR + 1));
if (bindArray == NULL) {
errorPrint("Failed to allocate %d bind params\n",
errorPrint2("Failed to allocate %d bind params\n",
(g_args.num_of_CPR + 1));
return -1;
}
......@@ -6244,13 +6218,13 @@ static int32_t prepareStmtWithoutStb(
}
}
if (0 != taos_stmt_bind_param(stmt, (TAOS_BIND *)bindArray)) {
errorPrint("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
break;
}
// if msg > 3MB, break
if (0 != taos_stmt_add_batch(stmt)) {
errorPrint("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
break;
}
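Editor's note: prepareStmtWithoutStb above walks the usual TDengine prepared-statement sequence: taos_stmt_set_tbname, taos_stmt_bind_param, taos_stmt_add_batch, and finally taos_stmt_execute in execInsert. Below is a condensed, hedged example of that flow; the connection parameters and the demo.t1 (ts TIMESTAMP, v INT) schema are assumptions, and a running TDengine server is required.

```c
/* Condensed sketch of the stmt insert path: set_tbname -> bind_param -> add_batch -> execute.
 * Assumes a reachable server and an existing table demo.t1 (ts TIMESTAMP, v INT). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <taos.h>

int main(void) {
    TAOS *taos = taos_connect("127.0.0.1", "root", "taosdata", "demo", 0);
    if (taos == NULL) {
        fprintf(stderr, "connect failed: %s\n", taos_errstr(NULL));
        return 1;
    }

    TAOS_STMT *stmt = taos_stmt_init(taos);
    taos_stmt_prepare(stmt, "INSERT INTO ? VALUES(?,?)", 0);
    taos_stmt_set_tbname(stmt, "t1");

    int64_t ts = 1629969600000LL;
    int32_t v  = 7;
    TAOS_BIND binds[2];
    memset(binds, 0, sizeof(binds));
    binds[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
    binds[0].buffer        = &ts;
    binds[0].buffer_length = sizeof(ts);
    binds[0].length        = &binds[0].buffer_length;   /* same trick as the diff */
    binds[1].buffer_type   = TSDB_DATA_TYPE_INT;
    binds[1].buffer        = &v;
    binds[1].buffer_length = sizeof(v);
    binds[1].length        = &binds[1].buffer_length;

    if (taos_stmt_bind_param(stmt, binds) != 0 ||
        taos_stmt_add_batch(stmt) != 0 ||
        taos_stmt_execute(stmt) != 0) {
        fprintf(stderr, "stmt failed: %s\n", taos_stmt_errstr(stmt));
    }
    taos_stmt_close(stmt);
    taos_close(taos);
    return 0;
}
```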
......@@ -6273,7 +6247,7 @@ static int32_t prepareStbStmtBindTag(
{
char *bindBuffer = calloc(1, DOUBLE_BUFF_LEN); // g_args.len_of_binary);
if (bindBuffer == NULL) {
errorPrint("%s() LN%d, Failed to allocate %d bind buffer\n",
errorPrint2("%s() LN%d, Failed to allocate %d bind buffer\n",
__func__, __LINE__, DOUBLE_BUFF_LEN);
return -1;
}
......@@ -6305,7 +6279,7 @@ static int32_t prepareStbStmtBindRand(
{
char *bindBuffer = calloc(1, DOUBLE_BUFF_LEN); // g_args.len_of_binary);
if (bindBuffer == NULL) {
errorPrint("%s() LN%d, Failed to allocate %d bind buffer\n",
errorPrint2("%s() LN%d, Failed to allocate %d bind buffer\n",
__func__, __LINE__, DOUBLE_BUFF_LEN);
return -1;
}
......@@ -6408,7 +6382,7 @@ static int32_t prepareStbStmtRand(
}
if (NULL == tagsValBuf) {
errorPrint("%s() LN%d, tag buf failed to allocate memory\n",
errorPrint2("%s() LN%d, tag buf failed to allocate memory\n",
__func__, __LINE__);
return -1;
}
......@@ -6416,7 +6390,7 @@ static int32_t prepareStbStmtRand(
char *tagsArray = calloc(1, sizeof(TAOS_BIND) * stbInfo->tagCount);
if (NULL == tagsArray) {
tmfree(tagsValBuf);
errorPrint("%s() LN%d, tag buf failed to allocate memory\n",
errorPrint2("%s() LN%d, tag buf failed to allocate memory\n",
__func__, __LINE__);
return -1;
}
......@@ -6435,14 +6409,14 @@ static int32_t prepareStbStmtRand(
tmfree(tagsArray);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_set_tbname_tags() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_set_tbname_tags() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
} else {
ret = taos_stmt_set_tbname(stmt, tableName);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_set_tbname() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_set_tbname() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
......@@ -6450,7 +6424,7 @@ static int32_t prepareStbStmtRand(
char *bindArray = calloc(1, sizeof(TAOS_BIND) * (stbInfo->columnCount + 1));
if (bindArray == NULL) {
errorPrint("%s() LN%d, Failed to allocate %d bind params\n",
errorPrint2("%s() LN%d, Failed to allocate %d bind params\n",
__func__, __LINE__, (stbInfo->columnCount + 1));
return -1;
}
......@@ -6469,7 +6443,7 @@ static int32_t prepareStbStmtRand(
}
ret = taos_stmt_bind_param(stmt, (TAOS_BIND *)bindArray);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
free(bindArray);
return -1;
......@@ -6477,7 +6451,7 @@ static int32_t prepareStbStmtRand(
// if msg > 3MB, break
ret = taos_stmt_add_batch(stmt);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
free(bindArray);
return -1;
......@@ -6521,7 +6495,7 @@ static int32_t prepareStbStmtWithSample(
}
if (NULL == tagsValBuf) {
errorPrint("%s() LN%d, tag buf failed to allocate memory\n",
errorPrint2("%s() LN%d, tag buf failed to allocate memory\n",
__func__, __LINE__);
return -1;
}
......@@ -6529,7 +6503,7 @@ static int32_t prepareStbStmtWithSample(
char *tagsArray = calloc(1, sizeof(TAOS_BIND) * stbInfo->tagCount);
if (NULL == tagsArray) {
tmfree(tagsValBuf);
errorPrint("%s() LN%d, tag buf failed to allocate memory\n",
errorPrint2("%s() LN%d, tag buf failed to allocate memory\n",
__func__, __LINE__);
return -1;
}
......@@ -6548,14 +6522,14 @@ static int32_t prepareStbStmtWithSample(
tmfree(tagsArray);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_set_tbname_tags() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_set_tbname_tags() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
} else {
ret = taos_stmt_set_tbname(stmt, tableName);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_set_tbname() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_set_tbname() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
......@@ -6577,14 +6551,14 @@ static int32_t prepareStbStmtWithSample(
}
ret = taos_stmt_bind_param(stmt, (TAOS_BIND *)bindArray);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_bind_param() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
// if msg > 3MB, break
ret = taos_stmt_add_batch(stmt);
if (0 != ret) {
errorPrint("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
errorPrint2("%s() LN%d, stmt_add_batch() failed! reason: %s\n",
__func__, __LINE__, taos_stmt_errstr(stmt));
return -1;
}
......@@ -6604,7 +6578,6 @@ static int32_t prepareStbStmtWithSample(
return k;
}
#endif
static int32_t generateStbProgressiveData(
SSuperTable *stbInfo,
......@@ -6746,7 +6719,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
pThreadInfo->buffer = calloc(maxSqlLen, 1);
if (NULL == pThreadInfo->buffer) {
errorPrint( "%s() LN%d, Failed to alloc %"PRIu64" Bytes, reason:%s\n",
errorPrint2( "%s() LN%d, Failed to alloc %"PRIu64" Bytes, reason:%s\n",
__func__, __LINE__, maxSqlLen, strerror(errno));
return NULL;
}
......@@ -6794,7 +6767,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
getTableName(tableName, pThreadInfo, tableSeq);
if (0 == strlen(tableName)) {
errorPrint("[%d] %s() LN%d, getTableName return null\n",
errorPrint2("[%d] %s() LN%d, getTableName return null\n",
pThreadInfo->threadID, __func__, __LINE__);
free(pThreadInfo->buffer);
return NULL;
......@@ -6805,7 +6778,6 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
int32_t generated;
if (stbInfo) {
if (stbInfo->iface == STMT_IFACE) {
#if STMT_IFACE_ENABLED == 1
if (sourceRand) {
generated = prepareStbStmtRand(
pThreadInfo,
......@@ -6825,9 +6797,6 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
startTime,
&(pThreadInfo->samplePos));
}
#else
generated = -1;
#endif
} else {
generated = generateStbInterlaceData(
pThreadInfo,
......@@ -6845,16 +6814,12 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
pThreadInfo->threadID,
__func__, __LINE__,
tableName, batchPerTbl, startTime);
#if STMT_IFACE_ENABLED == 1
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
batchPerTbl,
insertRows, i,
startTime);
#else
generated = -1;
#endif
} else {
generated = generateInterlaceDataWithoutStb(
tableName, batchPerTbl,
......@@ -6869,7 +6834,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
debugPrint("[%d] %s() LN%d, generated records is %d\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
if (generated < 0) {
errorPrint("[%d] %s() LN%d, generated records is %d\n",
errorPrint2("[%d] %s() LN%d, generated records is %d\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
goto free_of_interlace;
} else if (generated == 0) {
......@@ -6923,7 +6888,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
startTs = taosGetTimestampUs();
if (recOfBatch == 0) {
errorPrint("[%d] %s() LN%d Failed to insert records of batch %d\n",
errorPrint2("[%d] %s() LN%d Failed to insert records of batch %d\n",
pThreadInfo->threadID, __func__, __LINE__,
batchPerTbl);
if (batchPerTbl > 0) {
......@@ -6950,7 +6915,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
pThreadInfo->totalDelay += delay;
if (recOfBatch != affectedRows) {
errorPrint("[%d] %s() LN%d execInsert insert %d, affected rows: %"PRId64"\n%s\n",
errorPrint2("[%d] %s() LN%d execInsert insert %d, affected rows: %"PRId64"\n%s\n",
pThreadInfo->threadID, __func__, __LINE__,
recOfBatch, affectedRows, pThreadInfo->buffer);
goto free_of_interlace;
......@@ -7008,7 +6973,7 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
pThreadInfo->buffer = calloc(maxSqlLen, 1);
if (NULL == pThreadInfo->buffer) {
errorPrint( "Failed to alloc %"PRIu64" Bytes, reason:%s\n",
errorPrint2("Failed to alloc %"PRIu64" bytes, reason:%s\n",
maxSqlLen,
strerror(errno));
return NULL;
......@@ -7049,7 +7014,7 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
__func__, __LINE__,
pThreadInfo->threadID, tableSeq, tableName);
if (0 == strlen(tableName)) {
errorPrint("[%d] %s() LN%d, getTableName return null\n",
errorPrint2("[%d] %s() LN%d, getTableName return null\n",
pThreadInfo->threadID, __func__, __LINE__);
free(pThreadInfo->buffer);
return NULL;
......@@ -7067,7 +7032,6 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
int32_t generated;
if (stbInfo) {
if (stbInfo->iface == STMT_IFACE) {
#if STMT_IFACE_ENABLED == 1
if (sourceRand) {
generated = prepareStbStmtRand(
pThreadInfo,
......@@ -7086,9 +7050,6 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
insertRows, i, start_time,
&(pThreadInfo->samplePos));
}
#else
generated = -1;
#endif
} else {
generated = generateStbProgressiveData(
stbInfo,
......@@ -7100,16 +7061,12 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
}
} else {
if (g_args.iface == STMT_IFACE) {
#if STMT_IFACE_ENABLED == 1
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
g_args.num_of_RPR,
insertRows, i,
start_time);
#else
generated = -1;
#endif
} else {
generated = generateProgressiveDataWithoutStb(
tableName,
......@@ -7146,7 +7103,7 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
pThreadInfo->totalDelay += delay;
if (affectedRows < 0) {
errorPrint("%s() LN%d, affected rows: %d\n",
errorPrint2("%s() LN%d, affected rows: %d\n",
__func__, __LINE__, affectedRows);
goto free_of_progressive;
}
......@@ -7308,7 +7265,7 @@ static int convertHostToServAddr(char *host, uint16_t port, struct sockaddr_in *
uint16_t rest_port = port + TSDB_PORT_HTTP;
struct hostent *server = gethostbyname(host);
if ((server == NULL) || (server->h_addr == NULL)) {
errorPrint("%s", "ERROR, no such host");
errorPrint2("%s", "no such host");
return -1;
}
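Editor's note: convertHostToServAddr above resolves the host with gethostbyname and fills a sockaddr_in for the REST path. A self-contained sketch of the same steps; the port value is an assumption, and h_addr_list[0] is spelled out instead of the h_addr alias used in the diff.

```c
/* Sketch of host + port to sockaddr_in, mirroring convertHostToServAddr() above. */
#include <arpa/inet.h>
#include <netdb.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int toServAddr(const char *host, uint16_t port, struct sockaddr_in *serv) {
    struct hostent *server = gethostbyname(host);
    if (server == NULL || server->h_addr_list[0] == NULL) {
        fprintf(stderr, "ERROR: no such host: %s\n", host);
        return -1;
    }
    memset(serv, 0, sizeof(*serv));
    serv->sin_family = AF_INET;
    serv->sin_port   = htons(port);   /* network byte order */
    memcpy(&serv->sin_addr.s_addr, server->h_addr_list[0], server->h_length);
    return 0;
}

int main(void) {
    struct sockaddr_in addr;
    /* 6041 is an assumed REST port (default service port plus the HTTP offset) */
    if (toServAddr("127.0.0.1", 6041, &addr) == 0)
        printf("resolved to %s\n", inet_ntoa(addr.sin_addr));
    return 0;
}
```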
......@@ -7329,12 +7286,11 @@ static int convertHostToServAddr(char *host, uint16_t port, struct sockaddr_in *
return 0;
}
#if STMT_IFACE_ENABLED == 1
static int parseSampleFileToStmt(SSuperTable *stbInfo, uint32_t timePrec)
{
stbInfo->sampleBindArray = calloc(1, sizeof(char *) * MAX_SAMPLES_ONCE_FROM_FILE);
if (stbInfo->sampleBindArray == NULL) {
errorPrint("%s() LN%d, Failed to allocate %"PRIu64" bind array buffer\n",
errorPrint2("%s() LN%d, Failed to allocate %"PRIu64" bind array buffer\n",
__func__, __LINE__, (uint64_t)sizeof(char *) * MAX_SAMPLES_ONCE_FROM_FILE);
return -1;
}
......@@ -7343,7 +7299,7 @@ static int parseSampleFileToStmt(SSuperTable *stbInfo, uint32_t timePrec)
for (int i=0; i < MAX_SAMPLES_ONCE_FROM_FILE; i++) {
char *bindArray = calloc(1, sizeof(TAOS_BIND) * (stbInfo->columnCount + 1));
if (bindArray == NULL) {
errorPrint("%s() LN%d, Failed to allocate %d bind params\n",
errorPrint2("%s() LN%d, Failed to allocate %d bind params\n",
__func__, __LINE__, (stbInfo->columnCount + 1));
return -1;
}
......@@ -7375,7 +7331,7 @@ static int parseSampleFileToStmt(SSuperTable *stbInfo, uint32_t timePrec)
char *bindBuffer = calloc(1, index + 1);
if (bindBuffer == NULL) {
errorPrint("%s() LN%d, Failed to allocate %d bind buffer\n",
errorPrint2("%s() LN%d, Failed to allocate %d bind buffer\n",
__func__, __LINE__, DOUBLE_BUFF_LEN);
return -1;
}
......@@ -7400,7 +7356,6 @@ static int parseSampleFileToStmt(SSuperTable *stbInfo, uint32_t timePrec)
return 0;
}
#endif
static void startMultiThreadInsertData(int threads, char* db_name,
char* precision, SSuperTable* stbInfo) {
......@@ -7411,12 +7366,10 @@ static void startMultiThreadInsertData(int threads, char* db_name,
timePrec = TSDB_TIME_PRECISION_MILLI;
} else if (0 == strncasecmp(precision, "us", 2)) {
timePrec = TSDB_TIME_PRECISION_MICRO;
#if NANO_SECOND_ENABLED == 1
} else if (0 == strncasecmp(precision, "ns", 2)) {
timePrec = TSDB_TIME_PRECISION_NANO;
#endif
} else {
errorPrint("Not support precision: %s\n", precision);
errorPrint2("Not support precision: %s\n", precision);
exit(EXIT_FAILURE);
}
}
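Editor's note: with the NANO_SECOND_ENABLED guard removed above, the "ms", "us" and "ns" precision strings are all handled unconditionally. A small sketch of that mapping; the enum values here are assumed to match the TSDB_TIME_PRECISION_* constants (0/1/2) defined in the TDengine headers.

```c
/* Sketch of the precision-string mapping now compiled unconditionally above. */
#include <stdio.h>
#include <string.h>
#include <strings.h>

enum { PRECISION_MILLI = 0, PRECISION_MICRO = 1, PRECISION_NANO = 2 };  /* assumed values */

static int parsePrecision(const char *precision) {
    if (precision == NULL || 0 == strncasecmp(precision, "ms", 2)) return PRECISION_MILLI;
    if (0 == strncasecmp(precision, "us", 2)) return PRECISION_MICRO;
    if (0 == strncasecmp(precision, "ns", 2)) return PRECISION_NANO;
    return -1;   /* caller treats this as "Not support precision" and exits */
}

int main(void) {
    printf("ns -> %d\n", parsePrecision("ns"));
    printf("xx -> %d\n", parsePrecision("xx"));
    return 0;
}
```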
......@@ -7446,7 +7399,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
if ((stbInfo) && (0 == strncasecmp(stbInfo->dataSource,
"sample", strlen("sample")))) {
if (0 != prepareSampleDataForSTable(stbInfo)) {
errorPrint("%s() LN%d, prepare sample data for stable failed!\n",
errorPrint2("%s() LN%d, prepare sample data for stable failed!\n",
__func__, __LINE__);
exit(EXIT_FAILURE);
}
......@@ -7456,7 +7409,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
g_Dbs.host, g_Dbs.user,
g_Dbs.password, db_name, g_Dbs.port);
if (NULL == taos0) {
errorPrint("%s() LN%d, connect to server fail , reason: %s\n",
errorPrint2("%s() LN%d, connect to server fail , reason: %s\n",
__func__, __LINE__, taos_errstr(NULL));
exit(EXIT_FAILURE);
}
......@@ -7511,7 +7464,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
limit * TSDB_TABLE_NAME_LEN);
if (stbInfo->childTblName == NULL) {
taos_close(taos0);
errorPrint("%s() LN%d, alloc memory failed!\n", __func__, __LINE__);
errorPrint2("%s() LN%d, alloc memory failed!\n", __func__, __LINE__);
exit(EXIT_FAILURE);
}
......@@ -7557,7 +7510,6 @@ static void startMultiThreadInsertData(int threads, char* db_name,
memset(pids, 0, threads * sizeof(pthread_t));
memset(infos, 0, threads * sizeof(threadInfo));
#if STMT_IFACE_ENABLED == 1
char *stmtBuffer = calloc(1, BUFFER_SIZE);
assert(stmtBuffer);
if ((g_args.iface == STMT_IFACE)
......@@ -7598,7 +7550,6 @@ static void startMultiThreadInsertData(int threads, char* db_name,
parseSampleFileToStmt(stbInfo, timePrec);
}
}
#endif
for (int i = 0; i < threads; i++) {
threadInfo *pThreadInfo = infos + i;
......@@ -7619,14 +7570,13 @@ static void startMultiThreadInsertData(int threads, char* db_name,
g_Dbs.password, db_name, g_Dbs.port);
if (NULL == pThreadInfo->taos) {
free(infos);
errorPrint(
errorPrint2(
"%s() LN%d, connect to server fail from insert sub thread, reason: %s\n",
__func__, __LINE__,
taos_errstr(NULL));
exit(EXIT_FAILURE);
}
#if STMT_IFACE_ENABLED == 1
if ((g_args.iface == STMT_IFACE)
|| ((stbInfo)
&& (stbInfo->iface == STMT_IFACE))) {
......@@ -7636,7 +7586,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
if (NULL == pThreadInfo->stmt) {
free(pids);
free(infos);
errorPrint(
errorPrint2(
"%s() LN%d, failed init stmt, reason: %s\n",
__func__, __LINE__,
taos_errstr(NULL));
......@@ -7644,17 +7594,16 @@ static void startMultiThreadInsertData(int threads, char* db_name,
}
int ret = taos_stmt_prepare(pThreadInfo->stmt, stmtBuffer, 0);
if (ret != 0){
if (ret != 0) {
free(pids);
free(infos);
free(stmtBuffer);
errorPrint("failed to execute taos_stmt_prepare. return 0x%x. reason: %s\n",
errorPrint2("failed to execute taos_stmt_prepare. return 0x%x. reason: %s\n",
ret, taos_stmt_errstr(pThreadInfo->stmt));
exit(EXIT_FAILURE);
}
pThreadInfo->bind_ts = malloc(sizeof(int64_t));
}
#endif
} else {
pThreadInfo->taos = NULL;
}
......@@ -7680,9 +7629,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
}
}
#if STMT_IFACE_ENABLED == 1
free(stmtBuffer);
#endif
for (int i = 0; i < threads; i++) {
pthread_join(pids[i], NULL);
......@@ -7697,12 +7644,10 @@ static void startMultiThreadInsertData(int threads, char* db_name,
for (int i = 0; i < threads; i++) {
threadInfo *pThreadInfo = infos + i;
#if STMT_IFACE_ENABLED == 1
if (pThreadInfo->stmt) {
taos_stmt_close(pThreadInfo->stmt);
tmfree((char *)pThreadInfo->bind_ts);
}
#endif
tsem_destroy(&(pThreadInfo->lock_sem));
taos_close(pThreadInfo->taos);
......@@ -7797,7 +7742,7 @@ static void *readTable(void *sarg) {
char *tb_prefix = pThreadInfo->tb_prefix;
FILE *fp = fopen(pThreadInfo->filePath, "a");
if (NULL == fp) {
errorPrint( "fopen %s fail, reason:%s.\n", pThreadInfo->filePath, strerror(errno));
errorPrint2("fopen %s fail, reason:%s.\n", pThreadInfo->filePath, strerror(errno));
free(command);
return NULL;
}
......@@ -7833,7 +7778,7 @@ static void *readTable(void *sarg) {
int32_t code = taos_errno(pSql);
if (code != 0) {
errorPrint( "Failed to query:%s\n", taos_errstr(pSql));
errorPrint2("Failed to query:%s\n", taos_errstr(pSql));
taos_free_result(pSql);
taos_close(taos);
fclose(fp);
......@@ -7915,7 +7860,7 @@ static void *readMetric(void *sarg) {
int32_t code = taos_errno(pSql);
if (code != 0) {
errorPrint( "Failed to query:%s\n", taos_errstr(pSql));
errorPrint2("Failed to query:%s\n", taos_errstr(pSql));
taos_free_result(pSql);
taos_close(taos);
fclose(fp);
......@@ -7962,7 +7907,7 @@ static int insertTestProcess() {
debugPrint("%d result file: %s\n", __LINE__, g_Dbs.resultFile);
g_fpOfInsertResult = fopen(g_Dbs.resultFile, "a");
if (NULL == g_fpOfInsertResult) {
errorPrint( "Failed to open %s for save result\n", g_Dbs.resultFile);
errorPrint("Failed to open %s for save result\n", g_Dbs.resultFile);
return -1;
}
......@@ -7995,18 +7940,30 @@ static int insertTestProcess() {
double start;
double end;
// create child tables
start = taosGetTimestampMs();
createChildTables();
end = taosGetTimestampMs();
if (g_totalChildTables > 0) {
fprintf(stderr, "Spent %.4f seconds to create %"PRId64" tables with %d thread(s)\n\n",
(end - start)/1000.0, g_totalChildTables, g_Dbs.threadCountByCreateTbl);
fprintf(stderr,
"creating %"PRId64" table(s) with %d thread(s)\n\n",
g_totalChildTables, g_Dbs.threadCountByCreateTbl);
if (g_fpOfInsertResult) {
fprintf(g_fpOfInsertResult,
"creating %"PRId64" table(s) with %d thread(s)\n\n",
g_totalChildTables, g_Dbs.threadCountByCreateTbl);
}
// create child tables
start = taosGetTimestampMs();
createChildTables();
end = taosGetTimestampMs();
fprintf(stderr,
"\nSpent %.4f seconds to create %"PRId64" table(s) with %d thread(s), actual %"PRId64" table(s) created\n\n",
(end - start)/1000.0, g_totalChildTables,
g_Dbs.threadCountByCreateTbl, g_actualChildTables);
if (g_fpOfInsertResult) {
fprintf(g_fpOfInsertResult,
"Spent %.4f seconds to create %"PRId64" tables with %d thread(s)\n\n",
(end - start)/1000.0, g_totalChildTables, g_Dbs.threadCountByCreateTbl);
"\nSpent %.4f seconds to create %"PRId64" table(s) with %d thread(s), actual %"PRId64" table(s) created\n\n",
(end - start)/1000.0, g_totalChildTables,
g_Dbs.threadCountByCreateTbl, g_actualChildTables);
}
}
......@@ -8064,7 +8021,7 @@ static void *specifiedTableQuery(void *sarg) {
NULL,
g_queryInfo.port);
if (taos == NULL) {
errorPrint("[%d] Failed to connect to TDengine, reason:%s\n",
errorPrint2("[%d] Failed to connect to TDengine, reason:%s\n",
pThreadInfo->threadID, taos_errstr(NULL));
return NULL;
} else {
......@@ -8076,7 +8033,7 @@ static void *specifiedTableQuery(void *sarg) {
sprintf(sqlStr, "use %s", g_queryInfo.dbName);
if (0 != queryDbExec(pThreadInfo->taos, sqlStr, NO_INSERT_TYPE, false)) {
taos_close(pThreadInfo->taos);
errorPrint( "use database %s failed!\n\n",
errorPrint("use database %s failed!\n\n",
g_queryInfo.dbName);
return NULL;
}
......@@ -8242,7 +8199,7 @@ static int queryTestProcess() {
NULL,
g_queryInfo.port);
if (taos == NULL) {
errorPrint( "Failed to connect to TDengine, reason:%s\n",
errorPrint("Failed to connect to TDengine, reason:%s\n",
taos_errstr(NULL));
exit(EXIT_FAILURE);
}
......@@ -8300,7 +8257,7 @@ static int queryTestProcess() {
taos_close(taos);
free(infos);
free(pids);
errorPrint( "use database %s failed!\n\n",
errorPrint2("use database %s failed!\n\n",
g_queryInfo.dbName);
return -1;
}
......@@ -8398,7 +8355,7 @@ static int queryTestProcess() {
static void stable_sub_callback(
TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
if (res == NULL || taos_errno(res) != 0) {
errorPrint("%s() LN%d, failed to subscribe result, code:%d, reason:%s\n",
errorPrint2("%s() LN%d, failed to subscribe result, code:%d, reason:%s\n",
__func__, __LINE__, code, taos_errstr(res));
return;
}
......@@ -8411,7 +8368,7 @@ static void stable_sub_callback(
static void specified_sub_callback(
TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
if (res == NULL || taos_errno(res) != 0) {
errorPrint("%s() LN%d, failed to subscribe result, code:%d, reason:%s\n",
errorPrint2("%s() LN%d, failed to subscribe result, code:%d, reason:%s\n",
__func__, __LINE__, code, taos_errstr(res));
return;
}
......@@ -8450,7 +8407,7 @@ static TAOS_SUB* subscribeImpl(
}
if (tsub == NULL) {
errorPrint("failed to create subscription. topic:%s, sql:%s\n", topic, sql);
errorPrint2("failed to create subscription. topic:%s, sql:%s\n", topic, sql);
return NULL;
}
......@@ -8481,7 +8438,7 @@ static void *superSubscribe(void *sarg) {
g_queryInfo.dbName,
g_queryInfo.port);
if (pThreadInfo->taos == NULL) {
errorPrint("[%d] Failed to connect to TDengine, reason:%s\n",
errorPrint2("[%d] Failed to connect to TDengine, reason:%s\n",
pThreadInfo->threadID, taos_errstr(NULL));
free(subSqlStr);
return NULL;
......@@ -8492,7 +8449,7 @@ static void *superSubscribe(void *sarg) {
sprintf(sqlStr, "USE %s", g_queryInfo.dbName);
if (0 != queryDbExec(pThreadInfo->taos, sqlStr, NO_INSERT_TYPE, false)) {
taos_close(pThreadInfo->taos);
errorPrint( "use database %s failed!\n\n",
errorPrint2("use database %s failed!\n\n",
g_queryInfo.dbName);
free(subSqlStr);
return NULL;
......@@ -8628,7 +8585,7 @@ static void *specifiedSubscribe(void *sarg) {
g_queryInfo.dbName,
g_queryInfo.port);
if (pThreadInfo->taos == NULL) {
errorPrint("[%d] Failed to connect to TDengine, reason:%s\n",
errorPrint2("[%d] Failed to connect to TDengine, reason:%s\n",
pThreadInfo->threadID, taos_errstr(NULL));
return NULL;
}
......@@ -8735,7 +8692,7 @@ static int subscribeTestProcess() {
g_queryInfo.dbName,
g_queryInfo.port);
if (taos == NULL) {
errorPrint( "Failed to connect to TDengine, reason:%s\n",
errorPrint2("Failed to connect to TDengine, reason:%s\n",
taos_errstr(NULL));
exit(EXIT_FAILURE);
}
......@@ -8763,7 +8720,7 @@ static int subscribeTestProcess() {
g_queryInfo.specifiedQueryInfo.sqlCount);
} else {
if (g_queryInfo.specifiedQueryInfo.concurrent <= 0) {
errorPrint("%s() LN%d, sepcified query sqlCount %d.\n",
errorPrint2("%s() LN%d, sepcified query sqlCount %d.\n",
__func__, __LINE__,
g_queryInfo.specifiedQueryInfo.sqlCount);
exit(EXIT_FAILURE);
......@@ -8780,7 +8737,7 @@ static int subscribeTestProcess() {
g_queryInfo.specifiedQueryInfo.concurrent *
sizeof(threadInfo));
if ((NULL == pids) || (NULL == infos)) {
errorPrint("%s() LN%d, malloc failed for create threads\n", __func__, __LINE__);
errorPrint2("%s() LN%d, malloc failed for create threads\n", __func__, __LINE__);
exit(EXIT_FAILURE);
}
......@@ -8815,7 +8772,7 @@ static int subscribeTestProcess() {
g_queryInfo.superQueryInfo.threadCnt *
sizeof(threadInfo));
if ((NULL == pidsOfStable) || (NULL == infosOfStable)) {
errorPrint("%s() LN%d, malloc failed for create threads\n",
errorPrint2("%s() LN%d, malloc failed for create threads\n",
__func__, __LINE__);
// taos_close(taos);
exit(EXIT_FAILURE);
......@@ -8887,7 +8844,7 @@ static void initOfInsertMeta() {
tstrncpy(g_Dbs.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
g_Dbs.port = 6030;
tstrncpy(g_Dbs.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE);
tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, MAX_PASSWORD_SIZE);
tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, SHELL_MAX_PASSWORD_LEN);
g_Dbs.threadCount = 2;
g_Dbs.use_metric = g_args.use_metric;
......@@ -8900,7 +8857,7 @@ static void initOfQueryMeta() {
tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
g_queryInfo.port = 6030;
tstrncpy(g_queryInfo.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE);
tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, MAX_PASSWORD_SIZE);
tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, SHELL_MAX_PASSWORD_LEN);
}
static void setParaFromArg() {
......@@ -8914,7 +8871,7 @@ static void setParaFromArg() {
tstrncpy(g_Dbs.user, g_args.user, MAX_USERNAME_SIZE);
}
tstrncpy(g_Dbs.password, g_args.password, MAX_PASSWORD_SIZE);
tstrncpy(g_Dbs.password, g_args.password, SHELL_MAX_PASSWORD_LEN);
if (g_args.port) {
g_Dbs.port = g_args.port;
......@@ -9081,7 +9038,7 @@ static void querySqlFile(TAOS* taos, char* sqlFile)
memcpy(cmd + cmd_len, line, read_len);
if (0 != queryDbExec(taos, cmd, NO_INSERT_TYPE, false)) {
errorPrint("%s() LN%d, queryDbExec %s failed!\n",
errorPrint2("%s() LN%d, queryDbExec %s failed!\n",
__func__, __LINE__, cmd);
tmfree(cmd);
tmfree(line);
......@@ -9155,7 +9112,7 @@ static void queryResult() {
g_Dbs.port);
if (pThreadInfo->taos == NULL) {
free(pThreadInfo);
errorPrint( "Failed to connect to TDengine, reason:%s\n",
errorPrint2("Failed to connect to TDengine, reason:%s\n",
taos_errstr(NULL));
exit(EXIT_FAILURE);
}
......@@ -9177,7 +9134,7 @@ static void testCmdLine() {
if (strlen(configDir)) {
wordexp_t full_path;
if (wordexp(configDir, &full_path, 0) != 0) {
errorPrint( "Invalid path %s\n", configDir);
errorPrint("Invalid path %s\n", configDir);
return;
}
taos_options(TSDB_OPTION_CONFIGDIR, full_path.we_wordv[0]);
......
......@@ -62,6 +62,20 @@ typedef struct {
#define errorPrint(fmt, ...) \
do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)
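Because this errorPrint definition expands to fprintf(stderr, "ERROR: "fmt, __VA_ARGS__) without the GNU ## extension, every call must supply at least one variadic argument; that is why fixed messages throughout this change are written as a "%s" format plus a string literal, for example:

    errorPrint("%s", "\n\t-T need a number following!\n");   /* fixed text still needs one argument */
    errorPrint("invalid value %d for -T\n", value);          /* 'value' is an illustrative variable */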
static bool isStringNumber(char *input)
{
int len = strlen(input);
if (0 == len) {
return false;
}
for (int i = 0; i < len; i++) {
if (!isdigit(input[i]))
return false;
}
return true;
}
// -------------------------- SHOW DATABASE INTERFACE-----------------------
enum _show_db_index {
......@@ -243,19 +257,15 @@ static struct argp_option options[] = {
{"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3},
{"thread_num", 'T', "THREAD_NUM", 0, "Number of thread for dump in file. Default is 5.", 3},
{"debug", 'g', 0, 0, "Print debug info.", 8},
{"verbose", 'b', 0, 0, "Print verbose debug info.", 9},
{"performanceprint", 'm', 0, 0, "Print performance debug info.", 10},
{0}
};
#define MAX_PASSWORD_SIZE 20
/* Used by main to communicate with parse_opt. */
typedef struct arguments {
// connection option
char *host;
char *user;
char password[MAX_PASSWORD_SIZE];
char password[SHELL_MAX_PASSWORD_LEN];
uint16_t port;
char cversion[12];
uint16_t mysqlFlag;
......@@ -432,7 +442,6 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
break;
// dump unit option
case 'A':
g_args.all_databases = true;
break;
case 'D':
g_args.databases = true;
......@@ -477,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
g_args.table_batch = atoi(arg);
break;
case 'T':
if (!isStringNumber(arg)) {
errorPrint("%s", "\n\t-T need a number following!\n");
exit(EXIT_FAILURE);
}
g_args.thread_num = atoi(arg);
break;
case OPT_ABORT:
......@@ -555,11 +568,14 @@ static void parse_precision_first(
}
}
static void parse_password(
static void parse_args(
int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) {
if (strncmp(argv[i], "-p", 2) == 0) {
if (strlen(argv[i]) == 2) {
if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: ");
taosSetConsoleEcho(false);
if(scanf("%20s", arguments->password) > 1) {
......@@ -567,10 +583,22 @@ static void parse_password(
}
taosSetConsoleEcho(true);
} else {
tstrncpy(arguments->password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
tstrncpy(arguments->password, (char *)(argv[i] + 2),
SHELL_MAX_PASSWORD_LEN);
strcpy(argv[i], "-p");
}
argv[i] = "";
} else if (strcmp(argv[i], "-gg") == 0) {
arguments->verbose_print = true;
strcpy(argv[i], "");
} else if (strcmp(argv[i], "-PP") == 0) {
arguments->performance_print = true;
strcpy(argv[i], "");
} else if (strcmp(argv[i], "-A") == 0) {
g_args.all_databases = true;
} else {
continue;
}
}
}
......@@ -639,7 +667,7 @@ int main(int argc, char *argv[]) {
if (argc > 1) {
parse_precision_first(argc, argv, &g_args);
parse_timestamp(argc, argv, &g_args);
parse_password(argc, argv, &g_args);
parse_args(argc, argv, &g_args);
}
argp_parse(&argp, argc, argv, 0, 0, &g_args);
......
......@@ -50,14 +50,20 @@ void osInit() {
char* taosGetCmdlineByPID(int pid) {
static char cmdline[1024];
sprintf(cmdline, "/proc/%d/cmdline", pid);
FILE* f = fopen(cmdline, "r");
if (f) {
size_t size;
size = fread(cmdline, sizeof(char), 1024, f);
if (size > 0) {
if ('\n' == cmdline[size - 1]) cmdline[size - 1] = '\0';
}
fclose(f);
int fd = open(cmdline, O_RDONLY);
if (fd >= 0) {
int n = read(fd, cmdline, sizeof(cmdline) - 1);
if (n < 0) n = 0;
if (n > 0 && cmdline[n - 1] == '\n') --n;
cmdline[n] = 0;
close(fd);
} else {
cmdline[0] = 0;
}
return cmdline;
}
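taosGetCmdlineByPID above moves from fopen/fread to raw open/read. The same pattern is sketched below with an illustrative helper name; note that /proc/<pid>/cmdline separates argv entries with NUL bytes, so printing the buffer with "%s" shows only argv[0].

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static char *read_cmdline(int pid, char *buf, size_t cap) {
        char path[64];
        snprintf(path, sizeof(path), "/proc/%d/cmdline", pid);
        int fd = open(path, O_RDONLY);
        if (fd < 0) { buf[0] = 0; return buf; }
        ssize_t n = read(fd, buf, cap - 1);      /* entries arrive separated by '\0' */
        if (n < 0) n = 0;
        if (n > 0 && buf[n - 1] == '\n') --n;    /* defensive trim, mirroring the patch */
        buf[n] = 0;
        close(fd);
        return buf;
    }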
......@@ -63,12 +63,12 @@ int taosSetConsoleEcho(bool on)
}
if (on)
term.c_lflag|=ECHOFLAGS;
term.c_lflag |= ECHOFLAGS;
else
term.c_lflag &=~ECHOFLAGS;
term.c_lflag &= ~ECHOFLAGS;
err = tcsetattr(STDIN_FILENO,TCSAFLUSH,&term);
if (err == -1 && err == EINTR) {
err = tcsetattr(STDIN_FILENO, TCSAFLUSH, &term);
if (err == -1 || err == EINTR) {
perror("Cannot set the attribution of the terminal");
return -1;
}
......
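The second change in this file replaces a condition that could never hold (err == -1 && err == EINTR) with ||. A minimal sketch of the echo toggle around password input; ECHOFLAGS in the original header is assumed to combine ECHO with related bits, so plain ECHO | ECHONL is used here.

    #include <termios.h>
    #include <unistd.h>

    static int set_console_echo(int on) {
        struct termios term;
        if (tcgetattr(STDIN_FILENO, &term) == -1) return -1;
        if (on) term.c_lflag |=  (ECHO | ECHONL);
        else    term.c_lflag &= ~(ECHO | ECHONL);
        return (tcsetattr(STDIN_FILENO, TCSAFLUSH, &term) == -1) ? -1 : 0;
    }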
......@@ -34,7 +34,7 @@
#define monTrace(...) { if (monDebugFlag & DEBUG_TRACE) { taosPrintLog("MON ", monDebugFlag, __VA_ARGS__); }}
#define SQL_LENGTH 1030
#define LOG_LEN_STR 100
#define LOG_LEN_STR 512
#define IP_LEN_STR TSDB_EP_LEN
#define CHECK_INTERVAL 1000
......
......@@ -597,6 +597,8 @@ bool doFilterDataBlock(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilter
void doCompactSDataBlock(SSDataBlock* pBlock, int32_t numOfRows, int8_t* p);
SSDataBlock* createOutputBuf(SExprInfo* pExpr, int32_t numOfOutput, int32_t numOfRows);
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput);
void* destroyOutputBuf(SSDataBlock* pBlock);
void* doDestroyFilterInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFilterCols);
......
......@@ -4089,7 +4089,7 @@ static void mergeTableBlockDist(SResultRowCellInfo* pResInfo, const STableBlockD
} else {
pDist->maxRows = pSrc->maxRows;
pDist->minRows = pSrc->minRows;
int32_t maxSteps = TSDB_MAX_MAX_ROW_FBLOCK/TSDB_BLOCK_DIST_STEP_ROWS;
if (TSDB_MAX_MAX_ROW_FBLOCK % TSDB_BLOCK_DIST_STEP_ROWS != 0) {
++maxSteps;
......@@ -4223,7 +4223,7 @@ void blockinfo_func_finalizer(SQLFunctionCtx* pCtx) {
taosArrayDestroy(pDist->dataBlockInfos);
pDist->dataBlockInfos = NULL;
}
// cannot set the numOfIteratedElems again since it is set during previous iteration
pResInfo->numOfRes = 1;
pResInfo->hasResult = DATA_SET_FLAG;
......
......@@ -3616,7 +3616,7 @@ void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i
// set the timestamp output buffer for top/bottom/diff query
int32_t fid = pCtx[i].functionId;
if (fid == TSDB_FUNC_TOP || fid == TSDB_FUNC_BOTTOM || fid == TSDB_FUNC_DIFF || fid == TSDB_FUNC_DERIVATIVE) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if(i>0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
}
......@@ -3651,7 +3651,37 @@ void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOf
// re-establish output buffer pointer.
int32_t functionId = pBInfo->pCtx[i].functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
if(i>0) pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
}
}
}
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput) {
bool needCopyTs = false;
int32_t tsNum = 0;
char *src = NULL;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
needCopyTs = true;
if (i > 0 && pCtx[i-1].functionId == TSDB_FUNC_TS_DUMMY){
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i - 1); // find ts data
src = pColRes->pData;
}
}else if(functionId == TSDB_FUNC_TS_DUMMY) {
tsNum++;
}
}
if (!needCopyTs) return;
if (tsNum < 2) return;
if (src == NULL) return;
for (int32_t i = 0; i < numOfOutput; i++) {
int32_t functionId = pCtx[i].functionId;
if(functionId == TSDB_FUNC_TS_DUMMY) {
SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i);
memcpy(pColRes->pData, src, pColRes->info.bytes * pRes->info.rows);
}
}
}
......@@ -3851,7 +3881,7 @@ void setResultRowOutputBufInitCtx(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pRe
}
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if(i>0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
if (!pResInfo->initialized) {
......@@ -3912,7 +3942,7 @@ void setResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pResult, SQLF
int32_t functionId = pCtx[i].functionId;
if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
if(i>0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
}
/*
......@@ -5698,6 +5728,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
pRes->info.rows = getNumOfResult(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return pRes;
}
......@@ -5723,8 +5754,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
if (*newgroup) {
if (pRes->info.rows > 0) {
pProjectInfo->existDataBlock = pBlock;
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return pInfo->pRes;
break;
} else { // init output buffer for a new group data
for (int32_t j = 0; j < pOperator->numOfOutput; ++j) {
aAggs[pInfo->pCtx[j].functionId].xFinalize(&pInfo->pCtx[j]);
......@@ -5754,7 +5784,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
break;
}
}
copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
return (pInfo->pRes->info.rows > 0)? pInfo->pRes:NULL;
}
......@@ -7419,10 +7449,12 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
pQueryMsg->numOfOutput = htons(pQueryMsg->numOfOutput);
pQueryMsg->numOfGroupCols = htons(pQueryMsg->numOfGroupCols);
pQueryMsg->tagCondLen = htonl(pQueryMsg->tagCondLen);
pQueryMsg->tsBuf.tsOffset = htonl(pQueryMsg->tsBuf.tsOffset);
pQueryMsg->tsBuf.tsLen = htonl(pQueryMsg->tsBuf.tsLen);
pQueryMsg->tsBuf.tsNumOfBlocks = htonl(pQueryMsg->tsBuf.tsNumOfBlocks);
pQueryMsg->tsBuf.tsOrder = htonl(pQueryMsg->tsBuf.tsOrder);
pQueryMsg->numOfTags = htonl(pQueryMsg->numOfTags);
pQueryMsg->tbnameCondLen = htonl(pQueryMsg->tbnameCondLen);
pQueryMsg->secondStageOutput = htonl(pQueryMsg->secondStageOutput);
......
......@@ -375,6 +375,16 @@ STSBlock* readDataFromDisk(STSBuf* pTSBuf, int32_t order, bool decomp) {
sz = fread(pBlock->payload, (size_t)pBlock->compLen, 1, pTSBuf->f);
if (decomp) {
if (pBlock->numOfElem * TSDB_KEYSIZE > pTSBuf->tsData.allocSize) {
pTSBuf->tsData.rawBuf = realloc(pTSBuf->tsData.rawBuf, pBlock->numOfElem * TSDB_KEYSIZE);
pTSBuf->tsData.allocSize = pBlock->numOfElem * TSDB_KEYSIZE;
}
if (pBlock->numOfElem * TSDB_KEYSIZE > pTSBuf->bufSize) {
pTSBuf->assistBuf = realloc(pTSBuf->assistBuf, pBlock->numOfElem * TSDB_KEYSIZE);
pTSBuf->bufSize = pBlock->numOfElem * TSDB_KEYSIZE;
}
pTSBuf->tsData.len =
tsDecompressTimestamp(pBlock->payload, pBlock->compLen, pBlock->numOfElem, pTSBuf->tsData.rawBuf,
pTSBuf->tsData.allocSize, TWO_STAGE_COMP, pTSBuf->assistBuf, pTSBuf->bufSize);
......@@ -471,7 +481,7 @@ void tsBufAppend(STSBuf* pTSBuf, int32_t id, tVariant* tag, const char* pData, i
// the size of raw data exceeds the size of the default prepared buffer, so
// during getBufBlock, the output buffer needs to be large enough.
if (ptsData->len >= ptsData->threshold) {
if (ptsData->len >= ptsData->threshold - TSDB_KEYSIZE) {
writeDataToDisk(pTSBuf);
shrinkBuffer(ptsData);
}
......@@ -603,6 +613,10 @@ static void tsBufGetBlock(STSBuf* pTSBuf, int32_t groupIndex, int32_t blockIndex
expandBuffer(&pTSBuf->tsData, (int32_t)s);
}
if (s > pTSBuf->bufSize) {
pTSBuf->assistBuf = realloc(pTSBuf->assistBuf, s);
pTSBuf->bufSize = (int32_t)s;
}
pTSBuf->tsData.len =
tsDecompressTimestamp(pBlock->payload, pBlock->compLen, pBlock->numOfElem, pTSBuf->tsData.rawBuf,
pTSBuf->tsData.allocSize, TWO_STAGE_COMP, pTSBuf->assistBuf, pTSBuf->bufSize);
......
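Both hunks in this file grow a destination buffer to numOfElem * TSDB_KEYSIZE before tsDecompressTimestamp writes into it. The pattern can be factored as below; ensure_capacity is a hypothetical helper, and realloc is left unchecked to match the committed style.

    #include <stdint.h>
    #include <stdlib.h>

    /* Grow 'buf' to at least 'required' bytes, updating the recorded capacity. */
    static void *ensure_capacity(void *buf, int32_t *cap, size_t required) {
        if (required > (size_t)(*cap)) {
            buf  = realloc(buf, required);       /* unchecked, as in the patch */
            *cap = (int32_t)required;
        }
        return buf;
    }

    /* Usage (sketch): rawBuf = ensure_capacity(rawBuf, &allocSize, numOfElem * TSDB_KEYSIZE); */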
......@@ -1572,7 +1572,7 @@ static void mergeTwoRowFromMem(STsdbQueryHandle* pQueryHandle, int32_t capacity,
int32_t numOfColsOfRow1 = 0;
if (pSchema1 == NULL) {
pSchema1 = tsdbGetTableSchemaByVersion(pTable, dataRowVersion(row1));
pSchema1 = tsdbGetTableSchemaByVersion(pTable, memRowVersion(row1));
}
if(isRow1DataRow) {
numOfColsOfRow1 = schemaNCols(pSchema1);
......@@ -1584,7 +1584,7 @@ static void mergeTwoRowFromMem(STsdbQueryHandle* pQueryHandle, int32_t capacity,
if(row2) {
isRow2DataRow = isDataRow(row2);
if (pSchema2 == NULL) {
pSchema2 = tsdbGetTableSchemaByVersion(pTable, dataRowVersion(row2));
pSchema2 = tsdbGetTableSchemaByVersion(pTable, memRowVersion(row2));
}
if(isRow2DataRow) {
numOfColsOfRow2 = schemaNCols(pSchema2);
......@@ -2460,7 +2460,7 @@ int32_t tsdbGetFileBlocksDistInfo(TsdbQueryHandleT* queryHandle, STableBlockDist
// current file are not overlapped with query time window, ignore remain files
if ((ASCENDING_TRAVERSE(pQueryHandle->order) && win.skey > pQueryHandle->window.ekey) ||
(!ASCENDING_TRAVERSE(pQueryHandle->order) && win.ekey < pQueryHandle->window.ekey)) {
(!ASCENDING_TRAVERSE(pQueryHandle->order) && win.ekey < pQueryHandle->window.ekey)) {
tsdbUnLockFS(REPO_FS(pQueryHandle->pTsdb));
tsdbDebug("%p remain files are not qualified for qrange:%" PRId64 "-%" PRId64 ", ignore, 0x%"PRIx64, pQueryHandle,
pQueryHandle->window.skey, pQueryHandle->window.ekey, pQueryHandle->qId);
......
......@@ -20,7 +20,7 @@
extern "C" {
#endif
void taosNetTest(char *role, char *host, int port, int pkgLen);
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen, int32_t pkgNum, char *pkgType);
#ifdef __cplusplus
}
......
......@@ -27,6 +27,10 @@
#include "syncMsg.h"
#define MAX_PKG_LEN (64 * 1000)
#define MAX_SPEED_PKG_LEN (1024 * 1024 * 1024)
#define MIN_SPEED_PKG_LEN 1024
#define MAX_SPEED_PKG_NUM 10000
#define MIN_SPEED_PKG_NUM 1
#define BUFFER_SIZE (MAX_PKG_LEN + 1024)
extern int32_t tsRpcMaxUdpSize;
......@@ -466,6 +470,7 @@ static void taosNetTestRpc(char *host, int32_t startPort, int32_t pkgLen) {
sendpkgLen = pkgLen;
}
tsRpcForceTcp = 1;
int32_t ret = taosNetCheckRpc(host, port, sendpkgLen, spi, NULL);
if (ret < 0) {
printf("failed to test TCP port:%d\n", port);
......@@ -479,6 +484,7 @@ static void taosNetTestRpc(char *host, int32_t startPort, int32_t pkgLen) {
sendpkgLen = pkgLen;
}
tsRpcForceTcp = 0;
ret = taosNetCheckRpc(host, port, pkgLen, spi, NULL);
if (ret < 0) {
printf("failed to test UDP port:%d\n", port);
......@@ -542,12 +548,110 @@ static void taosNetTestServer(char *host, int32_t startPort, int32_t pkgLen) {
}
}
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen) {
static void taosNetTestFqdn(char *host) {
int code = 0;
uint64_t startTime = taosGetTimestampUs();
uint32_t ip = taosGetIpv4FromFqdn(host);
if (ip == 0xffffffff) {
uError("failed to get IP address from %s since %s", host, strerror(errno));
code = -1;
}
uint64_t endTime = taosGetTimestampUs();
uint64_t el = endTime - startTime;
printf("check convert fqdn spend, status: %d\tcost: %" PRIu64 " us\n", code, el);
return;
}
static void taosNetCheckSpeed(char *host, int32_t port, int32_t pkgLen,
int32_t pkgNum, char *pkgType) {
// record config
int32_t compressTmp = tsCompressMsgSize;
int32_t maxUdpSize = tsRpcMaxUdpSize;
int32_t forceTcp = tsRpcForceTcp;
if (0 == strcmp("tcp", pkgType)){
tsRpcForceTcp = 1;
tsRpcMaxUdpSize = 0; // force tcp
} else {
tsRpcForceTcp = 0;
tsRpcMaxUdpSize = INT_MAX;
}
tsCompressMsgSize = -1;
SRpcEpSet epSet;
SRpcMsg reqMsg;
SRpcMsg rspMsg;
void * pRpcConn;
char secretEncrypt[32] = {0};
char spi = 0;
pRpcConn = taosNetInitRpc(secretEncrypt, spi);
if (NULL == pRpcConn) {
uError("failed to init client rpc");
return;
}
printf("check net spend, host:%s port:%d pkgLen:%d pkgNum:%d pkgType:%s\n\n", host, port, pkgLen, pkgNum, pkgType);
int32_t totalSucc = 0;
uint64_t startT = taosGetTimestampUs();
for (int32_t i = 1; i <= pkgNum; i++) {
uint64_t startTime = taosGetTimestampUs();
memset(&epSet, 0, sizeof(SRpcEpSet));
epSet.inUse = 0;
epSet.numOfEps = 1;
epSet.port[0] = port;
strcpy(epSet.fqdn[0], host);
reqMsg.msgType = TSDB_MSG_TYPE_NETWORK_TEST;
reqMsg.pCont = rpcMallocCont(pkgLen);
reqMsg.contLen = pkgLen;
reqMsg.code = 0;
reqMsg.handle = NULL; // rpc handle returned to app
reqMsg.ahandle = NULL; // app handle set by client
strcpy(reqMsg.pCont, "nettest speed");
rpcSendRecv(pRpcConn, &epSet, &reqMsg, &rspMsg);
int code = 0;
if ((rspMsg.code != 0) || (rspMsg.msgType != TSDB_MSG_TYPE_NETWORK_TEST + 1)) {
uError("ret code 0x%x %s", rspMsg.code, tstrerror(rspMsg.code));
code = -1;
}else{
totalSucc ++;
}
rpcFreeCont(rspMsg.pCont);
uint64_t endTime = taosGetTimestampUs();
uint64_t el = endTime - startTime;
printf("progress:%5d/%d\tstatus:%d\tcost:%8.2lf ms\tspeed:%8.2lf MB/s\n", i, pkgNum, code, el/1000.0, pkgLen/(el/1000000.0)/1024.0/1024.0);
}
int64_t endT = taosGetTimestampUs();
uint64_t elT = endT - startT;
printf("\ntotal succ:%5d/%d\tcost:%8.2lf ms\tspeed:%8.2lf MB/s\n", totalSucc, pkgNum, elT/1000.0, pkgLen/(elT/1000000.0)/1024.0/1024.0*totalSucc);
rpcClose(pRpcConn);
// return config
tsCompressMsgSize = compressTmp;
tsRpcMaxUdpSize = maxUdpSize;
tsRpcForceTcp = forceTcp;
return;
}
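The progress line printed by taosNetCheckSpeed converts microseconds and bytes into MB/s with pkgLen / (el / 1000000.0) / 1024.0 / 1024.0; for instance, a 1 MiB payload answered in 10,000 us reports 100 MB/s, and the summary line applies the same formula to the total elapsed time, multiplied by the number of successful round trips. A tiny standalone check of the expression (the numbers are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t el  = 10000;            /* elapsed microseconds for one round trip */
        int32_t  len = 1024 * 1024;      /* bytes per request */
        double mbps = len / (el / 1000000.0) / 1024.0 / 1024.0;
        printf("speed:%8.2lf MB/s\n", mbps);   /* prints 100.00 for these numbers */
        return 0;
    }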
void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen,
int32_t pkgNum, char *pkgType) {
tscEmbedded = 1;
if (host == NULL) host = tsLocalFqdn;
if (port == 0) port = tsServerPort;
if (pkgLen <= 10) pkgLen = 1000;
if (pkgLen > MAX_PKG_LEN) pkgLen = MAX_PKG_LEN;
if (0 == strcmp("speed", role)){
if (pkgLen <= MIN_SPEED_PKG_LEN) pkgLen = MIN_SPEED_PKG_LEN;
if (pkgLen > MAX_SPEED_PKG_LEN) pkgLen = MAX_SPEED_PKG_LEN;
if (pkgNum <= MIN_SPEED_PKG_NUM) pkgNum = MIN_SPEED_PKG_NUM;
if (pkgNum > MAX_SPEED_PKG_NUM) pkgNum = MAX_SPEED_PKG_NUM;
}else{
if (pkgLen <= 10) pkgLen = 1000;
if (pkgLen > MAX_PKG_LEN) pkgLen = MAX_PKG_LEN;
}
if (0 == strcmp("client", role)) {
taosNetTestClient(host, port, pkgLen);
......@@ -560,6 +664,12 @@ void taosNetTest(char *role, char *host, int32_t port, int32_t pkgLen) {
taosNetCheckSync(host, port);
} else if (0 == strcmp("startup", role)) {
taosNetTestStartup(host, port);
} else if (0 == strcmp("speed", role)) {
tscEmbedded = 0;
char type[10] = {0};
taosNetCheckSpeed(host, port, pkgLen, pkgNum, strtolower(type, pkgType));
}else if (0 == strcmp("fqdn", role)) {
taosNetTestFqdn(host);
} else {
taosNetTestStartup(host, port);
}
......
#!/bin/bash
taos -n fqdn
#!/bin/bash
for N in -1 0 1 10000 10001
do
for l in 1023 1024 1073741824 1073741825
do
for S in udp tcp
do
taos -n speed -h BCC-2 -P 6030 -N $N -l $l -S $S 2>&1 | tee -a result.txt
done
done
done
......@@ -102,6 +102,20 @@ class TDTestCase:
print("check2: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))
def alter_table_255_times(self): # add case for TD-6207
for i in range(255):
tdLog.info("alter table st add column cb%d int"%i)
tdSql.execute("alter table st add column cb%d int"%i)
tdSql.execute("insert into t0 (ts,c1) values(now,1)")
tdSql.execute("reset query cache")
tdSql.query("select * from st")
tdSql.execute("create table mt(ts timestamp, i int)")
tdSql.execute("insert into mt values(now,11)")
tdSql.query("select * from mt")
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query("describe db.st")
def run(self):
# Setup params
db = "db"
......@@ -131,12 +145,14 @@ class TDTestCase:
tdSql.checkData(0, i, self.rowNum * (size - i))
tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float)")
tdSql.execute("create table t0 using st tags(null)")
tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float,t2 int,t3 double)")
tdSql.execute("create table t0 using st tags(null,1,2.3)")
tdSql.execute("alter table t0 set tag t1=2.1")
tdSql.query("show tables")
tdSql.checkRows(2)
self.alter_table_255_times()
def stop(self):
tdSql.close()
......
......@@ -255,7 +255,7 @@ python3 ./test.py -f query/queryTsisNull.py
python3 ./test.py -f query/subqueryFilter.py
python3 ./test.py -f query/nestedQuery/queryInterval.py
python3 ./test.py -f query/queryStateWindow.py
python3 ./test.py -f query/nestedQuery/queryWithOrderLimit.py
# python3 ./test.py -f query/nestedQuery/queryWithOrderLimit.py
python3 ./test.py -f query/nestquery_last_row.py
python3 ./test.py -f query/queryCnameDisplay.py
python3 ./test.py -f query/operator_cost.py
......
......@@ -104,6 +104,21 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 1)
tdSql.checkData(1, 1, 2)
tdSql.query("select ts,bottom(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")
#TD-2457 bottom + interval + order by
tdSql.error('select top(col2,1) from test interval(1y) order by col2;')
......
......@@ -54,6 +54,28 @@ class TDTestCase:
tdSql.query("select derivative(col, 10s, 0) from stb group by tbname")
tdSql.checkRows(10)
tdSql.query("select ts,derivative(col, 10s, 1),ts from stb group by tbname")
tdSql.checkRows(4)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(3, 0, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 1, "2018-09-17 09:01:20.000")
tdSql.checkData(3, 3, "2018-09-17 09:01:20.000")
tdSql.query("select ts,derivative(col, 10s, 1),ts from tb1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
tdSql.checkData(1, 0, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 1, "2018-09-17 09:00:20.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:20.009")
tdSql.query("select ts from(select ts,derivative(col, 10s, 0) from stb group by tbname)")
tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname")
tdSql.query("select derivative(col, 10s, 1) from tb1")
......
......@@ -94,6 +94,23 @@ class TDTestCase:
tdSql.error("select diff(col13) from test")
tdSql.error("select diff(col14) from test")
tdSql.query("select ts,diff(col1),ts from test1")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,diff(col1),ts from test group by tbname")
tdSql.checkRows(10)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
tdSql.query("select diff(col1) from test1")
tdSql.checkRows(10)
......
......@@ -117,6 +117,21 @@ class TDTestCase:
tdSql.checkRows(2)
tdSql.checkData(0, 1, 8.1)
tdSql.checkData(1, 1, 9.1)
tdSql.query("select ts,top(col1, 2),ts from test1")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
tdSql.checkRows(2)
tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")
#TD-2563 top + super_table + interval
tdSql.execute("create table meters(ts timestamp, c int) tags (d int)")
......
......@@ -884,6 +884,204 @@ class TDTestCase:
pass
def td6068(self):
tdLog.printNoPrefix("==========TD-6068==========")
tdSql.execute("drop database if exists db")
tdSql.execute("create database if not exists db keep 3650")
tdSql.execute("use db")
tdSql.execute("create stable db.stb1 (ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool) tags(t1 int)")
for i in range(100):
sql = f"create table db.t{i} using db.stb1 tags({i})"
tdSql.execute(sql)
tdSql.execute(f"insert into db.t{i} values (now-10h, {i}, {i+random.random()}, now-10h, 'a_{i}', '{i-random.random()}', True)")
tdSql.execute(f"insert into db.t{i} values (now-9h, {i+random.randint(1,10)}, {i+random.random()}, now-9h, 'a_{i}', '{i-random.random()}', FALSE )")
tdSql.execute(f"insert into db.t{i} values (now-8h, {i+random.randint(1,10)}, {i+random.random()}, now-8h, 'b_{i}', '{i-random.random()}', True)")
tdSql.execute(f"insert into db.t{i} values (now-7h, {i+random.randint(1,10)}, {i+random.random()}, now-7h, 'b_{i}', '{i-random.random()}', FALSE )")
tdSql.execute(f"insert into db.t{i} values (now-6h, {i+random.randint(1,10)}, {i+random.random()}, now-6h, 'c_{i}', '{i-random.random()}', True)")
tdSql.execute(f"insert into db.t{i} values (now-5h, {i+random.randint(1,10)}, {i+random.random()}, now-5h, 'c_{i}', '{i-random.random()}', FALSE )")
tdSql.execute(f"insert into db.t{i} (ts)values (now-4h)")
tdSql.execute(f"insert into db.t{i} (ts)values (now-11h)")
tdSql.execute(f"insert into db.t{i} (ts)values (now-450m)")
tdSql.query("select ts as t,derivative(c1, 10m, 0) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, derivative(c1, 1h, 0) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.checkCols(4)
tdSql.query("select ts as t, derivative(c1, 1s, 0) from t1")
tdSql.query("select ts as t, derivative(c1, 1d, 0) from t1")
tdSql.error("select ts as t, derivative(c1, 1h, 0) from stb1")
tdSql.query("select ts as t, derivative(c2, 1h, 0) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, derivative(c3, 1h, 0) from t1")
tdSql.error("select ts as t, derivative(c4, 1h, 0) from t1")
tdSql.query("select ts as t, derivative(c5, 1h, 0) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, derivative(c6, 1h, 0) from t1")
tdSql.error("select ts as t, derivative(t1, 1h, 0) from t1")
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, diff(c1) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.checkCols(4)
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.error("select ts as t, diff(c1) from stb1")
tdSql.query("select ts as t, diff(c2) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, diff(c3) from t1")
tdSql.error("select ts as t, diff(c4) from t1")
tdSql.query("select ts as t, diff(c5) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, diff(c6) from t1")
tdSql.error("select ts as t, diff(t1) from t1")
tdSql.error("select ts as t, diff(c1, c2) from t1")
tdSql.error("select ts as t, bottom(c1, 0) from t1")
tdSql.query("select ts as t, bottom(c1, 5) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, bottom(c1, 5) from stb1")
tdSql.checkRows(5)
tdSql.query("select ts as t, bottom(c1, 5) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.query("select ts as t, bottom(c1, 8) from t1")
tdSql.checkRows(6)
tdSql.query("select ts as t, bottom(c2, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, bottom(c3, 5) from t1")
tdSql.error("select ts as t, bottom(c4, 5) from t1")
tdSql.query("select ts as t, bottom(c5, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, bottom(c6, 5) from t1")
tdSql.error("select ts as t, bottom(c5, 8) as b from t1 order by b")
tdSql.error("select ts as t, bottom(t1, 1) from t1")
tdSql.error("select ts as t, bottom(t1, 1) from stb1")
tdSql.error("select ts as t, bottom(t1, 3) from stb1 order by c3")
tdSql.error("select ts as t, bottom(t1, 3) from t1 order by c3")
tdSql.error("select ts as t, top(c1, 0) from t1")
tdSql.query("select ts as t, top(c1, 5) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, top(c1, 5) from stb1")
tdSql.checkRows(5)
tdSql.query("select ts as t, top(c1, 5) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.query("select ts as t, top(c1, 8) from t1")
tdSql.checkRows(6)
tdSql.query("select ts as t, top(c2, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, top(c3, 5) from t1")
tdSql.error("select ts as t, top(c4, 5) from t1")
tdSql.query("select ts as t, top(c5, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, top(c6, 5) from t1")
tdSql.error("select ts as t, top(c5, 8) as b from t1 order by b")
tdSql.error("select ts as t, top(t1, 1) from t1")
tdSql.error("select ts as t, top(t1, 1) from stb1")
tdSql.error("select ts as t, top(t1, 3) from stb1 order by c3")
tdSql.error("select ts as t, top(t1, 3) from t1 order by c3")
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, diff(c1) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.checkCols(4)
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.query("select ts as t, diff(c1) from t1")
tdSql.error("select ts as t, diff(c1) from stb1")
tdSql.query("select ts as t, diff(c2) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, diff(c3) from t1")
tdSql.error("select ts as t, diff(c4) from t1")
tdSql.query("select ts as t, diff(c5) from t1")
tdSql.checkRows(5)
tdSql.error("select ts as t, diff(c6) from t1")
tdSql.error("select ts as t, diff(t1) from t1")
tdSql.error("select ts as t, diff(c1, c2) from t1")
tdSql.error("select ts as t, bottom(c1, 0) from t1")
tdSql.query("select ts as t, bottom(c1, 5) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, bottom(c1, 5) from stb1")
tdSql.checkRows(5)
tdSql.query("select ts as t, bottom(c1, 5) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.query("select ts as t, bottom(c1, 8) from t1")
tdSql.checkRows(6)
tdSql.query("select ts as t, bottom(c2, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, bottom(c3, 5) from t1")
tdSql.error("select ts as t, bottom(c4, 5) from t1")
tdSql.query("select ts as t, bottom(c5, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, bottom(c6, 5) from t1")
tdSql.error("select ts as t, bottom(c5, 8) as b from t1 order by b")
tdSql.error("select ts as t, bottom(t1, 1) from t1")
tdSql.error("select ts as t, bottom(t1, 1) from stb1")
tdSql.error("select ts as t, bottom(t1, 3) from stb1 order by c3")
tdSql.error("select ts as t, bottom(t1, 3) from t1 order by c3")
tdSql.error("select ts as t, top(c1, 0) from t1")
tdSql.query("select ts as t, top(c1, 5) from t1")
tdSql.checkRows(5)
tdSql.checkCols(3)
for i in range(5):
data=tdSql.getData(i, 0)
tdSql.checkData(i, 1, data)
tdSql.query("select ts as t, top(c1, 5) from stb1")
tdSql.checkRows(5)
tdSql.query("select ts as t, top(c1, 5) from stb1 group by tbname")
tdSql.checkRows(500)
tdSql.query("select ts as t, top(c1, 8) from t1")
tdSql.checkRows(6)
tdSql.query("select ts as t, top(c2, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, top(c3, 5) from t1")
tdSql.error("select ts as t, top(c4, 5) from t1")
tdSql.query("select ts as t, top(c5, 8) from t1")
tdSql.checkRows(6)
tdSql.error("select ts as t, top(c6, 5) from t1")
tdSql.error("select ts as t, top(c5, 8) as b from t1 order by b")
tdSql.error("select ts as t, top(t1, 1) from t1")
tdSql.error("select ts as t, top(t1, 1) from stb1")
tdSql.error("select ts as t, top(t1, 3) from stb1 order by c3")
tdSql.error("select ts as t, top(t1, 3) from t1 order by c3")
pass
def run(self):
# master branch
......@@ -891,8 +1089,9 @@ class TDTestCase:
# self.td4082()
# self.td4288()
# self.td4724()
self.td5798()
# self.td5798()
# self.td5935()
self.td6068()
# develop branch
# self.td4097()
......
......@@ -28,7 +28,7 @@ class insertFromCSVPerformace:
self.tbName = tbName
self.branchName = branchName
self.type = buildType
self.ts = 1500074556514
self.ts = 1500000000000
self.host = "127.0.0.1"
self.user = "root"
self.password = "taosdata"
......@@ -46,13 +46,20 @@ class insertFromCSVPerformace:
config = self.config)
def writeCSV(self):
with open('test3.csv','w', encoding='utf-8', newline='') as csvFile:
tsset = set()
rows = 0
with open('test4.csv','w', encoding='utf-8', newline='') as csvFile:
writer = csv.writer(csvFile, dialect='excel')
for i in range(1000000):
newTimestamp = self.ts + random.randint(10000000, 10000000000) + random.randint(1000, 10000000) + random.randint(1, 1000)
d = datetime.datetime.fromtimestamp(newTimestamp / 1000)
dt = str(d.strftime("%Y-%m-%d %H:%M:%S.%f"))
writer.writerow(["'%s'" % dt, random.randint(1, 100), random.uniform(1, 100), random.randint(1, 100), random.randint(1, 100)])
while True:
newTimestamp = self.ts + random.randint(1, 10) * 10000000000 + random.randint(1, 10) * 1000000000 + random.randint(1, 10) * 100000000 + random.randint(1, 10) * 10000000 + random.randint(1, 10) * 1000000 + random.randint(1, 10) * 100000 + random.randint(1, 10) * 10000 + random.randint(1, 10) * 1000 + random.randint(1, 10) * 100 + random.randint(1, 10) * 10 + random.randint(1, 10)
if newTimestamp not in tsset:
tsset.add(newTimestamp)
d = datetime.datetime.fromtimestamp(newTimestamp / 1000)
dt = str(d.strftime("%Y-%m-%d %H:%M:%S.%f"))
writer.writerow(["'%s'" % dt, random.randint(1, 100), random.uniform(1, 100), random.randint(1, 100), random.randint(1, 100)])
rows += 1
if rows == 2000000:
break
def removCSVHeader(self):
data = pd.read_csv("ordered.csv")
......@@ -71,7 +78,9 @@ class insertFromCSVPerformace:
cursor.execute("create table if not exists t1(ts timestamp, c1 int, c2 float, c3 int, c4 int)")
startTime = time.time()
cursor.execute("insert into t1 file 'outoforder.csv'")
totalTime += time.time() - startTime
totalTime += time.time() - startTime
time.sleep(1)
out_of_order_time = (float) (totalTime / 10)
print("Out of Order - Insert time: %f" % out_of_order_time)
......@@ -81,7 +90,8 @@ class insertFromCSVPerformace:
cursor.execute("create table if not exists t2(ts timestamp, c1 int, c2 float, c3 int, c4 int)")
startTime = time.time()
cursor.execute("insert into t2 file 'ordered.csv'")
totalTime += time.time() - startTime
totalTime += time.time() - startTime
time.sleep(1)
in_order_time = (float) (totalTime / 10)
print("In order - Insert time: %f" % in_order_time)
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.ts = 1537146000000
def run(self):
tdSql.prepare()
print("======= Verify filter for bool, nchar and binary type =========")
tdLog.debug(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute("create table st1 using st tags(true, 'table1', '水表')")
for i in range(1, 6):
tdSql.execute(
"insert into st1 values(%d, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d, %f, %f)" %
(self.ts + i, i %
2, i, i,
i, i, i, i, 1.0, 1.0))
# =============Data type keywords cannot be used in filter====================
# timestamp
tdSql.error("select * from st where timestamp = 1629417600")
# bool
tdSql.error("select * from st where bool = false")
#binary
tdSql.error("select * from st where binary = 'taosdata'")
# nchar
tdSql.error("select * from st where nchar = '涛思数据'")
# tinyint
tdSql.error("select * from st where tinyint = 127")
# smallint
tdSql.error("select * from st where smallint = 32767")
# int
tdSql.error("select * from st where INTEGER = 2147483647")
tdSql.error("select * from st where int = 2147483647")
# bigint
tdSql.error("select * from st where bigint = 2147483647")
# float
tdSql.error("select * from st where float = 3.4E38")
# double
tdSql.error("select * from st where double = 1.7E308")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
from util.dnodes import tdDnodes
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.ts = 1538548685000
def run(self):
tdSql.prepare()
print("==============step1")
tdSql.execute(
"create table if not exists st (ts timestamp, tagtype int) tags(dev nchar(50))")
tdSql.execute(
'CREATE TABLE if not exists dev_001 using st tags("dev_01")')
tdSql.execute(
'CREATE TABLE if not exists dev_002 using st tags("dev_02")')
print("==============step2")
tdSql.execute(
"""INSERT INTO dev_001(ts, tagtype) VALUES('2020-05-13 10:00:00.000', 1),
('2020-05-13 10:00:00.001', 1)
dev_002 VALUES('2020-05-13 10:00:00.001', 1)""")
tdSql.query("select * from db.st where ts='2020-05-13 10:00:00.000'")
tdSql.checkRows(1)
tdSql.query("select tbname, dev from dev_001")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.query("select tbname, dev, tagtype from dev_001")
tdSql.checkRows(2)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.checkData(0, 2, 1)
tdSql.checkData(1, 0, 'dev_001')
tdSql.checkData(1, 1, 'dev_01')
tdSql.checkData(1, 2, 1)
## test case for https://jira.taosdata.com:18080/browse/TD-2488
tdSql.execute("create table m1(ts timestamp, k int) tags(a int)")
tdSql.execute("create table t1 using m1 tags(1)")
tdSql.execute("create table t2 using m1 tags(2)")
tdSql.execute("insert into t1 values('2020-1-1 1:1:1', 1)")
tdSql.execute("insert into t1 values('2020-1-1 1:10:1', 2)")
tdSql.execute("insert into t2 values('2020-1-1 1:5:1', 99)")
tdSql.query("select count(*) from m1 where ts = '2020-1-1 1:5:1' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query("select count(*) from m1 where ts = '2020-1-1 1:5:1' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
## test case for https://jira.taosdata.com:18080/browse/TD-1930
tdSql.execute("create table tb(ts timestamp, c1 int, c2 binary(10), c3 nchar(10), c4 float, c5 bool)")
for i in range(10):
tdSql.execute("insert into tb values(%d, %d, 'binary%d', 'nchar%d', %f, %d)" % (self.ts + i, i, i, i, i + 0.1, i % 2))
tdSql.error("select * from tb where c2 = binary2")
tdSql.error("select * from tb where c3 = nchar2")
tdSql.query("select * from tb where c2 = 'binary2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c3 = 'nchar2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c1 = '2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c1 = 2 ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c4 = '0.1' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c4 = 0.1 ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c5 = true ")
tdSql.checkRows(5)
tdSql.query("select * from tb where c5 = 'true' ")
tdSql.checkRows(5)
# For jira: https://jira.taosdata.com:18080/browse/TD-2850
tdSql.execute("create database 'Test' ")
tdSql.execute("use 'Test' ")
tdSql.execute("create table 'TB'(ts timestamp, 'Col1' int) tags('Tag1' int)")
tdSql.execute("insert into 'Tb0' using tb tags(1) values(now, 1)")
tdSql.query("select * from tb")
tdSql.checkRows(1)
tdSql.query("select * from tb0")
tdSql.checkRows(1)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
from util.dnodes import tdDnodes
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.ts = 1538548685000
def run(self):
tdSql.prepare()
print("==============step1")
tdSql.execute(
"create table if not exists st (ts timestamp, tagtype int) tags(dev nchar(50))")
tdSql.execute(
'CREATE TABLE if not exists dev_001 using st tags("dev_01")')
tdSql.execute(
'CREATE TABLE if not exists dev_002 using st tags("dev_02")')
print("==============step2")
tdSql.execute(
"""INSERT INTO dev_001(ts, tagtype) VALUES('2020-05-13 10:00:00.000', 1),
('2020-05-13 10:00:00.001', 1)
dev_002 VALUES('2020-05-13 10:00:00.001', 1)""")
tdSql.query("select * from db.st where ts='2020-05-13 10:00:00.000'")
tdSql.checkRows(1)
tdSql.query("select tbname, dev from dev_001")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.query("select tbname, dev, tagtype from dev_001")
tdSql.checkRows(2)
tdSql.checkData(0, 0, 'dev_001')
tdSql.checkData(0, 1, 'dev_01')
tdSql.checkData(0, 2, 1)
tdSql.checkData(1, 0, 'dev_001')
tdSql.checkData(1, 1, 'dev_01')
tdSql.checkData(1, 2, 1)
## test case for https://jira.taosdata.com:18080/browse/TD-2488
tdSql.execute("create table m1(ts timestamp, k int) tags(a int)")
tdSql.execute("create table t1 using m1 tags(1)")
tdSql.execute("create table t2 using m1 tags(2)")
tdSql.execute("insert into t1 values('2020-1-1 1:1:1', 1)")
tdSql.execute("insert into t1 values('2020-1-1 1:10:1', 2)")
tdSql.execute("insert into t2 values('2020-1-1 1:5:1', 99)")
tdSql.query("select count(*) from m1 where ts = '2020-1-1 1:5:1' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query("select count(*) from m1 where ts = '2020-1-1 1:5:1' ")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
## test case for https://jira.taosdata.com:18080/browse/TD-1930
tdSql.execute("create table tb(ts timestamp, c1 int, c2 binary(10), c3 nchar(10), c4 float, c5 bool)")
for i in range(10):
tdSql.execute(
"insert into tb values(%d, %d, 'binary%d', 'nchar%d', %f, %d)" % (self.ts + i, i, i, i, i + 0.1, i % 2))
tdSql.error("select * from tb where c2 = binary2")
tdSql.error("select * from tb where c3 = nchar2")
tdSql.query("select * from tb where c2 = 'binary2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c3 = 'nchar2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c1 = '2' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c1 = 2 ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c4 = '0.1' ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c4 = 0.1 ")
tdSql.checkRows(1)
tdSql.query("select * from tb where c5 = true ")
tdSql.checkRows(5)
tdSql.query("select * from tb where c5 = 'true' ")
tdSql.checkRows(5)
# For jira: https://jira.taosdata.com:18080/browse/TD-2850
tdSql.execute("create database 'Test' ")
tdSql.execute("use 'Test' ")
tdSql.execute("create table 'TB'(ts timestamp, 'Col1' int) tags('Tag1' int)")
tdSql.execute("insert into 'Tb0' using tb tags(1) values(now, 1)")
tdSql.query("select * from tb")
tdSql.checkRows(1)
# For jira:https://jira.taosdata.com:18080/browse/TD-6314
tdSql.execute("use db")
tdSql.execute("create stable stb_001(ts timestamp,v int) tags(c0 int)")
tdSql.query("select _block_dist() from stb_001")
tdSql.checkRows(1)
tdSql.query("select * from tb0")
tdSql.checkRows(1)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
......@@ -17,6 +17,7 @@ import os
import taos
import time
import argparse
import json
class taosdemoQueryPerformace:
......@@ -48,7 +49,7 @@ class taosdemoQueryPerformace:
cursor2 = self.conn2.cursor()
cursor2.execute("create database if not exists %s" % self.dbName)
cursor2.execute("use %s" % self.dbName)
cursor2.execute("create table if not exists %s(ts timestamp, query_time float, commit_id binary(50), branch binary(50), type binary(20)) tags(query_id int, query_sql binary(300))" % self.stbName)
cursor2.execute("create table if not exists %s(ts timestamp, query_time_avg float, query_time_max float, query_time_min float, commit_id binary(50), branch binary(50), type binary(20)) tags(query_id int, query_sql binary(300))" % self.stbName)
sql = "select count(*) from test.meters"
tableid = 1
......@@ -74,7 +75,7 @@ class taosdemoQueryPerformace:
tableid = 6
cursor2.execute("create table if not exists %s%d using %s tags(%d, '%s')" % (self.tbPerfix, tableid, self.stbName, tableid, sql))
sql = "select * from meters"
sql = "select * from meters limit 10000"
tableid = 7
cursor2.execute("create table if not exists %s%d using %s tags(%d, '%s')" % (self.tbPerfix, tableid, self.stbName, tableid, sql))
......@@ -87,37 +88,96 @@ class taosdemoQueryPerformace:
cursor2.execute("create table if not exists %s%d using %s tags(%d, '%s')" % (self.tbPerfix, tableid, self.stbName, tableid, sql))
cursor2.close()
def generateQueryJson(self):
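# build the query-mode JSON config for perfMonitor from the SQLs stored in the perf super table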
sqls = []
cursor2 = self.conn2.cursor()
cursor2.execute("select query_id, query_sql from %s.%s" % (self.dbName, self.stbName))
i = 0
for data in cursor2:
sql = {
"sql": data[1],
"result_mode": "onlyformat",
"result_file": "./query_sql_res%d.txt" % i
}
sqls.append(sql)
i += 1
query_data = {
"filetype": "query",
"cfgdir": "/etc/perf",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"databases": "test",
"specified_table_query": {
"query_times": 100,
"concurrent": 1,
"sqls": sqls
}
}
query_json_file = f"/tmp/query.json"
with open(query_json_file, 'w') as f:
json.dump(query_data, f)
return query_json_file
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
buildPath = ""
for root, dirs, files in os.walk(projPath):
if ("taosdemo" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def getCMDOutput(self, cmd):
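# run a shell command and return its standard output as a string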
cmd = os.popen(cmd)
output = cmd.read()
cmd.close()
return output
def query(self):
cursor = self.conn.cursor()
print("==================== query performance ====================")
buildPath = self.getBuildPath()
if (buildPath == ""):
print("taosdemo not found!")
sys.exit(1)
binPath = buildPath + "/build/bin/"
os.system(
"%sperfMonitor -f %s > query_res.txt" %
(binPath, self.generateQueryJson()))
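# query_res.txt is expected to contain one avgDelay/maxDelay/minDelay line per SQL, parsed by the greps below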
cursor = self.conn2.cursor()
print("==================== query performance ====================")
cursor.execute("use %s" % self.dbName)
cursor.execute("select tbname, query_id, query_sql from %s" % self.stbName)
cursor.execute("select tbname, query_sql from %s" % self.stbName)
i = 0
for data in cursor:
table_name = data[0]
query_id = data[1]
sql = data[2]
sql = data[1]
self.avgDelay = self.getCMDOutput("grep 'avgDelay' query_res.txt | awk 'NR==%d{print $2}'" % (i + 1))
self.maxDelay = self.getCMDOutput("grep 'avgDelay' query_res.txt | awk 'NR==%d{print $5}'" % (i + 1))
self.minDelay = self.getCMDOutput("grep 'avgDelay' query_res.txt | awk 'NR==%d{print $8}'" % (i + 1))
i += 1
print("query time for: %s %f seconds" % (sql, float(self.avgDelay)))
c = self.conn2.cursor()
c.execute("insert into %s.%s values(now, %f, %f, %f, '%s', '%s', '%s')" % (self.dbName, table_name, float(self.avgDelay), float(self.maxDelay), float(self.minDelay), self.commitID, self.branch, self.type))
totalTime = 0
cursor2 = self.conn.cursor()
cursor2.execute("use test")
for i in range(100):
if(self.clearCache == True):
# root permission is required
os.system("echo 3 > /proc/sys/vm/drop_caches")
startTime = time.time()
cursor2.execute(sql)
totalTime += time.time() - startTime
cursor2.close()
print("query time for: %s %f seconds" % (sql, totalTime / 100))
cursor3 = self.conn2.cursor()
cursor3.execute("insert into %s.%s values(now, %f, '%s', '%s', '%s')" % (self.dbName, table_name, totalTime / 100, self.commitID, self.branch, self.type))
cursor3.close()
c.close()
cursor.close()
if __name__ == '__main__':
......@@ -174,4 +234,4 @@ if __name__ == '__main__':
args = parser.parse_args()
perftest = taosdemoQueryPerformace(args.remove_cache, args.commit_id, args.database_name, args.stable_name, args.table_perfix, args.git_branch, args.build_type)
perftest.createPerfTables()
perftest.query()
perftest.query()
\ No newline at end of file
......@@ -49,24 +49,18 @@ class taosdemoPerformace:
def generateJson(self):
db = {
"name": "%s" % self.insertDB,
"drop": "yes",
"replica": 1
"drop": "yes"
}
stb = {
"name": "meters",
"child_table_exists": "no",
"childtable_count": self.numOfTables,
"childtable_prefix": "stb_",
"auto_create_table": "no",
"data_source": "rand",
"batch_create_tbl_num": 10,
"insert_mode": "taosc",
"insert_mode": "rand",
"insert_rows": self.numOfRows,
"interlace_rows": 0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"batch_rows": 1000000,
"max_sql_len": 1048576,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
......@@ -100,11 +94,8 @@ class taosdemoPerformace:
"user": "root",
"password": "taosdata",
"thread_count": 10,
"thread_count_create_tbl": 10,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"num_of_records_per_req": 30000,
"databases": [db]
}
......@@ -145,7 +136,7 @@ class taosdemoPerformace:
binPath = buildPath + "/build/bin/"
os.system(
"%staosdemo -f %s > /dev/null 2>&1" %
"%sperfMonitor -f %s > /dev/null 2>&1" %
(binPath, self.generateJson()))
self.createTableTime = self.getCMDOutput(
"grep 'Spent' insert_res.txt | awk 'NR==1{print $2}'")
......
......@@ -6,7 +6,8 @@ TARGET=exe
LFLAGS = '-Wl,-rpath,/usr/local/taos/driver/' -ltaos -lpthread -lm -lrt
CFLAGS = -O0 -g -Wall -Wno-deprecated -fPIC -Wno-unused-result -Wconversion \
-Wno-char-subscripts -D_REENTRANT -Wno-format -D_REENTRANT -DLINUX \
-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99
-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99 \
-fsanitize=address
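# AddressSanitizer is enabled so the stmt test binaries can report memory errors and leaks at runtime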
all: $(TARGET)
......@@ -14,8 +15,10 @@ exe:
gcc $(CFLAGS) ./batchprepare.c -o $(ROOT)batchprepare $(LFLAGS)
gcc $(CFLAGS) ./stmtBatchTest.c -o $(ROOT)stmtBatchTest $(LFLAGS)
gcc $(CFLAGS) ./stmtTest.c -o $(ROOT)stmtTest $(LFLAGS)
gcc $(CFLAGS) ./stmt_function.c -o $(ROOT)stmt_function $(LFLAGS)
clean:
rm $(ROOT)batchprepare
rm $(ROOT)stmtBatchTest
rm $(ROOT)stmtTest
rm $(ROOT)stmt_function
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "taos.h"
#include <sys/time.h>
#include <pthread.h>
#include <unistd.h>
#include <assert.h>
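// run one SQL statement and abort the whole test if it fails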
void execute_simple_sql(void *taos, char *sql) {
TAOS_RES *result = taos_query(taos, sql);
if ( result == NULL || taos_errno(result) != 0) {
printf( "failed to %s, Reason: %s\n" , sql, taos_errstr(result));
taos_free_result(result);
exit(EXIT_FAILURE);
}
taos_free_result(result);
}
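// fetch and print every row of a result set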
void print_result(TAOS_RES* res) {
if (res == NULL) {
exit(EXIT_FAILURE);
}
TAOS_ROW row = NULL;
int num_fields = taos_num_fields(res);
TAOS_FIELD* fields = taos_fetch_fields(res);
while ((row = taos_fetch_row(res))) {
char temp[256] = {0};
taos_print_row(temp, row, fields, num_fields);
printf("get result: %s\n", temp);
}
}
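// most tests below first call the API with a NULL handle to exercise error handling, then repeat with a valid connection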
void taos_stmt_init_test() {
printf("start taos_stmt_init test \n");
void *taos = NULL;
TAOS_STMT *stmt = NULL;
stmt = taos_stmt_init(taos);
assert(stmt == NULL);
// ASM ERROR
// assert(taos_stmt_close(stmt) != 0);
taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
assert(taos_stmt_close(stmt) == 0);
printf("finish taos_stmt_init test\n");
}
void taos_stmt_preprare_test() {
printf("start taos_stmt_prepare test\n");
char *stmt_sql = calloc(1, 1048576);
TAOS_STMT *stmt = NULL;
assert(taos_stmt_prepare(stmt, stmt_sql, 0) != 0);
void *taos = NULL;
taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 smallint, c7 tinyint, c8 bool, c9 nchar(8), c10 timestamp) tags (t1 int, t2 bigint, t3 float, t4 double, t5 binary(8), t6 smallint, t7 tinyint, t8 bool, t9 nchar(8))");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
// the calls commented out below would make the client deadlock
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
// assert(taos_stmt_close(stmt) == 0);
// stmt = taos_stmt_init(taos);
assert(stmt != NULL);
sprintf(stmt_sql, "select from ?");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) != 0);
assert(taos_stmt_close(stmt) == 0);
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
sprintf(stmt_sql, "insert into ? values (?,?,?,?,?,?,?,?,?,?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_close(stmt) == 0);
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
sprintf(stmt_sql, "insert into super values (?,?,?,?,?,?,?,?,?,?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) != 0);
assert(taos_stmt_close(stmt) == 0);
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
sprintf(stmt_sql, "insert into ? values (?,?,?,?,?,?,?,?,1,?,?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_close(stmt) == 0);
free(stmt_sql);
printf("finish taos_stmt_prepare test\n");
}
void taos_stmt_set_tbname_test() {
printf("start taos_stmt_set_tbname test\n");
TAOS_STMT *stmt = NULL;
char *name = calloc(1, 200);
// ASM ERROR
// assert(taos_stmt_set_tbname(stmt, name) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int)");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
assert(taos_stmt_set_tbname(stmt, name) != 0);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
sprintf(name, "super");
assert(stmt != NULL);
assert(taos_stmt_set_tbname(stmt, name) == 0);
free(name);
free(stmt_sql);
taos_stmt_close(stmt);
printf("finish taos_stmt_set_tbname test\n");
}
void taos_stmt_set_tbname_tags_test() {
printf("start taos_stmt_set_tbname_tags test\n");
TAOS_STMT *stmt = NULL;
char *name = calloc(1,20);
TAOS_BIND *tags = calloc(1, sizeof(TAOS_BIND));
// ASM ERROR
// assert(taos_stmt_set_tbname_tags(stmt, name, tags) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create stable super(ts timestamp, c1 int) tags (id int)");
execute_simple_sql(taos, "create table tb using super tags (1)");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? using super tags (?) values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_set_tbname_tags(stmt, name, tags) != 0);
sprintf(name, "tb");
assert(taos_stmt_set_tbname_tags(stmt, name, tags) != 0);
int t = 1;
tags->buffer_type = TSDB_DATA_TYPE_INT;
tags->buffer_length = sizeof(uint32_t);
tags->buffer = &t;
tags->length = &tags->buffer_length;
tags->is_null = NULL;
assert(taos_stmt_set_tbname_tags(stmt, name, tags) == 0);
free(stmt_sql);
free(name);
free(tags);
taos_stmt_close(stmt);
printf("finish taos_stmt_set_tbname_tags test\n");
}
void taos_stmt_set_sub_tbname_test() {
printf("start taos_stmt_set_sub_tbname test\n");
TAOS_STMT *stmt = NULL;
char *name = calloc(1, 200);
// ASM ERROR
// assert(taos_stmt_set_sub_tbname(stmt, name) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create stable super(ts timestamp, c1 int) tags (id int)");
execute_simple_sql(taos, "create table tb using super tags (1)");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_set_sub_tbname(stmt, name) != 0);
sprintf(name, "tb");
assert(taos_stmt_set_sub_tbname(stmt, name) == 0);
// assert(taos_load_table_info(taos, "super, tb") == 0);
// assert(taos_stmt_set_sub_tbname(stmt, name) == 0);
free(name);
free(stmt_sql);
assert(taos_stmt_close(stmt) == 0);
printf("finish taos_stmt_set_sub_tbname test\n");
}
void taos_stmt_bind_param_test() {
printf("start taos_stmt_bind_param test\n");
TAOS_STMT *stmt = NULL;
TAOS_BIND *binds = NULL;
assert(taos_stmt_bind_param(stmt, binds) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int)");
stmt = taos_stmt_init(taos);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_bind_param(stmt, binds) != 0);
free(binds);
TAOS_BIND *params = calloc(2, sizeof(TAOS_BIND));
int64_t ts = (int64_t)1591060628000;
params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
params[0].buffer_length = sizeof(uint64_t);
params[0].buffer = &ts;
params[0].length = &params[0].buffer_length;
params[0].is_null = NULL;
int32_t i = (int32_t)21474;
params[1].buffer_type = TSDB_DATA_TYPE_INT;
params[1].buffer_length = sizeof(int32_t);
params[1].buffer = &i;
params[1].length = &params[1].buffer_length;
params[1].is_null = NULL;
assert(taos_stmt_bind_param(stmt, params) != 0);
assert(taos_stmt_set_tbname(stmt, "super") == 0);
assert(taos_stmt_bind_param(stmt, params) == 0);
free(params);
free(stmt_sql);
taos_stmt_close(stmt);
printf("finish taos_stmt_bind_param test\n");
}
void taos_stmt_bind_single_param_batch_test() {
printf("start taos_stmt_bind_single_param_batch test\n");
TAOS_STMT *stmt = NULL;
TAOS_MULTI_BIND *bind = NULL;
assert(taos_stmt_bind_single_param_batch(stmt, bind, 0) != 0);
printf("finish taos_stmt_bind_single_param_batch test\n");
}
void taos_stmt_bind_param_batch_test() {
printf("start taos_stmt_bind_param_batch test\n");
TAOS_STMT *stmt = NULL;
TAOS_MULTI_BIND *bind = NULL;
assert(taos_stmt_bind_param_batch(stmt, bind) != 0);
printf("finish taos_stmt_bind_param_batch test\n");
}
void taos_stmt_add_batch_test() {
printf("start taos_stmt_add_batch test\n");
TAOS_STMT *stmt = NULL;
assert(taos_stmt_add_batch(stmt) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int)");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_add_batch(stmt) != 0);
TAOS_BIND *params = calloc(2, sizeof(TAOS_BIND));
int64_t ts = (int64_t)1591060628000;
params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
params[0].buffer_length = sizeof(uint64_t);
params[0].buffer = &ts;
params[0].length = &params[0].buffer_length;
params[0].is_null = NULL;
int32_t i = (int32_t)21474;
params[1].buffer_type = TSDB_DATA_TYPE_INT;
params[1].buffer_length = sizeof(int32_t);
params[1].buffer = &i;
params[1].length = &params[1].buffer_length;
params[1].is_null = NULL;
assert(taos_stmt_set_tbname(stmt, "super") == 0);
assert(taos_stmt_bind_param(stmt, params) == 0);
assert(taos_stmt_add_batch(stmt) == 0);
free(params);
free(stmt_sql);
assert(taos_stmt_close(stmt) == 0);
printf("finish taos_stmt_add_batch test\n");
}
void taos_stmt_execute_test() {
printf("start taos_stmt_execute test\n");
TAOS_STMT *stmt = NULL;
assert(taos_stmt_execute(stmt) != 0);
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int)");
stmt = taos_stmt_init(taos);
assert(stmt != NULL);
assert(taos_stmt_execute(stmt) != 0);
char* stmt_sql = calloc(1, 1000);
sprintf(stmt_sql, "insert into ? values (?,?)");
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
assert(taos_stmt_execute(stmt) != 0);
TAOS_BIND *params = calloc(2, sizeof(TAOS_BIND));
int64_t ts = (int64_t)1591060628000;
params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
params[0].buffer_length = sizeof(uint64_t);
params[0].buffer = &ts;
params[0].length = &params[0].buffer_length;
params[0].is_null = NULL;
int32_t i = (int32_t)21474;
params[1].buffer_type = TSDB_DATA_TYPE_INT;
params[1].buffer_length = sizeof(int32_t);
params[1].buffer = &i;
params[1].length = &params[1].buffer_length;
params[1].is_null = NULL;
assert(taos_stmt_set_tbname(stmt, "super") == 0);
assert(taos_stmt_execute(stmt) != 0);
assert(taos_stmt_bind_param(stmt, params) == 0);
assert(taos_stmt_execute(stmt) != 0);
assert(taos_stmt_add_batch(stmt) == 0);
assert(taos_stmt_execute(stmt) == 0);
free(params);
free(stmt_sql);
assert(taos_stmt_close(stmt) == 0);
printf("finish taos_stmt_execute test\n");
}
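// bind one parameter of the given type and run "select * from stmt_test.t1 where <col> = ?" through taos_stmt_use_result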
void taos_stmt_use_result_query(void *taos, char *col, int type) {
TAOS_STMT *stmt = taos_stmt_init(taos);
assert(stmt != NULL);
char *stmt_sql = calloc(1, 1024);
struct {
int64_t c1;
int32_t c2;
int64_t c3;
float c4;
double c5;
char c6[8];
int16_t c7;
int8_t c8;
int8_t c9;
char c10[32];
} v = {0};
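// sample values matching the column types of stmt_test.super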
v.c1 = (int64_t)1591060628000;
v.c2 = (int32_t)1;
v.c3 = (int64_t)1;
v.c4 = (float)1;
v.c5 = (double)1;
strcpy(v.c6, "abcdefgh");
v.c7 = 1;
v.c8 = 1;
v.c9 = 1;
strcpy(v.c10, "一二三四五六七八");
uintptr_t c10len=strlen(v.c10);
sprintf(stmt_sql, "select * from stmt_test.t1 where %s = ?", col);
printf("stmt_sql: %s\n", stmt_sql);
assert(taos_stmt_prepare(stmt, stmt_sql, 0) == 0);
TAOS_BIND *params = calloc(1, sizeof(TAOS_BIND));
params->buffer_type = type;
params->is_null = NULL;
switch(type){
case TSDB_DATA_TYPE_TIMESTAMP:
params->buffer_length = sizeof(v.c1);
params->buffer = &v.c1;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_INT:
params->buffer_length = sizeof(v.c2);
params->buffer = &v.c2;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_BIGINT:
params->buffer_length = sizeof(v.c3);
params->buffer = &v.c3;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_FLOAT:
params->buffer_length = sizeof(v.c4);
params->buffer = &v.c4;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_DOUBLE:
params->buffer_length = sizeof(v.c5);
params->buffer = &v.c5;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_BINARY:
params->buffer_length = sizeof(v.c6);
params->buffer = &v.c6;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_SMALLINT:
params->buffer_length = sizeof(v.c7);
params->buffer = &v.c7;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_TINYINT:
params->buffer_length = sizeof(v.c8);
params->buffer = &v.c8;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_BOOL:
params->buffer_length = sizeof(v.c9);
params->buffer = &v.c9;
params->length = &params->buffer_length;
break;
case TSDB_DATA_TYPE_NCHAR:
params->buffer_length = sizeof(v.c10);
params->buffer = &v.c10;
params->length = &c10len;
break;
default:
printf("Cannnot find type: %d\n", type);
break;
}
assert(taos_stmt_bind_param(stmt, params) == 0);
assert(taos_stmt_execute(stmt) == 0);
TAOS_RES* result = taos_stmt_use_result(stmt);
assert(result != NULL);
print_result(result);
assert(taos_stmt_close(stmt) == 0);
free(params);
free(stmt_sql);
taos_free_result(result);
}
void taos_stmt_use_result_test() {
printf("start taos_stmt_use_result test\n");
void *taos = taos_connect("127.0.0.1","root","taosdata",NULL,0);
if(taos == NULL) {
printf("Cannot connect to tdengine server\n");
exit(EXIT_FAILURE);
}
execute_simple_sql(taos, "drop database if exists stmt_test");
execute_simple_sql(taos, "create database stmt_test");
execute_simple_sql(taos, "use stmt_test");
execute_simple_sql(taos, "create table super(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 smallint, c7 tinyint, c8 bool, c9 nchar(8), c10 timestamp) tags (t1 int, t2 bigint, t3 float, t4 double, t5 binary(8), t6 smallint, t7 tinyint, t8 bool, t9 nchar(8))");
execute_simple_sql(taos, "create table t1 using super tags (1, 1, 1, 1, 'abcdefgh',1,1,1,'一二三四五六七八')");
execute_simple_sql(taos, "insert into t1 values (1591060628000, 1, 1, 1, 1, 'abcdefgh',1,1,1,'一二三四五六七八', now)");
execute_simple_sql(taos, "insert into t1 values (1591060628001, 1, 1, 1, 1, 'abcdefgh',1,1,1,'一二三四五六七八', now)");
taos_stmt_use_result_query(taos, "c1", TSDB_DATA_TYPE_INT);
taos_stmt_use_result_query(taos, "c2", TSDB_DATA_TYPE_BIGINT);
taos_stmt_use_result_query(taos, "c3", TSDB_DATA_TYPE_FLOAT);
taos_stmt_use_result_query(taos, "c4", TSDB_DATA_TYPE_DOUBLE);
taos_stmt_use_result_query(taos, "c5", TSDB_DATA_TYPE_BINARY);
taos_stmt_use_result_query(taos, "c6", TSDB_DATA_TYPE_SMALLINT);
taos_stmt_use_result_query(taos, "c7", TSDB_DATA_TYPE_TINYINT);
taos_stmt_use_result_query(taos, "c8", TSDB_DATA_TYPE_BOOL);
taos_stmt_use_result_query(taos, "c9", TSDB_DATA_TYPE_NCHAR);
printf("finish taos_stmt_use_result test\n");
}
void taos_stmt_close_test() {
printf("start taos_stmt_close test\n");
// ASM ERROR
// TAOS_STMT *stmt = NULL;
// assert(taos_stmt_close(stmt) != 0);
printf("finish taos_stmt_close test\n");
}
void test_api_reliability() {
// ASM catch memory leak
taos_stmt_init_test();
taos_stmt_preprare_test();
taos_stmt_set_tbname_test();
taos_stmt_set_tbname_tags_test();
taos_stmt_set_sub_tbname_test();
taos_stmt_bind_param_test();
taos_stmt_bind_single_param_batch_test();
taos_stmt_bind_param_batch_test();
taos_stmt_add_batch_test();
taos_stmt_execute_test();
taos_stmt_close_test();
}
void test_query() {
taos_stmt_use_result_test();
}
int main(int argc, char *argv[]) {
test_api_reliability();
test_query();
return 0;
}
\ No newline at end of file