Commit ed787142 authored by Cary Xu

Merge branch '3.0' into feature/3.0_interval_hash_optimize

......@@ -88,4 +88,3 @@ Standard: Auto
TabWidth: 8
UseTab: Never
...
*.py linguist-detectable=false
......@@ -245,3 +245,35 @@ Provides dnode configuration information.
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value |
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |
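The `information_schema` tables added above are plain queryable tables. As a minimal sketch (not part of this commit), assuming a local TDengine 3.0 server with the default `root`/`taosdata` account, the new `ins_topics` table can be read through the standard C client API:

```c
// Hedged example: list topics via information_schema.ins_topics.
// Assumptions: local dnode on port 6030, default credentials; link with -ltaos.
#include <stdio.h>
#include <taos.h>

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);
  if (conn == NULL) return 1;

  TAOS_RES *res = taos_query(conn, "SELECT topic_name, db_name FROM information_schema.ins_topics");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "query failed: %s\n", taos_errstr(res));
  } else {
    TAOS_ROW row;
    while ((row = taos_fetch_row(res)) != NULL) {
      printf("topic=%s db=%s\n", (char *)row[0], (char *)row[1]);  // BINARY columns
    }
  }
  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```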
......@@ -61,15 +61,6 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 12 | sub_status | BINARY(1000) | Subquery status |
| 13 | sql | BINARY(1024) | SQL statement |
## PERF_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## PERF_CONSUMERS
| # | **Column** | **Data Type** | **Description** |
......@@ -83,15 +74,6 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 7 | subscribe_time | TIMESTAMP | Time of first subscription |
| 8 | rebalance_time | TIMESTAMP | Time of first rebalance triggering |
## PERF_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
## PERF_TRANS
| # | **Column** | **Data Type** | **Description** |
......@@ -113,17 +95,3 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | stable_name | BINARY(192) | Supertable name |
| 4 | vgroup_id | INT | ID of the dedicated vgroup |
## PERF_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |
......@@ -246,3 +246,35 @@ Note: since SHOW statements are already familiar to and widely used by developers, they
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Configuration name |
| 3 | value | BINARY(64) | Configuration value |
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database associated with the topic |
| 3 | create_time | TIMESTAMP | Creation time of the topic |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Consumer group of the subscriber |
| 3 | vgroup_id | INT | Vgroup ID assigned to the consumer |
| 4 | consumer_id | BIGINT | Unique consumer ID |
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BINARY(20) | Current stream status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table the stream writes to |
| 8 | watermark | BIGINT | Watermark (see the stream processing section of the SQL manual) |
| 9 | trigger | INT | Method of pushing computation results (see the stream processing section of the SQL manual) |
......@@ -62,15 +62,6 @@ TDengine 3.0 introduces a built-in database `performance_schema`, whose
| 12 | sub_status | BINARY(1000) | Subquery status |
| 13 | sql | BINARY(1024) | SQL statement |
## PERF_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database associated with the topic |
| 3 | create_time | TIMESTAMP | Creation time of the topic |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## PERF_CONSUMERS
| # | **Column** | **Data Type** | **Description** |
......@@ -84,15 +75,6 @@ TDengine 3.0 introduces a built-in database `performance_schema`, whose
| 7 | subscribe_time | TIMESTAMP | Time of the most recent subscription |
| 8 | rebalance_time | TIMESTAMP | Time of the most recent rebalance |
## PERF_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Consumer group of the subscriber |
| 3 | vgroup_id | INT | Vgroup ID assigned to the consumer |
| 4 | consumer_id | BIGINT | Unique consumer ID |
## PERF_TRANS
| # | **Column** | **Data Type** | **Description** |
......@@ -114,17 +96,3 @@ TDengine 3.0 introduces a built-in database `performance_schema`, whose
| 2 | create_time | TIMESTAMP | SMA creation time |
| 3 | stable_name | BINARY(192) | Name of the supertable the SMA belongs to |
| 4 | vgroup_id | INT | ID of the vgroup dedicated to the SMA |
## PERF_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BINARY(20) | Current stream status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table the stream writes to |
| 8 | watermark | BIGINT | Watermark (see the stream processing section of the SQL manual) |
| 9 | trigger | INT | Method of pushing computation results (see the stream processing section of the SQL manual) |
......@@ -43,17 +43,17 @@ extern "C" {
#define TSDB_INS_TABLE_VNODES "ins_vnodes"
#define TSDB_INS_TABLE_CONFIGS "ins_configs"
#define TSDB_INS_TABLE_DNODE_VARIABLES "ins_dnode_variables"
#define TSDB_INS_TABLE_SUBSCRIPTIONS "ins_subscriptions"
#define TSDB_INS_TABLE_TOPICS "ins_topics"
#define TSDB_INS_TABLE_STREAMS "ins_streams"
#define TSDB_PERFORMANCE_SCHEMA_DB "performance_schema"
#define TSDB_PERFS_TABLE_SMAS "perf_smas"
#define TSDB_PERFS_TABLE_CONNECTIONS "perf_connections"
#define TSDB_PERFS_TABLE_QUERIES "perf_queries"
#define TSDB_PERFS_TABLE_CONSUMERS "perf_consumers"
#define TSDB_PERFS_TABLE_OFFSETS "perf_offsets"
#define TSDB_PERFS_TABLE_TRANS "perf_trans"
#define TSDB_PERFS_TABLE_APPS "perf_apps"
typedef struct SSysDbTableSchema {
......
......@@ -2626,6 +2626,22 @@ typedef struct {
};
} STqOffsetVal;
static FORCE_INLINE void tqOffsetResetToData(STqOffsetVal* pOffsetVal, int64_t uid, int64_t ts) {
pOffsetVal->type = TMQ_OFFSET__SNAPSHOT_DATA;
pOffsetVal->uid = uid;
pOffsetVal->ts = ts;
}
static FORCE_INLINE void tqOffsetResetToMeta(STqOffsetVal* pOffsetVal, int64_t uid) {
pOffsetVal->type = TMQ_OFFSET__SNAPSHOT_META;
pOffsetVal->uid = uid;
}
static FORCE_INLINE void tqOffsetResetToLog(STqOffsetVal* pOffsetVal, int64_t ver) {
pOffsetVal->type = TMQ_OFFSET__LOG;
pOffsetVal->version = ver;
}
int32_t tEncodeSTqOffsetVal(SEncoder* pEncoder, const STqOffsetVal* pOffsetVal);
int32_t tDecodeSTqOffsetVal(SDecoder* pDecoder, STqOffsetVal* pOffsetVal);
int32_t tFormatOffset(char* buf, int32_t maxLen, const STqOffsetVal* pVal);
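A short usage sketch for the offset helpers moved into this header (illustrative values only; assumes this header and `<stdio.h>` are included): reset an offset to a WAL position, then render it the way the tq debug logs do:

```c
// Hedged illustration of the STqOffsetVal tagged-union helpers above.
static void demoOffsetFormat(void) {
  STqOffsetVal offset = {0};
  char         buf[80] = {0};
  tqOffsetResetToLog(&offset, 1024);         // type = TMQ_OFFSET__LOG, version = 1024
  tFormatOffset(buf, sizeof(buf), &offset);  // human-readable form for tqDebug lines
  printf("offset: %s\n", buf);
}
```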
......@@ -2957,6 +2973,7 @@ typedef struct {
int32_t tEncodeSMqDataRsp(SEncoder* pEncoder, const SMqDataRsp* pRsp);
int32_t tDecodeSMqDataRsp(SDecoder* pDecoder, SMqDataRsp* pRsp);
void tDeleteSMqDataRsp(SMqDataRsp* pRsp);
typedef struct {
SMqRspHead head;
......
......@@ -176,7 +176,8 @@ int32_t fmGetFuncInfo(SFunctionNode* pFunc, char* pMsg, int32_t msgLen);
EFuncReturnRows fmGetFuncReturnRows(SFunctionNode* pFunc);
bool fmIsBuiltinFunc(const char* pFunc);
EFunctionType fmGetFuncType(const char* pFunc);
bool fmIsAggFunc(int32_t funcId);
bool fmIsScalarFunc(int32_t funcId);
......
......@@ -108,7 +108,7 @@ if [[ ${packgeName} =~ "deb" ]];then
echo "./debAuto.sh ${packgeName}" && chmod 755 debAuto.sh && ./debAuto.sh ${packgeName}
else
echo "dpkg -i ${packgeName}" && dpkg -i ${packgeName}
fi
elif [[ ${packgeName} =~ "rpm" ]];then
cd ${installPath}
echo "rpm ${packgeName}" && rpm -ivh ${packgeName} --quiet
......
......@@ -104,6 +104,7 @@ typedef struct SQueryExecMetric {
int64_t ctgEnd; // end to parse, us
int64_t semanticEnd;
int64_t planEnd;
int64_t resultReady;
int64_t execEnd;
int64_t send; // start to send to server, us
int64_t rsp; // receive response from server, us
......
......@@ -76,19 +76,19 @@ static void deregisterRequest(SRequestObj *pRequest) {
pRequest->self, pTscObj->id, pRequest->requestId, duration / 1000, num, currentInst);
if (QUERY_NODE_VNODE_MODIF_STMT == pRequest->stmtType) {
tscPerf("insert duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64 "us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart,
pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
tscPerf("insert duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64
"us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart, pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
pRequest->metric.execEnd - pRequest->metric.semanticEnd);
atomic_add_fetch_64((int64_t *)&pActivity->insertElapsedTime, duration);
} else if (QUERY_NODE_SELECT_STMT == pRequest->stmtType) {
tscPerf("select duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64 "us, planner:%" PRId64 "us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart,
pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
tscPerf("select duration %" PRId64 "us: syntax:%" PRId64 "us, ctg:%" PRId64 "us, semantic:%" PRId64
"us, planner:%" PRId64 "us, exec:%" PRId64 "us",
duration, pRequest->metric.syntaxEnd - pRequest->metric.syntaxStart,
pRequest->metric.ctgEnd - pRequest->metric.ctgStart, pRequest->metric.semanticEnd - pRequest->metric.ctgEnd,
pRequest->metric.planEnd - pRequest->metric.semanticEnd,
pRequest->metric.resultReady - pRequest->metric.planEnd);
atomic_add_fetch_64((int64_t *)&pActivity->queryElapsedTime, duration);
}
......
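The effect of the new `resultReady` field: the `exec` phase of a select is now measured from the end of planning to the moment the scheduler callback fires, instead of `nowUs - semanticEnd`. A standalone sketch of the arithmetic (the struct below is a hypothetical mirror of `SQueryExecMetric`, for illustration only):

```c
#include <inttypes.h>
#include <stdio.h>

// Hypothetical mirror of SQueryExecMetric, reduced to the fields used here.
typedef struct {
  int64_t syntaxStart, syntaxEnd, ctgStart, ctgEnd, semanticEnd, planEnd, resultReady;
} SMetricLite;

static void printSelectPhases(const SMetricLite *m) {
  printf("syntax:%" PRId64 "us ctg:%" PRId64 "us semantic:%" PRId64 "us planner:%" PRId64 "us exec:%" PRId64 "us\n",
         m->syntaxEnd - m->syntaxStart,  // parsing
         m->ctgEnd - m->ctgStart,        // catalog lookup
         m->semanticEnd - m->ctgEnd,     // semantic analysis
         m->planEnd - m->semanticEnd,    // planning
         m->resultReady - m->planEnd);   // execution, now ended at resultReady
}
```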
......@@ -728,7 +728,7 @@ int32_t handleSubmitExecRes(SRequestObj* pRequest, void* res, SCatalog* pCatalog
tFreeSTableMetaRsp(blk->pMeta);
taosMemoryFreeClear(blk->pMeta);
}
if (NULL == blk->tblFName || 0 == blk->tblFName[0]) {
continue;
}
......@@ -851,6 +851,8 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
SRequestObj* pRequest = (SRequestObj*)param;
pRequest->code = code;
pRequest->metric.resultReady = taosGetTimestampUs();
if (pResult) {
memcpy(&pRequest->body.resInfo.execRes, pResult, sizeof(*pResult));
}
......
......@@ -456,7 +456,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
code = smlSendMetaMsg(info, &pName, pColumns, pTags, NULL, SCHEMA_ACTION_CREATE_STABLE);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, superTable);
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, pName.tname);
goto end;
}
info->cost.numOfCreateSTables++;
......@@ -492,7 +492,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
code = smlSendMetaMsg(info, &pName, pColumns, pTags, pTableMeta, action);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, superTable);
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, pName.tname);
goto end;
}
}
......
......@@ -811,8 +811,19 @@ int32_t tmq_subscription(tmq_t* tmq, tmq_list_t** topics) {
}
int32_t tmq_unsubscribe(tmq_t* tmq) {
int32_t rsp;
int32_t retryCnt = 0;
tmq_list_t* lst = tmq_list_new();
// subscribing to an empty topic list implements unsubscribe; retry briefly while
// the mnode still reports the consumer as not ready (e.g. mid-rebalance)
while (1) {
rsp = tmq_subscribe(tmq, lst);
if (rsp != TSDB_CODE_MND_CONSUMER_NOT_READY || retryCnt > 5) {
break;
} else {
retryCnt++;
taosMsleep(500);
}
}
tmq_list_destroy(lst);
return rsp;
}
......
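For context, `tmq_unsubscribe()` is implemented as a subscribe to an empty topic list, so the retry loop above simply re-issues that call while the mnode still reports the consumer as not ready. A hedged sketch of the pairing from the application side (`topic_meters` is a placeholder name; the consumer is assumed to come from `tmq_consumer_new()`):

```c
// Hypothetical caller-side flow around the tmq_unsubscribe() retry above.
static int32_t demoSubscribeCycle(tmq_t *tmq) {
  tmq_list_t *topics = tmq_list_new();
  tmq_list_append(topics, "topic_meters");  // placeholder topic name
  int32_t code = tmq_subscribe(tmq, topics);
  if (code == 0) {
    // ... consume with tmq_consumer_poll(tmq, timeout) ...
    code = tmq_unsubscribe(tmq);  // retries internally while the consumer rebalances
  }
  tmq_list_destroy(topics);
  return code;
}
```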
......@@ -240,6 +240,22 @@ static const SSysDbTableSchema variablesSchema[] = {
{.name = "value", .bytes = TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
};
static const SSysDbTableSchema topicSchema[] = {
{.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
{.name = "db_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
{.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
// TODO config
};
static const SSysDbTableSchema subscriptionSchema[] = {
{.name = "topic_name", .bytes = TSDB_TOPIC_FNAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
{.name = "consumer_group", .bytes = TSDB_CGROUP_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
{.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false},
{.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false},
};
static const SSysTableMeta infosMeta[] = {
{TSDB_INS_TABLE_DNODES, dnodesSchema, tListLen(dnodesSchema), true},
{TSDB_INS_TABLE_MNODES, mnodesSchema, tListLen(mnodesSchema), true},
......@@ -260,6 +276,9 @@ static const SSysTableMeta infosMeta[] = {
{TSDB_INS_TABLE_VGROUPS, vgroupsSchema, tListLen(vgroupsSchema), true},
{TSDB_INS_TABLE_CONFIGS, configSchema, tListLen(configSchema), true},
{TSDB_INS_TABLE_DNODE_VARIABLES, variablesSchema, tListLen(variablesSchema), true},
{TSDB_INS_TABLE_TOPICS, topicSchema, tListLen(topicSchema), false},
{TSDB_INS_TABLE_SUBSCRIPTIONS, subscriptionSchema, tListLen(subscriptionSchema), false},
{TSDB_INS_TABLE_STREAMS, streamSchema, tListLen(streamSchema), false},
};
static const SSysDbTableSchema connectionsSchema[] = {
......@@ -272,13 +291,6 @@ static const SSysDbTableSchema connectionsSchema[] = {
{.name = "last_access", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
};
static const SSysDbTableSchema consumerSchema[] = {
{.name = "consumer_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false},
......@@ -292,13 +304,6 @@ static const SSysDbTableSchema consumerSchema[] = {
{.name = "rebalance_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
};
static const SSysDbTableSchema offsetSchema[] = {
{.name = "topic_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
{.name = "group_id", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_BINARY, .sysInfo = false},
......@@ -345,13 +350,10 @@ static const SSysDbTableSchema appSchema[] = {
static const SSysTableMeta perfsMeta[] = {
{TSDB_PERFS_TABLE_CONNECTIONS, connectionsSchema, tListLen(connectionsSchema), false},
{TSDB_PERFS_TABLE_QUERIES, querySchema, tListLen(querySchema), false},
{TSDB_PERFS_TABLE_CONSUMERS, consumerSchema, tListLen(consumerSchema), false},
// {TSDB_PERFS_TABLE_OFFSETS, offsetSchema, tListLen(offsetSchema)},
{TSDB_PERFS_TABLE_TRANS, transSchema, tListLen(transSchema), false},
// {TSDB_PERFS_TABLE_SMAS, smaSchema, tListLen(smaSchema), false},
{TSDB_PERFS_TABLE_APPS, appSchema, tListLen(appSchema), false}};
// clang-format on
......
......@@ -5889,6 +5889,13 @@ int32_t tDecodeSMqDataRsp(SDecoder *pDecoder, SMqDataRsp *pRsp) {
return 0;
}
void tDeleteSMqDataRsp(SMqDataRsp *pRsp) {
taosArrayDestroy(pRsp->blockDataLen);
taosArrayDestroyP(pRsp->blockData, (FDelete)taosMemoryFree);
taosArrayDestroyP(pRsp->blockSchema, (FDelete)tDeleteSSchemaWrapper);
taosArrayDestroyP(pRsp->blockTbName, (FDelete)taosMemoryFree);
}
int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) {
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->reqOffset) < 0) return -1;
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->rspOffset) < 0) return -1;
......
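`tDeleteSMqDataRsp` centralizes the cleanup that call sites such as `tqProcessPollReq`'s `OVER:` path previously performed field by field; it destroys all four arrays unconditionally, which relies on the assumption (consistent with usage elsewhere in this codebase) that the `taosArrayDestroy*` helpers tolerate NULL. A sketch of the intended call pattern:

```c
// Hedged sketch of the ownership contract: the response owns its arrays and
// releases them through the single helper rather than field by field.
SMqDataRsp rsp = {0};
// ... tqScan()/tqLogScanExec() populate blockData, blockDataLen, blockSchema, blockTbName ...
tDeleteSMqDataRsp(&rsp);
```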
......@@ -88,7 +88,7 @@ static int32_t convertToRetrieveType(char *name, int32_t len) {
type = TSDB_MGMT_TABLE_VGROUP;
} else if (strncasecmp(name, TSDB_PERFS_TABLE_CONSUMERS, len) == 0) {
type = TSDB_MGMT_TABLE_CONSUMERS;
} else if (strncasecmp(name, TSDB_INS_TABLE_SUBSCRIPTIONS, len) == 0) {
type = TSDB_MGMT_TABLE_SUBSCRIPTIONS;
} else if (strncasecmp(name, TSDB_PERFS_TABLE_TRANS, len) == 0) {
type = TSDB_MGMT_TABLE_TRANS;
......@@ -102,9 +102,9 @@ static int32_t convertToRetrieveType(char *name, int32_t len) {
type = TSDB_MGMT_TABLE_QUERIES;
} else if (strncasecmp(name, TSDB_INS_TABLE_VNODES, len) == 0) {
type = TSDB_MGMT_TABLE_VNODES;
} else if (strncasecmp(name, TSDB_INS_TABLE_TOPICS, len) == 0) {
type = TSDB_MGMT_TABLE_TOPICS;
} else if (strncasecmp(name, TSDB_INS_TABLE_STREAMS, len) == 0) {
type = TSDB_MGMT_TABLE_STREAMS;
} else if (strncasecmp(name, TSDB_PERFS_TABLE_APPS, len) == 0) {
type = TSDB_MGMT_TABLE_APPS;
......
......@@ -763,8 +763,9 @@ static int32_t mndRetrieveTopic(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl
int32_t cols = 0;
char topicName[TSDB_TOPIC_NAME_LEN + VARSTR_HEADER_SIZE] = {0};
strcpy(varDataVal(topicName), mndGetDbStr(pTopic->name));
/*tNameFromString(&n, pTopic->name, T_NAME_ACCT | T_NAME_DB);*/
/*tNameGetDbName(&n, varDataVal(topicName));*/
varDataSetLen(topicName, strlen(varDataVal(topicName)));
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, numOfRows, (const char *)topicName, false);
......
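The fix above writes the database name straight into the VARSTR buffer. For reference, the layout assumed by `varDataVal`/`varDataSetLen` is a 2-byte length header followed by the payload; a hedged sketch with a hypothetical helper (assumes the macros from the common type headers and `<string.h>`):

```c
// Hypothetical helper showing the VARSTR layout: [2-byte length][payload bytes].
static void fillVarData(char *buf, const char *payload) {
  strcpy(varDataVal(buf), payload);                      // payload starts after the header
  varDataSetLen(buf, (int32_t)strlen(varDataVal(buf)));  // header records payload length
}
```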
......@@ -67,21 +67,21 @@ typedef struct {
// tqExec
typedef struct {
char* qmsg;
} STqExecCol;
typedef struct {
int64_t suid;
} STqExecTb;
typedef struct {
SHashObj* pFilterOutTbUid;
} STqExecDb;
typedef struct {
int8_t subType;
STqReader* pExecReader;
qTaskInfo_t task;
union {
STqExecCol execCol;
......@@ -144,7 +144,7 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp*
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** pHeadWithCkSum);
// tqExec
int32_t tqLogScanExec(STQ* pTq, STqHandle* pHandle, SSubmitReq* pReq, SMqDataRsp* pRsp);
int32_t tqSendDataRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq, const SMqDataRsp* pRsp);
// tqMeta
......@@ -175,22 +175,6 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data);
char* tqOffsetBuildFName(const char* path, int32_t ver);
int32_t tqOffsetRestoreFromFile(STqOffsetStore* pStore, const char* fname);
// tqStream
int32_t tqExpandTask(STQ* pTq, SStreamTask* pTask);
......
......@@ -357,8 +357,8 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
TD_VID(pTq->pVnode), formatBuf);
} else {
if (reqOffset.type == TMQ_OFFSET__RESET_EARLIEAST) {
if (pReq->useSnapshot) {
if (pHandle->fetchMeta) {
tqOffsetResetToMeta(&fetchOffsetNew, 0);
} else {
tqOffsetResetToData(&fetchOffsetNew, 0, 0);
......@@ -373,43 +373,47 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
tDeleteSMqDataRsp(&dataRsp);
return code;
} else if (reqOffset.type == TMQ_OFFSET__RESET_NONE) {
tqError("tmq poll: subkey %s, no offset committed for consumer %" PRId64
" in vg %d, subkey %s, reset none failed",
pHandle->subKey, consumerId, TD_VID(pTq->pVnode), pReq->subKey);
terrno = TSDB_CODE_TQ_NO_COMMITTED_OFFSET;
code = -1;
tDeleteSMqDataRsp(&dataRsp);
return code;
}
}
}
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN || fetchOffsetNew.type != TMQ_OFFSET__LOG) {
SMqMetaRsp metaRsp = {0};
tqScan(pTq, pHandle, &dataRsp, &metaRsp, &fetchOffsetNew);
if (metaRsp.metaRspLen > 0) {
if (tqSendMetaPollRsp(pTq, pMsg, pReq, &metaRsp) < 0) {
code = -1;
}
tqDebug("tmq poll: consumer %ld, subkey %s, vg %d, send meta offset type:%d,uid:%ld,version:%ld", consumerId, pHandle->subKey,
TD_VID(pTq->pVnode), metaRsp.rspOffset.type, metaRsp.rspOffset.uid, metaRsp.rspOffset.version);
tqDebug("tmq poll: consumer %ld, subkey %s, vg %d, send meta offset type:%d,uid:%ld,version:%ld", consumerId,
pHandle->subKey, TD_VID(pTq->pVnode), metaRsp.rspOffset.type, metaRsp.rspOffset.uid,
metaRsp.rspOffset.version);
taosMemoryFree(metaRsp.metaRsp);
goto OVER;
}
if (dataRsp.blockNum > 0) {
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
goto OVER;
} else {
fetchOffsetNew = dataRsp.rspOffset;
}
tqDebug("tmq poll: consumer %ld, subkey %s, vg %d, send data blockNum:%d, offset type:%d,uid:%ld,version:%ld", consumerId, pHandle->subKey,
TD_VID(pTq->pVnode), dataRsp.blockNum, dataRsp.rspOffset.type, dataRsp.rspOffset.uid, dataRsp.rspOffset.version);
tqDebug("tmq poll: consumer %ld, subkey %s, vg %d, send data blockNum:%d, offset type:%d,uid:%ld,version:%ld",
consumerId, pHandle->subKey, TD_VID(pTq->pVnode), dataRsp.blockNum, dataRsp.rspOffset.type,
dataRsp.rspOffset.uid, dataRsp.rspOffset.version);
}
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
......@@ -426,7 +430,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
consumerEpoch = atomic_load_32(&pHandle->epoch);
if (consumerEpoch > reqEpoch) {
tqWarn("tmq poll: consumer %" PRId64 " (epoch %d), subkey %s, vg %d offset %" PRId64
", found new consumer epoch %d, discard req epoch %d",
", found new consumer epoch %d, discard req epoch %d",
consumerId, pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
break;
}
......@@ -449,7 +453,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
if (pHead->msgType == TDMT_VND_SUBMIT) {
SSubmitReq* pCont = (SSubmitReq*)&pHead->body;
if (tqLogScanExec(pTq, pHandle, pCont, &dataRsp) < 0) {
/*ASSERT(0);*/
}
// TODO batch optimization:
......@@ -490,18 +494,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
OVER:
if (pCkHead) taosMemoryFree(pCkHead);
tDeleteSMqDataRsp(&dataRsp);
return code;
}
......@@ -629,9 +622,9 @@ int32_t tqProcessVgChangeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLe
tqReaderSetTbUidList(pHandle->execHandle.pExecReader, tbUidList);
taosArrayDestroy(tbUidList);
buildSnapContext(handle.meta, handle.version, req.suid, pHandle->execHandle.subType, pHandle->fetchMeta,
(SSnapContext**)(&handle.sContext));
pHandle->execHandle.task = qCreateQueueExecTaskInfo(NULL, &handle, NULL, NULL);
}
taosHashPut(pTq->pHandle, req.subKey, strlen(req.subKey), pHandle, sizeof(STqHandle));
tqDebug("try to persist handle %s consumer %" PRId64, req.subKey, pHandle->consumerId);
......
......@@ -60,6 +60,46 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
return 0;
}
int64_t tqScanData(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVal* pOffset) {
const STqExecHandle* pExec = &pHandle->execHandle;
ASSERT(pExec->subType == TOPIC_SUB_TYPE__COLUMN);
qTaskInfo_t task = pExec->task;
if (qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType) < 0) {
tqDebug("prepare scan failed, return");
if (pOffset->type == TMQ_OFFSET__LOG) {
pRsp->rspOffset = *pOffset;
return 0;
} else {
tqOffsetResetToLog(pOffset, pHandle->snapshotVer);
if (qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType) < 0) {
tqDebug("prepare scan failed, return");
pRsp->rspOffset = *pOffset;
return 0;
}
}
}
int32_t rowCnt = 0;
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
tqDebug("tmq task start to execute");
if (qExecTask(task, &pDataBlock, &ts) < 0) {
ASSERT(0);
}
tqDebug("tmq task execute end, get %p", pDataBlock);
if (pDataBlock) {
tqAddBlockDataToRsp(pDataBlock, pRsp, pExec->numOfCols);
pRsp->blockNum++;
rowCnt += pDataBlock->info.rows;
if (rowCnt >= 4096) break;  // cap a single response, mirroring the snapshot scan path
} else {
break;  // task drained; without a break the while (1) loop never terminates
}
}
return 0;
}
int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp* pMetaRsp, STqOffsetVal* pOffset) {
const STqExecHandle* pExec = &pHandle->execHandle;
qTaskInfo_t task = pExec->task;
......@@ -102,18 +142,18 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp*
taosArrayPush(pRsp->blockTbName, &tbName);
}
}
if (pRsp->withSchema) {
if (pOffset->type == TMQ_OFFSET__LOG) {
tqAddBlockSchemaToRsp(pExec, pRsp);
} else {
SSchemaWrapper* pSW = tCloneSSchemaWrapper(qExtractSchemaFromTask(task));
taosArrayPush(pRsp->blockSchema, &pSW);
}
}
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
tqAddBlockDataToRsp(pDataBlock, pRsp, pExec->numOfCols);
} else {
tqAddBlockDataToRsp(pDataBlock, pRsp, taosArrayGetSize(pDataBlock->pDataBlock));
}
pRsp->blockNum++;
......@@ -125,17 +165,9 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp*
}
}
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN) {
if (pDataBlock == NULL && pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
if (qStreamExtractPrepareUid(task) != 0) {
continue;
}
tqDebug("tmqsnap vgId: %d, tsdb consume over, switch to wal, ver %" PRId64, TD_VID(pTq->pVnode),
......@@ -143,13 +175,13 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp*
break;
}
if (pRsp->blockNum > 0) {
tqDebug("tmqsnap task exec exited, get data");
break;
}
SMqMetaRsp* tmp = qStreamExtractMetaMsg(task);
if (tmp->rspOffset.type == TMQ_OFFSET__SNAPSHOT_DATA) {
tqOffsetResetToData(pOffset, tmp->rspOffset.uid, tmp->rspOffset.ts);
qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType);
tmp->rspOffset.type = TMQ_OFFSET__SNAPSHOT_META;
......@@ -173,57 +205,8 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp*
return 0;
}
#if 0
int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset, int32_t workerId) {
ASSERT(pExec->subType == TOPIC_SUB_TYPE__COLUMN);
qTaskInfo_t task = pExec->execCol.task[workerId];
if (qStreamPrepareTsdbScan(task, offset.uid, offset.ts) < 0) {
ASSERT(0);
}
int32_t rowCnt = 0;
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
if (qExecTask(task, &pDataBlock, &ts) < 0) {
ASSERT(0);
}
if (pDataBlock == NULL) break;
ASSERT(pDataBlock->info.rows != 0);
ASSERT(taosArrayGetSize(pDataBlock->pDataBlock) != 0);
tqAddBlockDataToRsp(pDataBlock, pRsp);
if (pRsp->withTbName) {
pRsp->withTbName = 0;
#if 0
int64_t uid;
int64_t ts;
if (qGetStreamScanStatus(task, &uid, &ts) < 0) {
ASSERT(0);
}
tqAddTbNameToRsp(pTq, uid, pRsp);
#endif
}
pRsp->blockNum++;
rowCnt += pDataBlock->info.rows;
if (rowCnt >= 4096) break;
}
int64_t uid;
int64_t ts;
if (qGetStreamScanStatus(task, &uid, &ts) < 0) {
ASSERT(0);
}
tqOffsetResetToData(&pRsp->rspOffset, uid, ts);
return 0;
}
#endif
int32_t tqLogScanExec(STQ* pTq, STqHandle* pHandle, SSubmitReq* pReq, SMqDataRsp* pRsp) {
STqExecHandle* pExec = &pHandle->execHandle;
ASSERT(pExec->subType != TOPIC_SUB_TYPE__COLUMN);
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
......@@ -268,6 +251,28 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
tqAddBlockSchemaToRsp(pExec, pRsp);
pRsp->blockNum++;
}
#if 0
if (pHandle->fetchMeta && pRsp->blockNum) {
SSubmitMsgIter iter = {0};
tInitSubmitMsgIter(pReq, &iter);
STaosxRsp* pXrsp = (STaosxRsp*)pRsp;
while (1) {
SSubmitBlk* pBlk = NULL;
if (tGetSubmitMsgNext(&iter, &pBlk) < 0) return -1;
if (pBlk->schemaLen > 0) {
if (pXrsp->createTableNum == 0) {
pXrsp->createTableLen = taosArrayInit(0, sizeof(int32_t));
pXrsp->createTableReq = taosArrayInit(0, sizeof(void*));
}
void* createReq = taosMemoryCalloc(1, pBlk->schemaLen);
memcpy(createReq, pBlk->data, pBlk->schemaLen);
taosArrayPush(pXrsp->createTableLen, &pBlk->schemaLen);
taosArrayPush(pXrsp->createTableReq, &createReq);
pXrsp->createTableNum++;
}
}
}
#endif
}
if (pRsp->blockNum == 0) {
......
......@@ -143,6 +143,8 @@ typedef struct {
STqOffsetVal prepareStatus; // for tmq
STqOffsetVal lastStatus; // for tmq
SMqMetaRsp metaRsp; // for tmq fetching meta
int64_t snapshotVer;
SSchemaWrapper *schema;
char tbName[TSDB_TABLE_NAME_LEN];
SSDataBlock* pullOverBlk; // for streaming
......@@ -486,24 +488,23 @@ typedef struct SStreamScanInfo {
STimeWindowAggSupp twAggSup;
SSDataBlock* pUpdateDataRes;
// status for tmq
// SSchemaWrapper schema;
SNodeList* pGroupTags;
SNode* pTagCond;
SNode* pTagIndexCond;
} SStreamScanInfo;
typedef struct {
// int8_t subType;
// bool withMeta;
// int64_t suid;
// int64_t snapVersion;
// void *metaInfo;
// void *dataInfo;
SVnode* vnode;
SSDataBlock pRes; // result SSDataBlock
STsdbReader* dataReader;
SSnapContext* sContext;
} SStreamRawScanInfo;
typedef struct SSysTableScanInfo {
SRetrieveMetaTableRsp* pRsp;
......@@ -528,14 +529,14 @@ typedef struct SBlockDistInfo {
SSDataBlock* pResBlock;
void* pHandle;
SReadHandle readHandle;
uint64_t uid; // table uid
} SBlockDistInfo;
// todo remove this
typedef struct SOptrBasicInfo {
SResultRowInfo resultRowInfo;
SSDataBlock* pRes;
bool mergeResultBlock;
} SOptrBasicInfo;
typedef struct SIntervalAggOperatorInfo {
......
......@@ -139,7 +139,7 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchema) {
if (msg == NULL) {
// create raw scan
SExecTaskInfo* pTaskInfo = taosMemoryCalloc(1, sizeof(SExecTaskInfo));
if (NULL == pTaskInfo) {
......@@ -151,7 +151,7 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
pTaskInfo->cost.created = taosGetTimestampMs();
pTaskInfo->execModel = OPTR_EXEC_MODEL_QUEUE;
pTaskInfo->pRoot = createRawScanOperatorInfo(readers, pTaskInfo);
if (NULL == pTaskInfo->pRoot) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
taosMemoryFree(pTaskInfo);
return NULL;
......@@ -834,11 +834,11 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT
} else {
ASSERT(0);
}
} else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
SStreamRawScanInfo* pInfo = pOperator->info;
SSnapContext* sContext = pInfo->sContext;
if (setForSnapShot(sContext, pOffset->uid) != 0) {
qError("setDataForSnapShot error. uid:%" PRIi64, pOffset->uid);
return -1;
}
......@@ -847,27 +847,29 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT
pInfo->dataReader = NULL;
cleanupQueryTableDataCond(&pTaskInfo->streamInfo.tableCond);
taosArrayDestroy(pTaskInfo->tableqinfoList.pTableList);
if (mtInfo.uid == 0) return 0; // no data
initQueryTableDataCondForTmq(&pTaskInfo->streamInfo.tableCond, sContext, mtInfo);
pTaskInfo->streamInfo.tableCond.twindows.skey = pOffset->ts;
pTaskInfo->tableqinfoList.pTableList = taosArrayInit(1, sizeof(STableKeyInfo));
taosArrayPush(pTaskInfo->tableqinfoList.pTableList, &(STableKeyInfo){.uid = mtInfo.uid, .groupId = 0});
tsdbReaderOpen(pInfo->vnode, &pTaskInfo->streamInfo.tableCond, pTaskInfo->tableqinfoList.pTableList,
&pInfo->dataReader, NULL);
strcpy(pTaskInfo->streamInfo.tbName, mtInfo.tbName);
tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
pTaskInfo->streamInfo.schema = mtInfo.schema;
qDebug("tmqsnap qStreamPrepareScan snapshot data uid %ld ts %ld", mtInfo.uid, pOffset->ts);
} else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_META) {
SStreamRawScanInfo* pInfo = pOperator->info;
SSnapContext* sContext = pInfo->sContext;
if (setForSnapShot(sContext, pOffset->uid) != 0) {
qError("setForSnapShot error. uid:%" PRIi64 " ,version:%" PRIi64, pOffset->uid);
return -1;
}
qDebug("tmqsnap qStreamPrepareScan snapshot meta uid %ld ts %ld", pOffset->uid);
} else if (pOffset->type == TMQ_OFFSET__LOG) {
SStreamRawScanInfo* pInfo = pOperator->info;
tsdbReaderClose(pInfo->dataReader);
pInfo->dataReader = NULL;
......
......@@ -268,7 +268,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR
// add a new result set for a new group
SResultRowPosition pos = {.pageId = pResult->pageId, .offset = pResult->offset};
tSimpleHashPut(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes), &pos,
sizeof(SResultRowPosition));
}
// 2. set the new time window to be the new active time window
......@@ -2815,92 +2815,6 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan
}
}
}
#if 0
int32_t doPrepareScan(SOperatorInfo* pOperator, uint64_t uid, int64_t ts) {
uint8_t type = pOperator->operatorType;
pOperator->status = OP_OPENED;
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
SStreamScanInfo* pScanInfo = pOperator->info;
pScanInfo->blockType = STREAM_INPUT__TABLE_SCAN;
pScanInfo->pTableScanOp->status = OP_OPENED;
STableScanInfo* pInfo = pScanInfo->pTableScanOp->info;
ASSERT(pInfo->scanMode == TABLE_SCAN__TABLE_ORDER);
if (uid == 0) {
pInfo->noTable = 1;
return TSDB_CODE_SUCCESS;
}
/*if (pSnapShotScanInfo->dataReader == NULL) {*/
/*pSnapShotScanInfo->dataReader = tsdbReaderOpen(pHandle->vnode, &pSTInfo->cond, tableList, 0, 0);*/
/*pSnapShotScanInfo->scanMode = TABLE_SCAN__TABLE_ORDER;*/
/*}*/
pInfo->noTable = 0;
if (pInfo->lastStatus.uid != uid || pInfo->lastStatus.ts != ts) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
int32_t tableSz = taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList);
bool found = false;
for (int32_t i = 0; i < tableSz; i++) {
STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, i);
if (pTableInfo->uid == uid) {
found = true;
pInfo->currentTable = i;
}
}
// TODO after processing drop, found can be false
ASSERT(found);
tsdbSetTableId(pInfo->dataReader, uid);
int64_t oldSkey = pInfo->cond.twindows.skey;
pInfo->cond.twindows.skey = ts + 1;
tsdbReaderReset(pInfo->dataReader, &pInfo->cond);
pInfo->cond.twindows.skey = oldSkey;
pInfo->scanTimes = 0;
qDebug("tsdb reader offset seek to uid %" PRId64 " ts %" PRId64 ", table cur set to %d , all table num %d", uid, ts,
pInfo->currentTable, tableSz);
}
return TSDB_CODE_SUCCESS;
} else {
if (pOperator->numOfDownstream == 1) {
return doPrepareScan(pOperator->pDownstream[0], uid, ts);
} else if (pOperator->numOfDownstream == 0) {
qError("failed to find stream scan operator to set the input data block");
return TSDB_CODE_QRY_APP_ERROR;
} else {
qError("join not supported for stream block scan");
return TSDB_CODE_QRY_APP_ERROR;
}
}
}
int32_t doGetScanStatus(SOperatorInfo* pOperator, uint64_t* uid, int64_t* ts) {
int32_t type = pOperator->operatorType;
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
SStreamScanInfo* pScanInfo = pOperator->info;
STableScanInfo* pSnapShotScanInfo = pScanInfo->pTableScanOp->info;
*uid = pSnapShotScanInfo->lastStatus.uid;
*ts = pSnapShotScanInfo->lastStatus.ts;
} else {
if (pOperator->pDownstream[0] == NULL) {
return TSDB_CODE_INVALID_PARA;
} else {
doGetScanStatus(pOperator->pDownstream[0], uid, ts);
}
}
return TSDB_CODE_SUCCESS;
}
#endif
// this is a blocking operator
static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) {
......@@ -3024,7 +2938,7 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
SResultRow* pRow = (SResultRow*)((char*)pPage + pos->offset);
setBufPageDirty(pPage, true);
releaseBufPage(pSup->pResultBuf, pPage);
int32_t iter = 0;
void* pIter = NULL;
while ((pIter = tSimpleHashIterate(pSup->pResultRowHashTable, pIter, &iter))) {
......@@ -3434,7 +3348,7 @@ int32_t getBufferPgSize(int32_t rowSize, uint32_t* defaultPgsz, uint32_t* defaul
int32_t doInitAggInfoSup(SAggSupporter* pAggSup, SqlFunctionCtx* pCtx, int32_t numOfOutput, size_t keyBufSize,
const char* pKey) {
int32_t code = 0;
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pAggSup->currentPageId = -1;
......@@ -4294,42 +4208,6 @@ SArray* extractColumnInfo(SNodeList* pNodeList) {
return pList;
}
#if 0
STsdbReader* doCreateDataReader(STableScanPhysiNode* pTableScanNode, SReadHandle* pHandle,
STableListInfo* pTableListInfo, const char* idstr) {
int32_t code = getTableList(pHandle->meta, pHandle->vnode, &pTableScanNode->scan, pTableListInfo);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
if (taosArrayGetSize(pTableListInfo->pTableList) == 0) {
code = 0;
qDebug("no table qualified for query, %s", idstr);
goto _error;
}
SQueryTableDataCond cond = {0};
code = initQueryTableDataCond(&cond, pTableScanNode);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
STsdbReader* pReader;
code = tsdbReaderOpen(pHandle->vnode, &cond, pTableListInfo->pTableList, &pReader, idstr);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
cleanupQueryTableDataCond(&cond);
return pReader;
_error:
terrno = code;
return NULL;
}
#endif
static int32_t extractTbscanInStreamOpTree(SOperatorInfo* pOperator, STableScanInfo** ppInfo) {
if (pOperator->operatorType != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
if (pOperator->numOfDownstream == 0) {
......
......@@ -1219,7 +1219,7 @@ static void setBlockGroupId(SOperatorInfo* pOperator, SSDataBlock* pBlock, int32
static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock) {
SDataBlockInfo* pBlockInfo = &pInfo->pRes->info;
SOperatorInfo* pOperator = pInfo->pStreamScanOp;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
blockDataEnsureCapacity(pInfo->pRes, pBlock->info.rows);
......@@ -1228,7 +1228,7 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock
pInfo->pRes->info.type = STREAM_NORMAL;
pInfo->pRes->info.version = pBlock->info.version;
uint64_t* groupIdPre = taosHashGet(pTaskInfo->tableqinfoList.map, &pBlock->info.uid, sizeof(int64_t));
if (groupIdPre) {
pInfo->pRes->info.groupId = *groupIdPre;
} else {
......@@ -1276,48 +1276,29 @@ static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock
return 0;
}
// NOTE: this operator never checks whether the current status is done
static SSDataBlock* doQueueScan(SOperatorInfo* pOperator) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStreamScanInfo* pInfo = pOperator->info;
qDebug("queue scan called");
if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_DATA) {
SSDataBlock* pResult = doTableScan(pInfo->pTableScanOp);
if (pResult && pResult->info.rows > 0) {
qDebug("queue scan tsdb return %d rows", pResult->info.rows);
return pResult;
} else {
STableScanInfo* pTSInfo = pInfo->pTableScanOp->info;
tsdbReaderClose(pTSInfo->dataReader);
pTSInfo->dataReader = NULL;
tqOffsetResetToLog(&pTaskInfo->streamInfo.prepareStatus, pTaskInfo->streamInfo.snapshotVer);
qDebug("queue scan tsdb over, switch to wal ver %d", pTaskInfo->streamInfo.snapshotVer + 1);
if (tqSeekVer(pInfo->tqReader, pTaskInfo->streamInfo.snapshotVer + 1) < 0) {
return NULL;
}
ASSERT(pInfo->tqReader->pWalReader->curVersion == pTaskInfo->streamInfo.snapshotVer + 1);
}
printf("stream read %s %d\n", val2, sz);
streamFreeVal(val2);
}
#endif
qDebug("stream scan called");
if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) {
while (1) {
SFetchRet ret = {0};
......@@ -1329,21 +1310,21 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
}
// TODO clean data block
if (pInfo->pRes->info.rows > 0) {
qDebug("stream scan log return %d rows", pInfo->pRes->info.rows);
qDebug("queue scan log return %d rows", pInfo->pRes->info.rows);
return pInfo->pRes;
}
} else if (ret.fetchType == FETCH_TYPE__META) {
ASSERT(0);
// pTaskInfo->streamInfo.lastStatus = ret.offset;
// pTaskInfo->streamInfo.metaBlk = ret.meta;
// return NULL;
} else if (ret.fetchType == FETCH_TYPE__NONE) {
pTaskInfo->streamInfo.lastStatus = ret.offset;
ASSERT(pTaskInfo->streamInfo.lastStatus.version >= pTaskInfo->streamInfo.prepareStatus.version);
ASSERT(pTaskInfo->streamInfo.lastStatus.version + 1 == pInfo->tqReader->pWalReader->curVersion);
char formatBuf[80];
tFormatOffset(formatBuf, 80, &ret.offset);
qDebug("stream scan log return null, offset %s", formatBuf);
qDebug("queue scan log return null, offset %s", formatBuf);
return NULL;
} else {
ASSERT(0);
......@@ -1357,7 +1338,53 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
}
qDebug("stream scan tsdb return null");
return NULL;
} else {
ASSERT(0);
return NULL;
}
}
static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
// NOTE: this operator never checks whether the current status is done
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStreamScanInfo* pInfo = pOperator->info;
qDebug("stream scan called");
#if 0
SStreamState* pState = pTaskInfo->streamInfo.pState;
if (pState) {
printf(">>>>>>>> stream write backend\n");
SWinKey key = {
.ts = 1,
.groupId = 2,
};
char tmp[100] = "abcdefg1";
if (streamStatePut(pState, &key, &tmp, strlen(tmp) + 1) < 0) {
ASSERT(0);
}
key.ts = 2;
char tmp2[100] = "abcdefg2";
if (streamStatePut(pState, &key, &tmp2, strlen(tmp2) + 1) < 0) {
ASSERT(0);
}
key.groupId = 5;
key.ts = 1;
char tmp3[100] = "abcdefg3";
if (streamStatePut(pState, &key, &tmp3, strlen(tmp3) + 1) < 0) {
ASSERT(0);
}
char* val2 = NULL;
int32_t sz;
if (streamStateGet(pState, &key, (void**)&val2, &sz) < 0) {
ASSERT(0);
}
printf("stream read %s %d\n", val2, sz);
streamFreeVal(val2);
}
#endif
if (pTaskInfo->streamInfo.recoverStep == STREAM_RECOVER_STEP__PREPARE) {
STableScanInfo* pTSInfo = pInfo->pTableScanOp->info;
......@@ -1554,14 +1581,14 @@ static SArray* extractTableIdList(const STableListInfo* pTableGroupInfo) {
}
static SSDataBlock* doRawScan(SOperatorInfo* pOperator) {
// NOTE: this operator never checks whether the current status is done
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStreamRawScanInfo* pInfo = pOperator->info;
pTaskInfo->streamInfo.metaRsp.metaRspLen = 0;  // metaRspLen != 0 marks the response as metadata
pTaskInfo->streamInfo.metaRsp.metaRsp = NULL;
qDebug("tmqsnap doRawScan called");
if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_DATA) {
SSDataBlock* pBlock = &pInfo->pRes;
if (pInfo->dataReader && tsdbNextDataBlock(pInfo->dataReader)) {
......@@ -1585,42 +1612,38 @@ static SSDataBlock* doRawScan(SOperatorInfo* pOperator) {
}
SMetaTableInfo mtInfo = getUidfromSnapShot(pInfo->sContext);
if (mtInfo.uid == 0) { // read snapshot done, change to get data from wal
qDebug("tmqsnap read snapshot done, change to get data from wal");
pTaskInfo->streamInfo.prepareStatus.uid = mtInfo.uid;
pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
pTaskInfo->streamInfo.lastStatus.version = pInfo->sContext->snapVersion;
tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
} else {
pTaskInfo->streamInfo.prepareStatus.uid = mtInfo.uid;
pTaskInfo->streamInfo.prepareStatus.ts = INT64_MIN;
qDebug("tmqsnap change get data uid:%ld", mtInfo.uid);
qStreamPrepareScan(pTaskInfo, &pTaskInfo->streamInfo.prepareStatus, pInfo->sContext->subType);
strcpy(pTaskInfo->streamInfo.tbName, mtInfo.tbName);
tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
pTaskInfo->streamInfo.schema = mtInfo.schema;
}
qDebug("tmqsnap stream scan tsdb return null");
return NULL;
} else if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_META) {
SSnapContext* sContext = pInfo->sContext;
void* data = NULL;
int32_t dataLen = 0;
int16_t type = 0;
int64_t uid = 0;
if (getMetafromSnapShot(sContext, &data, &dataLen, &type, &uid) < 0) {
qError("tmqsnap getMetafromSnapShot error");
taosMemoryFreeClear(data);
return NULL;
}
if (!sContext->queryMetaOrData) { // change to get data next poll request
pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__SNAPSHOT_META;
pTaskInfo->streamInfo.lastStatus.uid = uid;
pTaskInfo->streamInfo.metaRsp.rspOffset.type = TMQ_OFFSET__SNAPSHOT_DATA;
pTaskInfo->streamInfo.metaRsp.rspOffset.uid = 0;
pTaskInfo->streamInfo.metaRsp.rspOffset.ts = INT64_MIN;
} else {
pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__SNAPSHOT_META;
pTaskInfo->streamInfo.lastStatus.uid = uid;
pTaskInfo->streamInfo.metaRsp.rspOffset = pTaskInfo->streamInfo.lastStatus;
......@@ -1631,44 +1654,44 @@ static SSDataBlock* doRawScan(SOperatorInfo* pOperator) {
return NULL;
}
// else if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) {
// int64_t fetchVer = pTaskInfo->streamInfo.prepareStatus.version + 1;
//
// while(1){
// if (tqFetchLog(pInfo->tqReader->pWalReader, pInfo->sContext->withMeta, &fetchVer, &pInfo->pCkHead) < 0) {
// qDebug("tmqsnap tmq poll: consumer log end. offset %" PRId64, fetchVer);
// pTaskInfo->streamInfo.lastStatus.version = fetchVer;
// pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
// return NULL;
// }
// SWalCont* pHead = &pInfo->pCkHead->head;
// qDebug("tmqsnap tmq poll: consumer log offset %" PRId64 " msgType %d", fetchVer, pHead->msgType);
//
// if (pHead->msgType == TDMT_VND_SUBMIT) {
// SSubmitReq* pCont = (SSubmitReq*)&pHead->body;
// tqReaderSetDataMsg(pInfo->tqReader, pCont, 0);
// SSDataBlock* block = tqLogScanExec(pInfo->sContext->subType, pInfo->tqReader, pInfo->pFilterOutTbUid,
// &pInfo->pRes); if(block){
// pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
// pTaskInfo->streamInfo.lastStatus.version = fetchVer;
// qDebug("tmqsnap fetch data msg, ver:%" PRId64 ", type:%d", pHead->version, pHead->msgType);
// return block;
// }else{
// fetchVer++;
// }
// } else{
// ASSERT(pInfo->sContext->withMeta);
// ASSERT(IS_META_MSG(pHead->msgType));
// qDebug("tmqsnap fetch meta msg, ver:%" PRId64 ", type:%d", pHead->version, pHead->msgType);
// pTaskInfo->streamInfo.metaRsp.rspOffset.version = fetchVer;
// pTaskInfo->streamInfo.metaRsp.rspOffset.type = TMQ_OFFSET__LOG;
// pTaskInfo->streamInfo.metaRsp.resMsgType = pHead->msgType;
// pTaskInfo->streamInfo.metaRsp.metaRspLen = pHead->bodyLen;
// pTaskInfo->streamInfo.metaRsp.metaRsp = taosMemoryMalloc(pHead->bodyLen);
// memcpy(pTaskInfo->streamInfo.metaRsp.metaRsp, pHead->body, pHead->bodyLen);
// return NULL;
// }
// }
return NULL;
}
......@@ -1689,7 +1712,7 @@ SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, SExecTaskInfo* pT
// create tq reader
SStreamRawScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamRawScanInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL) {
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
return NULL;
......@@ -1699,13 +1722,12 @@ SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, SExecTaskInfo* pT
pInfo->sContext = pHandle->sContext;
pOperator->name = "RawStreamScanOperator";
// pOperator->blocking = false;
// pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet = createOperatorFpSet(NULL, doRawScan, NULL, NULL, destroyRawScanOperatorInfo, NULL, NULL, NULL);
return pOperator;
}
......@@ -1724,7 +1746,7 @@ static void destroyStreamScanOperatorInfo(void* param) {
}
if (pStreamScan->pPseudoExpr) {
destroyExprInfo(pStreamScan->pPseudoExpr, pStreamScan->numOfPseudoExpr);
taosMemoryFree(pStreamScan->pPseudoExpr);
}
updateInfoDestroy(pStreamScan->pUpdateInfo);
......@@ -1815,6 +1837,7 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
pInfo->readHandle = *pHandle;
pInfo->tableUid = pScanPhyNode->uid;
pTaskInfo->streamInfo.snapshotVer = pHandle->version;
// set the extract column id to streamHandle
tqReaderSetColIdList(pInfo->tqReader, pColIds);
......@@ -1858,8 +1881,9 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
pOperator->exprSupp.numOfExprs = taosArrayGetSize(pInfo->pRes->pDataBlock);
pOperator->pTaskInfo = pTaskInfo;
__optr_fn_t nextFn = pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM ? doStreamScan : doQueueScan;
pOperator->fpSet =
createOperatorFpSet(operatorDummyOpenFn, nextFn, NULL, NULL, destroyStreamScanOperatorInfo, NULL, NULL, NULL);
return pOperator;
......
......@@ -101,6 +101,14 @@ bool fmIsBuiltinFunc(const char* pFunc) {
return NULL != taosHashGet(gFunMgtService.pFuncNameHashTable, pFunc, strlen(pFunc));
}
EFunctionType fmGetFuncType(const char* pFunc) {
void* pVal = taosHashGet(gFunMgtService.pFuncNameHashTable, pFunc, strlen(pFunc));
if (NULL != pVal) {
return funcMgtBuiltins[*(int32_t*)pVal].type;
}
return FUNCTION_TYPE_UDF;
}
EFuncDataRequired fmFuncDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow) {
if (fmIsUserDefinedFunc(pFunc->funcId) || pFunc->funcId < 0 || pFunc->funcId >= funcMgtBuiltinsNum) {
return FUNC_DATA_REQUIRED_DATA_LOAD;
......
......@@ -97,16 +97,23 @@ typedef struct SCollectMetaKeyCxt {
typedef struct SCollectMetaKeyFromExprCxt {
SCollectMetaKeyCxt* pComCxt;
bool hasLastRow;
int32_t errCode;
} SCollectMetaKeyFromExprCxt;
static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt);
static EDealRes collectMetaKeyFromFunction(SCollectMetaKeyFromExprCxt* pCxt, SFunctionNode* pFunc) {
switch (fmGetFuncType(pFunc->functionName)) {
case FUNCTION_TYPE_LAST_ROW:
pCxt->hasLastRow = true;
break;
case FUNCTION_TYPE_UDF:
pCxt->errCode = reserveUdfInCache(pFunc->functionName, pCxt->pComCxt->pMetaCache);
break;
default:
break;
}
return TSDB_CODE_SUCCESS == pCxt->errCode ? DEAL_RES_CONTINUE : DEAL_RES_ERROR;
}
......@@ -136,9 +143,6 @@ static int32_t collectMetaKeyFromRealTableImpl(SCollectMetaKeyCxt* pCxt, const c
if (TSDB_CODE_SUCCESS == code && (0 == strcmp(pTable, TSDB_INS_TABLE_DNODE_VARIABLES))) {
code = reserveDnodeRequiredInCache(pCxt->pMetaCache);
}
return code;
}
......@@ -185,9 +189,19 @@ static int32_t collectMetaKeyFromSetOperator(SCollectMetaKeyCxt* pCxt, SSetOpera
return code;
}
static int32_t reserveDbCfgForLastRow(SCollectMetaKeyCxt* pCxt, SNode* pTable) {
if (NULL == pTable || QUERY_NODE_REAL_TABLE != nodeType(pTable)) {
return TSDB_CODE_SUCCESS;
}
return reserveDbCfgInCache(pCxt->pParseCxt->acctId, ((SRealTableNode*)pTable)->table.dbName, pCxt->pMetaCache);
}
static int32_t collectMetaKeyFromSelect(SCollectMetaKeyCxt* pCxt, SSelectStmt* pStmt) {
SCollectMetaKeyFromExprCxt cxt = {.pComCxt = pCxt, .hasLastRow = false, .errCode = TSDB_CODE_SUCCESS};
nodesWalkSelectStmt(pStmt, SQL_CLAUSE_FROM, collectMetaKeyFromExprImpl, &cxt);
if (TSDB_CODE_SUCCESS == cxt.errCode && cxt.hasLastRow) {
cxt.errCode = reserveDbCfgForLastRow(pCxt, pStmt->pFromTable);
}
return cxt.errCode;
}
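Note: taken together, the hunks above defer the db-config lookup — the expression walk only records hasLastRow (and reserves UDF catalog entries), and reserveDbCfgForLastRow reserves the config just for a last_row query whose FROM clause is a real table. A minimal Python sketch of this two-pass idea; Func, Select, and collect_meta_keys are illustrative stand-ins, not the C API:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Func:
        name: str
        ftype: str  # "last_row", "udf", or another builtin type

    @dataclass
    class Select:
        funcs: List[Func]
        from_db: Optional[str]  # db of the FROM table; None if not a real table

    def collect_meta_keys(stmt: Select, meta_cache: dict) -> None:
        has_last_row = False
        for f in stmt.funcs:  # pass 1: walk the expression tree once
            if f.ftype == "last_row":
                has_last_row = True
            elif f.ftype == "udf":
                meta_cache.setdefault("udfs", set()).add(f.name)
        # pass 2: reserve the db config (cacheLast) only when a last_row
        # call actually reads from a real table, not for every real table
        if has_last_row and stmt.from_db is not None:
            meta_cache.setdefault("db_cfg", set()).add(stmt.from_db)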
......@@ -365,7 +379,7 @@ static int32_t collectMetaKeyFromShowStables(SCollectMetaKeyCxt* pCxt, SShowStmt
}
static int32_t collectMetaKeyFromShowStreams(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_STREAMS,
pCxt->pMetaCache);
}
......@@ -411,7 +425,7 @@ static int32_t collectMetaKeyFromShowVgroups(SCollectMetaKeyCxt* pCxt, SShowStmt
}
static int32_t collectMetaKeyFromShowTopics(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_TOPICS,
pCxt->pMetaCache);
}
......@@ -506,7 +520,7 @@ static int32_t collectMetaKeyFromShowBlockDist(SCollectMetaKeyCxt* pCxt, SShowTa
}
static int32_t collectMetaKeyFromShowSubscriptions(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_SUBSCRIPTIONS,
pCxt->pMetaCache);
}
......
......@@ -142,8 +142,8 @@ static const SSysTableShowAdapter sysTableShowAdapter[] = {
},
{
.showType = QUERY_NODE_SHOW_STREAMS_STMT,
.pDbName = TSDB_INFORMATION_SCHEMA_DB,
.pTableName = TSDB_INS_TABLE_STREAMS,
.numOfShowCols = 1,
.pShowCols = {"stream_name"}
},
......@@ -184,8 +184,8 @@ static const SSysTableShowAdapter sysTableShowAdapter[] = {
},
{
.showType = QUERY_NODE_SHOW_TOPICS_STMT,
.pDbName = TSDB_INFORMATION_SCHEMA_DB,
.pTableName = TSDB_INS_TABLE_TOPICS,
.numOfShowCols = 1,
.pShowCols = {"topic_name"}
},
......@@ -240,8 +240,8 @@ static const SSysTableShowAdapter sysTableShowAdapter[] = {
},
{
.showType = QUERY_NODE_SHOW_SUBSCRIPTIONS_STMT,
.pDbName = TSDB_INFORMATION_SCHEMA_DB,
.pTableName = TSDB_INS_TABLE_SUBSCRIPTIONS,
.numOfShowCols = 1,
.pShowCols = {"*"}
},
......@@ -2160,15 +2160,16 @@ static int32_t setTableIndex(STranslateContext* pCxt, SName* pName, SRealTableNo
return TSDB_CODE_SUCCESS;
}
static int32_t setTableCacheLastMode(STranslateContext* pCxt, SSelectStmt* pSelect) {
if (!pSelect->hasLastRowFunc || QUERY_NODE_REAL_TABLE != nodeType(pSelect->pFromTable)) {
return TSDB_CODE_SUCCESS;
}
SRealTableNode* pTable = (SRealTableNode*)pSelect->pFromTable;
SDbCfgInfo dbCfg = {0};
int32_t code = getDBCfg(pCxt, pTable->table.dbName, &dbCfg);
if (TSDB_CODE_SUCCESS == code) {
pTable->cacheLastMode = dbCfg.cacheLast;
}
return code;
}
......@@ -2192,9 +2193,6 @@ static int32_t translateTable(STranslateContext* pCxt, SNode* pTable) {
if (TSDB_CODE_SUCCESS == code) {
code = setTableIndex(pCxt, &name, pRealTable);
}
}
if (TSDB_CODE_SUCCESS == code) {
pRealTable->table.precision = pRealTable->pMeta->tableInfo.precision;
......@@ -2273,10 +2271,14 @@ static SNode* createMultiResFunc(SFunctionNode* pSrcFunc, SExprNode* pExpr) {
if (QUERY_NODE_COLUMN == nodeType(pExpr)) {
SColumnNode* pCol = (SColumnNode*)pExpr;
len = snprintf(buf, sizeof(buf), "%s(%s.%s)", pSrcFunc->functionName, pCol->tableAlias, pCol->colName);
strncpy(pFunc->node.aliasName, buf, TMIN(len, sizeof(pFunc->node.aliasName) - 1));
len = snprintf(buf, sizeof(buf), "%s(%s)", pSrcFunc->functionName, pCol->colName);
strncpy(pFunc->node.userAlias, buf, TMIN(len, sizeof(pFunc->node.userAlias) - 1));
} else {
len = snprintf(buf, sizeof(buf), "%s(%s)", pSrcFunc->functionName, pExpr->aliasName);
strncpy(pFunc->node.aliasName, buf, TMIN(len, sizeof(pFunc->node.aliasName) - 1));
strncpy(pFunc->node.userAlias, buf, TMIN(len, sizeof(pFunc->node.userAlias) - 1));
}
return (SNode*)pFunc;
}
......@@ -3140,6 +3142,9 @@ static int32_t translateSelectFrom(STranslateContext* pCxt, SSelectStmt* pSelect
if (TSDB_CODE_SUCCESS == code) {
code = replaceOrderByAliasForSelect(pCxt, pSelect);
}
if (TSDB_CODE_SUCCESS == code) {
code = setTableCacheLastMode(pCxt, pSelect);
}
return code;
}
......
......@@ -137,7 +137,7 @@ void generatePerformanceSchema(MockCatalogService* mcs) {
}
{
ITableBuilder& builder =
mcs->createTableBuilder(TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_STREAMS, TSDB_SYSTEM_TABLE, 1)
.addColumn("stream_name", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN);
builder.done();
}
......@@ -149,7 +149,7 @@ void generatePerformanceSchema(MockCatalogService* mcs) {
}
{
ITableBuilder& builder =
mcs->createTableBuilder(TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_SUBSCRIPTIONS, TSDB_SYSTEM_TABLE, 1)
.addColumn("stream_name", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN);
builder.done();
}
......
char version[64] = "${TD_VER_NUMBER}";
char compatible_version[12] = "${TD_VER_COMPATIBLE}";
char gitinfo[48] = "${TD_VER_GIT}";
char buildinfo[64] = "Built at ${TD_VER_DATE}";
......
......@@ -199,22 +199,22 @@ class TDCom:
res = requests.post(url, sql.encode("utf-8"), headers = self.preDefine()[0])
return res
def cleanTb(self, type="taosc"):
def cleanTb(self, type="taosc", dbname="db"):
'''
type is taosc or restful; dbname names the database whose supertables are dropped
'''
query_sql = "show stables"
query_sql = f"show {dbname}.stables"
res_row_list = tdSql.query(query_sql, True)
stb_list = map(lambda x: x[0], res_row_list)
for stb in stb_list:
if type == "taosc":
tdSql.execute(f'drop table if exists {dbname}.`{stb}`')
if not stb[0].isdigit():
tdSql.execute(f'drop table if exists {dbname}.{stb}')
elif type == "restful":
self.restApiPost(f"drop table if exists `{stb}`")
self.restApiPost(f"drop table if exists {dbname}.`{stb}`")
if not stb[0].isdigit():
self.restApiPost(f"drop table if exists {stb}")
self.restApiPost(f"drop table if exists {dbname}.{stb}")
def dateToTs(self, datetime_input):
return int(time.mktime(time.strptime(datetime_input, "%Y-%m-%d %H:%M:%S.%f")))
......
......@@ -36,9 +36,9 @@ class TDSimClient:
"rpcDebugFlag": "143",
"tmrDebugFlag": "131",
"cDebugFlag": "143",
"udebugFlag": "143",
"jnidebugFlag": "143",
"qdebugFlag": "143",
"uDebugFlag": "143",
"jniDebugFlag": "143",
"qDebugFlag": "143",
"supportVnodes": "1024",
"telemetryReporting": "0",
}
......@@ -134,7 +134,6 @@ class TDDnode:
"uDebugFlag": "131",
"sDebugFlag": "143",
"wDebugFlag": "143",
"qdebugFlag": "143",
"numOfLogLines": "100000000",
"statusInterval": "1",
"supportVnodes": "1024",
......@@ -484,7 +483,7 @@ class TDDnode:
psCmd = "ps -ef|grep -w %s| grep -v grep | awk '{print $2}'" % toBeKilled
processID = subprocess.check_output(
psCmd, shell=True).decode("utf-8")
onlyKillOnceWindows = 0
while(processID):
if not platform.system().lower() == 'windows' or (onlyKillOnceWindows == 0 and platform.system().lower() == 'windows'):
......
......@@ -102,7 +102,7 @@ class TDSql:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, sql, repr(e))
tdLog.notice("%s(%d) failed: sql:%s, %s" % args)
raise Exception(repr(e))
i+=1
time.sleep(1)
pass
......@@ -225,25 +225,21 @@ class TDSql:
# suppose user want to check nanosecond timestamp if a longer data passed
if (len(data) >= 28):
if pd.to_datetime(self.queryResult[row][col]) == pd.to_datetime(data):
tdLog.info("sql:%s, row:%d col:%d data:%d == expect:%s" %
(self.sql, row, col, self.queryResult[row][col], data))
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
else:
if self.queryResult[row][col] == _parse_datetime(data):
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
(self.sql, row, col, self.queryResult[row][col], data))
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
return
if str(self.queryResult[row][col]) == str(data):
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
(self.sql, row, col, self.queryResult[row][col], data))
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
return
elif isinstance(data, float):
if abs(data) >= 1 and abs((self.queryResult[row][col] - data) / data) <= 0.000001:
tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
(self.sql, row, col, self.queryResult[row][col], data))
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
elif abs(data) < 1 and abs(self.queryResult[row][col] - data) <= 0.000001:
tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
(self.sql, row, col, self.queryResult[row][col], data))
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
......@@ -254,21 +250,7 @@ class TDSql:
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
def getData(self, row, col):
self.checkRowCol(row, col)
......@@ -307,7 +289,7 @@ class TDSql:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, sql, repr(e))
tdLog.notice("%s(%d) failed: sql:%s, %s" % args)
raise Exception(repr(e))
i+=1
time.sleep(1)
pass
......@@ -329,7 +311,7 @@ class TDSql:
tdLog.exit("%s(%d) failed: sql:%s, col_name_list:%s != expect_col_name_list:%s" % args)
def __check_equal(self, elm, expect_elm):
if elm == expect_elm:
return True
if type(elm) in(list, tuple) and type(expect_elm) in(list, tuple):
if len(elm) != len(expect_elm):
......
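Note: the old guard `not type(elm) in (list, tuple)` kept two equal sequences from matching on the fast path; the simplified version accepts any directly equal pair first and only recurses element-wise for sequences. A self-contained sketch of the intended comparison, assuming the truncated tail mirrors this logic:

    def check_equal(elm, expect_elm):
        if elm == expect_elm:
            return True  # fast path, now also covers equal sequences
        if type(elm) in (list, tuple) and type(expect_elm) in (list, tuple):
            if len(elm) != len(expect_elm):
                return False
            return all(check_equal(a, b) for a, b in zip(elm, expect_elm))
        return False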
......@@ -163,6 +163,9 @@ sql select * from information_schema.ins_stables
sql select * from information_schema.ins_tables
sql select * from information_schema.ins_tags
sql select * from information_schema.ins_users
sql select * from information_schema.ins_topics
sql select * from information_schema.ins_subscriptions
sql select * from information_schema.ins_streams
sql_error select * from information_schema.ins_grants
sql_error select * from information_schema.ins_vgroups
sql_error select * from information_schema.ins_configs
......@@ -172,11 +175,8 @@ print =============== check performance_schema
sql use performance_schema;
sql select * from performance_schema.perf_connections
sql select * from performance_schema.perf_queries
sql select * from performance_schema.perf_consumers
sql select * from performance_schema.perf_trans
sql select * from performance_schema.perf_apps
#system sh/exec.sh -n dnode1 -s stop -x SIGINT
......@@ -31,7 +31,7 @@ if platform.system().lower() == 'windows':
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), False)
self._conn = conn
def createDb(self, name="test", db_update_tag=0):
......@@ -357,7 +357,7 @@ class TDTestCase:
"""
normal tags and cols, one for every elm
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
......@@ -365,7 +365,7 @@ class TDTestCase:
"""
check all normal type
"""
tdCom.cleanTb(dbname="test")
full_type_list = ["f", "F", "false", "False", "t", "T", "true", "True"]
for t_type in full_type_list:
input_sql, stb_name = self.genFullTypeSql(c0=t_type, t0=t_type)
......@@ -379,7 +379,7 @@ class TDTestCase:
please test :
binary_symbols = '\"abcd`~!@#$%^&*()_-{[}]|:;<.>?lfjal"\'\'"\"'
'''
tdCom.cleanTb(dbname="test")
binary_symbols = '"abcd`~!@#$%^&*()_-{[}]|:;<.>?lfjal"'
nchar_symbols = f'L{binary_symbols}'
input_sql, stb_name = self.genFullTypeSql(c7=binary_symbols, c8=nchar_symbols, t7=binary_symbols, t8=nchar_symbols)
......@@ -390,7 +390,7 @@ class TDTestCase:
test ts list --> ["1626006833639000000", "1626006833639019us", "1626006833640ms", "1626006834s", "1626006822639022"]
# ! When a microsecond-precision timestamp is all zeros, the database shows the value in queries, but the result returned by the Python connector omits the .000000 part; please confirm this case — the current change to the time-handling code passes.
"""
tdCom.cleanTb(dbname="test")
ts_list = ["1626006833639000000", "1626006833639019us", "1626006833640ms", "1626006834s", "1626006822639022", 0]
for ts in ts_list:
input_sql, stb_name = self.genFullTypeSql(ts=ts)
......@@ -401,7 +401,7 @@ class TDTestCase:
check id.index in tags
eg: t0=**,id=**,t1=**
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql(id_change_tag=True)
self.resCmp(input_sql, stb_name)
......@@ -410,7 +410,7 @@ class TDTestCase:
check id param
eg: id and ID
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql(id_upper_tag=True)
self.resCmp(input_sql, stb_name)
input_sql, stb_name = self.genFullTypeSql(id_change_tag=True, id_upper_tag=True)
......@@ -420,7 +420,7 @@ class TDTestCase:
"""
id not exist
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql(id_noexist_tag=True)
self.resCmp(input_sql, stb_name)
query_sql = f"select tbname from {stb_name}"
......@@ -436,10 +436,10 @@ class TDTestCase:
max col count is ??
"""
for input_sql in [self.genLongSql(127, 1)[0], self.genLongSql(1, 4093)[0]]:
tdCom.cleanTb(dbname="test")
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
for input_sql in [self.genLongSql(129, 1)[0], self.genLongSql(1, 4095)[0]]:
tdCom.cleanTb(dbname="test")
try:
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
except SchemalessError as err:
......@@ -450,7 +450,7 @@ class TDTestCase:
test illegal id name
mix "~!@#$¥%^&*()-+|[]、「」【】;:《》<>?"
"""
tdCom.cleanTb(dbname="test")
rstr = list("~!@#$¥%^&*()-+|[]、「」【】;:《》<>?")
for i in rstr:
stb_name=f"aaa{i}bbb"
......@@ -462,7 +462,7 @@ class TDTestCase:
"""
id is start with num
"""
tdCom.cleanTb(dbname="test")
input_sql = self.genFullTypeSql(tb_name=f"\"1aaabbb\"")[0]
try:
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -473,7 +473,7 @@ class TDTestCase:
"""
check now unsupported
"""
tdCom.cleanTb(dbname="test")
input_sql = self.genFullTypeSql(ts="now")[0]
try:
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -484,7 +484,7 @@ class TDTestCase:
"""
check date format ts unsupported
"""
tdCom.cleanTb(dbname="test")
input_sql = self.genFullTypeSql(ts="2021-07-21\ 19:01:46.920")[0]
try:
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -495,7 +495,7 @@ class TDTestCase:
"""
check ts format like 16260068336390us19
"""
tdCom.cleanTb(dbname="test")
input_sql = self.genFullTypeSql(ts="16260068336390us19")[0]
try:
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -506,7 +506,7 @@ class TDTestCase:
"""
check full type tag value limit
"""
tdCom.cleanTb(dbname="test")
# i8
for t1 in ["-128i8", "127i8"]:
input_sql, stb_name = self.genFullTypeSql(t1=t1)
......@@ -602,7 +602,7 @@ class TDTestCase:
"""
check full type col value limit
"""
tdCom.cleanTb(dbname="test")
# i8
for c1 in ["-128i8", "127i8"]:
input_sql, stb_name = self.genFullTypeSql(c1=c1)
......@@ -699,7 +699,7 @@ class TDTestCase:
"""
test illegal tag col value
"""
tdCom.cleanTb(dbname="test")
# bool
for i in ["TrUe", "tRue", "trUe", "truE", "FalsE", "fAlse", "faLse", "falSe", "falsE"]:
input_sql1 = self.genFullTypeSql(t0=i)[0]
......@@ -758,7 +758,7 @@ class TDTestCase:
"""
check duplicate Id Tag Col
"""
tdCom.cleanTb(dbname="test")
input_sql_id = self.genFullTypeSql(id_double_tag=True)[0]
try:
self._conn.schemaless_insert([input_sql_id], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -792,7 +792,7 @@ class TDTestCase:
"""
case no id when stb exist
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql(tb_name="sub_table_0123456", t0="f", c0="f")
self.resCmp(input_sql, stb_name)
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, id_noexist_tag=True, t0="f", c0="f")
......@@ -805,7 +805,7 @@ class TDTestCase:
"""
check duplicate insert when stb exist
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
......@@ -816,7 +816,7 @@ class TDTestCase:
"""
check length increase
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
tb_name = tdCom.getLongName(5, "letters")
......@@ -833,7 +833,7 @@ class TDTestCase:
* col is added without value when update==0
* col is added with value when update==1
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
for db_update_tag in [0, 1]:
if db_update_tag == 1 :
......@@ -850,7 +850,7 @@ class TDTestCase:
"""
check column and tag count add
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, t0="f", c0="f")
self.resCmp(input_sql, stb_name)
......@@ -866,7 +866,7 @@ class TDTestCase:
condition: stb not change
insert two table, keep tag unchange, change col
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql(t0="f", c0="f", id_noexist_tag=True)
self.resCmp(input_sql, stb_name)
tb_name1 = self.getNoIdTbName(stb_name)
......@@ -888,7 +888,7 @@ class TDTestCase:
"""
every binary and nchar must be length+2
"""
tdCom.cleanTb(dbname="test")
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000'
......@@ -928,7 +928,7 @@ class TDTestCase:
"""
check nchar length limit
"""
tdCom.cleanTb(dbname="test")
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000'
......@@ -963,7 +963,7 @@ class TDTestCase:
"""
test batch insert
"""
tdCom.cleanTb(dbname="test")
stb_name = tdCom.getLongName(8, "letters")
# tdSql.execute(f'create stable {stb_name}(ts timestamp, f int) tags(t1 bigint)')
lines = ["st123456,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000",
......@@ -982,7 +982,7 @@ class TDTestCase:
"""
test multi insert
"""
tdCom.cleanTb(dbname="test")
sql_list = []
stb_name = tdCom.getLongName(8, "letters")
# tdSql.execute(f'create stable {stb_name}(ts timestamp, f int) tags(t1 bigint)')
......@@ -996,7 +996,7 @@ class TDTestCase:
"""
test batch error insert
"""
tdCom.cleanTb(dbname="test")
stb_name = tdCom.getLongName(8, "letters")
lines = ["st123456,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000",
f"{stb_name},t2=5f64,t3=L\"ste\" c1=tRue,c2=4i64,c3=\"iam\" 1626056811823316532ns"]
......@@ -1068,7 +1068,7 @@ class TDTestCase:
"""
thread input different stb
"""
tdCom.cleanTb(dbname="test")
input_sql = self.genSqlList()[0]
self.multiThreadRun(self.genMultiThreadSeq(input_sql))
tdSql.query(f"show tables;")
......@@ -1078,7 +1078,7 @@ class TDTestCase:
"""
thread input same stb tb, different data, result keep first data
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1095,7 +1095,7 @@ class TDTestCase:
"""
thread input same stb tb, different data, add columes and tags, result keep first data
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1112,7 +1112,7 @@ class TDTestCase:
"""
thread input same stb tb, different data, minus columes and tags, result keep first data
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1129,7 +1129,7 @@ class TDTestCase:
"""
thread input same stb, different tb, different data
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
s_stb_d_tb_list = self.genSqlList(stb_name=stb_name)[4]
......@@ -1144,7 +1144,7 @@ class TDTestCase:
"""
thread input same stb, different tb, different data, add col, mul tag
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
s_stb_d_tb_a_col_m_tag_list = self.genSqlList(stb_name=stb_name)[5]
......@@ -1159,7 +1159,7 @@ class TDTestCase:
"""
thread input same stb, different tb, different data, add tag, mul col
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
s_stb_d_tb_a_tag_m_col_list = self.genSqlList(stb_name=stb_name)[6]
......@@ -1171,7 +1171,7 @@ class TDTestCase:
"""
thread input same stb tb, different ts
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1186,7 +1186,7 @@ class TDTestCase:
"""
thread input same stb tb, different ts, add col, mul tag
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1205,7 +1205,7 @@ class TDTestCase:
"""
thread input same stb tb, different ts, add tag, mul col
"""
tdCom.cleanTb(dbname="test")
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
self.resCmp(input_sql, stb_name)
......@@ -1226,7 +1226,7 @@ class TDTestCase:
"""
thread input same stb, different tb, data, ts
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
s_stb_d_tb_d_ts_list = self.genSqlList(stb_name=stb_name)[10]
......@@ -1241,7 +1241,7 @@ class TDTestCase:
"""
thread input same stb, different tb, data, ts, add col, mul tag
"""
tdCom.cleanTb(dbname="test")
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
s_stb_d_tb_d_ts_a_col_m_tag_list = self.genSqlList(stb_name=stb_name)[11]
......
......@@ -193,43 +193,38 @@ class TDTestCase:
# case17: only support normal table join
case17 = {
"col": "t1.c1",
"table_expr": "t1, t2",
"condition": "where t1.ts=t2.ts"
"col": "table1.c1 ",
"table_expr": "db.t1 as table1, db.t2 as table2",
"condition": "where table1.ts=table2.ts"
}
self.checkdiff(**case17)
# case18~19: with group by , function diff not support group by
case19 = {
"table_expr": "db.stb1",
"table_expr": "db.stb1 where tbname =='t0' ",
"condition": "partition by tbname order by tbname" # partition by tbname
}
self.checkdiff(**case19)
# case20~21: with order by , Not a single-group group function
# case22: with union
# case22 = {
# "condition": "union all select diff(c1) from t2"
# "condition": "union all select diff(c1) from db.t2 "
# }
# self.checkdiff(**case22)
tdSql.query("select count(c1) from db.t1 union all select count(c1) from db.t2")
# case23: with limit/slimit
case23 = {
"condition": "limit 1"
}
self.checkdiff(**case23)
case24 = {
"table_expr": "db.stb1",
"condition": "partition by tbname order by tbname slimit 1 soffset 1"
}
self.checkdiff(**case24)
pass
......@@ -284,9 +279,9 @@ class TDTestCase:
tdSql.query(self.diff_query_form(alias=", c2")) # mix with other 1
# tdSql.error(self.diff_query_form(table_expr="db.stb1")) # select stb directly
stb_join = {
"col": "stb1.c1",
"table_expr": "stb1, stb2",
"condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
"col": "stable1.c1",
"table_expr": "db.stb1 as stable1, db.stb2 as stable2",
"condition": "where stable1.ts=stable2.ts and stable1.st1=stable2.st2 order by stable1.ts"
}
tdSql.query(self.diff_query_form(**stb_join)) # stb join
interval_sql = {
......@@ -315,20 +310,20 @@ class TDTestCase:
for i in range(tbnum):
for j in range(data_row):
tdSql.execute(
f"insert into t{i} values ("
f"insert into db.t{i} values ("
f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(200, -1)}, {basetime + random.randint(-200, -1)}, "
f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
)
tdSql.execute(
f"insert into t{i} values ("
f"insert into db.t{i} values ("
f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
)
tdSql.execute(
f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
f"insert into db.tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
)
pass
......@@ -349,8 +344,8 @@ class TDTestCase:
"create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
)
for i in range(tbnum):
tdSql.execute(f"create table t{i} using db.stb1 tags({i})")
tdSql.execute(f"create table tt{i} using db.stb2 tags({i})")
tdSql.execute(f"create table db.t{i} using db.stb1 tags({i})")
tdSql.execute(f"create table db.tt{i} using db.stb2 tags({i})")
pass
def diff_support_stable(self):
......@@ -398,8 +393,8 @@ class TDTestCase:
tdLog.printNoPrefix("######## insert only NULL test:")
for i in range(tbnum):
tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
tdSql.execute(f"insert into db.t{i}(ts) values ({nowtime - 5})")
tdSql.execute(f"insert into db.t{i}(ts) values ({nowtime + 5})")
self.diff_current_query()
self.diff_error_query()
......@@ -430,9 +425,9 @@ class TDTestCase:
tdLog.printNoPrefix("######## insert data mix with NULL test:")
for i in range(tbnum):
tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
tdSql.execute(f"insert into db.t{i}(ts) values ({nowtime})")
tdSql.execute(f"insert into db.t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
tdSql.execute(f"insert into db.t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
self.diff_current_query()
self.diff_error_query()
......
......@@ -52,12 +52,12 @@ class TDTestCase:
return query_condition
def __join_condition(self, tb_list, filter=PRIMARY_COL, INNER=False, alias_tb1="tb1", alias_tb2="tb2"):
table_reference = tb_list[0]
join_condition = table_reference
join = "inner join" if INNER else "join"
for i in range(len(tb_list[1:])):
join_condition += f" {join} {tb_list[i+1]} on {table_reference}.{filter}={tb_list[i+1]}.{filter}"
join_condition += f" as {alias_tb1} {join} {tb_list[i+1]} as {alias_tb2} on {alias_tb1}.{filter}={alias_tb2}.{filter}"
return join_condition
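Note: each joined table now gets an explicit alias, so the ON clause references alias.column instead of the raw table name. For example (illustrative values):

    cond = self.__join_condition(["db.t1", "db.t2"], filter="ts")
    # cond == "db.t1 as tb1 join db.t2 as tb2 on tb1.ts=tb2.ts"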
......@@ -123,28 +123,28 @@ class TDTestCase:
sqls = []
__join_tblist = self.__join_tblist
for join_tblist in __join_tblist:
alias_tb = "tb1"
select_claus_list = self.__query_condition(alias_tb)
for select_claus in select_claus_list:
group_claus = self.__group_condition( col=select_claus)
where_claus = self.__where_condition( query_conditon=select_claus )
having_claus = self.__group_condition( col=select_claus, having=f"{select_claus} is not null" )
sqls.extend(
(
# self.__gen_sql(select_claus, self.__join_condition(join_tblist), where_claus, group_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), where_claus, having_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), where_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist), group_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), having_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb)),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True), where_claus, group_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb, INNER=True), where_claus, having_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb, INNER=True), where_claus, ),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb, INNER=True), having_claus ),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True), group_claus ),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb, INNER=True) ),
)
)
return list(filter(None, sqls))
def __join_check(self,):
......@@ -341,10 +341,8 @@ class TDTestCase:
tdLog.printNoPrefix("==========step3:all check")
self.all_test()
tdSql.execute(f"flush database db")
tdSql.execute("use db")
tdLog.printNoPrefix("==========step4:after wal, all check again ")
self.all_test()
......
......@@ -65,96 +65,8 @@ class TDTestCase:
'''
)
def check_result_auto_log(self ,base , origin_query , log_query):
log_result = tdSql.getResult(log_query)
origin_result = tdSql.getResult(origin_query)
auto_result =[]
for row in origin_result:
row_check = []
for elem in row:
if base ==1:
elem = None
else:
if elem == None:
elem = None
elif elem ==1:
elem = 0.0
elif elem >0 and elem !=1 :
if base==None :
elem = math.log(elem )
else:
print(base , elem)
elem = math.log(elem , base)
elif elem <=0:
elem = None
row_check.append(elem)
auto_result.append(row_check)
tdSql.query(log_query)
for row_index , row in enumerate(log_result):
for col_index , elem in enumerate(row):
tdSql.checkData(row_index , col_index ,auto_result[row_index][col_index])
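Note: the four near-duplicate checkers (check_result_auto_log, _log2, _log1, _log__10) collapse into one function keyed by base, and per-cell verification now goes through tdSql.checkData. The expected-value rule it applies, as a hedged standalone sketch (assumes base, when given, is a positive number other than 1, matching the updated call sites):

    import math

    def expected_log(value, base=None):
        if base == 1 or value is None or value <= 0:
            return None  # undefined input maps to NULL in the result set
        if value == 1:
            return 0.0  # log of 1 is 0 for any base
        return math.log(value) if base is None else math.log(value, base)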
def test_errors(self, dbname="db"):
error_sql_lists = [
f"select log from {dbname}.t1",
......@@ -328,10 +244,10 @@ class TDTestCase:
tdSql.checkData(3 , 0, 1.098612289)
tdSql.checkData(4 , 0, 1.386294361)
self.check_result_auto_log( f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) from {dbname}.t1")
self.check_result_auto_log2( f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,2), log(c2 ,2) ,log(c3, 2), log(c4 ,2), log(c5 ,2) from {dbname}.t1")
self.check_result_auto_log__10( f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,1), log(c2 ,1) ,log(c3, 1), log(c4 ,1), log(c5 ,1) from {dbname}.t1")
self.check_result_auto_log__10( f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,-10), log(c2 ,-10) ,log(c3, -10), log(c4 ,-10), log(c5 ,-10) from {dbname}.t1")
self.check_result_auto_log( None , f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) from {dbname}.t1")
self.check_result_auto_log( 2 , f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,2), log(c2 ,2) ,log(c3, 2), log(c4 ,2), log(c5 ,2) from {dbname}.t1")
self.check_result_auto_log( 1, f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,1), log(c2 ,1) ,log(c3, 1), log(c4 ,1), log(c5 ,1) from {dbname}.t1")
self.check_result_auto_log( 10 ,f"select c1, c2, c3 , c4, c5 from {dbname}.t1", f"select log(c1 ,10), log(c2 ,10) ,log(c3, 10), log(c4 ,10), log(c5 ,10) from {dbname}.t1")
# used for sub table
tdSql.query(f"select c1 ,log(c1 ,3) from {dbname}.ct1")
......@@ -349,9 +265,9 @@ class TDTestCase:
tdSql.checkData(3 , 2, 0.147315235)
tdSql.checkData(4 , 2, None)
self.check_result_auto_log( f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) from {dbname}.ct1")
self.check_result_auto_log2( f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c4,2), log(c5,2) from {dbname}.ct1")
self.check_result_auto_log__10( f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1,-10), log(c2,-10) ,log(c3,-10), log(c4,-10), log(c5,-10) from {dbname}.ct1")
self.check_result_auto_log( None ,f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) from {dbname}.ct1")
self.check_result_auto_log( 2, f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c4,2), log(c5,2) from {dbname}.ct1")
self.check_result_auto_log( 10 , f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select log(c1,10), log(c2,10) ,log(c3,10), log(c4,10), log(c5,10) from {dbname}.ct1")
# nest query for log functions
tdSql.query(f"select c1 , log(c1,3) ,log(log(c1,3),3) , log(log(log(c1,3),3),3) from {dbname}.ct1;")
......@@ -585,15 +501,15 @@ class TDTestCase:
tdSql.error(
f"insert into {dbname}.sub1_bound values ( now()+1s, 2147483648, 9223372036854775808, 32768, 128, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
self.check_result_auto_log( f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) ,log(c6) from {dbname}.sub1_bound")
self.check_result_auto_log2( f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c4,2), log(c5,2) ,log(c6,2) from {dbname}.sub1_bound")
self.check_result_auto_log__10( f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1,-10), log(c2,-10) ,log(c3,-10), log(c4,-10), log(c5,-10) ,log(c6,-10) from {dbname}.sub1_bound")
self.check_result_auto_log(None , f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1), log(c2) ,log(c3), log(c4), log(c5) ,log(c6) from {dbname}.sub1_bound")
self.check_result_auto_log( 2 , f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c4,2), log(c5,2) ,log(c6,2) from {dbname}.sub1_bound")
self.check_result_auto_log( 10 , f"select c1, c2, c3 , c4, c5 ,c6 from {dbname}.sub1_bound ", f"select log(c1,10), log(c2,10) ,log(c3,10), log(c4,10), log(c5,10) ,log(c6,10) from {dbname}.sub1_bound")
self.check_result_auto_log2( f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c3,2), log(c2,2) ,log(c1,2) from {dbname}.sub1_bound")
self.check_result_auto_log( f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select log(c1), log(c2) ,log(c3), log(c3), log(c2) ,log(c1) from {dbname}.sub1_bound")
self.check_result_auto_log( 2 , f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select log(c1,2), log(c2,2) ,log(c3,2), log(c3,2), log(c2,2) ,log(c1,2) from {dbname}.sub1_bound")
self.check_result_auto_log( None , f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select log(c1), log(c2) ,log(c3), log(c3), log(c2) ,log(c1) from {dbname}.sub1_bound")
self.check_result_auto_log2(f"select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from {dbname}.sub1_bound" , f"select log(abs(c1) ,2) from {dbname}.sub1_bound" )
self.check_result_auto_log(2 , f"select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from {dbname}.sub1_bound" , f"select log(abs(c1) ,2) from {dbname}.sub1_bound" )
# check basic elem for table per row
tdSql.query(f"select log(abs(c1),2) ,log(abs(c2),2) , log(abs(c3),2) , log(abs(c4),2), log(abs(c5),2), log(abs(c6),2) from {dbname}.sub1_bound ")
......@@ -647,15 +563,15 @@ class TDTestCase:
def support_super_table_test(self, dbname="db"):
self.check_result_auto_log2( f"select c5 from {dbname}.stb1 order by ts " , f"select log(c5,2) from {dbname}.stb1 order by ts" )
self.check_result_auto_log2( f"select c5 from {dbname}.stb1 order by tbname " , f"select log(c5,2) from {dbname}.stb1 order by tbname" )
self.check_result_auto_log2( f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log2( f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log( 2 , f"select c5 from {dbname}.stb1 order by ts " , f"select log(c5,2) from {dbname}.stb1 order by ts" )
self.check_result_auto_log( 2 ,f"select c5 from {dbname}.stb1 order by tbname " , f"select log(c5,2) from {dbname}.stb1 order by tbname" )
self.check_result_auto_log( 2 ,f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log( 2 , f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log2( f"select t1,c5 from {dbname}.stb1 order by ts " , f"select log(t1,2), log(c5,2) from {dbname}.stb1 order by ts" )
self.check_result_auto_log2( f"select t1,c5 from {dbname}.stb1 order by tbname " , f"select log(t1,2) ,log(c5,2) from {dbname}.stb1 order by tbname" )
self.check_result_auto_log2( f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(t1,2) ,log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log2( f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(t1,2) , log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log( 2 , f"select t1,c5 from {dbname}.stb1 order by ts " , f"select log(t1,2), log(c5,2) from {dbname}.stb1 order by ts" )
self.check_result_auto_log( 2 , f"select t1,c5 from {dbname}.stb1 order by tbname " , f"select log(t1,2) ,log(c5,2) from {dbname}.stb1 order by tbname" )
self.check_result_auto_log( 2 , f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(t1,2) ,log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_log( 2 ,f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select log(t1,2) , log(c5,2) from {dbname}.stb1 where c1 > 0 order by tbname" )
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
tdSql.prepare()
......
......@@ -96,16 +96,16 @@ class TDTestCase:
return sqls
def __test_current(self, dbname="db"):
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
tbname = ["ct1", "ct2", "ct4", "t1", "stb1"]
tbname = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.t1", f"{dbname}.stb1"]
for tb in tbname:
self.__lower_current_check(tb)
tdLog.printNoPrefix(f"==========current sql condition check in {tb} over==========")
def __test_error(self, dbname="db"):
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
tbname = ["ct1", "ct2", "ct4", "t1", "stb1"]
tbname = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.t1", f"{dbname}.stb1"]
for tb in tbname:
for errsql in self.__lower_err_check(tb):
......@@ -113,22 +113,20 @@ class TDTestCase:
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
def all_test(self, dbname="db"):
self.__test_current(dbname)
self.__test_error(dbname)
def __create_tb(self, dbname="db"):
tdLog.printNoPrefix("==========step1:create table")
create_stb_sql = f'''create table {dbname}.stb1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
) tags (tag1 int)
'''
create_ntb_sql = f'''create table {dbname}.t1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
......@@ -138,78 +136,78 @@ class TDTestCase:
tdSql.execute(create_ntb_sql)
for i in range(4):
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
def __insert_data(self, rows, dbname="db"):
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
for i in range(rows):
tdSql.execute(
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f'''insert into {dbname}.ct1 values
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
'''
)
tdSql.execute(
f'''insert into {dbname}.ct4 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
(
{ now_time + 5184000000}, {pow(2,31)-pow(2,15)}, {pow(2,63)-pow(2,30)}, 32767, 127,
{ 3.3 * pow(10,38) }, { 1.3 * pow(10,308) }, { rows % 2 }, "binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000}
)
(
{ now_time + 2592000000 }, {pow(2,31)-pow(2,16)}, {pow(2,63)-pow(2,31)}, 32766, 126,
{ 3.2 * pow(10,38) }, { 1.2 * pow(10,308) }, { (rows-1) % 2 }, "binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000}
)
'''
)
tdSql.execute(
f'''insert into {dbname}.ct2 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
(
{ now_time + 5184000000 }, { -1 * pow(2,31) + pow(2,15) }, { -1 * pow(2,63) + pow(2,30) }, -32766, -126,
{ -1 * 3.2 * pow(10,38) }, { -1.2 * pow(10,308) }, { rows % 2 }, "binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000 }
)
(
{ now_time + 2592000000 }, { -1 * pow(2,31) + pow(2,16) }, { -1 * pow(2,63) + pow(2,31) }, -32767, -127,
{ - 3.3 * pow(10,38) }, { -1.3 * pow(10,308) }, { (rows-1) % 2 }, "binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000 }
)
'''
)
for i in range(rows):
insert_data = f'''insert into {dbname}.t1 values
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
"binary_{i}", "nchar_{i}", { now_time - 1000 * i } )
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
'''
tdSql.execute(insert_data)
tdSql.execute(
f'''insert into {dbname}.t1 values
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7200000 }, { pow(2,31) - pow(2,15) }, { pow(2,63) - pow(2,30) }, 32767, 127,
{ 3.3 * pow(10,38) }, { 1.3 * pow(10,308) }, { rows % 2 },
"binary_limit-1", "nchar_limit-1", { now_time - 86400000 }
"binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000 }
)
(
{ now_time + 3600000 } , { pow(2,31) - pow(2,16) }, { pow(2,63) - pow(2,31) }, 32766, 126,
{ 3.2 * pow(10,38) }, { 1.2 * pow(10,308) }, { (rows-1) % 2 },
"binary_limit-2", "nchar_limit-2", { now_time - 172800000 }
"binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000 }
)
'''
)
......@@ -227,10 +225,7 @@ class TDTestCase:
tdLog.printNoPrefix("==========step3:all check")
self.all_test()
tdSql.execute("use db")
tdSql.execute("flush database db")
tdLog.printNoPrefix("==========step4:after wal, all check again ")
self.all_test()
......
......@@ -23,6 +23,7 @@ CHAR_COL = [ BINARY_COL, NCHAR_COL, ]
BOOLEAN_COL = [ BOOL_COL, ]
TS_TYPE_COL = [ TS_COL, ]
DBNAME = "db"
class TDTestCase:
......@@ -120,16 +121,16 @@ class TDTestCase:
return sqls
def __test_current(self, dbname=DBNAME): # sourcery skip: use-itertools-product
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
tbname = ["ct1", "ct2", "ct4", "t1", "stb1"]
tbname = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.t1", f"{dbname}.stb1"]
for tb in tbname:
self.__ltrim_check(tb)
tdLog.printNoPrefix(f"==========current sql condition check in {tb} over==========")
def __test_error(self):
def __test_error(self, dbname=DBNAME):
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
tbname = ["ct1", "ct2", "ct4", "t1", "stb1"]
tbname = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.t1", f"{dbname}.stb1"]
for tb in tbname:
for errsql in self.__ltrim_err_check(tb):
......@@ -142,17 +143,16 @@ class TDTestCase:
self.__test_error()
def __create_tb(self):
tdSql.prepare()
def __create_tb(self, dbname=DBNAME):
tdLog.printNoPrefix("==========step1:create table")
create_stb_sql = f'''create table stb1(
create_stb_sql = f'''create table {dbname}.stb1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
) tags (t1 int)
'''
create_ntb_sql = f'''create table t1(
create_ntb_sql = f'''create table {dbname}.t1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
......@@ -162,29 +162,29 @@ class TDTestCase:
tdSql.execute(create_ntb_sql)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
def __insert_data(self, rows):
def __insert_data(self, rows, dbname=DBNAME):
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
for i in range(rows):
tdSql.execute(
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f'''insert into ct1 values
f'''insert into {dbname}.ct1 values
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
'''
)
tdSql.execute(
f'''insert into ct4 values
f'''insert into {dbname}.ct4 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
......@@ -200,7 +200,7 @@ class TDTestCase:
)
tdSql.execute(
f'''insert into ct2 values
f'''insert into {dbname}.ct2 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
......@@ -216,13 +216,13 @@ class TDTestCase:
)
for i in range(rows):
insert_data = f'''insert into t1 values
insert_data = f'''insert into {dbname}.t1 values
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
'''
tdSql.execute(insert_data)
tdSql.execute(
f'''insert into t1 values
f'''insert into {dbname}.t1 values
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
......@@ -251,8 +251,7 @@ class TDTestCase:
tdLog.printNoPrefix("==========step3:all check")
self.all_test()
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.execute("flush database db")
tdSql.execute("use db")
......
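Across every file in this commit the recurring change is the same: each table reference gains a `{dbname}.` prefix so statements no longer depend on a prior `use db`. A small self-contained sketch of the pattern (the helper name is made up for illustration):

```python
def qualify(dbname, *tables):
    """Hypothetical helper: return fully qualified table names, e.g. 'db.ct1'."""
    return [f"{dbname}.{t}" for t in tables]

# mirrors: tbname = [f"{dbname}.ct1", f"{dbname}.ct2", ...] in __test_current
tbnames = qualify("db", "ct1", "ct2", "ct4", "t1", "stb1")
assert tbnames == ["db.ct1", "db.ct2", "db.ct4", "db.t1", "db.stb1"]
```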
......@@ -307,7 +307,7 @@ class TDTestCase:
pass
def mavg_current_query(self) :
def mavg_current_query(self, dbname="db") :
# table schema :ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool
# c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)
......@@ -325,17 +325,17 @@ class TDTestCase:
case6 = {"col": "c9"}
self.checkmavg(**case6)
# # case7~8: nested query
# case7 = {"table_expr": f"(select c1 from {dbname}.stb1)"}
# self.checkmavg(**case7)
# case8 = {"table_expr": f"(select mavg(c1, 1) c1 from {dbname}.stb1 group by tbname)"}
# case7~8: nested query
case7 = {"table_expr": f"(select c1 from {dbname}.stb1)"}
self.checkmavg(**case7)
# case8 = {"table_expr": f"(select _c0, mavg(c1, 1) c1 from {dbname}.stb1 group by tbname)"}
# self.checkmavg(**case8)
# case9~10: mix with tbname/ts/tag/col
# case9 = {"alias": ", tbname"}
# self.checkmavg(**case9)
# case10 = {"alias": ", _c0"}
# self.checkmavg(**case10)
case9 = {"alias": ", tbname"}
self.checkmavg(**case9)
case10 = {"alias": ", _c0"}
self.checkmavg(**case10)
# case11 = {"alias": ", st1"}
# self.checkmavg(**case11)
# case12 = {"alias": ", c1"}
......@@ -356,7 +356,7 @@ class TDTestCase:
# case17: only support normal table join
case17 = {
"col": "t1.c1",
"table_expr": "t1, t2",
"table_expr": f"{dbname}.t1 t1, {dbname}.t2 t2",
"condition": "where t1.ts=t2.ts"
}
self.checkmavg(**case17)
......@@ -367,14 +367,14 @@ class TDTestCase:
# }
# self.checkmavg(**case19)
# case20~21: with order by
# # case20~21: with order by
# case20 = {"condition": "order by ts"}
# self.checkmavg(**case20)
#case21 = {
# "table_expr": f"{dbname}.stb1",
# "condition": "group by tbname order by tbname"
#}
#self.checkmavg(**case21)
case21 = {
"table_expr": f"{dbname}.stb1",
"condition": "group by tbname order by tbname"
}
self.checkmavg(**case21)
# # case22: with union
# case22 = {
......@@ -398,7 +398,7 @@ class TDTestCase:
pass
def mavg_error_query(self) -> None :
def mavg_error_query(self, dbname="db") -> None :
# unusual test
# form test
......@@ -419,9 +419,9 @@ class TDTestCase:
err8 = {"table_expr": ""}
self.checkmavg(**err8) # no table_expr
# err9 = {"col": "st1"}
err9 = {"col": "st1"}
# self.checkmavg(**err9) # col: tag
# err10 = {"col": 1}
err10 = {"col": 1}
# self.checkmavg(**err10) # col: value
err11 = {"col": "NULL"}
self.checkmavg(**err11) # col: NULL
......@@ -496,7 +496,7 @@ class TDTestCase:
# "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
# }
# self.checkmavg(**err44) # stb join
tdSql.query("select mavg( stb1.c1 , 1 ) from stb1, stb2 where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts;")
tdSql.query(f"select mavg( stb1.c1 , 1 ) from {dbname}.stb1 stb1, {dbname}.stb2 stb2 where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts;")
err45 = {
"condition": "where ts>0 and ts < now interval(1h) fill(next)"
}
......
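The mavg cases above are keyword dicts (col, table_expr, condition, alias) passed to checkmavg, whose body is not shown in this diff. A hypothetical builder with the same keywords makes the shape of the generated SQL concrete:

```python
def build_mavg_sql(col="c1", k=1, alias="", table_expr="db.stb1", condition=""):
    # Hypothetical: checkmavg's real defaults are not visible in this diff.
    return f"select mavg({col}, {k}){alias} from {table_expr} {condition}".strip()

# case17-style normal-table join:
sql = build_mavg_sql(col="t1.c1",
                     table_expr="db.t1 t1, db.t2 t2",
                     condition="where t1.ts=t2.ts")
assert sql == "select mavg(t1.c1, 1) from db.t1 t1, db.t2 t2 where t1.ts=t2.ts"
```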
......@@ -5,10 +5,7 @@ import numpy as np
class TDTestCase:
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
......@@ -17,60 +14,57 @@ class TDTestCase:
self.ts = 1537146000000
self.binary_str = 'taosdata'
self.nchar_str = '涛思数据'
def max_check_stb_and_tb_base(self):
def max_check_stb_and_tb_base(self, dbname="db"):
tdSql.prepare()
intData = []
floatData = []
tdSql.execute('''create table stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 tinyint unsigned, col6 smallint unsigned,
tdSql.execute(f'''create table {dbname}.stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 tinyint unsigned, col6 smallint unsigned,
col7 int unsigned, col8 bigint unsigned, col9 float, col10 double, col11 bool, col12 binary(20), col13 nchar(20)) tags(loc nchar(20))''')
tdSql.execute("create table stb_1 using stb tags('beijing')")
tdSql.execute(f"create table {dbname}.stb_1 using {dbname}.stb tags('beijing')")
for i in range(self.rowNum):
tdSql.execute(f"insert into stb_1 values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
tdSql.execute(f"insert into {dbname}.stb_1 values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1))
intData.append(i + 1)
floatData.append(i + 0.1)
for i in ['ts','col11','col12','col13']:
for j in ['db.stb','stb','db.stb_1','stb_1']:
tdSql.error(f'select max({i} from {j} )')
for j in ['stb','stb_1']:
tdSql.error(f'select max({i} from {dbname}.{j} )')
for i in range(1,11):
for j in ['db.stb','stb','db.stb_1','stb_1']:
tdSql.query(f"select max(col{i}) from {j}")
for j in ['stb', 'stb_1']:
tdSql.query(f"select max(col{i}) from {dbname}.{j}")
if i<9:
tdSql.checkData(0, 0, np.max(intData))
elif i>=9:
tdSql.checkData(0, 0, np.max(floatData))
tdSql.query("select max(col1) from stb_1 where col2<=5")
tdSql.query(f"select max(col1) from {dbname}.stb_1 where col2<=5")
tdSql.checkData(0,0,5)
tdSql.query("select max(col1) from stb where col2<=5")
tdSql.query(f"select max(col1) from {dbname}.stb where col2<=5")
tdSql.checkData(0,0,5)
tdSql.execute('drop database db')
def max_check_ntb_base(self):
def max_check_ntb_base(self, dbname="db"):
tdSql.prepare()
intData = []
floatData = []
tdSql.execute('''create table ntb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 tinyint unsigned, col6 smallint unsigned,
tdSql.execute(f'''create table {dbname}.ntb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 tinyint unsigned, col6 smallint unsigned,
col7 int unsigned, col8 bigint unsigned, col9 float, col10 double, col11 bool, col12 binary(20), col13 nchar(20))''')
for i in range(self.rowNum):
tdSql.execute(f"insert into ntb values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
tdSql.execute(f"insert into {dbname}.ntb values(%d, %d, %d, %d, %d, %d, %d, %d, %d, %f, %f, %d, '{self.binary_str}%d', '{self.nchar_str}%d')"
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1))
intData.append(i + 1)
floatData.append(i + 0.1)
for i in ['ts','col11','col12','col13']:
for j in ['db.ntb','ntb']:
tdSql.error(f'select max({i} from {j} )')
for j in ['ntb']:
tdSql.error(f'select max({i} from {dbname}.{j} )')
for i in range(1,11):
for j in ['db.ntb','ntb']:
tdSql.query(f"select max(col{i}) from {j}")
for j in ['ntb']:
tdSql.query(f"select max(col{i}) from {dbname}.{j}")
if i<9:
tdSql.checkData(0, 0, np.max(intData))
elif i>=9:
tdSql.checkData(0, 0, np.max(floatData))
tdSql.query("select max(col1) from ntb where col2<=5")
tdSql.query(f"select max(col1) from {dbname}.ntb where col2<=5")
tdSql.checkData(0,0,5)
tdSql.execute('drop database db')
def check_max_functions(self, tbname , col_name):
......@@ -90,55 +84,55 @@ class TDTestCase:
tdLog.info(" max function work as expected, sql : %s "% max_sql)
def support_distributed_aggregate(self):
def support_distributed_aggregate(self, dbname="testdb"):
# prepare data for 20 tables distributed across different vgroups
tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
tdSql.execute(" use testdb ")
tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
tdSql.execute(f"use {dbname} ")
tdSql.execute(
'''create table stb1
f'''create table {dbname}.stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
'''
)
tdSql.execute(
'''
create table t1
f'''
create table {dbname}.t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(20):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
for i in range(1,21):
if i ==1 or i == 4:
continue
else:
tbname = "ct"+f'{i}'
tbname = f"{dbname}.ct{i}"
for j in range(9):
tdSql.execute(
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
f'''insert into {dbname}.t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
......@@ -157,7 +151,7 @@ class TDTestCase:
tdLog.info(" prepare data for distributed_aggregate done! ")
# get vgroup_ids of all
tdSql.query("show vgroups ")
tdSql.query(f"show {dbname}.vgroups ")
vgroups = tdSql.queryResult
vnode_tables={}
......@@ -167,7 +161,7 @@ class TDTestCase:
# check the subtables on each vnode to make sure they have been distributed
tdSql.query("select * from information_schema.ins_tables where db_name = 'testdb' and table_name like 'ct%'")
tdSql.query(f"select * from information_schema.ins_tables where db_name = '{dbname}' and table_name like 'ct%'")
table_names = tdSql.queryResult
tablenames = []
for table_name in table_names:
......@@ -182,13 +176,13 @@ class TDTestCase:
# check that the max function works as expected
tdSql.query("show tables like 'ct%'")
tdSql.query(f"show {dbname}.tables like 'ct%'")
table_names = tdSql.queryResult
tablenames = []
for table_name in table_names:
tablenames.append(table_name[0])
tdSql.query("desc stb1")
tdSql.query(f"desc {dbname}.stb1")
col_names = tdSql.queryResult
colnames = []
......@@ -198,11 +192,7 @@ class TDTestCase:
for tablename in tablenames:
for colname in colnames:
self.check_max_functions(tablename,colname)
# max function with basic filter
print(vnode_tables)
self.check_max_functions(f"{dbname}.{tablename}", colname)
def run(self):
......
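The distributed-aggregate setup pairs `show {dbname}.vgroups` with an `information_schema.ins_tables` lookup to confirm the 20 subtables landed on several vnodes. A self-contained sketch of that bookkeeping; the result-row layouts are assumptions based on how the test indexes its query results:

```python
# Hedged sketch: row layouts are assumptions (vgroup id in column 0 of
# `show vgroups`; the ins_tables projection reduced to (name, vgroup_id)).
def map_tables_to_vnodes(vgroup_rows, table_rows):
    vnode_tables = {row[0]: [] for row in vgroup_rows}
    for name, vgid in table_rows:
        vnode_tables.setdefault(vgid, []).append(name)
    return vnode_tables

vgroups = [(2,), (3,), (4,)]                   # from: show db.vgroups
tables = [("ct1", 2), ("ct2", 3), ("ct3", 2)]  # from: ins_tables (name, vgroup_id)
dist = map_tables_to_vnodes(vgroups, tables)
assert len([v for v in dist.values() if v]) >= 2  # spread over more than one vnode
```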
......@@ -12,16 +12,15 @@ class TDTestCase:
self.tb_nums = 10
self.ts = 1537146000000
def prepare_datas(self, stb_name , tb_nums , row_nums ):
tdSql.execute(" use db ")
tdSql.execute(f" create stable {stb_name} (ts timestamp , c1 int , c2 bigint , c3 float , c4 double , c5 smallint , c6 tinyint , c7 bool , c8 binary(36) , c9 nchar(36) , uc1 int unsigned,\
def prepare_datas(self, stb_name , tb_nums , row_nums, dbname="db" ):
tdSql.execute(f" create stable {dbname}.{stb_name} (ts timestamp , c1 int , c2 bigint , c3 float , c4 double , c5 smallint , c6 tinyint , c7 bool , c8 binary(36) , c9 nchar(36) , uc1 int unsigned,\
uc2 bigint unsigned ,uc3 smallint unsigned , uc4 tinyint unsigned ) tags(t1 timestamp , t2 int , t3 bigint , t4 float , t5 double , t6 smallint , t7 tinyint , t8 bool , t9 binary(36)\
, t10 nchar(36) , t11 int unsigned , t12 bigint unsigned ,t13 smallint unsigned , t14 tinyint unsigned ) ")
for i in range(tb_nums):
tbname = f"sub_{stb_name}_{i}"
tbname = f"{dbname}.sub_{stb_name}_{i}"
ts = self.ts + i*10000
tdSql.execute(f"create table {tbname} using {stb_name} tags ({ts} , {i} , {i}*10 ,{i}*1.0,{i}*1.0 , 1 , 2, 'true', 'binary_{i}' ,'nchar_{i}',{i},{i},10,20 )")
tdSql.execute(f"create table {tbname} using {dbname}.{stb_name} tags ({ts} , {i} , {i}*10 ,{i}*1.0,{i}*1.0 , 1 , 2, 'true', 'binary_{i}' ,'nchar_{i}',{i},{i},10,20 )")
for row in range(row_nums):
ts = self.ts + row*1000
......@@ -31,191 +30,192 @@ class TDTestCase:
ts = self.ts + row_nums*1000 + null*1000
tdSql.execute(f"insert into {tbname} values({ts} , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL )")
def basic_query(self):
tdSql.query("select count(*) from stb")
def basic_query(self, dbname="db"):
tdSql.query(f"select count(*) from {dbname}.stb")
tdSql.checkData(0,0,(self.row_nums + 5 )*self.tb_nums)
tdSql.query("select max(c1) from stb")
tdSql.query(f"select max(c1) from {dbname}.stb")
tdSql.checkData(0,0,(self.row_nums -1))
tdSql.query(" select tbname , max(c1) from stb partition by tbname ")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by tbname ")
tdSql.checkRows(self.tb_nums)
tdSql.query(" select max(c1) from stb group by t1 order by t1 ")
tdSql.query(f"select max(c1) from {dbname}.stb group by t1 order by t1 ")
tdSql.checkRows(self.tb_nums)
tdSql.query(" select max(c1) from stb group by c1 order by t1 ")
tdSql.query(" select max(t2) from stb group by c1 order by t1 ")
tdSql.query(" select max(c1) from stb group by tbname order by tbname ")
tdSql.query(f"select max(c1) from {dbname}.stb group by c1 order by t1 ")
tdSql.query(f"select max(t2) from {dbname}.stb group by c1 order by t1 ")
tdSql.query(f"select max(c1) from {dbname}.stb group by tbname order by tbname ")
tdSql.checkRows(self.tb_nums)
# bug need fix
tdSql.query(" select max(t2) from stb group by t2 order by t2 ")
tdSql.query(f"select max(t2) from {dbname}.stb group by t2 order by t2 ")
tdSql.checkRows(self.tb_nums)
tdSql.query(" select max(c1) from stb group by c1 order by c1 ")
tdSql.query(f"select max(c1) from {dbname}.stb group by c1 order by c1 ")
tdSql.checkRows(self.row_nums+1)
tdSql.query(" select c1 , max(c1) from stb group by c1 order by c1 ")
tdSql.query(f"select c1 , max(c1) from {dbname}.stb group by c1 order by c1 ")
tdSql.checkRows(self.row_nums+1)
# support selective functions
tdSql.query(" select c1 ,c2 ,c3 , max(c1) ,c4 ,c5 ,t11 from stb group by c1 order by c1 desc ")
tdSql.query(f"select c1 ,c2 ,c3 , max(c1) ,c4 ,c5 ,t11 from {dbname}.stb group by c1 order by c1 desc ")
tdSql.checkRows(self.row_nums+1)
tdSql.query(" select c1, tbname , max(c1) ,c4 ,c5 ,t11 from stb group by c1 order by c1 desc ")
tdSql.query(f"select c1, tbname , max(c1) ,c4 ,c5 ,t11 from {dbname}.stb group by c1 order by c1 desc ")
tdSql.checkRows(self.row_nums+1)
# bug need fix
# tdSql.query(" select tbname , max(c1) from sub_stb_1 where c1 is null group by c1 order by c1 desc ")
# tdSql.checkRows(1)
# tdSql.checkData(0,0,"sub_stb_1")
tdSql.query(f"select tbname , max(c1) from {dbname}.sub_stb_1 where c1 is null group by c1 order by c1 desc ")
tdSql.checkRows(1)
tdSql.checkData(0,0,"sub_stb_1")
tdSql.query("select max(c1) ,c2 ,t2,tbname from stb group by abs(c1) order by abs(c1)")
tdSql.query(f"select max(c1) ,c2 ,t2,tbname from {dbname}.stb group by abs(c1) order by abs(c1)")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select abs(c1+c3), count(c1+c3) ,max(c1+t2) from stb group by abs(c1+c3) order by abs(c1+c3)")
tdSql.query(f"select abs(c1+c3), count(c1+c3) ,max(c1+t2) from {dbname}.stb group by abs(c1+c3) order by abs(c1+c3)")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select max(c1+c3)+min(c2) ,abs(c1) from stb group by abs(c1) order by abs(c1)")
tdSql.query(f"select max(c1+c3)+min(c2) ,abs(c1) from {dbname}.stb group by abs(c1) order by abs(c1)")
tdSql.checkRows(self.row_nums+1)
tdSql.error("select count(c1+c3)+max(c2) ,abs(c1) ,abs(t1) from stb group by abs(c1) order by abs(t1)+c2")
tdSql.error("select count(c1+c3)+max(c2) ,abs(c1) from stb group by abs(c1) order by abs(c1)+c2")
tdSql.query("select abs(c1+c3)+abs(c2) , count(c1+c3)+max(c2) from stb group by abs(c1+c3)+abs(c2) order by abs(c1+c3)+abs(c2)")
tdSql.error(f"select count(c1+c3)+max(c2) ,abs(c1) ,abs(t1) from {dbname}.stb group by abs(c1) order by abs(t1)+c2")
tdSql.error(f"select count(c1+c3)+max(c2) ,abs(c1) from {dbname}.stb group by abs(c1) order by abs(c1)+c2")
tdSql.query(f"select abs(c1+c3)+abs(c2) , count(c1+c3)+max(c2) from {dbname}.stb group by abs(c1+c3)+abs(c2) order by abs(c1+c3)+abs(c2)")
tdSql.checkRows(self.row_nums+1)
tdSql.query(" select max(c1) , max(t2) from stb where abs(c1+t2)=1 partition by tbname ")
tdSql.query(f"select max(c1) , max(t2) from {dbname}.stb where abs(c1+t2)=1 partition by tbname ")
tdSql.checkRows(2)
tdSql.query(" select max(c1) from stb where abs(c1+t2)=1 partition by tbname ")
tdSql.query(f"select max(c1) from {dbname}.stb where abs(c1+t2)=1 partition by tbname ")
tdSql.checkRows(2)
tdSql.query(" select tbname , max(c1) from stb partition by tbname order by tbname ")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by tbname order by tbname ")
tdSql.checkRows(self.tb_nums)
tdSql.checkData(0,1,self.row_nums-1)
tdSql.query("select tbname , max(c2) from stb partition by t1 order by t1")
tdSql.query("select tbname , max(t2) from stb partition by t1 order by t1")
tdSql.query("select tbname , max(t2) from stb partition by t2 order by t2")
tdSql.query(f"select tbname , max(c2) from {dbname}.stb partition by t1 order by t1")
tdSql.query(f"select tbname , max(t2) from {dbname}.stb partition by t1 order by t1")
tdSql.query(f"select tbname , max(t2) from {dbname}.stb partition by t2 order by t2")
# # bug need fix
tdSql.query("select t2 , max(t2) from stb partition by t2 order by t2")
tdSql.query(f"select t2 , max(t2) from {dbname}.stb partition by t2 order by t2")
tdSql.checkRows(self.tb_nums)
tdSql.query("select tbname , max(c1) from stb partition by tbname order by tbname")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by tbname order by tbname")
tdSql.checkRows(self.tb_nums)
tdSql.checkData(0,1,self.row_nums-1)
tdSql.query("select tbname , max(c1) from stb partition by t2 order by t2")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by t2 order by t2")
tdSql.query("select c2, max(c1) from stb partition by c2 order by c2 desc")
tdSql.query(f"select c2, max(c1) from {dbname}.stb partition by c2 order by c2 desc")
tdSql.checkRows(self.tb_nums+1)
tdSql.checkData(0,1,self.row_nums-1)
tdSql.query("select tbname , max(c1) from stb partition by c1 order by c2")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by c1 order by c2")
tdSql.query("select tbname , abs(t2) from stb partition by c2 order by t2")
tdSql.query(f"select tbname , abs(t2) from {dbname}.stb partition by c2 order by t2")
tdSql.checkRows(self.tb_nums*(self.row_nums+5))
tdSql.query("select max(c1) , count(t2) from stb partition by c2 ")
tdSql.query(f"select max(c1) , count(t2) from {dbname}.stb partition by c2 ")
tdSql.checkRows(self.row_nums+1)
tdSql.checkData(0,1,self.row_nums)
tdSql.query("select count(c1) , max(t2) ,c2 from stb partition by c2 order by c2")
tdSql.query(f"select count(c1) , max(t2) ,c2 from {dbname}.stb partition by c2 order by c2")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select count(c1) , count(t1) ,max(c2) ,tbname from stb partition by tbname order by tbname")
tdSql.query(f"select count(c1) , count(t1) ,max(c2) ,tbname from {dbname}.stb partition by tbname order by tbname")
tdSql.checkRows(self.tb_nums)
tdSql.checkCols(4)
tdSql.query("select count(c1) , max(t2) ,t1 from stb partition by t1 order by t1")
tdSql.query(f"select count(c1) , max(t2) ,t1 from {dbname}.stb partition by t1 order by t1")
tdSql.checkRows(self.tb_nums)
tdSql.checkData(0,0,self.row_nums)
# bug need fix
tdSql.query("select count(c1) , max(t2) ,abs(c1) from stb partition by abs(c1) order by abs(c1)")
tdSql.query(f"select count(c1) , max(t2) ,abs(c1) from {dbname}.stb partition by abs(c1) order by abs(c1)")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select max(ceil(c2)) , max(floor(t2)) ,max(floor(c2)) from stb partition by abs(c2) order by abs(c2)")
tdSql.query(f"select max(ceil(c2)) , max(floor(t2)) ,max(floor(c2)) from {dbname}.stb partition by abs(c2) order by abs(c2)")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select max(ceil(c1-2)) , max(floor(t2+1)) ,max(c2-c1) from stb partition by abs(floor(c1)) order by abs(floor(c1))")
tdSql.query(f"select max(ceil(c1-2)) , max(floor(t2+1)) ,max(c2-c1) from {dbname}.stb partition by abs(floor(c1)) order by abs(floor(c1))")
tdSql.checkRows(self.row_nums+1)
tdSql.query("select tbname , max(c1) ,c1 from stb partition by tbname order by tbname")
tdSql.query(f"select tbname , max(c1) ,c1 from {dbname}.stb partition by tbname order by tbname")
tdSql.checkRows(self.tb_nums)
tdSql.checkData(0,0,'sub_stb_0')
tdSql.checkData(0,1,9)
tdSql.checkData(0,2,9)
tdSql.query("select tbname ,top(c1,1) ,c1 from stb partition by tbname order by tbname")
tdSql.query(f"select tbname ,top(c1,1) ,c1 from {dbname}.stb partition by tbname order by tbname")
tdSql.checkRows(self.tb_nums)
tdSql.query(" select c1 , sample(c1,2) from stb partition by tbname order by tbname ")
tdSql.query(f"select c1 , sample(c1,2) from {dbname}.stb partition by tbname order by tbname ")
tdSql.checkRows(self.tb_nums*2)
# interval
tdSql.query("select max(c1) from stb interval(2s) sliding(1s)")
tdSql.query(f"select max(c1) from {dbname}.stb interval(2s) sliding(1s)")
# bug need fix
tdSql.query('select max(c1) from stb where ts>="2022-07-06 16:00:00.000 " and ts < "2022-07-06 17:00:00.000 " interval(50s) sliding(30s) fill(NULL)')
tdSql.query(f'select max(c1) from {dbname}.stb where ts>="2022-07-06 16:00:00.000 " and ts < "2022-07-06 17:00:00.000 " interval(50s) sliding(30s) fill(NULL)')
tdSql.query(" select tbname , count(c1) from stb partition by tbname interval(10s) slimit 5 soffset 1 ")
tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname interval(10s) slimit 5 soffset 1 ")
tdSql.query("select tbname , max(c1) from stb partition by tbname interval(10s)")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by tbname interval(10s)")
tdSql.checkRows(self.row_nums*2)
tdSql.query("select unique(c1) from stb partition by tbname order by tbname")
tdSql.query(f"select unique(c1) from {dbname}.stb partition by tbname order by tbname")
tdSql.query("select tbname , count(c1) from sub_stb_1 partition by tbname interval(10s)")
tdSql.query(f"select tbname , count(c1) from {dbname}.sub_stb_1 partition by tbname interval(10s)")
tdSql.checkData(0,0,'sub_stb_1')
tdSql.checkData(0,1,self.row_nums)
tdSql.query("select c1 , mavg(c1 ,2 ) from stb partition by c1")
tdSql.query(f"select c1 , mavg(c1 ,2 ) from {dbname}.stb partition by c1")
tdSql.checkRows(90)
tdSql.query("select c1 , diff(c1 , 0) from stb partition by c1")
tdSql.query(f"select c1 , diff(c1 , 0) from {dbname}.stb partition by c1")
tdSql.checkRows(90)
tdSql.query("select c1 , csum(c1) from stb partition by c1")
tdSql.query(f"select c1 , csum(c1) from {dbname}.stb partition by c1")
tdSql.checkRows(100)
tdSql.query("select c1 , sample(c1,2) from stb partition by c1 order by c1")
tdSql.query(f"select c1 , sample(c1,2) from {dbname}.stb partition by c1 order by c1")
tdSql.checkRows(21)
# bug need fix
# tdSql.checkData(0,1,None)
tdSql.checkData(0,1,None)
tdSql.query("select c1 , twa(c1) from stb partition by c1 order by c1")
tdSql.query(f"select c1 , twa(c1) from {dbname}.stb partition by c1 order by c1")
tdSql.checkRows(11)
tdSql.checkData(0,1,None)
tdSql.query("select c1 , irate(c1) from stb partition by c1 order by c1")
tdSql.query(f"select c1 , irate(c1) from {dbname}.stb partition by c1 order by c1")
tdSql.checkRows(11)
tdSql.checkData(0,1,None)
tdSql.query("select c1 , DERIVATIVE(c1,2,1) from stb partition by c1 order by c1")
tdSql.query(f"select c1 , DERIVATIVE(c1,2,1) from {dbname}.stb partition by c1 order by c1")
tdSql.checkRows(90)
# bug need fix
tdSql.checkData(0,1,None)
tdSql.query(" select tbname , max(c1) from stb partition by tbname order by tbname slimit 5 soffset 0 ")
tdSql.query(f"select tbname , max(c1) from {dbname}.stb partition by tbname order by tbname slimit 5 soffset 0 ")
tdSql.checkRows(10)
tdSql.query(" select tbname , max(c1) from sub_stb_1 partition by tbname interval(10s) sliding(5s) ")
tdSql.query(f"select tbname , max(c1) from {dbname}.sub_stb_1 partition by tbname interval(10s) sliding(5s) ")
tdSql.query(f'select max(c1) from stb where ts>={self.ts} and ts < {self.ts}+1000 interval(50s) sliding(30s)')
tdSql.query(f'select tbname , max(c1) from stb where ts>={self.ts} and ts < {self.ts}+1000 interval(50s) sliding(30s)')
tdSql.query(f'select max(c1) from {dbname}.stb where ts>={self.ts} and ts < {self.ts}+1000 interval(50s) sliding(30s)')
tdSql.query(f'select tbname , max(c1) from {dbname}.stb where ts>={self.ts} and ts < {self.ts}+1000 interval(50s) sliding(30s)')
def run(self):
dbname = "db"
tdSql.prepare()
self.prepare_datas("stb",self.tb_nums,self.row_nums)
self.basic_query()
# # coverage case for taosd crash about bug fix
tdSql.query(" select sum(c1) from stb where t2+10 >1 ")
tdSql.query(" select count(c1),count(t1) from stb where -t2<1 ")
tdSql.query(" select tbname ,max(ceil(c1)) from stb group by tbname ")
tdSql.query(" select avg(abs(c1)) , tbname from stb group by tbname ")
tdSql.query(" select t1,c1 from stb where abs(t2+c1)=1 ")
tdSql.query(f"select sum(c1) from {dbname}.stb where t2+10 >1 ")
tdSql.query(f"select count(c1),count(t1) from {dbname}.stb where -t2<1 ")
tdSql.query(f"select tbname ,max(ceil(c1)) from {dbname}.stb group by tbname ")
tdSql.query(f"select avg(abs(c1)) , tbname from {dbname}.stb group by tbname ")
tdSql.query(f"select t1,c1 from {dbname}.stb where abs(t2+c1)=1 ")
def stop(self):
......
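Every checkRows/checkData expectation in this file derives from the layout built by prepare_datas: tb_nums subtables, each holding row_nums data rows plus 5 all-NULL rows, with c1 cycling over row_nums distinct values. A tiny runnable sketch of that arithmetic (row_nums is inferred from the checks above):

```python
tb_nums, row_nums = 10, 10  # tb_nums from the class; row_nums inferred from checkData(0,1,9)

total_rows = (row_nums + 5) * tb_nums  # five all-NULL rows are appended per subtable
max_c1     = row_nums - 1              # c1 takes values 0..row_nums-1
c1_groups  = row_nums + 1              # row_nums distinct values plus the NULL group

assert (total_rows, max_c1, c1_groups) == (150, 9, 11)
```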
......@@ -14,196 +14,124 @@ class TDTestCase:
self.ts = 1537146000000
def run(self):
dbname = "db"
tdSql.prepare()
intData = []
floatData = []
tdSql.execute('''create table stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
tdSql.execute(f'''create table {dbname}.stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
tdSql.execute("create table stb_1 using stb tags('beijing')")
tdSql.execute('''create table ntb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
tdSql.execute(f"create table {dbname}.stb_1 using {dbname}.stb tags('beijing')")
tdSql.execute(f'''create table {dbname}.ntb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
for i in range(self.rowNum):
tdSql.execute("insert into ntb values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
tdSql.execute(f"insert into {dbname}.ntb values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
intData.append(i + 1)
floatData.append(i + 0.1)
for i in range(self.rowNum):
tdSql.execute("insert into stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
tdSql.execute(f"insert into {dbname}.stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
intData.append(i + 1)
floatData.append(i + 0.1)
# min verification
tdSql.error("select min(ts) from stb_1")
tdSql.error("select min(ts) from db.stb_1")
tdSql.error("select min(col7) from stb_1")
tdSql.error("select min(col7) from db.stb_1")
tdSql.error("select min(col8) from stb_1")
tdSql.error("select min(col8) from db.stb_1")
tdSql.error("select min(col9) from stb_1")
tdSql.error("select min(col9) from db.stb_1")
# tdSql.error("select min(a) from stb_1")
# tdSql.error("select min(1) from stb_1")
tdSql.error("select min(now()) from stb_1")
tdSql.error("select min(count(c1),count(c2)) from stb_1")
tdSql.error(f"select min(ts) from {dbname}.stb_1")
tdSql.error(f"select min(col7) from {dbname}.stb_1")
tdSql.error(f"select min(col8) from {dbname}.stb_1")
tdSql.error(f"select min(col9) from {dbname}.stb_1")
tdSql.error(f"select min(a) from {dbname}.stb_1")
tdSql.query(f"select min(1) from {dbname}.stb_1")
tdSql.error(f"select min(now()) from {dbname}.stb_1")
tdSql.error(f"select min(count(c1),count(c2)) from {dbname}.stb_1")
tdSql.query("select min(col1) from stb_1")
tdSql.query(f"select min(col1) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col1) from db.stb_1")
tdSql.query(f"select min(col2) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from stb_1")
tdSql.query(f"select min(col3) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from db.stb_1")
tdSql.query(f"select min(col4) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from stb_1")
tdSql.query(f"select min(col11) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from db.stb_1")
tdSql.query(f"select min(col12) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from stb_1")
tdSql.query(f"select min(col13) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from db.stb_1")
tdSql.query(f"select min(col14) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from db.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from db.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from db.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from db.stb_1")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col5) from stb_1")
tdSql.query(f"select min(col5) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col5) from db.stb_1")
tdSql.query(f"select min(col6) from {dbname}.stb_1")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from stb_1")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from db.stb_1")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col1) from stb_1 where col2>=5")
tdSql.query(f"select min(col1) from {dbname}.stb_1 where col2>=5")
tdSql.checkData(0,0,5)
tdSql.error("select min(ts) from stb_1")
tdSql.error("select min(ts) from db.stb_1")
tdSql.error("select min(col7) from stb_1")
tdSql.error("select min(col7) from db.stb_1")
tdSql.error("select min(col8) from stb_1")
tdSql.error("select min(col8) from db.stb_1")
tdSql.error("select min(col9) from stb_1")
tdSql.error("select min(col9) from db.stb_1")
# tdSql.error("select min(a) from stb_1")
# tdSql.error("select min(1) from stb_1")
tdSql.error("select min(now()) from stb_1")
tdSql.error("select min(count(c1),count(c2)) from stb_1")
tdSql.error(f"select min(ts) from {dbname}.stb_1")
tdSql.error(f"select min(col7) from {dbname}.stb_1")
tdSql.error(f"select min(col8) from {dbname}.stb_1")
tdSql.error(f"select min(col9) from {dbname}.stb_1")
tdSql.error(f"select min(a) from {dbname}.stb_1")
tdSql.query(f"select min(1) from {dbname}.stb_1")
tdSql.error(f"select min(now()) from {dbname}.stb_1")
tdSql.error(f"select min(count(c1),count(c2)) from {dbname}.stb_1")
tdSql.query("select min(col1) from stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col1) from db.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from db.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from db.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from stb")
tdSql.query(f"select min(col1) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from db.stb")
tdSql.query(f"select min(col2) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from stb")
tdSql.query(f"select min(col3) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from db.stb")
tdSql.query(f"select min(col4) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from stb")
tdSql.query(f"select min(col11) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from db.stb")
tdSql.query(f"select min(col12) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from stb")
tdSql.query(f"select min(col13) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from db.stb")
tdSql.query(f"select min(col14) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from db.stb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col5) from stb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col5) from db.stb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from stb")
tdSql.query(f"select min(col5) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from db.stb")
tdSql.query(f"select min(col6) from {dbname}.stb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col1) from stb where col2>=5")
tdSql.query(f"select min(col1) from {dbname}.stb where col2>=5")
tdSql.checkData(0,0,5)
tdSql.error(f"select min(ts) from {dbname}.ntb")
tdSql.error(f"select min(col7) from {dbname}.ntb")
tdSql.error(f"select min(col8) from {dbname}.ntb")
tdSql.error(f"select min(col9) from {dbname}.ntb")
tdSql.error(f"select min(a) from {dbname}.ntb")
tdSql.query(f"select min(1) from {dbname}.ntb")
tdSql.error(f"select min(now()) from {dbname}.ntb")
tdSql.error(f"select min(count(c1),count(c2)) from {dbname}.ntb")
tdSql.error("select min(ts) from ntb")
tdSql.error("select min(ts) from db.ntb")
tdSql.error("select min(col7) from ntb")
tdSql.error("select min(col7) from db.ntb")
tdSql.error("select min(col8) from ntb")
tdSql.error("select min(col8) from db.ntb")
tdSql.error("select min(col9) from ntb")
tdSql.error("select min(col9) from db.ntb")
# tdSql.error("select min(a) from stb_1")
# tdSql.error("select min(1) from stb_1")
tdSql.error("select min(now()) from ntb")
tdSql.error("select min(count(c1),count(c2)) from ntb")
tdSql.query("select min(col1) from ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col1) from db.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col2) from db.ntb")
tdSql.query(f"select min(col1) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from ntb")
tdSql.query(f"select min(col2) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col3) from db.ntb")
tdSql.query(f"select min(col3) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from ntb")
tdSql.query(f"select min(col4) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col4) from db.ntb")
tdSql.query(f"select min(col11) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from ntb")
tdSql.query(f"select min(col12) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col11) from db.ntb")
tdSql.query(f"select min(col13) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from ntb")
tdSql.query(f"select min(col14) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col12) from db.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col13) from db.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col14) from db.ntb")
tdSql.checkData(0, 0, np.min(intData))
tdSql.query("select min(col5) from ntb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col5) from db.ntb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from ntb")
tdSql.query(f"select min(col5) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col6) from db.ntb")
tdSql.query(f"select min(col6) from {dbname}.ntb")
tdSql.checkData(0, 0, np.min(floatData))
tdSql.query("select min(col1) from ntb where col2>=5")
tdSql.query(f"select min(col1) from {dbname}.ntb where col2>=5")
tdSql.checkData(0,0,5)
......
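The long run of min() checks above repeats one pattern per numeric column. A condensed sketch of the same assertions, assuming the framework's tdSql helper and the intData/floatData lists built during insertion:

```python
import numpy as np
from util.sql import *  # assumed framework import providing tdSql

INT_COLS   = ["col1", "col2", "col3", "col4", "col11", "col12", "col13", "col14"]
FLOAT_COLS = ["col5", "col6"]

def check_min(dbname, table, intData, floatData):
    # same checks as the unrolled queries above, written as loops
    for col in INT_COLS:
        tdSql.query(f"select min({col}) from {dbname}.{table}")
        tdSql.checkData(0, 0, np.min(intData))
    for col in FLOAT_COLS:
        tdSql.query(f"select min({col}) from {dbname}.{table}")
        tdSql.checkData(0, 0, np.min(floatData))
```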
......@@ -24,9 +24,6 @@ from util.dnodes import tdDnodes
from util.dnodes import *
class TDTestCase:
updatecfgDict = {'maxSQLLength':1048576,'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143}
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
......
This diff has been collapsed.
......@@ -13,9 +13,9 @@ from util.common import *
sys.path.append("./6-cluster/")
from clusterCommonCreate import *
from clusterCommonCheck import clusterComCheck
from clusterCommonCheck import clusterComCheck
import threading
import threading
class TDTestCase:
......@@ -28,7 +28,7 @@ class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
tdSql.init(conn.cursor(), False)
def create_ctable(self,tsql=None, dbName='dbx',stbName='stb',ctbPrefix='ctb',ctbNum=1):
tsql.execute("use %s" %dbName)
......@@ -47,7 +47,7 @@ class TDTestCase:
sql = pre_create
if sql != pre_create:
tsql.execute(sql)
tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName))
return
......@@ -55,7 +55,7 @@ class TDTestCase:
dbname="db_tsbs"
stabname1="readings"
stabname2="diagnostics"
ctbnamePre1="rct"
ctbnamePre1="rct"
ctbnamePre2="dct"
ctbNums=40
self.ctbNums=ctbNums
......@@ -73,7 +73,7 @@ class TDTestCase:
self.create_ctable(tsql=tdSql,dbName=dbname,stbName=stabname2,ctbPrefix=ctbnamePre2,ctbNum=ctbNums)
for j in range(ctbNums):
for j in range(ctbNums):
for i in range(rowNUms):
tdSql.execute(
f"insert into rct{j} values ( {ts+i*60000}, {80+i}, {90+i}, {85+i}, {30+i*10}, {1.2*i}, {221+i*2}, {20+i*0.2}, {1500+i*20}, {150+i*2},{5+i} )"
......@@ -109,19 +109,19 @@ class TDTestCase:
def tsbsIotQuery(self,tdSql):
tdSql.execute("use db_tsbs")
# test interval and partition
tdSql.query(" SELECT avg(velocity) as mean_velocity ,name,driver,fleet FROM readings WHERE ts > 1451606400000 AND ts <= 1451606460000 partition BY name,driver,fleet; ")
# print(tdSql.queryResult)
parRows=tdSql.queryRows
tdSql.query(" SELECT avg(velocity) as mean_velocity ,name,driver,fleet FROM readings WHERE ts > 1451606400000 AND ts <= 1451606460000 partition BY name,driver,fleet interval(10m); ")
tdSql.checkRows(parRows)
# # test insert into
# # test insert into
# tdSql.execute("create table testsnode (ts timestamp, c1 float,c2 binary(30),c3 binary(30),c4 binary(30)) ;")
# tdSql.query("insert into testsnode SELECT ts,avg(velocity) as mean_velocity,name,driver,fleet FROM readings WHERE ts > 1451606400000 AND ts <= 1451606460000 partition BY name,driver,fleet,ts interval(10m);")
# tdSql.query("insert into testsnode(ts,c1,c2,c3,c4) SELECT ts,avg(velocity) as mean_velocity,name,driver,fleet FROM readings WHERE ts > 1451606400000 AND ts <= 1451606460000 partition BY name,driver,fleet,ts interval(10m);")
......@@ -141,7 +141,7 @@ class TDTestCase:
tdSql.query("SELECT ts,name,driver,current_load,load_capacity FROM (SELECT last(ts) as ts,name,driver, current_load,load_capacity FROM diagnostics WHERE fleet = 'South' partition by name,driver) WHERE current_load>= (0.9 * load_capacity) partition by name ORDER BY name ;")
# 2 stationary-trucks
# 2 stationary-trucks
tdSql.query("select name,driver from (SELECT name,driver,fleet ,avg(velocity) as mean_velocity FROM readings WHERE ts > '2016-01-01T15:07:21Z' AND ts <= '2016-01-01T16:17:21Z' partition BY name,driver,fleet interval(10m) LIMIT 1)")
tdSql.query("select name,driver from (SELECT name,driver,fleet ,avg(velocity) as mean_velocity FROM readings WHERE ts > '2016-01-01T15:07:21Z' AND ts <= '2016-01-01T16:17:21Z' partition BY name,driver,fleet interval(10m) LIMIT 1) WHERE fleet = 'West' AND mean_velocity < 1000 partition BY name")
......@@ -156,7 +156,7 @@ class TDTestCase:
tdSql.query("select _wstart as ts,fleet,name,driver,count(mv)/6 as hours_driven from ( select _wstart as ts,fleet,name,driver,avg(velocity) as mv from readings where ts > '2016-01-01T00:00:00Z' and ts < '2016-01-05T00:00:01Z' partition by fleet,name,driver interval(10m)) where ts > '2016-01-01T00:00:00Z' and ts < '2016-01-05T00:00:01Z' partition by fleet,name,driver interval(1d) ;")
# # 6. avg-daily-driving-session
# # 6. avg-daily-driving-session
# #taosc core dumped
# tdSql.execute("create table random_measure2_1 (ts timestamp,ela float, name binary(40))")
# tdSql.query("SELECT ts,diff(mv) AS difka FROM (SELECT ts,name,floor(avg(velocity)/10)/floor(avg(velocity)/10) AS mv FROM readings WHERE name!='' AND ts > '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition by name,ts interval(10m) fill(value,0)) GROUP BY name,ts;")
......@@ -166,7 +166,7 @@ class TDTestCase:
# 7. avg-load
tdSql.query("SELECT fleet, model,avg(ml) AS mean_load_percentage FROM (SELECT fleet, model,current_load/load_capacity AS ml FROM diagnostics partition BY name, fleet, model) partition BY fleet, model order by fleet ;")
# 8. daily-activity
# 8. daily-activity
tdSql.query(" SELECT model,ms1 FROM (SELECT _wstart as ts1,model, fleet,avg(status) AS ms1 FROM diagnostics WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition by model, fleet interval(10m) fill(value,0)) WHERE ts1 >= '2016-01-01T00:00:00Z' AND ts1 < '2016-01-05T00:00:01Z' AND ms1<1;")
tdSql.query(" SELECT model,ms1 FROM (SELECT _wstart as ts1,model, fleet,avg(status) AS ms1 FROM diagnostics WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition by model, fleet interval(10m) ) WHERE ts1 >= '2016-01-01T00:00:00Z' AND ts1 < '2016-01-05T00:00:01Z' AND ms1<1;")
......@@ -184,7 +184,7 @@ class TDTestCase:
tdSql.query(" SELECT model,state_changed,count(state_changed) FROM (SELECT model,diff(broken_down) AS state_changed FROM (SELECT _wstart,model,cast(cast(floor(2*(sum(nzs)/count(nzs))) as bool) as int) AS broken_down FROM (SELECT ts,model, cast(cast(status as bool) as int) AS nzs FROM diagnostics WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' ) WHERE ts >= '2016-01-01T00:00:00Z' AND ts < '2016-01-05T00:00:01Z' partition BY model interval(10m)) partition BY model) where state_changed =1 partition BY model,state_changed ;")
#it's already supported:
# last-loc
tdSql.query("SELECT last_row(ts),latitude,longitude,name,driver FROM readings WHERE fleet='South' and name IS NOT NULL partition BY name,driver order by name ;")
......@@ -192,7 +192,7 @@ class TDTestCase:
#2. low-fuel
tdSql.query("SELECT last_row(ts),name,driver,fuel_state,driver FROM diagnostics WHERE fuel_state <= 0.1 AND fleet = 'South' and name IS NOT NULL GROUP BY name,driver order by name;")
# 3. avg-vs-projected-fuel-consumption
tdSql.query("select avg(fuel_consumption) as avg_fuel_consumption,avg(nominal_fuel_consumption) as nominal_fuel_consumption from readings where velocity > 1 group by fleet")
......@@ -213,16 +213,16 @@ class TDTestCase:
'ctbPrefix': 'ctb',
'ctbNum': 1,
}
dnodeNumbers=int(dnodeNumbers)
mnodeNums=int(mnodeNums)
vnodeNumbers = int(dnodeNumbers-mnodeNums)
tdSql.query("select * from information_schema.ins_dnodes;")
tdLog.debug(tdSql.queryResult)
clusterComCheck.checkDnodes(dnodeNumbers)
tdLog.info("create database and stable")
tdLog.info("create database and stable")
tdDnodes=cluster.dnodes
stopcount =0
threads=[]
......@@ -234,7 +234,7 @@ class TDTestCase:
for tr in threads:
tr.start()
tdLog.info("Take turns stopping %s "%stopRole)
tdLog.info("Take turns stopping %s "%stopRole)
while stopcount < restartNumbers:
tdLog.info(" restart loop: %d"%stopcount )
if stopRole == "mnode":
......@@ -242,7 +242,7 @@ class TDTestCase:
tdDnodes[i].stoptaosd()
# sleep(10)
tdDnodes[i].starttaosd()
# sleep(10)
# sleep(10)
elif stopRole == "vnode":
for i in range(vnodeNumbers):
tdDnodes[i+mnodeNums].stoptaosd()
......@@ -254,7 +254,7 @@ class TDTestCase:
tdDnodes[i].stoptaosd()
# sleep(10)
tdDnodes[i].starttaosd()
# sleep(10)
# sleep(10)
# dnodeNumbers don't include database of schema
if clusterComCheck.checkDnodes(dnodeNumbers):
......@@ -265,12 +265,12 @@ class TDTestCase:
tdLog.exit("one or more of dnodes failed to start ")
# self.check3mnode()
stopcount+=1
for tr in threads:
tr.join()
def run(self):
def run(self):
tdLog.printNoPrefix("==========step1:create database and table,insert data ==============")
self.createCluster()
self.prepareData()
......
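The heart of this cluster test is the take-turns restart loop driven by stopRole. A trimmed sketch of that control flow; tdDnodes is the cluster.dnodes list used by the original:

```python
# Hedged sketch of the rolling-restart loop seen above.
def rolling_restart(tdDnodes, dnodeNumbers, mnodeNums, restartNumbers, stopRole):
    vnodeNumbers = dnodeNumbers - mnodeNums
    for _ in range(restartNumbers):
        if stopRole == "mnode":
            idxs = range(mnodeNums)
        elif stopRole == "vnode":
            idxs = range(mnodeNums, mnodeNums + vnodeNumbers)
        else:  # "dnode": restart every daemon in turn
            idxs = range(dnodeNumbers)
        for i in idxs:
            tdDnodes[i].stoptaosd()   # stop one daemon...
            tdDnodes[i].starttaosd()  # ...and bring it straight back up
```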
......@@ -19,7 +19,7 @@ class TDTestCase:
def init(self, conn, logSql):
## add for TD-6672
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
tdSql.init(conn.cursor(), False)
def insertData(self, tb_name):
insert_sql_list = [f'insert into {tb_name} values ("2021-01-01 12:00:00", 1, 1, 1, 3, 1.1, 1.1, "binary", "nchar", true, 1, 2, 3, 4)',
......@@ -37,17 +37,17 @@ class TDTestCase:
for sql in insert_sql_list:
tdSql.execute(sql)
def initTb(self):
tdCom.cleanTb()
tb_name = tdCom.getLongName(8, "letters")
def initTb(self, dbname="db"):
tdCom.cleanTb(dbname)
tb_name = f'{dbname}.{tdCom.getLongName(8, "letters")}'
tdSql.execute(
f"CREATE TABLE {tb_name} (ts timestamp, c1 tinyint, c2 smallint, c3 int, c4 bigint, c5 float, c6 double, c7 binary(100), c8 nchar(200), c9 bool, c10 tinyint unsigned, c11 smallint unsigned, c12 int unsigned, c13 bigint unsigned)")
self.insertData(tb_name)
return tb_name
def initStb(self, count=5):
tdCom.cleanTb()
tb_name = tdCom.getLongName(8, "letters")
def initStb(self, count=5, dbname="db"):
tdCom.cleanTb(dbname)
tb_name = f'{dbname}.{tdCom.getLongName(8, "letters")}'
tdSql.execute(
f"CREATE TABLE {tb_name} (ts timestamp, c1 tinyint, c2 smallint, c3 int, c4 bigint, c5 float, c6 double, c7 binary(100), c8 nchar(200), c9 bool, c10 tinyint unsigned, c11 smallint unsigned, c12 int unsigned, c13 bigint unsigned) tags (t1 tinyint, t2 smallint, t3 int, t4 bigint, t5 float, t6 double, t7 binary(100), t8 nchar(200), t9 bool, t10 tinyint unsigned, t11 smallint unsigned, t12 int unsigned, t13 bigint unsigned)")
for i in range(1, count+1):
......@@ -56,9 +56,10 @@ class TDTestCase:
self.insertData(f'{tb_name}_sub_{i}')
return tb_name
def initTwoStb(self):
tdCom.cleanTb()
tb_name = tdCom.getLongName(8, "letters")
def initTwoStb(self, dbname="db"):
tdCom.cleanTb(dbname)
tb_name = f'{dbname}.{tdCom.getLongName(8, "letters")}'
# tb_name = tdCom.getLongName(8, "letters")
tb_name1 = f'{tb_name}1'
tb_name2 = f'{tb_name}2'
tdSql.execute(
......
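initTb/initStb derive table names from tdCom.getLongName(8, "letters") and prefix them with the database. A self-contained stand-in for that helper (its real implementation may differ):

```python
import random
import string

def get_long_name(length=8, mode="letters"):
    # Stand-in for tdCom.getLongName; assumed to return a random
    # lowercase-letter identifier of the given length.
    pool = string.ascii_lowercase if mode == "letters" else string.ascii_letters + string.digits
    return "".join(random.choice(pool) for _ in range(length))

tb_name = f'db.{get_long_name(8, "letters")}'  # e.g. "db.qwhxbzpe"
```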
This diff has been collapsed. (15 files)
......@@ -14,43 +14,44 @@ class TDTestCase:
tdSql.init(conn.cursor())
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
dbname = "db"
tdSql.prepare()
tdLog.printNoPrefix("==========step1:create table")
tdSql.execute(
'''create table stb1
f'''create table {dbname}.stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 varchar(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
f'''
create table {dbname}.t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 varchar(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
tdLog.printNoPrefix("==========step2:insert data")
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'varchar{i}', 'nchar{i}', now()+{1*i}a )"
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'varchar{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'varchar{i}', 'nchar{i}', now()+{1*i}a )"
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'varchar{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'varchar0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'varchar9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'varchar0', 'nchar0', now()+8a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'varchar9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
f'''insert into {dbname}.t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "varchar1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "varchar2", "nchar2", now()+2a )
......@@ -70,7 +71,7 @@ class TDTestCase:
tdLog.printNoPrefix("==========step3: cast on varchar")
tdSql.query("select c8 from ct1")
tdSql.query(f"select c8 from {dbname}.ct1")
for i in range(tdSql.queryRows):
tdSql.checkData(i,0, data_ct1_c8[i])
......
This diff has been collapsed. (2 files)
......@@ -113,7 +113,7 @@ int32_t shellExecute();
int32_t shellCalcColWidth(TAOS_FIELD *field, int32_t precision);
void shellPrintHeader(TAOS_FIELD *fields, int32_t *width, int32_t num_fields);
void shellPrintField(const char *val, TAOS_FIELD *field, int32_t width, int32_t length, int32_t precision);
void shellDumpFieldToFile(TdFilePtr pFile, const char *val, TAOS_FIELD *field, int32_t length, int32_t precision);
void shellDumpFieldToFile(TdFilePtr pFile, const char *val, TAOS_FIELD *field, int32_t length, int32_t precision);
// shellUtil.c
int32_t shellCheckIntSize();
void shellPrintVersion();
......
This diff has been collapsed. (2 files)