Commit e3eee235 authored by H Hongze Cheng

Merge branch '3.0' of https://github.com/taosdata/TDengine into refact/tsdb_last

......@@ -9,7 +9,7 @@ The data model employed by TDengine is similar to that of a relational database.
The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may differ. These characteristics include collection frequency and retention policy, among others, and they determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, and whether data updates are allowed are all configurable parameters determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, many parameters can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, the time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file, and so on. Below is an example of the SQL statement to create a database.
```sql
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 VGROUPS 100 WAL 1;
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;
```
In the above SQL statement:
......
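As a quick check after creation (a minimal sketch; the `power` database name follows the example above, and the exact set of columns may vary by TDengine version), the configured parameters can be read back from `information_schema`:
```sql
-- Read back the parameters each database was created with.
SELECT * FROM information_schema.ins_databases;
```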
......@@ -61,20 +61,20 @@ In summary, records across subtables can be aggregated by a simple query on thei
In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
```
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
avg(voltage) | location |
=============================================================
222.000000000 | California.LosAngeles |
219.200000000 | California.SanFrancisco |
Query OK, 2 row(s) in set (0.002136s)
taos> SELECT AVG(voltage), location FROM meters GROUP BY location;
avg(voltage) | location |
===============================================================================================
219.200000000 | California.SanFrancisco |
221.666666667 | California.LosAngeles |
Query OK, 2 rows in database (0.005995s)
```
### Example 2
In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current from meters whose groupId is 2.
```
taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
count(*) | max(current) |
==================================
5 | 13.4 |
......@@ -88,40 +88,41 @@ Join queries are only allowed between subtables of the same STable. In [Select](
In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine simplifies queries by time window. For example, the SQL statement below gets the sum of the current every 10 seconds from table d1001.
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
ts | sum(current) |
taos> SELECT _wstart, sum(current) FROM d1001 INTERVAL(10s);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:00.000 | 10.300000191 |
2018-10-03 14:38:10.000 | 24.900000572 |
Query OK, 2 row(s) in set (0.000883s)
Query OK, 2 rows in database (0.003139s)
```
Down sampling can also be used on an STable. For example, the SQL statement below gets the sum of the current from all meters in California.
```
taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
ts | sum(current) |
taos> SELECT _wstart, SUM(current) FROM meters where location like "California%" INTERVAL(1s);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:04.000 | 10.199999809 |
2018-10-03 14:38:05.000 | 32.900000572 |
2018-10-03 14:38:05.000 | 23.699999809 |
2018-10-03 14:38:06.000 | 11.500000000 |
2018-10-03 14:38:15.000 | 12.600000381 |
2018-10-03 14:38:16.000 | 36.000000000 |
Query OK, 5 row(s) in set (0.001538s)
2018-10-03 14:38:16.000 | 34.400000572 |
Query OK, 5 rows in database (0.007413s)
```
Down sampling also supports a time offset. For example, the SQL statement below gets the sum of the current from all meters, with each time window shifted to start at a 500 millisecond offset.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
ts | sum(current) |
taos> SELECT _wstart, SUM(current) FROM meters INTERVAL(1s, 500a);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:04.500 | 11.189999809 |
2018-10-03 14:38:05.500 | 31.900000572 |
2018-10-03 14:38:06.500 | 11.600000000 |
2018-10-03 14:38:15.500 | 12.300000381 |
2018-10-03 14:38:16.500 | 35.000000000 |
Query OK, 5 row(s) in set (0.001521s)
2018-10-03 14:38:03.500 | 10.199999809 |
2018-10-03 14:38:04.500 | 10.300000191 |
2018-10-03 14:38:05.500 | 13.399999619 |
2018-10-03 14:38:06.500 | 11.500000000 |
2018-10-03 14:38:14.500 | 12.600000381 |
2018-10-03 14:38:16.500 | 34.400000572 |
Query OK, 6 rows in database (0.005515s)
```
In many use cases, it is hard to align the timestamps of the data collected by different collection points. However, many algorithms such as FFT require the data to be aligned on equal time intervals, and application programs would otherwise have to handle this themselves. In TDengine, this alignment is easy to achieve with down sampling, as sketched below.
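As an illustration only (a minimal sketch, assuming TDengine 3.0 syntax and the `power` database and `meters` STable used above), each meter's readings can be projected onto a common 1-second grid; `FILL(PREV)` is one way to populate windows that received no data:
```sql
-- Per-meter 1s averages on aligned window boundaries; FILL(PREV) repeats the
-- previous window's value for windows with no rows (other FILL modes exist).
SELECT _wstart, AVG(current)
  FROM power.meters
  PARTITION BY tbname
  INTERVAL(1s) FILL(PREV);
```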
......
......@@ -54,20 +54,20 @@ Query OK, 2 row(s) in set (0.001100s)
In the TAOS shell, find the average voltage collected by all smart meters in California, grouped by location.
```
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
avg(voltage) | location |
=============================================================
222.000000000 | California.LosAngeles |
219.200000000 | California.SanFrancisco |
Query OK, 2 row(s) in set (0.002136s)
taos> SELECT AVG(voltage), location FROM meters GROUP BY location;
avg(voltage) | location |
===============================================================================================
219.200000000 | California.SanFrancisco |
221.666666667 | California.LosAngeles |
Query OK, 2 rows in database (0.005995s)
```
### Example 2
In the TAOS shell, find the number of records and the maximum current over the past 24 hours for all smart meters whose groupId is 2.
In the TAOS shell, find the number of records and the maximum current for all smart meters whose groupId is 2.
```
taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
count(*) | max(current) |
==================================
5 | 13.4 |
......@@ -81,40 +81,41 @@ Query OK, 1 row(s) in set (0.002136s)
In IoT scenarios, collected data often needs to be aggregated over time ranges through down sampling. TDengine provides a convenient keyword, interval, that makes queries by time window extremely simple. For example, sum the current collected by smart meter d1001 every 10 seconds:
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
ts | sum(current) |
taos> SELECT _wstart, sum(current) FROM d1001 INTERVAL(10s);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:00.000 | 10.300000191 |
2018-10-03 14:38:10.000 | 24.900000572 |
Query OK, 2 row(s) in set (0.000883s)
Query OK, 2 rows in database (0.003139s)
```
Down sampling also applies to supertables (STables). For example, sum the current collected by all smart meters in California every second:
```
taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
ts | sum(current) |
taos> SELECT _wstart, SUM(current) FROM meters where location like "California%" INTERVAL(1s);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:04.000 | 10.199999809 |
2018-10-03 14:38:05.000 | 32.900000572 |
2018-10-03 14:38:05.000 | 23.699999809 |
2018-10-03 14:38:06.000 | 11.500000000 |
2018-10-03 14:38:15.000 | 12.600000381 |
2018-10-03 14:38:16.000 | 36.000000000 |
Query OK, 5 row(s) in set (0.001538s)
2018-10-03 14:38:16.000 | 34.400000572 |
Query OK, 5 rows in database (0.007413s)
```
Down sampling also supports a time offset. For example, sum the current collected by all smart meters every second, with each time window starting at a 500 millisecond offset:
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
ts | sum(current) |
taos> SELECT _wstart, SUM(current) FROM meters INTERVAL(1s, 500a);
_wstart | sum(current) |
======================================================
2018-10-03 14:38:04.500 | 11.189999809 |
2018-10-03 14:38:05.500 | 31.900000572 |
2018-10-03 14:38:06.500 | 11.600000000 |
2018-10-03 14:38:15.500 | 12.300000381 |
2018-10-03 14:38:16.500 | 35.000000000 |
Query OK, 5 row(s) in set (0.001521s)
2018-10-03 14:38:03.500 | 10.199999809 |
2018-10-03 14:38:04.500 | 10.300000191 |
2018-10-03 14:38:05.500 | 13.399999619 |
2018-10-03 14:38:06.500 | 11.500000000 |
2018-10-03 14:38:14.500 | 12.600000381 |
2018-10-03 14:38:16.500 | 34.400000572 |
Query OK, 6 rows in database (0.005515s)
```
In IoT scenarios, it is hard to synchronize the collection timestamps across data collection points, yet many analysis algorithms (such as FFT) require the data to be strictly aligned on equal time intervals. In many systems the application has to handle this with its own code, but with TDengine's down sampling it is easily solved.
......
......@@ -105,7 +105,7 @@ typedef enum ENodeType {
QUERY_NODE_COLUMN_REF,
// Statement nodes are used in parser and planner module.
QUERY_NODE_SET_OPERATOR,
QUERY_NODE_SET_OPERATOR = 100,
QUERY_NODE_SELECT_STMT,
QUERY_NODE_VNODE_MODIF_STMT,
QUERY_NODE_CREATE_DATABASE_STMT,
......@@ -198,7 +198,7 @@ typedef enum ENodeType {
QUERY_NODE_QUERY,
// logic plan node
QUERY_NODE_LOGIC_PLAN_SCAN,
QUERY_NODE_LOGIC_PLAN_SCAN = 1000,
QUERY_NODE_LOGIC_PLAN_JOIN,
QUERY_NODE_LOGIC_PLAN_AGG,
QUERY_NODE_LOGIC_PLAN_PROJECT,
......@@ -215,7 +215,7 @@ typedef enum ENodeType {
QUERY_NODE_LOGIC_PLAN,
// physical plan node
QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN,
QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN = 1100,
QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN,
QUERY_NODE_PHYSICAL_PLAN_TABLE_SEQ_SCAN,
QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN,
......
......@@ -428,6 +428,9 @@ void nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);
char* nodesGetFillModeString(EFillMode mode);
int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);
const char* operatorTypeStr(EOperatorType type);
const char* logicConditionTypeStr(ELogicConditionType type);
#ifdef __cplusplus
}
#endif
......
......@@ -132,15 +132,14 @@ typedef enum EOperatorType {
OP_TYPE_DIV,
OP_TYPE_REM,
// unary arithmetic operator
OP_TYPE_MINUS,
OP_TYPE_ASSIGN,
OP_TYPE_MINUS = 20,
// bitwise operator
OP_TYPE_BIT_AND,
OP_TYPE_BIT_AND = 30,
OP_TYPE_BIT_OR,
// binary comparison operator
OP_TYPE_GREATER_THAN,
OP_TYPE_GREATER_THAN = 40,
OP_TYPE_GREATER_EQUAL,
OP_TYPE_LOWER_THAN,
OP_TYPE_LOWER_EQUAL,
......@@ -153,7 +152,7 @@ typedef enum EOperatorType {
OP_TYPE_MATCH,
OP_TYPE_NMATCH,
// unary comparison operator
OP_TYPE_IS_NULL,
OP_TYPE_IS_NULL = 100,
OP_TYPE_IS_NOT_NULL,
OP_TYPE_IS_TRUE,
OP_TYPE_IS_FALSE,
......@@ -163,8 +162,11 @@ typedef enum EOperatorType {
OP_TYPE_IS_NOT_UNKNOWN,
// json operator
OP_TYPE_JSON_GET_VALUE,
OP_TYPE_JSON_CONTAINS
OP_TYPE_JSON_GET_VALUE = 150,
OP_TYPE_JSON_CONTAINS,
// internal operator
OP_TYPE_ASSIGN = 200
} EOperatorType;
#define OP_TYPE_CALC_MAX OP_TYPE_BIT_OR
......
......@@ -88,7 +88,7 @@ static const SSysDbTableSchema userDBSchema[] = {
{.name = "comp", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
{.name = "precision", .bytes = 2 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "retention", .bytes = 60 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "retentions", .bytes = 60 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "single_stable", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL},
{.name = "cachemodel", .bytes = TSDB_CACHE_MODEL_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "cachesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
......
......@@ -205,7 +205,7 @@ typedef struct SExprSupp {
} SExprSupp;
typedef struct SOperatorInfo {
uint8_t operatorType;
uint16_t operatorType;
bool blocking; // block operator or not
uint8_t status; // denote if current operator is completed
char* name; // name, for debug purpose
......@@ -434,7 +434,7 @@ typedef struct SStreamAggSupporter {
typedef struct SessionWindowSupporter {
SStreamAggSupporter* pStreamAggSup;
int64_t gap;
uint8_t parentType;
uint16_t parentType;
SAggSupporter* pIntervalAggSup;
} SessionWindowSupporter;
......
......@@ -348,7 +348,7 @@ int32_t qCreateExecTask(SReadHandle* readHandle, int32_t vgId, uint64_t taskId,
taosThreadOnce(&initPoolOnce, initRefPool);
atexit(cleanupRefPool);
qDebug("start to create subplan task, TID:0x%"PRIx64 " QID:0x%"PRIx64, taskId, pSubplan->id.queryId);
qDebug("start to create subplan task, TID:0x%" PRIx64 " QID:0x%" PRIx64, taskId, pSubplan->id.queryId);
int32_t code = createExecTaskInfoImpl(pSubplan, pTask, readHandle, taskId, sql, model);
if (code != TSDB_CODE_SUCCESS) {
......@@ -374,7 +374,7 @@ int32_t qCreateExecTask(SReadHandle* readHandle, int32_t vgId, uint64_t taskId,
}
}
qDebug("subplan task create completed, TID:0x%"PRIx64 " QID:0x%"PRIx64, taskId, pSubplan->id.queryId);
qDebug("subplan task create completed, TID:0x%" PRIx64 " QID:0x%" PRIx64, taskId, pSubplan->id.queryId);
_error:
// if failed to add ref for all tables in this query, abort current query
......@@ -427,7 +427,7 @@ int waitMoment(SQInfo* pQInfo) {
#endif
static void freeBlock(void* param) {
SSDataBlock* pBlock = *(SSDataBlock**) param;
SSDataBlock* pBlock = *(SSDataBlock**)param;
blockDataDestroy(pBlock);
}
......@@ -467,12 +467,12 @@ int32_t qExecTaskOpt(qTaskInfo_t tinfo, SArray* pResList, uint64_t* useconds) {
qDebug("%s execTask is launched", GET_TASKID(pTaskInfo));
int32_t current = 0;
int32_t current = 0;
SSDataBlock* pRes = NULL;
int64_t st = taosGetTimestampUs();
while((pRes = pTaskInfo->pRoot->fpSet.getNextFn(pTaskInfo->pRoot)) != NULL) {
while ((pRes = pTaskInfo->pRoot->fpSet.getNextFn(pTaskInfo->pRoot)) != NULL) {
SSDataBlock* p = createOneDataBlock(pRes, true);
current += p->info.rows;
ASSERT(p->info.rows > 0);
......@@ -494,7 +494,7 @@ int32_t qExecTaskOpt(qTaskInfo_t tinfo, SArray* pResList, uint64_t* useconds) {
uint64_t total = pTaskInfo->pRoot->resultInfo.totalRows;
qDebug("%s task suspended, %d rows in %d blocks returned, total:%" PRId64 " rows, in sinkNode:%d, elapsed:%.2f ms",
GET_TASKID(pTaskInfo), current, (int32_t) taosArrayGetSize(pResList), total, 0, el / 1000.0);
GET_TASKID(pTaskInfo), current, (int32_t)taosArrayGetSize(pResList), total, 0, el / 1000.0);
atomic_store_64(&pTaskInfo->owner, 0);
return pTaskInfo->code;
......@@ -632,7 +632,7 @@ int32_t qExtractStreamScanner(qTaskInfo_t tinfo, void** scanner) {
SOperatorInfo* pOperator = pTaskInfo->pRoot;
while (1) {
uint8_t type = pOperator->operatorType;
uint16_t type = pOperator->operatorType;
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
*scanner = pOperator->info;
return 0;
......@@ -691,7 +691,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
pTaskInfo->streamInfo.prepareStatus = *pOffset;
if (!tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus)) {
while (1) {
uint8_t type = pOperator->operatorType;
uint16_t type = pOperator->operatorType;
pOperator->status = OP_OPENED;
// TODO add more check
if (type != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
......
......@@ -1785,7 +1785,7 @@ void increaseTs(SqlFunctionCtx* pCtx) {
}
}
void initIntervalDownStream(SOperatorInfo* downstream, uint8_t type, SAggSupporter* pSup) {
void initIntervalDownStream(SOperatorInfo* downstream, uint16_t type, SAggSupporter* pSup) {
if (downstream->operatorType != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
// Todo(liuyao) support partition by column
return;
......@@ -3546,7 +3546,7 @@ void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t num
}
void initDownStream(SOperatorInfo* downstream, SStreamAggSupporter* pAggSup, int64_t gap, int64_t waterMark,
uint8_t type) {
uint16_t type) {
ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN);
SStreamScanInfo* pScanInfo = downstream->info;
pScanInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = pAggSup, .gap = gap, .parentType = type};
......
......@@ -4673,7 +4673,6 @@ static int32_t jsonToNode(const SJson* pJson, void* pObj) {
int32_t code;
tjsonGetNumberValue(pJson, jkNodeType, pNode->type, code);
;
if (TSDB_CODE_SUCCESS == code) {
code = tjsonToObject(pJson, nodesNodeName(pNode->type), jsonToSpecificNode, pNode);
if (TSDB_CODE_SUCCESS != code) {
......
......@@ -21,36 +21,89 @@
#include "taoserror.h"
#include "thash.h"
char *gOperatorStr[] = {NULL,
"+",
"-",
"*",
"/",
"%",
"-",
"&",
"|",
">",
">=",
"<",
"<=",
"=",
"<>",
"IN",
"NOT IN",
"LIKE",
"NOT LIKE",
"MATCH",
"NMATCH",
"IS NULL",
"IS NOT NULL",
"IS TRUE",
"IS FALSE",
"IS UNKNOWN",
"IS NOT TRUE",
"IS NOT FALSE",
"IS NOT UNKNOWN"};
char *gLogicConditionStr[] = {"AND", "OR", "NOT"};
const char *operatorTypeStr(EOperatorType type) {
switch (type) {
case OP_TYPE_ADD:
return "+";
case OP_TYPE_SUB:
return "-";
case OP_TYPE_MULTI:
return "*";
case OP_TYPE_DIV:
return "/";
case OP_TYPE_REM:
return "%";
case OP_TYPE_MINUS:
return "-";
case OP_TYPE_BIT_AND:
return "&";
case OP_TYPE_BIT_OR:
return "|";
case OP_TYPE_GREATER_THAN:
return ">";
case OP_TYPE_GREATER_EQUAL:
return ">=";
case OP_TYPE_LOWER_THAN:
return "<";
case OP_TYPE_LOWER_EQUAL:
return "<=";
case OP_TYPE_EQUAL:
return "=";
case OP_TYPE_NOT_EQUAL:
return "<>";
case OP_TYPE_IN:
return "IN";
case OP_TYPE_NOT_IN:
return "NOT IN";
case OP_TYPE_LIKE:
return "LIKE";
case OP_TYPE_NOT_LIKE:
return "NOT LIKE";
case OP_TYPE_MATCH:
return "MATCH";
case OP_TYPE_NMATCH:
return "NMATCH";
case OP_TYPE_IS_NULL:
return "IS NULL";
case OP_TYPE_IS_NOT_NULL:
return "IS NOT NULL";
case OP_TYPE_IS_TRUE:
return "IS TRUE";
case OP_TYPE_IS_FALSE:
return "IS FALSE";
case OP_TYPE_IS_UNKNOWN:
return "IS UNKNOWN";
case OP_TYPE_IS_NOT_TRUE:
return "IS NOT TRUE";
case OP_TYPE_IS_NOT_FALSE:
return "IS NOT FALSE";
case OP_TYPE_IS_NOT_UNKNOWN:
return "IS NOT UNKNOWN";
case OP_TYPE_JSON_GET_VALUE:
return "=>";
case OP_TYPE_JSON_CONTAINS:
return "CONTAINS";
case OP_TYPE_ASSIGN:
return "=";
default:
break;
}
return "UNKNOWN";
}
const char *logicConditionTypeStr(ELogicConditionType type) {
switch (type) {
case LOGIC_COND_TYPE_AND:
return "AND";
case LOGIC_COND_TYPE_OR:
return "OR";
case LOGIC_COND_TYPE_NOT:
return "NOT";
default:
break;
}
return "UNKNOWN";
}
int32_t nodesNodeToSQL(SNode *pNode, char *buf, int32_t bufSize, int32_t *len) {
switch (pNode->type) {
......@@ -94,12 +147,7 @@ int32_t nodesNodeToSQL(SNode *pNode, char *buf, int32_t bufSize, int32_t *len) {
NODES_ERR_RET(nodesNodeToSQL(pOpNode->pLeft, buf, bufSize, len));
}
if (pOpNode->opType >= (sizeof(gOperatorStr) / sizeof(gOperatorStr[0]))) {
nodesError("unknown operation type:%d", pOpNode->opType);
NODES_ERR_RET(TSDB_CODE_QRY_APP_ERROR);
}
*len += snprintf(buf + *len, bufSize - *len, " %s ", gOperatorStr[pOpNode->opType]);
*len += snprintf(buf + *len, bufSize - *len, " %s ", operatorTypeStr(pOpNode->opType));
if (pOpNode->pRight) {
NODES_ERR_RET(nodesNodeToSQL(pOpNode->pRight, buf, bufSize, len));
......@@ -118,7 +166,7 @@ int32_t nodesNodeToSQL(SNode *pNode, char *buf, int32_t bufSize, int32_t *len) {
FOREACH(node, pLogicNode->pParameterList) {
if (!first) {
*len += snprintf(buf + *len, bufSize - *len, " %s ", gLogicConditionStr[pLogicNode->condType]);
*len += snprintf(buf + *len, bufSize - *len, " %s ", logicConditionTypeStr(pLogicNode->condType));
}
NODES_ERR_RET(nodesNodeToSQL(node, buf, bufSize, len));
first = false;
......
......@@ -143,9 +143,9 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
}
char name[TSDB_DB_FNAME_LEN] = {0};
strncpy(name, pTableName->z, dbLen);
dbLen = strdequote(name);
int32_t actualDbLen = strdequote(name);
code = tNameSetDbName(pName, acctId, name, dbLen);
code = tNameSetDbName(pName, acctId, name, actualDbLen);
if (code != TSDB_CODE_SUCCESS) {
return buildInvalidOperationMsg(pMsgBuf, msg1);
}
......
......@@ -350,7 +350,6 @@ struct SFilterInfo {
extern bool filterDoCompare(__compar_fn_t func, uint8_t optr, void *left, void *right);
extern __compar_fn_t filterGetCompFunc(int32_t type, int32_t optr);
extern OptrStr gOptrStr[];
#ifdef __cplusplus
}
......
......@@ -24,46 +24,6 @@
#include "ttime.h"
#include "functionMgt.h"
OptrStr gOptrStr[] = {
{0, "invalid"},
{OP_TYPE_ADD, "+"},
{OP_TYPE_SUB, "-"},
{OP_TYPE_MULTI, "*"},
{OP_TYPE_DIV, "/"},
{OP_TYPE_REM, "%"},
{OP_TYPE_MINUS, "minus"},
{OP_TYPE_ASSIGN, "assign"},
// bit operator
{OP_TYPE_BIT_AND, "&"},
{OP_TYPE_BIT_OR, "|"},
// comparison operator
{OP_TYPE_GREATER_THAN, ">"},
{OP_TYPE_GREATER_EQUAL, ">="},
{OP_TYPE_LOWER_THAN, "<"},
{OP_TYPE_LOWER_EQUAL, "<="},
{OP_TYPE_EQUAL, "=="},
{OP_TYPE_NOT_EQUAL, "!="},
{OP_TYPE_IN, "in"},
{OP_TYPE_NOT_IN, "not in"},
{OP_TYPE_LIKE, "like"},
{OP_TYPE_NOT_LIKE, "not like"},
{OP_TYPE_MATCH, "match"},
{OP_TYPE_NMATCH, "nmatch"},
{OP_TYPE_IS_NULL, "is null"},
{OP_TYPE_IS_NOT_NULL, "not null"},
{OP_TYPE_IS_TRUE, "is true"},
{OP_TYPE_IS_FALSE, "is false"},
{OP_TYPE_IS_UNKNOWN, "is unknown"},
{OP_TYPE_IS_NOT_TRUE, "not true"},
{OP_TYPE_IS_NOT_FALSE, "not false"},
{OP_TYPE_IS_NOT_UNKNOWN, "not unknown"},
// json operator
{OP_TYPE_JSON_GET_VALUE, "->"},
{OP_TYPE_JSON_CONTAINS, "json contains"}
};
bool filterRangeCompGi (const void *minv, const void *maxv, const void *minr, const void *maxr, __compar_fn_t cfunc) {
int32_t result = cfunc(maxv, minr);
return result >= 0;
......@@ -986,7 +946,7 @@ int32_t filterAddUnit(SFilterInfo *info, uint8_t optr, SFilterFieldId *left, SFi
} else {
int32_t paramNum = scalarGetOperatorParamNum(optr);
if (1 != paramNum) {
fltError("invalid right field in unit, operator:%s, rightType:%d", gOptrStr[optr].str, u->right.type);
fltError("invalid right field in unit, operator:%s, rightType:%d", operatorTypeStr(optr), u->right.type);
return TSDB_CODE_QRY_APP_ERROR;
}
}
......@@ -1517,7 +1477,7 @@ void filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t options)
SFilterField *left = FILTER_UNIT_LEFT_FIELD(info, unit);
SColumnNode *refNode = (SColumnNode *)left->desc;
if (unit->compare.optr >= 0 && unit->compare.optr <= OP_TYPE_JSON_CONTAINS){
len = sprintf(str, "UNIT[%d] => [%d][%d] %s [", i, refNode->dataBlockId, refNode->slotId, gOptrStr[unit->compare.optr].str);
len = sprintf(str, "UNIT[%d] => [%d][%d] %s [", i, refNode->dataBlockId, refNode->slotId, operatorTypeStr(unit->compare.optr));
}
if (unit->right.type == FLD_TYPE_VALUE && FILTER_UNIT_OPTR(unit) != OP_TYPE_IN) {
......@@ -1536,7 +1496,7 @@ void filterDumpInfoToString(SFilterInfo *info, const char *msg, int32_t options)
if (unit->compare.optr2) {
strcat(str, " && ");
if (unit->compare.optr2 >= 0 && unit->compare.optr2 <= OP_TYPE_JSON_CONTAINS){
sprintf(str + strlen(str), "[%d][%d] %s [", refNode->dataBlockId, refNode->slotId, gOptrStr[unit->compare.optr2].str);
sprintf(str + strlen(str), "[%d][%d] %s [", refNode->dataBlockId, refNode->slotId, operatorTypeStr(unit->compare.optr2));
}
if (unit->right2.type == FLD_TYPE_VALUE && FILTER_UNIT_OPTR(unit) != OP_TYPE_IN) {
......
......@@ -1089,16 +1089,16 @@ void makeCalculate(void *json, void *key, int32_t rightType, void *rightData, do
}else if(opType == OP_TYPE_ADD || opType == OP_TYPE_SUB || opType == OP_TYPE_MULTI || opType == OP_TYPE_DIV ||
opType == OP_TYPE_REM || opType == OP_TYPE_MINUS){
printf("op:%s,1result:%f,except:%f\n", gOptrStr[opType].str, *((double *)colDataGetData(column, 0)), exceptValue);
printf("op:%s,1result:%f,except:%f\n", operatorTypeStr(opType), *((double *)colDataGetData(column, 0)), exceptValue);
ASSERT_TRUE(fabs(*((double *)colDataGetData(column, 0)) - exceptValue) < 0.0001);
}else if(opType == OP_TYPE_BIT_AND || opType == OP_TYPE_BIT_OR){
printf("op:%s,2result:%" PRId64 ",except:%f\n", gOptrStr[opType].str, *((int64_t *)colDataGetData(column, 0)), exceptValue);
printf("op:%s,2result:%" PRId64 ",except:%f\n", operatorTypeStr(opType), *((int64_t *)colDataGetData(column, 0)), exceptValue);
ASSERT_EQ(*((int64_t *)colDataGetData(column, 0)), exceptValue);
}else if(opType == OP_TYPE_GREATER_THAN || opType == OP_TYPE_GREATER_EQUAL || opType == OP_TYPE_LOWER_THAN ||
opType == OP_TYPE_LOWER_EQUAL || opType == OP_TYPE_EQUAL || opType == OP_TYPE_NOT_EQUAL ||
opType == OP_TYPE_IS_NULL || opType == OP_TYPE_IS_NOT_NULL || opType == OP_TYPE_IS_TRUE ||
opType == OP_TYPE_LIKE || opType == OP_TYPE_NOT_LIKE || opType == OP_TYPE_MATCH || opType == OP_TYPE_NMATCH){
printf("op:%s,3result:%d,except:%f\n", gOptrStr[opType].str, *((bool *)colDataGetData(column, 0)), exceptValue);
printf("op:%s,3result:%d,except:%f\n", operatorTypeStr(opType), *((bool *)colDataGetData(column, 0)), exceptValue);
ASSERT_EQ(*((bool *)colDataGetData(column, 0)), exceptValue);
}
......
......@@ -293,7 +293,7 @@ class TDTestCase:
dbname = tdSql.getData(0,0)
tdSql.query("select * from information_schema.ins_databases")
for index , value in enumerate(tdSql.cursor.description):
if value[0] == "retention":
if value[0] == "retentions":
r_index = index
break
for row in tdSql.queryResult:
......